generic macroscopic traffic node model for general road junctions via a dynamic system approach jul matthew wright and roberto horowitz july abstract this paper addresses an open problem in traffic modeling the macroscopic node problem a macroscopic traffic model in contrast to a model allows for variation of driving behavior across subpopulations of vehicles in the flow the models are thus more descriptive they have been used to model variable mixtures of traffic like traffic traffic etc but are much more complex the node problem is a particularly complex problem as it requires the resolution of discontinuities in traffic density and mixture characteristics and solving of throughflows for arbitrary numbers of input and output roads to a node in other words this is an arbitrarydimensional riemann problem with two conserved quantities we propose a solution to this problem by making use of a dynamic system characterization of the node model problem which gives insight and intuition as to the dynamics implicit in node models we use this intuition to extend the dynamic system node model to the setting we also extend the generic class of node model constraints to the second order and present a simple solution algorithm to the node problem this node model has immediate applications in allowing modeling of traffic flows of contemporary interest like flows in arbitrary road networks introduction the macroscopic approximation of vehicle traffic has proven a valuable tool for the study of traffic s nonlinear dynamics and the design of methods for mitigating and controlling undesirable outcomes like congestion this macroscopic theory describes the dynamics of vehicles along roads with partial differential equations pdes inspired by fluid flow the most basic macroscopic formulation is the kinematic wave or lwr due to which describes traffic with a conservation equation where z t is the density of vehicles t is time z is the lineal direction along the road and v is the flow speed the total flow is often expressed in terms of a flux function f the flux function on a long straight road is often called the fundamental diagram the formulation is a simple nonlinear model and can not capture many characteristics of real traffic flows for example a flux function f of only does not admit the phenomenon of accelerating and decelerating flows tracing a hysteresis loop in the v plane one extension of the lwr model that can express a richer variety of dynamics is the arz family of models these models fit into the generic second order or extended arz class of traffic models which can be written as where v v w as seen in the model actually consists of two partial differential equations that is they only contain first derivatives in a case of overloaded mathematical terminology the name second order here comes from a view where a system is one that has two state variables in this case and w or equivalently and v where w x t is a property or invariant that is conserved along trajectories the property w in can be described as a characteristic of vehicles that determines their relationship members of the generic second order model gsom family are differentiated by the choice of w and its relationship on the behavior examples of chosen w s include the difference between vehicles speed and an equilibrium speed driver spacing or the flow s portion of autonomous vehicles an intuitive way of describing the effect of the property w in is that it parameterizes a family of flow models f w w with different flow models for 
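The displayed equations in the passage above were lost in extraction. As a hedged reconstruction, the standard forms of the LWR conservation law and of the GSOM system, consistent with the surrounding definitions (density ρ(z, t), speed v, conserved property w), read:

```latex
% LWR kinematic wave model: conservation of vehicles
\partial_t \rho + \partial_z\!\left(\rho\, v(\rho)\right) = 0,
\qquad F(\rho) = \rho\, v(\rho) \quad \text{(fundamental diagram)}

% Generic second-order model (GSOM), advective form:
\partial_t \rho + \partial_z(\rho v) = 0,
\qquad \partial_t w + v\,\partial_z w = 0,
\qquad v = V(\rho, w)
```

In this form w is advected with the vehicles (constant along trajectories), which is what licenses the description of w as a per-vehicle property parameterizing a family of flow models.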
different values of w for application of macroscopic traffic simulation road networks are often modeled as directed graphs edges that represent individual roads are called links and junctions where links meet are called nodes typically the flow model f on links is called the link model and the flow model at nodes is called the node development of accurate link and node models have been areas of much research activity in transportation engineering for many years this paper focuses on node models for and macroscopic models the node model resolves the discontinuities in w between links and determines a neumann boundary for nodes with merges diverges or both this riemann problem becomes multidimensional through this the node model determines how the state of an individual link affects and is affected by its connected links their own connected links and so on through the network as a result it has recently been recognized that the specific node model used can have a very large role in describing the congestion dynamics that emerge in complex and large networks for more on this see the discussions in the introduction sections of and in we introduced a novel characterization of node models as dynamic systems traditional studies of node models see usually present the node model as an optimization problem where the node flows are found by solving this problem or in algorithmic form where an explicit set of steps are performed to compute the flows across the node in contrast the dynamic system characterization describes the flows across the node as themselves evolving over some period of time in application this means that the dynamic system characterization presents dynamics that are said to occur during the simulation timesteps of the link pdes the dynamic system characterization can be thought of as making explicit the behavior of the flows at nodes of many algorithmic node models it was shown in that the dynamic system characterization produces the same solutions as the algorithm introduced in which also reduces to the one introduced in as a special case the dynamic system characterization has proven useful in imparting an intuition as to what physical processes over time are implicit in these algorithmic node models see the discussions referring to in for some examples in this paper we develop a dynamic system characterization of a node model and use it to solve the general node problem for models this paper has several main contributions the first is an extension of the dynamic system characterization of firstorder node models as introduced in to a simple solution algorithm this represents the completion of an argument began in section of that reference the second contribution is the extension of the dynamic system characterization to the generic models as we will see the dynamic system characterization lends itself to an intuitive incorporation of the second pde in that is not obvious in the traditional presentation of node models the third contribution and the principal contribution of this paper parallels the first by using the dynamic system node model to derive an intuitive algorithm for computing node flows for flow models for general nodes to the best of our knowledge this represents the first proposed generic applicable to nodes node model for traffic flow the remainder of this paper is organized as follows section reviews the node flow problem the dynamic system characterization introduced in and presents the aforementioned solution algorithm contribution one in the above paragraph 
section reviews the link discretization of the gsom as presented in which produces the inputs to our node model and the standard flow problem and its solution section presents the extension of the flow problem to the case the dynamic system characterization to the gsom family and the solution algorithm for the general node problem contributions two and three finally section concludes and notes some open problems a note on naming as we will see in section we build off the generic class of node models to develop our node model given that the relevant model used is itself called the generic second order model it might be accurate to describe this paper s results as the genericization of the generic class of node models to the generic model but this description likely loses in comprehensibility what it might gain in accuracy node model in this section we review the general node problem and a particular node model and its solution algorithm this node model will be extended to the node problem in section the traffic node problem is defined on a junction of m input links indexed by i and n output links indexed by j we further define c classes sometimes called commodities of vehicle indexed by the node problem c takes as inputs the incoming links demands sic split ratios j which define the portion of vehicles of class c in link i that wish to exit to link j and outgoing links supplies rj and gives as outputs the set of flows from c c c i to j for class c fi j we denote as a shorthand the directed demand si j j sic nodes are generally infinitesimally small and have no storage so all the flow that enters the node must exit the node the rest of this section is organized as follows section defines our node problem as an optimization problem defined by explicit requirements following the example set by section reviews the dynamic system of whose executions produce solutions to the node problem finally section uses the dynamic system formulation as a base to develop a node model solution algorithm this algorithm represents the completion of an argument began in generic class of node model requirements the node problem s history begins with the original formulation of macroscopic discretized traffic flow models there have been many developments in the node model theory since but we reflect only some more recent results we can divide the node model literature into and epochs they drew from the literature several node model requirements to develop a set of conditions for nodel models they call the generic class of node models gcnm these set of conditions give an excellent starting point for our discussion of the mathematical technicalities of node models and have been used as a starting point by many subsequent papers such as in the following list we present the variant of gcnm requirements used in which includes a modification of the fifo requirement item below to s partial fifo requirement applicability to general numbers of input links m and output links n in the case of flow this also extends to general numbers of classes p c maximization of the total flow through the node mathematically this may be expressed as max i j c fi j according to this means that each flow should be actively restricted by one of the constraints otherwise it would increase until it hits some when a node model is formulated as a constrained optimization problem its solution will automatically satisfy this requirement however what this requirement really means is that constraints should be stated correctly and not be overly 
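To fix ideas about the inputs just defined, a minimal sketch of the node-problem data and the directed demands derived from them; the array names and toy numbers are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Toy node: M input links, N output links, C vehicle classes.
M, N, C = 2, 2, 1

S = np.array([[1000.0], [600.0]])        # demands S[i, c] (veh/h)
beta = np.array([[[0.5], [0.5]],         # split ratios beta[i, j, c]; sums to 1 over j
                 [[0.2], [0.8]]])
R = np.array([900.0, 400.0])             # supplies R[j] (veh/h)

# Directed demand S_ij^c = beta_ij^c * S_i^c: class-c vehicles in i headed for j.
S_dir = beta * S[:, None, :]
```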
simplified and thus overly restrictive for the sake of convenient problem formulation see the literature review in for examples of node models that inadvertently do not maximize node throughput by oversimplifying their requirements c of all flows mathematically fi j for all i j p flow conservation total flow entering the node must be equal to total flow exiting the node mathematically p c c f for all f j i j i i j p c p c rj sic and i fi j satisfaction of demand and supply constraints mathematically j fi j satisfaction of the partial fifo constraint if a single destination j for a given i is not able to accept all demand from i to j then all other flows from i are constrained by the queue of j c is partially defined vehicles that builds up in i the degree to which this queue restricts the other flows fi j i by the restriction intervals j y z this interval means that a queue in the i j movement will block the portion of i lanes in i with leftmost extent y and rightmost extent z if i j is a through movement that uses two lanes and i j is a movement that uses the right of those two lanes then j the traditional full fifo behavior where any queue in i blocks all of i s lanes can be recovered by setting all j continuing this example we will have i since the only lane in i that serves movement i j the right lane will be blocked j j by a queue for the through movement which will queue on both lanes to help keep the meaning of i clear we find it helpful to read it as the restriction interval of j onto j for i j j another item that defines the partial behavior is the amount of time that a restriction interval j is active that is if j is a link with relatively high demands and j is a link with relatively low demands it should be the case that j is active on a greater portion of the directed demand c si j than j we will see how this effect of time is captured in the dynamic system formulation of section finally we require that we consider the cumulative effect of restriction intervals suppose that a movement i j has an active restriction from a queue for movement i j then say that another downstream link j exhausts its supply and vehicles begin queueing for the movement i j then the new restriction on i j after this second queue forms is j j this requirement is stated mathematically as c c fi j si j fi j j si j si j s i j j c where a denotes the area of a object denotes a cartesian product fi j p c si j c si j p c fi j and the formulation in is complex in order to state it as an optimization constraint and not as a consequence of the queue formation intuition outlined in the third paragraph of this item a major contribution of the dynamic system approach to node modeling is the explicit encoding of this more intuitive description see sections or section for a much more discussion of this requirement satisfaction of the invariance principle if the flow from some input link i is restricted by the available output supply this input link enters a congested regime this creates a queue in this input link and causes its demand si to jump to capacity fi in an infinitesimal time and therefore a node model should yield solutions that are invariant to replacing si with fi when flow from input link i is supply restrictions on a flow from any given input link arepimposed on class p components of this flow proporc c c c tionally to their demands mathematically fi j c fi j j sic c j sic this assumes that the classes are mixed isotropically this means that all vehicles attempting to take movement i j will be queued 
in roughly random order and not for example having all vehicles of commodity c queued in front of all vehicles of c in which case the c vehicles would be disproportionally affected by spillback we feel this is a reasonable assumption for situations where the demand at the node is dependent mainly on the vehicles near the end of the link in a small cell at the end in addition to the above numbered requirements two other elements are needed to define a node model the first is a rule for the portioning of output link supplies rj among the input links following in it was proposed to allocate supply for incoming flows proportionally to input link capacities which we will denote fi in this paper we allocate supply proportionally to the links priorities pi in the spirit of in the dynamic system view priorities represent the relative rate at which vehicles exit each link i to claim downstream space one reasonable formulation might be to follow the example pi fi if as in it assumed that vehicles exit a link at rate fi the second necessary element is a redistribution of leftover following the initial partitioning of supplies rj if one or more of the input links does not fill its allocated supply some rule must redistribute the difference to other input links who may still fill it this second element is meant to model the selfish behavior of drivers to take any space available and ties in closely with requirement above referred to these two elements collectively as a supply constraint interaction rule scir for some discussion of choices of scirs in recent papers see section in this paper we consider a scir of the form x x fi j sic wi i j c this requirement is encoded in the right part of the cartesian product in this requirement appears as a component of a cartesian product in the rectangles that appear in section however it is much more intuitive to understand this as an explicit temporal property as it appears in the dynamic system characterization so we will not discuss section s derivation here pi j rj i wi fi j pm j x c c where wi j j si wi c j fi j pi j j i where p c si j pij pc c pi c si is the oriented priority which distributes input priority proportionally to the actual vehicles using that p links c as before priority to claim downstream supply and fi j c fi j the set wi denotes all output links that restrict the flow from p cthe cconditions for membership in wi can be read as there is some nonzero demand for the movement i j c j si and i claims at least its priorityproportional allocation of j s supply j fi j pi j j i note that if a link j is in wi then j fi j pi j j by construction constraint says that if a link i is not able to fill its demand then there is at least one output link in wi that restricts i and that i s movements claim at least as much as their allocation of supply constraint captures the reallocation of leftover supply which states that a link i that can not fulfill all of its demand to the links in wi will continue to send vehicles after links j have fulfilled their demands to the j wi this concludes the setup of our generic node model problem a solution will have flows that are constrained by at least one of the constraints outlined above an algorithm to solve this problem and proof of optimality is given in other node model requirements note that the list of node requirements presented in section which is the particular node problem of interest for the remainder of this paper is not an exhaustive list of all reasonable node model requirements since the statement of the 
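The mathematical statements of several requirements above did not survive extraction. A hedged reconstruction of the core ones, written the way the GCNM literature usually states them (f_ij^c the flow of class c from i to j, S_i^c the demand, S_ij^c = β_ij^c S_i^c the directed demand, R_j the supply, p_i the priority), is:

```latex
\max_{f}\ \sum_{i,j,c} f_{ij}^{c}
\quad\text{s.t.}\quad
f_{ij}^{c} \ge 0,
\qquad
\sum_{j} f_{ij}^{c} \le S_i^{c},
\qquad
\sum_{i,c} f_{ij}^{c} \le R_j,
\qquad
f_{ij}^{c} = f_{ij}\,\frac{S_{ij}^{c}}{\sum_{c'} S_{ij}^{c'}},
\qquad
p_{ij} = p_i\,\frac{\sum_{c} S_{ij}^{c}}{\sum_{c} S_i^{c}} .
```

The last two relations are the class-proportionality requirement and the oriented priority that distributes link i's priority over its movements in proportion to directed demand.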
gcnm requirements in several authors have proposed extensions or modifications as we have in the partial fifo relaxation beyond what we have covered here one of the most discussed are nodal supply constraints these supply constraints as their name suggests describe supply limitations at the node rather than in one of the output links they are meant to describe restrictions on traffic that occur due to interference between flows in the junction rather than vehicles being blocked in the input link or the exhaustion of some shared resource such as green light time at a signalized intersection each movement through the node may or may not consume an amount of a node supply proportional to its throughflow the node supply constraints in the gcnm framework were originally proposed by in in it was noted that these node supplies may lead to solutions very recently revisited the node supply constraints mostly in the context of distribution of green time to address s critique of of solutions and proposed a generalization of the objective that still enforces that drivers will take any available space they can we do not explicitly include the node supply constraints in the dynamic system node models and resulting solution algorithms in this paper the path towards their inclusion in the and cases is straightforward but notationally cumbersome and somewhat beyond this paper s scope of fusing the gcnm and link models review of node dynamic system this section reviews the node dynamic system characterization of node models presented in this dynamic system is a hybrid system which means that it contains both continuous and discrete states also called discrete modes here the continuous states evolve in time according to differential equations the differential equations themselves change between discrete states and the discrete state transitions are activated when conditions on the continuous states are satisfied let there be n m c continuous states xci j t each representing the number of vehicles of class c that have taken movement i j through the node the continuous state space is denoted x let j be the set of all output links j let there be discrete states recall refers to the power set of j the index representing the set of downstream links that have become congested a downstream p p link j is said to become congested at time t if i c xcij t rj the discrete state space is denoted q init q x defines the set of permissible initial states of the system at t dom q x denotes the domain of a discrete state which is the space of permissible continuous states while the discrete state is active q x q x is a reset relation which defines the transitions between discrete states and the conditions for those transitions the hybrid system execution begins at time t p c when i j has a each link is given a time limit ti c sic this is necessary to ensure that xci j si j partial fifo constraint active which appears in the dynamic system as a flow rate attenuation our hybrid system q x init j dom is q x init j q xci j t c si j p p ij sc c i j j c j j c c xci si j c if xcij t sij and t ti otherwise xx x xci j rj c i dom xx xci j rj c i x x if xx xci j rj c i where j when for all i j c the execution is complete and fijc xcij it was shown in that the hybrid system produces the same solutions as s algorithm in the following section we show how to quickly compute executions of the hybrid system which since it is based on the continuoustime dynamics of presents a more intuitive algorithm than the one in execution of the node 
dynamic system as a simple algorithm evaluating or hybrid systems typically involves forward integration of the differential equation s with fixed or varying step sizes however in the case of evaluation can be performed in a much simpler manner this is due to the particular dynamics of the system since the dynamics and the condition for discrete mode switching are very simple the time that the next mode switch will occur can be found in closed form equations and say that a mode switch where link j enters will occur when xx xcij rj i c say we are currently at time combining with we can find the time that the mode switch occurs which we denote tj z tj x x xx dt rj xcij i c i c solving the integral in z tj z xx i tj j dt c xx i tj c x i then plugging into c si j i pi j p c j dt c si j j c c xci si j i j pij j c c xci si j p p rj i c xci j tj s i p p j j i i j j c c xci si j this value can be computed for each output link j then the j with the smallest tj will be the first link to fill and join we had used j for this output link so let tj min tj however one of the input links may have its time limit ti expire this would also change the dynamics as it stops sending vehicles at that time therefore evaluation of the system trajectory beginning from can be done by i evaluating for each output link ii identifying and iii checking whether any of the time limits ti occur before tj this is an simulation it is only necessary to determine when the next event will occur the equations for over min ti tj can then be evaluated in closed form under note that the j s for an i may change to zero from nonzero without a change in the discrete state if the c conditional of xci j t si j in is broken this can be understood as the i running out of vehicles that it is able to send this may happen if pi si for that i and some partial fifo constraint becomes active on i in the following algorithm we introduce a new set that was not present in the dynamic system definition and contains the i s that either exhaust their supply or have their time limits expire those i s whose j become zero without j necessarily entering these steps are summarized in algorithm this algorithm represents the completion of an argument began in review of flow modeling introduction the formulation of the gsom seen in has been called the advective form in this form the property w is advected with the vehicles at speed that is it is constant along trajectories this form makes the statement that the property w is a property of vehicles and is easy to understand conceptually however to apply a discretization it is useful to consider the total property and rewrite in conservative form where v v w we will review the relevant discretization using the godunov scheme of in the next section for a deeper analysis on the physical properties of see we make one note on constraints imposed on the form of v w in it has been stated that to apply the godunov discretization to one is restricted to choices of v w for which there is a unique for every v w and a unique w for every v that is v w must be invertible in both its arguments algorithm node model solution algorithm c input sic j rj pi j c output fi j c c ti si j fi j pi j j algorithm setup and initialization t begin main loop while i j c j do for all i j c i do c s i si j c j pi j p s c j compute dynamics c i j j c c xci si j end for dt algorithm for all i j c do c c fi j fi j j dt integrate forward in time end for c c for all i c si j fi j do i account for all emptied input links j j c end for for all j t 
dt tj do j account for all filled output links end for t t dt end while c return fi j algorithm setup and initialization for all i p do ti c sic for all j do for all c do c c si j j sic c fi j end for p p c c sic pi j pi c si j end for end for for all i sic do j c end for for all j rj do end for c c return ti si j fi j pi j j algorithm computing the time to integrate forward case for all j do p p c j compute the filling time for every output link tj t rj i c fi j end for for all i j i and j do c c ti j si j fi j j for any c by construction section item all c s fulfill their demands at the same time end for dt min ti j j tj j ti return dt godunov discretization of the gsom the godunov discretization of the lwr model first introduced as the cell transmission model is the godunov scheme discretizes a conservation law into small cells each cell has a constant value of the conserved quantity and fluxes are computed by solving riemann problems at each boundary the godunov scheme is a method so it is useful for simulating solutions to pdes with no or derivatives like the lwr formulation in the ctm the riemann problem is stated in the form of the demand and supply functions since is also a conservation law with no or derivatives the godunov scheme is applicable as well however due to the second pde for an intermediate state arises in the riemann problem and its solution this intermediate state has not always had a clear physical meaning and this lack of clarity likely inhibited the extension of the godunov discretization to the node case in our following outline of the discretized flow problem we make use of a physical interpretation of the intermediate state due to a final note in the node model we were able to ignore the demand and supply functions that generated the supplies rj and demands sic that is we were agnostic to the method by which they were computed and to the input and output link densities as they did not change during evaluation of the node problem as we will see shortly this is not the case for the flow problem due to the intermediate state and its interactions with the downstream link therefore our explanation below makes use of the demand and supply functions s w and r w respectively preliminaries in this paper we say that each vehicle class c has its own property value wc the net averaged over vehicle classes property of a link denoted is p c c w c p where c is the total density of link in the model the fundamental diagram of a link is a function of both net density and net property as defined above this carries over to the demand and supply functions in the godunov discretization that means that the supply and demand are defined at the link level with the net quantities and for an input link i vi if wi si s wi f wi if wi where wi is the critical density for property value wi and f wi is the capacity for property value wi the demand from is split among the classes and movements proportional to their densities and split ratios c j sic sic si c si j the oriented priorities pi j are computed according to as before computing supply solving for an output link s supply is a much more complicated problem we will begin our discussion with a review of the case sections the supply r of the output link in this case where i is the input link and j is the output link is f wi if wi rj r wi vm if wi we see that the supply of the downstream link is actually a function of the upstream link s vehicles property and the density and speed of some middle state m the middle state is given by wm wi v 
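A deliberately simplified Python sketch of the event-driven first-order node-model procedure summarized in the algorithm listings above: single vehicle class, full FIFO, with the partial-FIFO restriction intervals and the explicit time limits of the paper omitted for brevity. All names and tolerances are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def node_flows_first_order(S_dir, R, p, tol=1e-9):
    """Event-driven sketch of the first-order dynamic-system node model.

    S_dir[i, j] : directed demand of movement i -> j (single vehicle class)
    R[j]        : supply of output link j
    p[i]        : priority (sending rate) of input link i
    Returns f[i, j], the flow accepted for each movement (full FIFO assumed).
    """
    M, N = S_dir.shape
    x = np.zeros((M, N))              # vehicles having taken movement i -> j so far
    open_j = np.ones(N, dtype=bool)   # output links with remaining supply

    while True:
        # Piecewise-constant dynamics: link i sends at rate p[i], split over its
        # remaining movements in proportion to directed demand.  Under full FIFO a
        # link with unserved demand towards a filled output is blocked entirely.
        rates = np.zeros((M, N))
        for i in range(M):
            remaining = S_dir[i] - x[i]
            if remaining.sum() <= tol:
                continue                          # demand exhausted: link i is done
            if np.any((remaining > tol) & ~open_j):
                continue                          # blocked by a queue spilling back
            rates[i] = p[i] * remaining / remaining.sum()

        if rates.sum() <= tol:
            break                                 # nothing can move: execution complete

        # Closed-form time to the next event: an output link fills, or a movement
        # exhausts its demand.  No numerical integration is needed.
        dts = []
        inflow = rates.sum(axis=0)
        for j in range(N):
            if open_j[j] and inflow[j] > tol:
                dts.append((R[j] - x[:, j].sum()) / inflow[j])
        nz = rates > tol
        dts.extend(((S_dir - x)[nz] / rates[nz]).tolist())
        dt = max(min(dts), 0.0)

        x += rates * dt                           # integrate forward to the event

        # Discrete transition: close any output link whose supply is now exhausted.
        open_j &= x.sum(axis=0) < R - tol
    return x

# Example: a 2-input / 2-output node in which output link 1 spills back.
S_dir = np.array([[300.0, 300.0], [100.0, 500.0]])
R = np.array([1000.0, 400.0])
p = np.array([600.0, 600.0])
print(node_flows_first_order(S_dir, R, p))
```

Because the continuous dynamics are constant between events, each iteration only needs the closed-form event time; this is the computational advantage the section argues for over naive forward integration of the hybrid system.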
wi if v wi vj vm vj otherwise vm v wm where vj v wj is the velocity of the downstream link s vehicles and v is the velocity function as given by the fundamental diagram in the intuition behind the meaning of the middle state is given as follows the middle state vehicles are actually those that are leaving the upstream link i and entering the downstream link j as they leave i and enter j they clearly carry their own property but their velocity is by the velocity at which that the downstream vehicles exit link j and free up the space that the vehicles enter the middle density and therefore the downstream supply r is then determined by both the upstream vehicles characteristics wi and the downstream link s flow characteristics through vj in other words the number of vehicles that can fit into whatever space is freed up in the downstream link is a function of the drivers willingness to pack together defined by wi since the meaning of supply rj is the number of vehicles that j can accept this means that rj is dependent on wi note that is also the equation by which congestion spills back from j to i if j is highly congested then vj will be low this then makes large in which in turn leads to a small rj in now that we have reviewed the case we can consider how to generalize this to a node when we determine supply for several links node model case we saw that the reasoning behind the dependence of rj on wi was that the spacing tendencies of i s vehicles determine the number of vehicles that can fit in j therefore in generalizing to a node it makes sense to define a link j s middle state as being dependent on the vehicles actually entering link j that is if wj the w just upstream of j is the middle state of link j then we say p p c i c wi j p wj p c i c j the middle state velocity and density vj and are then v wj if v wj vj vj vj otherwise vj v wj and the supply rj is rj r wj f wj if wj vj if wj note that in we defined wj as a function of recall from the node model that the s can change as i upstream links i exhaust their demand or ii downstream links j run out of supply these two events correspond to discrete state changes in our hybrid system this of course carries over to the node model this means that the j quantities and thus the supply rj change as s change therefore at each discrete state transition we need to determine the new supply for each output link j for the new mixture of vehicles that will be entering j in the next discrete state we will explain how this is done through the following example suppose that at time we compute some wj vj and rj with then at time one of the j for that j changes at that point we recompute and wj wj x c p x c x lj i ij wc c where lj is the length of j then we recompute all the middle state variables and rj using critically note that in this recomputation the new vj at is vj v wj this means that vj will also be different than vj this will carry through to create a rj that is different from rj and takes into account both the vehicles that have moved into j between and and the difference in properties wj and wj note that if wj leads to significantly tighter packing smaller spacing than wj it is conceivable that we will have rj rj especially if is not that much smaller than of course the description above assumes isotropic mixing of all vehicle classes in the link j recall we stated this assumption for input links i in item of the gcnm requirements in section unlike supply demand does not need to be recomputed since we assume the mixture of vehicles demanding 
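A minimal sketch of the supply computation described above, under an assumed Greenshields-type fundamental diagram in which the property w plays the role of a free-flow speed. The functional forms, the jam-density constant, and all function names are illustrative assumptions rather than the paper's model:

```python
RHO_MAX = 0.12          # jam density (veh/m), illustrative

def speed(rho, w):
    """Greenshields-style GSOM member: w acts as the free-flow speed."""
    return w * (1.0 - rho / RHO_MAX)

def rho_from_speed(v, w):
    """Invert V(rho, w) = v for rho (congested branch)."""
    return RHO_MAX * (1.0 - v / w)

def capacity(w):
    """Maximum flow for property w, attained at the critical density RHO_MAX / 2."""
    return w * RHO_MAX / 4.0

def downstream_supply(rho_j, w_j, w_bar_j):
    """Supply of output link j for an incoming mixture with net property w_bar_j.

    Mirrors the middle-state construction: the entering vehicles keep their own
    property w_bar_j but travel at the speed v_j at which the downstream vehicles
    free up space.
    """
    v_j = speed(rho_j, w_j)                   # speed of the downstream vehicles
    v_crit = speed(RHO_MAX / 2.0, w_bar_j)    # critical speed for the entering mixture
    if v_j >= v_crit:
        return capacity(w_bar_j)              # middle state uncongested: capacity flow
    rho_m = rho_from_speed(v_j, w_bar_j)      # congested middle state
    return rho_m * v_j
```

The point the sketch captures is the dependence of R_j on the entering mixture: the same downstream state (ρ_j, w_j) yields a different supply when the net property w̄_j of the vehicles about to enter changes.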
each movement remains the same due to our isotropic mixture assumption in summary we state the generalization of the gcnm requirements as the same as the requirements p stated in section p c with the addition of a constraint enforcing the conservation of property via the c w and the modification of the supply constraint such that the supply is computed wc j fi j second pde i fi j from the fundamental diagram using the property of its incoming flows this second point where the supply constraint is also dependent on the flow solution only worsens the nonconvexity of the node problem indeed we are drifting away from a setting where the makes the most sense and it may be more helpful to understanding to consider the physical dynamics encoded by the solution methods in any case we now have all the ingredients necessary to extend our hybrid system node model to the formulation dynamic system definition we state the node dynamic system as an extension to the one presented in most of the symbols remain the same however we make a few changes let where i is the set of all input links i be the set of all exhausted input links this set was introduced in the algorithm in section this is necessary to state the recalculations of supply according to the steps in section when a link exhausts its demand and the net property of a j changes paralleling j let denote an exhausted input link an input link is said to be exhausted at time t if c si j xci j t that the formula for the time of demand exhaustion remains the same as in the case ti c sic to accommodate recomputing of supply using we will add more continuous states the n quantities j which will denote the flow of movement i j for class c for movement c since the last time that supplies have been recalculated and the m c densities of the output links this is necessary because following the new supplies rj for the new wj will also take into account the vehicles xci j that have already made the movement so when determining when a link j is filled with its new supply we will need a fresh counter of vehicles that have entered it we assume we have the initial for all j our hybrid system q x init j ci j dom is q x c j c xi j t j c init q j t c t c c si j p p ij c c si j if i i c j j j c c xci si j otherwise ci j j x j i c x t ti c xci j si j c t ti c xci j si j xx q j rj dom c i xx q j rj i c x x p p c if i c j rj where j if t ti c c xci j si j where i c c c where i j j xi j p c c q c w and rj from with wj p c c when for all i j c the execution is complete and fijc xcij unsurprisingly the dynamic system is more complicated than the one the reader will note that the discrete dynamics as discussed before are triggered by links j filling and links emptying the filling of a j and its entering into remains the same as the system the emptying or of input links rather than being encoded in the continuous dynamics as was done in the system s is now in the discrete dynamics in while it was possible to reduce the number of discrete states in the system by including in the continuous dynamics in the second order system any change in the continuous dynamics changes the output links wj so all continuous dynamics changes must trigger a recomputation of rj which in we do when or change thankfully although the system seems much more complex than the system the secondorder solution algorithm is not that much more complicated than the solution method of the system we will see why in the next section solution algorithm note that just as in the system the second order system has 
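A small companion sketch of the recomputation step that the discrete transitions above trigger: the vehicles that entered output link j since the last event are blended into its density and net property. The bookkeeping and names are illustrative, and the formula is our reading of the flow-weighted average described in the previous section:

```python
def update_downstream_state(rho_j, w_j, L_j, xi, w_class):
    """Fold the vehicles that entered link j since the last event into its state.

    rho_j, w_j : current density and net property of output link j
    L_j        : length of link j
    xi[i][c]   : vehicles of class c that took movement i -> j since the last recomputation
    w_class[c] : property value of vehicle class c
    """
    entered = sum(sum(row) for row in xi)                  # total vehicles entered
    entered_w = sum(x * w_class[c] for row in xi
                    for c, x in enumerate(row))            # their property mass
    n_old = rho_j * L_j                                    # vehicles already in j
    rho_new = (n_old + entered) / L_j
    w_new = (entered_w + n_old * w_j) / (entered + n_old)
    return rho_new, w_new
```

The returned (rho_new, w_new) feed directly into the supply recomputation, which is why every change in the continuous dynamics must trigger a fresh evaluation of R_j in the second-order system.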
constant continuous dynamics in each discrete state this means that just as in the case we can easily compute the time that the next discrete state transition occurs like in section pthis is the smallest of the tj s and ti s as we said the input link time limits remain the same as before ti c sic the time that an output link runs out of supply and is filled under the discrete state if is the time that the discrete state switched to and j s supply was recomputed is similar to q rj tj p p p i j q c c j r s i p p i j j j j c c xci si j q but differs in two key ways first the term for supply is the recomputed rj from this also accounts for why the numerator in does not have a subtracted quantity as in as that subtraction of supply is accounted for in the recomputed supply second is that the denominator is summed over i rather than all i as the set is not in the definition of the dynamic system as stated in section we now state solution algorithm for the dynamic system it follows the same logic as the case identifying the next ti or tj to occur finding the constant dynamics that the system will evolve under until that time integrating forward in time a new step of recomputing supply and repeating algorithm node model solution algorithm c pi j rj rj is only the initial rj and will be input sic wc j c output fi j c c ti si j fi j pi j j wi j algorithm t begin main loop while i j c j do for all i j c i do c s i si j c j compute dynamics j pi j p s c c i j j c c xci si j end for dt algorithm for all i j c do c c fi j fi j j dt c c j j dt end for wj algorithm c c for all i c si j fi j do i account for all emptied input links j j c end for for all j t dt tj do j account for all filled output links end for t t dt end while c return fi j algorithm setup and initialization case c c ti si j fi j pi j j algorithm for all ipdo c c w wi i end for for all i j c do j end for c c return ti si j fi j pi j j wi j algorithm computing the time to integrate forward case for all j do rj algorithm end for for all j do tj t rj j compute the filling time for every output link end for for all i j i and j do c c ti j si j fi j j for any c end for dt min ti j j tj j ti return dt algorithm computation of supply wj p p c i p c wi j p c i c j if v wj vj then vj v wj else vj vj end if vj v wj if wj then rj r wj else rj f wj end if return rj algorithm recomputing the downstream links density and property p i xcij p c pc w c wj j return wj extension of the gcnm requirements in solving the node problem the fact that the supply must be continually recalculated can be interpreted as indicating that the use of the supply and demand quantities is not as natural as in the case we see that demand s w and supply r w alone not including the and w is not enough to solve the node problem in the case the link and w quantities are required this is not unnatural the node problem is after all a riemann problem to resolve discontinuities in and in the case the node problem is often stated in terms of supply and demand instead of the actual conserved quantity because i they have a more intuitive physical meaning and ii since link densities are not needed beyond their use in s and r for the problem beginning with s and r simplifies the problem by removing one step however we have seen that using the framework in the case does not simplify the problem along the lines of ii as we still need to make use of and therefore in the future it may make more sense to state the node problem as taking inputs of and w for all links rather than its inputs 
being s and r that would remove the unintuitive nature of needing to recompute conclusion this paper presented a generalization of the generic class of node model macroscopic traffic junction models to the general second order model flow model this paper s results allow the extension of macroscopic modeling of flows based on different mixtures of driving behavior to complex general networks many of these flows and networks had been only able to be modeled by microscopic models that consider the behavioral variability on a level but macroscopic models that can capture the aggregate features as a more granular model can greatly increase the scale of problems that we are able to study as stated before the flow models have been used to represent flows of great contemporary interest such as mixtures of and autonomous vehicles researchers and practitioners will need to use every tool available to understand and predict the changes that will arise from the traffic demand changing not just in size but in characteristics some immediate avenues for future refinement of macroscopic models presented themselves during this paper as mentioned in section we do not address node supply constraints in this paper s node models however the immediate application of a general node model macroscopic simulation of traffic on complex networks is of particular concern in scheduling problems involving green light timing future work then should incorporate the node supply constraints into the general node problem so that they may be used in signal optimization and the potential that connected and automated vehicles bring to traffic control references aw and rascle resurrection of second order models of traffc flow siam journal on applied mathematics corthout viti and flows in macroscopic intersection models transportation research part b methodological mar daganzo the cell transmission model a dynamic representation of highway traffic consistent with the hydrodynamic theory transportation research part b methodological daganzo the cell transmission model part ii network traffic transportation research part b methodological fan y sun piccoli seibold and b work a collapsed generalized model and its model accuracy arxiv preprint and rohde operational macroscopic modeling of complex urban road intersections transportation research part b methodological july gentile meschini and papola spillback congestion in dynamic traffic assignment a macroscopic flow model with bottlenecks transportation research part b methodological jabari node modeling for congested urban road networks transportation research part b methodological lebacque and khoshyaran macroscopic traffic flow models intersection modeling network modeling in the international symposium on transportation and traffic theory isttt pages lebacque mammar and the and zhang s model vacuum problems existence and regularity of the solutions of the riemann problem transportation research part b methodological lebacque mammar and salem generic second order traffic flow modelling in transportation and traffic theory papers selected for presentation at pages lighthill and whitham on kinematic waves i flow movement in long rivers ii a theory of traffic flow on long crowded roads proc royal society of london part a ni and leonard a simplified kinematic wave model at a merge bottleneck applied mathematical modelling richards shock waves on the highway operations research smits bliemer pel and van arem a family of macroscopic node models transportation research part b 
methodological apr corthout cattrysse and immers a generic class of first order node models for dynamic macroscopic simulation of traffic flows transportation research part b wang li and b work comparing traffic state estimators for mixed human and automated traffic flows transportation research part c emerging technologies may wright horowitz and a kurzhanskiy a dynamic system characterization of road network node models in proceedings of the ifac symposium on nonlinear control systems volume pages august wright gomes horowitz and a kurzhanskiy on node and route choice models for highdimensional road networks submitted to transportation research part b zhang a traffic model devoid of behavior transportation research part b methodological
from the equation to the inverse rfec sensor model sep raphael falque teresa gamini dissanayake and jaime valls miro university of technology sydney australia emails this paper we tackle the direct and inverse problems for the rfec technology the direct problem is the sensor model where given the geometry the measurements are obtained conversely the inverse problem is where the geometry needs to be estimated given the field measurements these problems are particularly important in the field of testing ndt because they allow assessing the quality of the structure monitored we solve the direct problem in a parametric fashion using least absolute shrinkage and selection operation lasso the proposed inverse model uses the parameters from the direct model to recover the thickness using least squares producing the optimal solution given the direct model this study is restricted to the axisymmetric scenario both direct and inverse models are validated using a finite element analysis fea environment with realistic pipe profiles keywords remote field eddy current rfec direct problem inverse problem non destructive evaluation nde i ntroduction the remote field eddy current rfec technology allows inspection of ferromagnetic pipelines tools based on this technology are usually composed of an exciter coil and one or several receivers the exciter coil driven by a lowfrequency alternative current generates an electromagnetic field that flows outside the pipe near the exciter coil and flows back inward the pipe at a remote area as shown in fig a the receivers are located in the remote part and record the magnetic field as shown in the figure the magnetic field passes twice the pipe s wall this phenomenon is commonly referred as the double through wall in the literature when the magnetic field flows through a ferromagnetic medium the pipe the amplitude of the magnetic field is attenuated and the phase is delayed due to the double through wall penetration the magnetic field recorded by the receiver has been modified by different areas of the pipe when it flows outward the pipe near the exciter coil and when it flows backwards the pipe in the remote area hence inferring the geometry of the pipe from the signal information is a challenging task since a single measurement is correlated with different areas of the geometry inferring the pipe s geometry from the tool signal corresponds to solving the inverse problem of the rfec this problem has been studied in the literature for the axisymmetrical case of a perfect pipe with a single crack the problem is then formulated as recovering the shape size and fig representation of the rfec phenomenon from the global phenomenon a we propose a parametric direct model that consider independently the flow of the magnetic field in the air b and the local attenuation due to the magnetic field flowing through the pipe thickness c width of the single defect these approaches solve the problem using techniques and bypass the problem of recovering the full pipe s geometry these solutions fit the case of steel material where most pipe bursts are due to cracks in the case of pipes the material is more sensitive to corrosion hence the geometry of the pipe has a more organic shape rather than a single isolated crack therefore for castiron pipes recovering the full pipe s geometry is critical some other approaches from the literature consist of modifying the tool design the use of several receivers located at different axial locations from the exciter coil allows using redundancy of 
the information provided by passing through the same location to recover the full pipe s geometry however this approach leads to longer tools and require more electrical power to operate the multiple sensors exciter coils due to the nature of the rfec tools the mobility and the battery consumption have to be optimised particularly for this work to allow simple hardware design we consider the case of an elementary rfec tool composed of a single exciter coil and a single receiver the aim of this paper is to obtain an inverse sensor model of the rfec phenomenon which given a set of continuous magnetic field measurements allows to recover the full pipe s geometry for a axisymmetric scenario the remainder of the paper is organised as follows in sec ii we give conceptual ideas about the behaviour of the magnetic field we then propose a direct model solved using least absolute shrinkage and selection operation lasso from the direct model we derive an inverse model formulated in a form the dataset generated with finite element analysis fea and the experimental results are given in sec iii we finally discuss the performance and limitations of the proposed model in sec iv ii m odelling of the rfec phenomenon the direct problem of the rfec phenomenon consists of mapping the pipe s geometry to the sensor measurement through a sensor model conversely the inverse or indirect problem consists of finding the model that maps the sensor measurements into the pipe s geometry the main goal is to solve the inverse problem however solving the direct problem provides qualitative and quantitative information on the form of the inverse model before to consider the direct and inverse problem we discuss some insight of the rfec technology particular attention is dedicated to understanding how the geometry near to the exciter coil impacts the sensor measurements qualitative descriptions of the overall rfec phenomenon have been broadly studied and in depth descriptions are available in the literature a background information as shown in it is possible to consider a defect in the pipe s geometry as an anomalous source model the defect is then replaced by an independent source of magnetic field superposed to the pipe see fig in knowing that the magnetic field gets attenuated while travelling through a ferromagnetic medium the idea is to replace the lack of attenuation from the defect by a source of magnetic field superposed to a perfect pipe following the same idea one could consider the pipe s thickness as an attenuation of the signal let us consider a pipe with an organic geometry a corroded pipe defined by a piecewise constant profile as shown in fig each piece is then considered as a local source of attenuation we then dissociate the global rfec phenomenon shown in fig a in two part i the attenuation due to the magnetic field flowing in the air and ii the local attenuation due to the magnetic field flowing through the pipe the former one is shown in fig b and the latter in fig c the global attenuation of the magnetic field propagating in the air i mostly due to field radiating from the coil is a constant term for a given excitation and global geometry the definition of this value is however complex since it involves many parameters dimensions and excitation of the coil diameter of the pipe distance between the exciter and the receiver the local interaction of the electromagnetic wave with the pipe ii can be described as a plane wave propagating through a homogeneous isotropic and conductive medium the pipe this 
phenomenon can be described by deriving the skin depth equation from the maxwell equations and can be written as follow s t e b t e z z phase contribution amplitude with b the magnetic field the initial value of the magnetic field the frequency the magnetic permeability of the medium the electrical conductivity and t the distance travelled by the wave the amplitude and the are usually the measurements recorded by the rfec tools since they have a or linear relationship with the thickness of the conductive medium r t r t ln b local ln in this paper we model the direct and inverse problem uniquely with the amplitude however a similar study could be done with the b direct problem we now consider the direct problem which consists of finding a function h such that h t y where y is the sensor measurements and t a set of thickness values that describe the pipe s geometry around the rfec tool let us first consider the case of a single measurement using the wave superposition principle we can then add i and ii as follow k x y wi ti with the constant term described in i ti the ith thicknesses of the pipe piece the pipe s geometry is approximated as a piecewise constant profile a shown in fig p a and wi the unknown parameters that embeds both from ii and a location weight since this approach is an approximation of the actual phenomenon we consider the noise contribution that contains both the actual sensor noise and an unmodelled given enough independent measurements the optimal values for the weights can be found using a least square formulation let us now consider a set of m measurements where each measurement is associated to k local average thicknesses that are regularly spaced over the length of the tool this can be seen as moving the tool within the pipe simultaneously to gathering pipe thickness information in a sliding window the sliding window approximates the geometry as a piecewiseconstant profile as describe in fig a we then formulate eq in a matrix form to combine the m set of measurement and thickness values together y t w the constant term from eq is unknown it depends on the excitation the number of turn in the coil the electromagnetic properties of the air and the distance between the exciter and the sensor it is however possible to estimate from the measurements therefore we include it into the vector of the model parameters w which is defined as w wk t is the matrix that contains the local average thickness information t tmk and y the vector with all the sensor measurements y ym in order to select parameters that reflects the attenuation of the magnetic field through its path there is a need for an optimisation method that sets the weights of the thicknesses to zero it can be obtained by learning the model parameters with lasso using this parameter selection also allows avoiding the irrelevant parameter that would be performed by a closed form solution more formally lasso corresponds to the least square formulation with regularisation as min k t y k with the regularisation parameter which is learned with an iterative process finally the direct problem is solved by estimating as z t h t with the proposed model h t inverse problem after estimating the parameters of the direct model we now consider the inverse problem more formally we want to find the inverse function such that y due to the wall phenomenon h can not be simply inverted as the geometry under the exciter coil and the receiver are convoluted in the measurements instead having the direct problem expressed as a linear model 
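The skin-depth and direct-model equations in this section were garbled in extraction. A hedged reconstruction of the standard forms consistent with the surrounding definitions (B the magnetic field, B_0 its initial value, f the excitation frequency, μ the permeability, σ the conductivity, t the thickness traversed; the constant air-path term is written β_0 here as our own notation) is:

```latex
B = B_0\, e^{-t/\delta}\, e^{-j\, t/\delta},
\qquad
\delta = \sqrt{\frac{2}{\omega\mu\sigma}} = \frac{1}{\sqrt{\pi f \mu \sigma}},

\ln\lvert B\rvert = \ln B_0 - \sqrt{\pi f \mu\sigma}\; t,
\qquad
\angle B = -\sqrt{\pi f \mu \sigma}\; t,

y = \beta_0 + \sum_{k=1}^{K} w_k\, t_k + \epsilon,
\qquad
\hat{\mathbf{w}} = \arg\min_{\mathbf{w}}\ \lVert \mathbf{T}\mathbf{w} - \mathbf{y}\rVert_2^2 + \lambda\lVert\mathbf{w}\rVert_1 .
```

Both the log-amplitude and the phase are linear in the thickness traversed, which is what justifies fitting the linear direct model with LASSO.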
allows formulating the inverse problem in a closed form solution which can be obtained with least squares we consider here solving the inverse problem for a long pipe section as one system recovering the thickness of the full pipe at the same time to solve the optimisation problem through least squares the degree of freedom which is equal to the number of equations minus the number of parameters of the system has to be positive or null as a rule of thumb to avoid the degree of freedom should be superior to ten let us consider the inspection of a long pipe section using a rfec tool during the inspection a set of m discrete measurements are collected at regular intervals along the pipe we approximate the pipeline geometry as a profile with n steps of average thickness n is chosen to be ten times smaller than we then eq so it can be formulated as a global optimisation problem where all the sensor measurements y are related to all the piecewise thicknesses ti as y w with y and defined in eq and is the set of the all the thickness estimates for each value of the piecewiseconstant pipeline profile defined as tn w is an m n matrix that contains the relationship between thickness values and sensor measurements and is defined by the parameters learned from the direct model in practice each line of w contains the weights w for the local thickness values and is set to for the others thickness values since there are multiple measurements between the ith and i th values spatial weights ai and bi are used to define the influence of the piece proximity as w wk bj aj bj wk wk am wk fig representation of the axisymmetric simulation the air box present all around the pipe the exciter coil as a rectangular cross copper coil the receiver simplified as a point measurement and the pipe with ai and bi defined as follow bi ai bi ai ai bi where ai is the distance from the point measurement to the centre of the ith step and bi the distance from the point measurement to the centre of the th step we then obtain the thickness estimates t by solving the linear least squares in closed form w t w w t y z which is simplified to a point measurement it could simulate a hall effect sensor and the pipe that has its geometry defined from pipe segments extracted from the decommissioned pipeline a schematic of the global system is shown in fig with the thickness gains corresponding to bell and spigot b s joints that link pipe segments together all medium are approximated as homogeneous and isotropic the air and copper material properties are defined using materials from the comsol library to get a realistic axisymmetric modelisation of the pipe the pipe s magnetic properties are obtained by analysing a pipe sample with a superconducting quantum interference device squid we then have both the geometry and the material properties that come from a real pipeline the material properties used in the model are displayed in tab i the conductivity of the air is set to a value to avoid computational singularities the stability of the simulation has been validated for different meshing sizes air box sizes and other parameters the meshing size is defined according to the wavelength of the magnetic field in each material at least five times smaller than the wavelength with defined as r y iii r esults fea simulations with a geometry have been used to validate the proposed methods in a controlled environment we look here at the performance of both the direct and inverse model applied to a long pipe section with a known geometry note that 
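To make the two-stage procedure concrete, a self-contained Python sketch of both stages on synthetic data, using scikit-learn's Lasso for the direct model and a least-squares solve for the inverse. The geometry, weights, noise level, and the simplified placement of the learned weights onto profile pieces (the paper additionally interpolates between neighbouring pieces with the weights a_i, b_i) are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy axisymmetric pipe: thickness on a fine grid, one value per scan position.
G = 400
z = np.arange(G)
profile_fine = 8.0 - 2.5 * np.exp(-0.5 * ((z - 250) / 25.0) ** 2)   # mm, corrosion patch

# RFEC tool: K thickness slots spaced s cells apart between exciter and receiver.
K, s = 9, 5
span = s * (K - 1)
M = G - span                                                         # measurements
T_win = np.stack([profile_fine[m:m + span + 1:s] for m in range(M)]) # (M, K) windows

# Synthetic log-amplitude data: attenuation dominated by the wall under the exciter
# (slot 0) and under the receiver (slot K-1), plus an air-path constant and noise.
w_true = np.zeros(K)
w_true[0], w_true[-1] = -0.25, -0.35
y = -1.0 + T_win @ w_true + rng.normal(0.0, 0.005, size=M)

# --- Direct model: LASSO selects which window slots actually attenuate the field. ---
direct = Lasso(alpha=1e-4, fit_intercept=True).fit(T_win, y)
w_hat, b0_hat = direct.coef_, direct.intercept_

# --- Inverse model: recover a coarse piecewise-constant profile (N pieces, N << M). ---
N = 20
piece_of = np.minimum(z // (G // N), N - 1)           # fine cell -> coarse piece index
W = np.zeros((M, N))
for m in range(M):
    for k in range(K):
        W[m, piece_of[m + k * s]] += w_hat[k]         # place each learned weight on the
                                                      # piece under that slot
t_hat, *_ = np.linalg.lstsq(W, y - b0_hat, rcond=None)

piece_true = np.array([profile_fine[piece_of == n].mean() for n in range(N)])
print(np.round(np.abs(t_hat - piece_true).max(), 2))  # max error vs. piecewise truth
```

With many more measurements than profile pieces the normal equations are well posed, which is the degrees-of-freedom argument made above; the least-squares solve then yields the optimal profile estimate given the fitted direct model.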
although the validation has been done for a scenario the proposed models can be adapted for any rfec axisymmetric tool fea environment this section describes how the data for validation was obtained in the context of our particular research project that motivates this paper we used data from a pipeline which has been decommissioned and is currently dedicated for research purposes this particular pipeline was laid more than hundred years ago and some parts of the pipe are significantly corroded some pipes section have been exhumed and analysed the material properties of the have been measured and the corrosion s profile have been captured with a laser scanner using the process described in we generated a m long profile based on the geometry of exhumed pipe segments once incorporated into a fea simulation environment this realistic profile has provided sufficient data for validation the fea used here is done using comsol multiphysics in a scenario the fea geometry is composed of four different components the air box defining the limits of the fea scenario the exciter coil which is modelled as a rectangular cross copper coil the receiver using eq with the magnetic properties of each material we can define the minimum size for the meshing at each part of the scenario the minimum size of each element in the meshing is given in tab i the pipeline s inspection has been simulated using a parameter sweep for the position of the rfec tool within the pipe for the length the amplitude and the of the electromagnetic phase have been recorded for each position of the parameter sweep b application of the direct model we now consider the direct problem applied to a dataset generated from the fea environment described previously the aim here is to learn the parameters defined in eq as shown in fig note that to make it more realistic the simulated thickness profile contains b s joints the thickness of the b s joints are much larger than the other parts of the pipe hence due to the linear nature of proposed model these data that relate to the b s are expected to perform poorly we solve the direct model for three datasets a the first dataset include the complete set of data b the data with a b s joint located near the receiver have been removed in the second dataset and c the data near both the exciter and receiver have been removed in the third dataset table i properties of each material material air copper coil l m a b c fig due to the presence of the b s joints inducing a sort of in the data the model described in eq is not longer valid therefore we remove data where the b s joint has an impact on the exciter coil a and where it has an impact on the receivers b the model learned from the filtered data is shown in c we compared the estimated and the actual sensor measurement y in fig with each dedicated to each dataset in fig a we set the colour information to reflect the impact of the b s joints located near the receiver the yellow points are more influenced by the b s in fig a the blue points represent the estimation with the b s located on top of the receiver the third dataset shown in fig c shows a better regression since the simulations are done in a controlled environment the locations of the b s joints are known thus removing these particular data is a trivial task in the case of an unknown environment one could classify the construction features of the pipeline which can be done using a support vector machine svm classifier such as in an alternative would consist of automating the data selection with 
methods such as peirce s or chauvenet s criterion the parameter from eq is chosen using crossvalidation the estimated parameters w and measurements fig evolution of the mean square error mse versus the value of the parameter using the indicated in blue corresponds to the sparsest solution within one standard error of the mse it is the chosen one of the goodness of fitting mean square error mse and the coefficient of determination are available in tab ii as expected the constant is a positive term and the attenuation coefficients are negative terms moreover we can see that the geometry near to the receiver and near to the exciter coil have a more important role which is reflected by higher weights application of the inverse problem after solving the direct problem all the parameters required for the inverse problem are known we consider recovering the metres of pipe thickness as a global problem and the full geometry is recovered from the set of all the measurement using the formulation established in eq the inverse problem relies on the parameters learnt for the direct problem in the case where all the parameters from the rfec tool and the magnetic properties of the pipe specimen are known it is possible to obtain direct and inverse models through fea simulation otherwise multiple thickness measurements have to be collected from the studied pipe these thickness measurements at specific locations are needed to learn the parameters in practice collecting such measurements is a feasible task considering the few parameters that are present in the proposed model a of the pipe profile reconstructed is shown in fig the estimation is shown in blue and the ground truth is shown in orange the spikes in the correspond to the b s joints as predicted the proposed inverse model can not recover these thicknesses due to the nonlinear behaviour of the magnetic field in these regions the estimation error for the thickness of the pipe is of for the mse and for rmse with the average thickness of the pipe being around if we remove the areas with the b s joints the rmse falls to table ii output of the least absolute shrinkage and selection operation the localised increase of thickness b s joints lead to the spread the weights this is visible by comparing the lines of the table coef dataset a dataset b dataset c cst exciter iv d iscussion in this paper we tackle the direct and inverse problems for a rfec tool composed of a single exciter coil and a single receiver we have shown using fea that both direct and inverse model are accurate for recovering pipe sections with organic geometry which is often the case for corroded pipes the fea model used to generate the dataset is based on a realistic geometry and material properties obtained from old pipes the proposed direct model is solved using lasso the allows selecting automatically the important thickness areas for the model while reducing the number of parameters this result into a simplistic model with most important thicknesses located next to the exciter coil and the receiver the inverse problem relies on the parameters from the direct problem and is solved using least squares for training the proposed inverse model thickness measurements have to be collected from the pipe in practice collecting such measurements is a feasible task considering the few parameters of the proposed model the main limitation of the proposed method lies in the form of the proposed model the linear model allows solving the inverse problem in a the model gives accurate results apart 
for the b s joints for these extremely thick thicknesses the magnetic field would flow through the path of least resistance which can not be captured by a linear model furthermore to the outstanding thicknesses the magnetic properties are considered constant for the full pipeline in practice pipes can have a variation of magnetic properties this case is not studied here in future work we are planning to apply this method for a tool with a sensor array case it can be shown that thickness mm estimation receiver distance m fig thickness estimated in the estimation is shown in blue and the in orange goodness of fit mse the attenuation from the exciter behaves as a circumferential offset therefore it is possible to deconvolute the signal in a similar fashion acknowledgment this publication is an outcome from the critical pipes project funded by sydney water corporation water research foundation of the usa melbourne water water corporation wa uk water industry research ltd south australia water corporation south east water hunter water corporation city west water monash university university of technology sydney and university of newcastle the research partners are monash university lead university of technology sydney and university of newcastle r eferences atherton remote field eddy current inspection ieee transactions on magnetics vol no pp davoust brusquet and fleury robust estimation of flaw dimensions using remote field eddy current inspection measurement science and technology vol no pp nov davoust brusquet and fleury robust estimation of hidden corrosion parameters using an eddy current technique journal of nondestructive evaluation vol no pp jun tao zhang wang and luo design on forward modeling of rfec inspection for cracks proceedings international conference on information science electronics and electrical engineering iseee vol pp cardelli esposito and raugi electromagnetic analysis of rfec differential probes ieee transactions on magnetics vol no pp skarlatos pichenot lesselier lambert and electromagnetic modeling of a damaged ferromagnetic metal tube by a volume integral equation formulation ieee transactions on magnetics vol no pp lord sun udpa and nath a finite element study of the remote field eddy current phenomen ieee transactions on magnetics vol no pp sun cooley han udpa and lord efforts towards gaining a better understanding of the remote field eddy current phenomenon and expanding its applications ieee transactions on magnetics vol no pp may tibshirani regression selection and shrinkage via the lasso journal of the royal statistical society vol no pp skinner valls miro bruijn and falque point cloud upsampling for accurate reconstruction of dense thickness maps point cloud acquisition in australasian conference on robotics and automation acra miro and mart automatic detection and verification of pipeline construction elements with data in iros peirce criterion for the rejection of doubtful observations the astronomical journal vol no pp william a manual of spherical and practical astronomy philadelphia j lippincott london trubner falque valls miro lingnau and russell background segmentation to enhance remote field eddy current signals in australasian conference on robotics and automation acra pp
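The direct and inverse models described above are simple enough to sketch in a few lines. The snippets below are our own illustrations, not the authors' code: all data, sizes, material values and variable names are placeholders, and scikit-learn/NumPy are used in place of whatever tooling the authors employed.

A minimal sketch of the direct model, fitting y ≈ θ0 + Σi wi ti with an l1 penalty (lasso) and choosing the regularisation parameter by cross-validation with the one-standard-error rule mentioned above:

```python
# Illustrative sketch (not the authors' code); T, y below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
M, K = 500, 12                        # M tool positions, K local thicknesses per window
T = rng.uniform(5.0, 12.0, (M, K))    # sliding-window thickness values (synthetic)
w_true = -np.exp(-np.abs(np.arange(K) - K // 2))
y = 3.0 + T @ w_true + 0.01 * rng.standard_normal(M)

def pick_lambda_one_se(T, y, lambdas, cv=5):
    stats = []
    for lam in lambdas:
        mse = -cross_val_score(Lasso(alpha=lam), T, y,
                               scoring="neg_mean_squared_error", cv=cv)
        stats.append((lam, mse.mean(), mse.std() / np.sqrt(len(mse))))
    best_mse, best_se = min((m, s) for _, m, s in stats)
    # sparsest (largest-lambda) model within one standard error of the best MSE
    return max(lam for lam, m, _ in stats if m <= best_mse + best_se)

lam = pick_lambda_one_se(T, y, np.logspace(-4, 0, 20))
direct = Lasso(alpha=lam).fit(T, y)   # intercept_ plays the role of the constant term
print(direct.intercept_, direct.coef_)
```

A sketch of the inverse step, assuming the sparse matrix W has already been assembled from the learned direct-model weights and the spatial proximity weights ai, bi; the profile is recovered in closed form by linear least squares, as in the formula t = (W^T W)^{-1} W^T y above:

```python
import numpy as np

def recover_profile(W: np.ndarray, y: np.ndarray) -> np.ndarray:
    m, n = W.shape
    if m - n < 10:                    # degrees-of-freedom rule of thumb from the text
        raise ValueError("not enough measurements for a well-posed inversion")
    t_hat, *_ = np.linalg.lstsq(W, y, rcond=None)   # numerically safer than forming
    return t_hat                                     # (W^T W)^{-1} explicitly
```

Finally, the mesh-sizing rule for the FEA model is keyed to the field's wavelength in each material; the formula itself was garbled in extraction, so this sketch assumes the standard skin-depth expression delta = sqrt(2/(omega*mu*sigma)) with in-conductor wavelength about 2*pi*delta, and the material values are placeholders rather than those of Table I:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def max_mesh_size(freq_hz, mu_r, sigma, safety=5.0):
    omega = 2.0 * np.pi * freq_hz
    delta = np.sqrt(2.0 / (omega * mu_r * MU0 * sigma))
    return 2.0 * np.pi * delta / safety   # at least `safety` elements per wavelength
```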
| 3 |
sep reduction of local uniformization to the case of rank one valuations for rings with zero divisors josnei novacoski and mark spivakovsky abstract this is a continuation of a previous paper by the same authors in the former paper it was proved that in order to obtain local uniformization for valuations centered on local domains it is enough to prove it for rank one valuations in this paper we extend this result to the case of valuations centered on rings which are not necessarily integral domains and may even contain nilpotents introduction for an algebraic variety x over a field k the problem of resolution of singularities is whether there exists a proper birational morphism x x such that x is regular the problem of local uniformization can be seen as the local version of resolution of singularities for an algebraic variety for a valuation of k x having a center on x the local uniformization problem asks whether there exists a proper birational morphism x x such that the center of on x is regular this problem was introduced by zariski in the s as an important step to prove resolution of singularities zariski s approach consists in proving first that every valuation having a center on the given algebraic variety admits local uniformization then one has to glue these local solutions to obtain a global resolution of all singularities zariski succeeded in proving local uniformization for valuations centered on algebraic varieties over a field of characteristic zero see he used this to prove resolution of singularities for algebraic surfaces and threefolds over a field of characteristic zero see abhyankar proved see that local uniformization can be obtained for valuations centered on algebraic surfaces in any characteristic and used this fact to prove resolution of singularities for surfaces see and he also proved local uniformization and resolution of singularities for threefolds over fields of characteristic other than and see very recently cossart and piltant proved resolution of singularities and in particular local uniformization for threefolds over any field of positive characteristic as well as in the arithmetic case see and they proved it using the approach of zariski however the mathematics subject classification primary secondary key words and phrases local uniformization resolution of singularities reduced varieties during the realization of this project the first author was supported by a grant from the program sem fronteiras from the brazilian government josnei novacoski and mark spivakovsky problem of local uniformization remains open for valuations centered on algebraic varieties of dimension greater than three over fields of positive characteristic since local uniformization is a local problem we can work with local rings instead of algebraic varieties a valuation centered on a local integral domain r is said to admit local unifomization if there exists a local local ring r dominated by and dominating r such that r is regular let n be the category of all noetherian local domains and m n be a subcategory of n which is closed under taking homomorphic images and localizing any finitely generated birational extension at a prime ideal we want to know for which subcategories m with these properties all valuations centered on objects of m admit local uniformization in section of grothendieck proved that any category of schemes closed under passing to subschemes and finite radical extensions in which resolution of singularities holds is a subcategory of schemes it is known that the 
category of schemes is closed under all the operations mentioned above he conjectured see remark of that resolution of singularities holds in this most general possible context that of schemes translated into our local situation this conjecture says that the subcategory of n which optimizes local uniformization is the category of all local rings this subcategory has the properties above for a discussion on and excellent local rings see section of however this conjecture is widely open in most of the successful cases including those mentioned above local uniformization was first proved for rank one valuations then the general case was reduce to this a priori weaker one in we prove that this reduction works under very general assumptions namely we consider a subcategory m of the category of all noetherian local integral domains closed under taking homomorphic images and localizing any finitely generated birational extension at a prime ideal the main result of is that if every rank one valuation centered on an object of m admits local uniformization then all the valuations centered on objects of m admit local uniformization the main goal of this paper is to extend this result to rings which are not necessarily integral domains and in particular may contain nilpotent elements the importance of and schemes in modern algebraic geometry is well known even if one were only interested in reduced schemes to start with one is led to consider ones as they are produced by natural constructions for example in deformation theory therefore it appears desirable to study the problem of local uniformization for such schemes and in particular to extend our earlier results on reducing the problem to the rank one case to this more general context if r is not reduced we can not expect in general to make r be regular by blowings up the natural extension to this case is to require r red to be regular n and i to be an r red module for every n n here i denotes the nilradical of r for more precise definitions see section let n be the category of all noetherian local rings and m n be a subcategory of n which local uniformization for is closed under taking homomorphic images and localizing any finitely generated birational extension at a prime ideal our main result is the following theorem assume that for every noetherian local ring r in ob m every rank one valuation centered on r admits local uniformization then all the valuations centered on objects of m admit local uniformization the proof of theorem consists of three main steps the first step is to prove that for every local ring r and every valuation centered on r there exists a local blowing up see definition r r such that r has only one associated prime ideal then we consider a decomposition of such that rk rk and rk rk using induction we can assume that both and admit local uniformization the second main step consists in using this to prove that there exists a local blowing up r r such that r red is regular the third and final step is to prove that there exists a further local blowing n up r r such that r red is regular and i is an r red module for every n n here i denotes the nilradical of r this paper is divided as follows in section we present the basic definitions and results that will be used in the sequel sections and are dedicated to prove the results related to the first second and third steps respectively in the last section we present a proof of our main theorem preliminaries let r be a noetherian commutative ring with unity and an ordered abelian 
group set and extend the addition and order from to as usual definition a valuation on r is a mapping r with the following properties ab a b for every a b r a b min a b for every a b r and the support of which is defined by supp a r a is a minimal prime ideal of take a multiplicative system s of r such that supp r then the extension which we call again of to rs given by a s is again a valuation indeed the three first axioms are easily checked the minimality of supp as a prime ideal of rs follows from the fact that the prime ideals of rs are in a bijective correspondence to the prime ideals of r contained in r from now on we will freely make such extensions of to rs without mentioning it explicitly a valuation on r is said to have a center if a for every a in this case the center of on r is defined by r a r a moreover if r is a local ring with unique maximal ideal m in which case we say the local ring josnei novacoski and mark spivakovsky r m then a valuation on r is said to be centered at r if a for every a r and a for every a we observe that if is a valuation having a center on r then is centered on r the value group of denoted by is defined as the subgroup of generated by a a r the rank of is the number of proper convex subgroups of for an element b r supp we consider the canonical map r rb given by a let j b ker annr bi we have a natural embedding b rb take ar r such that ai b for each i i consider the subring b ar of rb then the restriction of to has a center in we set r definition the canonical map r r will be called the local blowing up of r with respect to along the ideal b ar for a valuation having a center on r we will say that r r is if b r and ai r for every i i lemma the composition of finitely many local blowings up is again a local blowing up moreover if each of these local blowings up is then their composition is again proof it is enough to prove that for two local blowings up r r and r r with respect to there exists a local blowing up r r with respect to such that r r we write r for b ar for some ar b r and r for r for some r then there exist r such that for each i i consider the local blowing up r r given by r for ar it is straightforward to prove that r r in view of lemma we will freely use the fact that the composition of finitely many local blowings up is itself a local blowing up without mentioning it explicitly for simplicity of notation we denote the nilradical of r by i i nil r a r al for some l n local uniformization for definition we say that spec r is normally flat along spec rred if i n is an rred module for every n since r is noetherian there exists n n such that i n for every n n hence the condition in definition is equivalent to the freeness of the finitely many modules i n n i n definition for a local ring r a valuation centered on r is said to admit local uniformization if there exists a local blowing up r r with respect to such that r red is regular and spec r is normally flat along spec r red let be a fixed decomposition of for simplicity of notation we set p r and for a local blowing up r r we set p r we need to guarantee that the main structure of rp and are preserved under local blowings up more precisely we have to prove the following proposition let r r be a local blowing up then the canonical maps rp rp and r induced by are isomorphisms in order to prove proposition we need the following basic lemma lemma let s be a multiplicative system of r contained in r r then the canonical map r rs rs given by is an isomorphism proof for an element rs rs we 
have b c d consequently bc and r then suppose that this means that there exists rs rs such that in rs thus there exists s s such that sac moreover since rs we also have that c r this and the fact that s s r imply that sc r hence in r which is what we wanted to prove proof of proposition applying lemma to r with s b and with s and the valuation we obtain that the canonical maps rp rb rb and rp respectively are isomorphisms hence in order to prove the first assertion it is enough to show that the canonical map rb rb is an isomorphism since rb and rb we have that rb rb is injective on the other hand any element in rb rb can be written as abm cbn which is the image of abm hence the map r rb rb josnei novacoski and mark spivakovsky is surjective and consequently it is an isomorphism set b and consider the induced map r since the canonical map r is surjective in order to prove the surjectivity of r r it is enough to show that r is surjective for an element r we write where p p ar and q q ar for some p xr q xr xr set p and q then p r x ai pi ar r x ai qi ar and q for some pi qi xr i since ai we obtain that and this implies that q and therefore q p it remains to prove that since also hence q and consequently is a unit in therefore to finish our proof it is enough to show that the kernel of r r is this follows immediately from the definition of p and p as the centers of on r and r respectively lemmas and below are generalizations of lemma and corollary of respectively the proofs presented there can be adapted to our more general case we present sketches of the proofs for the convenience of the reader e with respect to there exists lemma for each local blowing up rp r e r a local blowing up r r with respect to such that r p e given by proof we consider the local blowing up rp r e r r c r e rp for r choose ar b r such that for each i i r we have ai b where r rp is the canonical map if ai b for some i i r local uniformization for then we have choose i so as to minimize the value ai in other words so that ai aj for all j r set rp r then r r c r hence after a suitable permutation of the set ar b we may assume that ai b for every i i consider the local blowing up r for b ar e with respect to it is straightforward to prove that rp r lemma for each local blowing up r exists a local blowing up r r and rp rp with respect to there with respect to such that r r proof for an element a r we denote its image under the canonical map r by a then r with r b ar for some ar b r since ai b we have ai b for every i i then we can consider the local blowing up r with b ar with respect to it is again straightforward to prove that r r rp rp and associated prime ideals of r let r be a local ring and a valuation centered on the main result of this section is the following proposition there exists a local blowing up r r with respect to such that nil r is the only associated prime of r in order to prove proposition we need the following result lemma let b ar rb for some b ar r with b ai for every i i then for every the ideal can be written as for some c moreover if is prime then annr bn c is a prime ideal of r for some n josnei novacoski and mark spivakovsky proof choose c r such that for some l fix and write for some m n and a then we have acbn for some n n now assume that is prime and set b then is also prime moreover annr bn c where r b is the canonical epimorphism indeed a in rb bn ac in r for some n n a annr bn c since r is noetherian and we have that annr bc annr c annr bn c annr bn c for some n annr bn c by and we 
conclude that annr bn c is a prime ideal of corollary for a local blowing up r r if nil r is the only associated prime ideal of r then nil r is the only associated prime ideal of r proof let r for some as in lemma theorem of gives us that ass r ass spec r this and lemma guarantee that r r consequently r has only one associated prime ideal say q the primary decomposition theorem now gives us that q nil r which is what we wanted to prove we will use corollary throughout this paper without always mentioning it explicitly proof of proposition since supp is a minimal prime ideal there exists at most one associated prime ideal of r contained in hence equal to supp we will prove that if r then there exists a local blowing up r r such that r r take an associated prime ideal q of r such that q supp write q b ar local uniformization for with b ai for every i i blowing up r with respect to along q gives us a local ring r where b ar observe that this is indeed a local blowing up because b ai for every i and q supp implies that b supp since ass r ass spec r see theorem of it remains to show that r by lemma we obtain that has at most r many associated prime ideals moreover for the chosen associated prime ideal q annr c of r and for every r n the ideal is not prime in indeed since q b ar annr c we have bc in this means that in r and consequently which is not prime therefore r remark if i is the only associated prime ideal of r then for every b i we have j b in this particular case we can eliminate the ideal j b in the definition of a local blowing up we will use this throughout this paper without mentioning it explicitly making rred regular let r be a local ring and a valuation centered on assume that and denote by p the center of on as usual we denote by i the nilradical of r and for a local blowing up r r we denote the nilradical of r by i assume that i is the only associated prime ideal of the main goal of this section is to prove the following proposition proposition assume that rp red and are regular then there exists a local blowing up r r such that r red is regular moreover for every local blowing up r r along an ideal b ar with b p and ar i we have that r is regular red in order to prove proposition we will need a few lemmas lemma assume that rp red is regular then there exists a local blowing up r r such that the r p p i is are elements of whose images in free moreover yr p p i form a basis of p p i then their images in form a regular system of parameters of rp rp red red josnei novacoski and mark spivakovsky lemma let r r be a local blowing up along an ideal b ar with b p and ar i if i is a free then i p p is a free r lemma take yr p and xt m p whose images form a regular system of parameters of rp red and respectively if i is an module with basis i yr i then rred is regular proof of proposition assuming lemmas and we apply lemma to obtain a blowing up yr r such r r and that their images in p p i form an r red basis and their images form a regular system of parameters moreover by proposition in rp red r is regular also by lemma and proposition for every local blowing up r r along an ideal b ar with b p and ar i the hypotheses of lemma are satisfied for r hence we obtain that r red and r red are regular we now proceed with the proofs of lemmas and lemma take generators yr of p and b let r r be the local blowing up along the ideal b yr set yi yi for i r and for k then p is generated by proof obviously yi p for every i i r take an element p this implies that p p yr for some p xr r xr see remark if 
we set p then yr p pr for some pr b b this implies that hence there exist r such that thus s r p x bai pi x yi r q q q this concludes our proof proof of lemma since rp red is regular there are elements yr p such that their images in rp red form a regular system of parameters the first step is to reduce to the case when yr generate local uniformization for assume that yr do not generate choose p such that yr generate for each k k s we can find bk r p brk r and hk yr such that bk brk yr hk i consider the local blowing up r r along yr it follows that yr i and bk brk yr hk i for k s where yi yi for i r and and some hk yr for i since i is prime and i we obtain that yr i consequently yr r yr r we proceed inductively to obtain a local blowing up r r s such that s s s s yr s r s yr s r s s s s s by lemma we have p s yr r s and by lemma s s s form a regular system of parameters the images of yr in rp s s s red this means that yr generate p s thus we have reduced the problem to the case when yr generate p and will make this assumption from now on now the only fact that remains to be checked is that the images of yr in i are independent take ar r such that ar yr i since the images of yr in rp red form a regular system of parameters their images in prp i rp form an rp of prp i rp this implies that ar prp and consequently ar this completes the proof of the lemma proof of lemma take ys p such that their images form an of i we claim that the images of ys form an r basis of p p i take an element p then where p q r ar with p and q set p and write ar p pr for some pr b b josnei novacoski and mark spivakovsky this implies that by our assumption there exist cs p g and h i such that cs yr g consequently cs g h ar p r ys q q q q b q b q since ar h i we have that ar p r h i q b q b q this and the fact g p imply that the images of yr that generate p p i now assume that there exists ai ci r i r such that yr p i then there exists n n such that bn ar bn yr i this implies that ai bn p for every i i since b p this implies that ar therefore p which concludes our proof proof of lemma set a i rred a p since the images of the yi s in i form a basis of i we conclude that yr i applying nakayama s lemma corollary of theorem of we conclude that yr i p and consequently i yr i generate since the images of yr xt in rred generate a i rred a m we conclude that r t dim rred also since r dim rp red ht and t dim ht ht we have dim rred ht ht ht r t dim rred therefore r t dim rred and hence rred is regular making i n free let r be a local ring and a valuation centered on assume that and denote by p the center of on as usual we set i nil r and ip k k nil rp also for a local blowing up r r we set i nil r and k k ip k nil rp k assume that i is the only associated prime ideal of the main goal of this section is to prove the following proposition local uniformization for proposition assume that ipn is an rp red module for every n then there exists a local blowing up r r with respect to along an ideal n b ar with b p and ar i such that the r red i is free for every n in order to prove proposition we will need some preliminary results lemma take elements i n such that their images in i n generate i n as an rred consider the local blowing up r r along the ideal b yr for some b r i set yi yi for i r and for k n then the images of in i form a set of generators of this module n proof take an element i as in proof of the lemma we can write p yr pr for some pr b b with i n this means that there exists r such that i consequently r x ai p x bai pi yi 
yi i q q q q this concludes our proof lemma under the same assumptions as in the previous lemma if the images of yr in i n are rred independent then the images of n ar in i are r red independent proof take elements r such that y y i we have to show that i for each i i r we write ai ci for some ai ci r and ri si then equation implies that there exists l n and c r p such that bl ar bl cyr i since i yr i are rred independent this implies that ai b l c i for every i i since i is prime this is a consequence of the fact that it is the only associated prime ideal of r and b c r i we obtain that ar consequently i which concludes our proof josnei novacoski and mark spivakovsky proof of proposition by assumption we have that ipn is rp red for every n hence by proposition for local blowing up n for every n therefore it r r we have that ip is rp red is enough to show that for a fixed n n there exists a local blowing up r r n along an ideal b ar with b p and ar i such that i is r red take elements yr ipn yr r and br r p such that yr form a basis of ipn we observe first that since i is prime and yi ipn we have yi i n for each i i we claim that if i yr i generate i n as an rred then this module is free indeed if there exists ai i rred such that ar yr i then ar br yr ar yr this implies that for each i i r ai bi ip and consequently ai bi ci i for some ci r p since i is prime and br cr r i we conclude that ar i which is what we wanted to prove if i yr i do not generate i n as an rred then we take i n such that i i generate i n for each k k s since i n there exist bk r p such that bk brk yr i for some brk consider now the local blowing up along the ideal yr set yi yi r for each i i r and r for each k k from equation we obtain that yr i and bk br brk yr i is generated in the r for every k k consequently i n moreover yr i module i by i n obtain that i is generated as an rred by the yr red using lemma we images of local uniformization for n also by lemma the images of yr in i are r red independent we proceed inductively to obtain a local blowing up r r s such that the s s n s is generated by the images of yr and the imr s red i s s s n s are r s red independent ages of yr in i s proof of the main theorem in this section we present the proof of our main theorem proof of theorem we will prove the assertion by induction on the rank since all rank one valuations admit local uniformization by assumption we fix n n and will prove that if all valuations of rank smaller than n admit local uniformization then also valuations of rank n admit local uniformization let be a valuation centered in the local ring r ob m such that rk by lemma there exists a local blowing up r r with respect to such that nil r is the only associated prime ideal of r hence replacing r by r we may assume that the only associated prime ideal of r is nil r decompose as for valuations and with rank smaller than by assumption we know that and admit local uniformization since admits local uniformization by use of lemma there exists a local blowing up n r r with respect to such that rp is regular and ip is rp red free for every n replacing r by r we may assume that rp red is regular and ipn is rp red for every n since admits local uniformization we can use lemma to obtain that there and exists a local blowing up r r with respect to such that rp red for every n replacing r are regular and ipn is rp red r by r we can assume that rp red and are regular and that ipn is rp red for every n since rp red and are regular we apply proposition to obtain a 
compatible local blowing up r r such that r red is regular using for every n proposition we have that ipn rp is a free red by proposition there exists a p local blowing up r r n such that i is an r red module for every n moreover since this local blowing up is along an ideal b ar with b p and ar i we conclude using proposition that r is regular this concludes our red proof references abhyankar local uniformization on algebraic surfaces over ground fields of characteristic p ann of math abhyankar on the valuations centered in a local domain amer j math josnei novacoski and mark spivakovsky abhyankar resolution of singularities of embedded algebraic surfaces pure and applied mathematics academic press new york and london abhyankar simultaneous resolution for algebraic surfaces amer j math cossart and piltant resolution of singularities of threefolds in positive characteristic i reduction to local uniformization on and purely inseparable coverings algebra no cossart and piltant resolution of singularities of threefolds in positive characteristic ii algebra no grothendieck de iv locale des et des morphismes de ii inst hautes sci publ math matsumura commutative ring theory cambridge university press novacoski and spivakovsky reduction of local uniformization to the rank one case proceedings of the second international conference on valuation theory ems series of congress reports zariski and samuel commutative algebra vol ii new zariski local uniformization theorem on algebraic varieties ann of math zariski reduction of singularities of algebraic three dimensional varieties ann of math josnei novacoski capes foundation ministry of education of brazil brazil address mark spivakovsky institut de de toulouse and cnrs paul sabatier route de narbonne toulouse cedex france address
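The key definitions and the main theorem above lost their symbols during text extraction. The LaTeX block below is a hedged reconstruction in our own notation, following the garbled wording as closely as possible; the codomain Gamma ∪ {∞} and any normalization conventions not visible in the text are assumptions on our part.

```latex
% Hedged reconstruction (our notation) of statements whose symbols were lost above.
\begin{definition}
A \emph{valuation} on $R$ is a map $\nu : R \to \Gamma \cup \{\infty\}$ such that
(i) $\nu(ab) = \nu(a) + \nu(b)$ for every $a, b \in R$;
(ii) $\nu(a + b) \ge \min\{\nu(a), \nu(b)\}$ for every $a, b \in R$; and
(iii) the support $\operatorname{supp}(\nu) := \{a \in R : \nu(a) = \infty\}$ is a
minimal prime ideal of $R$.
\end{definition}

\begin{definition}
$\operatorname{Spec}(R)$ is \emph{normally flat} along
$\operatorname{Spec}(R_{\mathrm{red}})$ if $I^{n}/I^{n+1}$ is a free
$R_{\mathrm{red}}$-module for every $n \in \mathbb{N}$, where $I$ denotes the
nilradical of $R$.
\end{definition}

\begin{definition}
A valuation $\nu$ centered on a local ring $R$ \emph{admits local uniformization}
if there exists a $\nu$-compatible local blowing up $R \to R'$ such that
$R'_{\mathrm{red}}$ is regular and $\operatorname{Spec}(R')$ is normally flat along
$\operatorname{Spec}(R'_{\mathrm{red}})$.
\end{definition}

\begin{theorem}
Assume that every rank one valuation centered on an object of $\mathcal{M}$ admits
local uniformization.  Then every valuation centered on an object of $\mathcal{M}$
admits local uniformization.
\end{theorem}
```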
| 0 |
feb efficient batchwise dropout training using submatrices ben graham jeremy reizenstein leigh robinson february abstract dropout is a popular technique for regularizing artificial neural networks dropout networks are generally trained by minibatch gradient descent with a dropout mask turning off some of the different pattern of dropout is applied to every sample in the minibatch we explore a very simple alternative to the dropout mask instead of masking dropped out units by setting them to zero we perform matrix multiplication using a submatrix of the weight hidden units are never calculated performing dropout batchwise so that one pattern of dropout is used for each sample in a minibatch we can substantially reduce training times batchwise dropout can be used with and convolutional neural networks independent versus batchwise dropout dropout is a technique to regularize artificial neural prevents overfitting a fully connected network with two hidden layers of units each can learn to classify the mnist training set perfectly in about training the test error is quite high about increasing the number of hidden units by a factor of and using dropout results in a lower test error about the dropout network takes longer to train in two senses each training epoch takes several times longer and the number of training epochs needed increases too we consider a technique for speeding up training with can substantially reduce the time needed per epoch consider a very simple fully connected neural network with dropout to train it with a minibatch of b samples the forward pass is described by the equations xk dk wk k here xk is a b nk matrix of units dk is a b nk matrix of independent bernoulli pk random variables pk denotes the probability of dropping out units in level k and wk is an nk matrix of weights connecting level k with level k we are using for hadamard multiplication and for matrix multiplication we have forgotten to include functions the rectifier function for the hidden units and softmax for the output units but for the introduction we will keep the network as simple as possible the network can be trained using the backpropagation algorithm to calculate the gradients of a cost function negative with respect to the wk xk dk t t wk dk with dropout training we are trying to minimize the cost function averaged over an ensemble of closely related networks however networks typically contain thousands of hidden units so the size of the ensemble is much larger than the number of training samples that can possibly be seen during training this suggests that the independence of the rows of the dropout mask matrices dk might not be terribly important the success of dropout simply can not depend on exploring a large fraction of the available dropout masks some machine learning libraries such as allow dropout to be applied batchwise instead of this is done by replacing dk with a row matrix of independent bernoulli random variables and then copying it vertically b times to get the right shape to be practical it is important that each training minibatch can be processed quickly a crude way of estimating the processing time is to count the number of floating point multiplication operations needed naively to evaluate the matrix multiplications specified above x b nk nk b b nk z z z forwards backwards however when we take into account the effect of the dropout mask we see that many of these multiplications are unnecessary the i j element of the wk weight matrix effectively of the calculations if unit i is 
dropped in level k or if unit j is dropped in level k applying dropout in levels k and k renders of the multiplications unnecessary if we apply dropout independently then the parts of wk that disappear are different for each sample this makes it effectively impossible to take advantage of the is slower to check if a multiplication is necessary than to just do the multiplication however if we apply dropout batchwise then it becomes easy to take advantage of the redundancy we can literally redundant parts of the calculations see function apply dropout in time saving epoch training time seconds no dropout batchwise minibatch size minibatch size figure left mnist training time for three layer networks log scales on an nvidia geforce gtx graphics card right percentage reduction in training times moving from no dropout to batchwise dropout the time saving for the network with minibatches of size increases from to if you instead compare batchwise dropout with independent dropout the binary nk batchwise dropout matrices dk naturally define submatrices of the weight and matrices let xdropout xk dk denote the submatrix k of xk consisting of the hidden units that survive dropout let wkdropout wk dk denote the submatrix of wk consisting of weights that connect active units in level k to active units in level k the network can then be trained using the equations xdropout xdropout wkdropout k xdropout t k dropout dropout t wk k the redundant multiplications have been eliminated there is an additional benefit in terms of memory needed to store the hidden units xdropout needs less space than k xk in section we look at the performance improvement that can be achieved using code running on a gpu roughly speaking processing a minibatch with batchwise dropout takes as long as training a smaller network on the same data this explains the nearly overlapping pairs of lines in figure we should emphasize that batchwise dropout only improves performance during training during testing the full wk matrix is used as normal scaled by a factor of pk however machine learning research is often constrained by long training times and high costs of equipment in section we show that all other things being equal batchwise dropout is similar to independent dropout but faster moreover with the increase in speed all other things do not have to be equal with the same resources batchwise dropout can be used to increase the number of training epochs increase the number of hidden units increase the number of validation runs used to optimize or to train a number of independent copies of the network to form a committee these possibilities will often be useful as ways of improving test error in section we look at batchwise dropout for convolutional networks dropout for convolutional networks is more complicated as weights are shared across spatial locations a minibatch passing up through a convolutional network might be represented at an intermediate hidden layer by an array of size samples the output of convolutional filters at each of spatial locations it is conventional to use a dropout mask with shape we will call this independent dropout in contrast if we want to apply batchwise dropout efficiently by adapting the submatrix trick then we will effectively be using a dropout mask with shape this looks like a significant change we are modifying the ensemble over which the average cost is optimized during training the error rates are higher however testing the networks gives very similar error rates fast dropout we might have called 
batchwise dropout fast dropout but that name is already taken fast dropout is very different approach to solving the problem of training large neural network quickly without overfitting we discuss some of the differences of the two techniques in the appendix implementation in theory for n n matrices addition is an o operation and multiplication is o by the algorithm this suggests that the bulk of our processing time should be spent doing matrix multiplication and that a performance improvement of about should be possible compared to networks using independent dropout or no dropout at all in practice sgemm functions use strassen s algorithm or naive matrix multiplication so performance improvement of up to should be possible we implemented batchwise dropout for and convolutional neural networks using we found that using the highly optimized cublassgemm function to do the bulk of the work with cuda kernels used to form the submatrices wkdropout and to update the wk using worked well better software available at http performance may well be obtained by writing a matrix multiplication function that understands submatrices for large networks and minibatches we found that batchwise dropout was substantially faster see figure the approximate overlap of some of the lines on the left indicates that batchwise dropout reduces the training time in a similar manner to halving the number of hidden units the graph on the right show the time saving obtained by using submatrices to implement dropout note that for consistency with the left hand side the graph compares batchwise dropout with networks not with networks using independent dropout the need to implement dropout masks for independent dropout means that figure slightly undersells the performance benefits of batchwise dropout as an alternative to independent dropout for smaller networks the performance improvement is issues result in the gpu being under utilized if you were implementing batchwise dropout for cpus you would expect to see greater performance gains for smaller networks as cpus have a lower to bandwidth ratio efficiency tweaks if you have n hidden units and you drop out p of them then the number of dropped units is approximately np but with some small variation as you standard deviation is p are really dealing with a binomial n p random np p the sizes of the submatrices wkdropout and xdropout are therefore k slightly random in the interests of efficiency and simplicity it is convenient to remove this randomness an alternative to dropping each unit independently with probability p is to subset of exactly np of the hidden units uniformly at random from the set n of all np such subsets it is still the case that each unit is dropped out with probability however within a hidden layer we no longer have strict independence regarding which units are dropped out the probability of dropping out the first two hidden units changes very slightly from to np np n also we used a modified form of minibatch gradient descent after each minibatch we only updated the elements of wkdropout not all the element of wk with vk and vkdropout denoting the momentum corresponding to wk and wkdropout our update was vkdropout wkdropout wkdropout vkdropout the momentum still functions as an autoregressive process smoothing out the gradients we are just reducing the rate of decay by a factor of pk test train errors after epochs number of dropout patterns used figure dropout networks trained using a restricted the number of dropout patterns each is from an independent 
experiment the blue line marks the test error for a network with half as many hidden units trained without dropout results for networks the fact that batchwise dropout takes less time per training epoch would count for nothing if a much larger number of epochs was needed to train the network or if a large number of validation runs were needed to optimize the training process we have carried out a number of simple experiment to compare independent and batchwise dropout in many cases we could have produced better results by increasing the training time annealing the learning rate using validation to adjust the learning process etc we choose not to do this as the primary motivation for batchwise dropout is efficiency and excessive use of is not efficient for datasets we used the set of pixel handwritten digits the dataset of pixel color pictures an artificial dataset designed to be easy to overfit following for mnist and we trained networks with dropout in the input layer and dropout in the hidden layers for the artificial dataset we increased the dropout to as this reduced the test error in some cases we have used relatively small networks so that we would have time to train a number of independent copies of the networks this was useful in order to see if the apparent differences between batchwise and independent dropout are significant or just noise http mnist our first experiment explores the effect of dramatically restricting the number of dropout patterns seen during training consider a network with three hidden layers of size trained for epochs using minibatches of size the number of distinct dropout patterns is so large that we can assume that we will never generate the same dropout mask twice during independent dropout training we will see million different dropout patterns during batchwise dropout training we will see times fewer dropout patterns for both types of dropout we trained independent networks for epochs with batches of size for batchwise dropout we got a mean test error of range and for independent dropout we got a mean test errors of range the difference in the mean test errors is not statistically significant to explore further the reduction in the number of dropout patterns seen we changed our code for pseudo randomly generating batchwise dropout patterns to restrict the number of distinct dropout patterns used we modified it to have period n minibatches with n see figure for n this corresponds to only ever using one dropout mask so that of the network s hidden weights are never actually trained and of the input features are ignored during training this corresponds to training a network with half as many hidden test error for such a network is marked by a blue line in figure the error during testing is higher than the blue line because the untrained weights add noise to the network if n is less than thirteen is it likely that some of the networks hidden units are dropped out every time and so receive no training if n is in the range thirteen to fifty then it is likely that every hidden unit receives some training but some pairs of hidden units in adjacent layers will not get the chance to interact during training so the corresponding connection weight is untrained as the number of dropout masks increases into the hundreds we see that it is quickly a case of diminishing returns artificial dataset to test the effect of changing network size we created an artificial dataset it has classes each containing training samples and test samples each class is defined using an 
independent random walk of length in the discrete cube for each class we generated the random walk and then used it to produce the training and test samples by randomly picking points along the length of walk giving binary sequences of length and then randomly flipping of the bits we trained three layer networks with n hidden units per layer with minibatches of size see figure looking at the training error against training epochs independent dropout seems to learn slightly faster however looking at the test errors over time there does not seem to be much difference between the two forms of dropout note that the is the number of training epochs not the training time the batchwise dropout networks are learning much faster in terms of real time independent batchwise test error train error independent batchwise epoch epoch figure artificial dataset classes each corresponding to noisy observations of a one dimensional manifold in learning using a fully connected network is rather difficult we trained three layer networks with n hidden units per layer with minibatches of size we augmented the training data with horizontal flips see figure convolutional networks dropout for convolutional networks is more complicated as weights are shared across spatial locations suppose layer k has spatial size sk sk with nk features per spatial location and if the operation is a convolution with f f filters for a minibatch of size b the convolution involves arrays with sizes layer k b nk sk sk weights wk nk f f dropout is normally applied using dropout masks with the same size as the layers we will call this independent decisions are mode at every spatial location in contrast we define batchwise dropout to mean using a dropout mask with shape nk each minibatch each convolutional filter is either on or across all spatial locations these two forms of regularization seem to be doing quite different things consider a filter that detects the color red and a picture with a red truck in it if dropout is applied independently then by the law of averages the message red will be transmitted with very high probability but with some loss of spatial information in contrast independent batchwise independent batchwise test error train error epoch epoch figure results for using networks of different sizes with batchwise dropout there is a chance we delete the entire filter output experimentally the only substantial difference we could detect was that batchwise dropout resulted in larger errors during training to implement batchwise dropout efficiently notice that the nk dropout masks corresponds to forming subarrays wkdropout of the weight arrays wk with size pk nk f the is then simply a regular convolutional operation using wkdropout that makes it possible for example to take advantage of the highly optimized cudnnconvolutionforward function from the nvidia cudnn package mnist for mnist we trained a type cnn with two layers of filters two layers of and a fully connected layer there are three places for applying dropout the test errors for the two dropout methods are similar see figure with varying dropout intensity for a first experiment with we used a small convolutional network with small filters the network is a scaled down version of the network from there are four places to apply dropout p p p p test errror independent batchwise epochs figure mnist test errors training repeated three times for both dropout methods the input layer is we trained the network for epochs using randomly chosen subsets of the training images and 
reflected each image horizontally with probability one half for testing we used the centers of the images in figure we show the effect of varying the dropout probability the training errors are increasing with p and the training errors are higher for batchwise dropout the curves both seem to have local minima around p the batchwise test error curve seems to be shifted slightly to the left of the independent one suggesting that for any given value of p batchwise dropout is a slightly stronger form of regularization with many convolutional layers we trained a deep convolutional network on without data augmentation using the notation of our network has the form f m p output it consists of convolutions with filters in the layer layers followed by two fully connected layers the network has million parameters we used an increasing amount of dropout per layer rising linearly from dropout after the third layer to dropout after the even though the amount of dropout used in the middle layers is small batchwise dropout took less than half as long per epoch as independent dropout this is because applying small amounts of independent dropout in large creates a bandwidth as the network s operation is stochastic the test errors can be reduced by repetition batchwise dropout resulted in a average test error of down to with testing independent dropout resulted in an average test error of reduced to with testing independent testing batchwise training error batchwise testing independent training p figure results using a convolutional network with dropout probability p batchwise dropout produces a slightly lower minimum test error conclusions and future work we have implemented an efficient form of batchwise dropout all other things being equal it seems to learn at roughly the same speed as independent dropout but each epoch is faster given a fixed computational budget it will often allow you to train better networks there are other potential uses for batchwise dropout that we have not explored yet restricted boltzmann machines can be trained by contrastive divergence with dropout batchwise dropout could be used to increase the speed of training when a fully connected network sits on top of a convolutional network training the top and bottom of the network can be separated over different computational nodes the fully connected of the network typically contains of the the nodes synchronized is difficult due to the large size of the matrices with batchwise dropout nodes could communicate instead of and so reducing the bandwidth needed using independent dropout with recurrent neural networks can be too disruptive to allow effective learning one solution is to only apply dropout to some parts of the network batchwise dropout may provide a less damaging form of dropout as each unit will either be on or off for the whole time period dropout is normally only used during training it is generally more accurate use the whole network for testing purposes this is equivalent to averaging over the ensemble of dropout patterns however in a setting such as analyzing successive frames from a video camera it may be more efficient to use dropout during testing and then to average the output of the network over time nested dropout is a variant of regular dropout that extends some of the properties of pca to deep networks batchwise nested dropout is particularly easy to implement as the submatrices are regular enough to qualify as matrices in the context of the sgemm function using the lda argument dropconnect is an alternative 
form of regularization to dropout instead of dropping hidden units individual elements of the weight matrix are dropped out using a modification similar to the one in section there are opportunities for speeding up dropconnect training by approximately a factor of two references ciresan meier and schmidhuber deep neural networks for image classification in computer vision and pattern recognition cvpr ieee conference on pages ben graham fractional http hinton and salakhutdinov reducing the dimensionality of data with neural networks science science alex krizhevsky learning multiple layers of features from tiny images technical report alex krizhevsky one weird trick for parallelizing convolutional neural networks http le cun bottou bengio and haffner learning applied to document recognition proceedings of ieee november oren rippel michael gelbart and ryan adams learning ordered representations with nested dropout http nitish srivastava geoffrey hinton alex krizhevsky ilya sutskever and ruslan salakhutdinov dropout a simple way to prevent neural networks from overfitting journal of machine learning research ilya sutskever james martens george dahl and geoffrey hinton on the importance of initialization and momentum in deep learning in icml volume of jmlr proceedings pages li wan matthew zeiler sixin zhang yann lecun and rob fergus regularization of neural networks using dropconnect jmlr w cp sida wang and christopher manning fast dropout training jmlr w cp wojciech zaremba ilya sutskever and oriol vinyals recurrent neural network regularization http a fast dropout we might have called batchwise dropout fast dropout but that name is already taken fast dropout is an alternative form of regularization that uses a probabilistic modeling technique to imitate the effect of dropout each hidden unit is replaced with a gaussian probability distribution the fast relates to reducing the number of training epochs needed compared to regular dropout with reference to results in a of training a network on the mnist dataset with input dropout and dropout fast dropout converges to a test error of after epochs of this appears to be substantially better than the test error obtained in the preprint after epochs of regular dropout training however this is a dangerous comparison to make the authors of used a scheme designed to produce optimal accuracy eventually not after just one hundred epochs we tried using batchwise dropout with minibatches of size and an annealed learning rate of we trained a network with two hidden layers of rectified linear units each training for epochs resulted in a test error of after epochs the test error has reduced further to moreover per epoch is faster than regular dropout while is slower assuming we can make comparisons across different the epochs of batchwise dropout training take less time than the epoch of fast dropout training http using our software to implement the network each batchwise dropout training epoch take times as long as independent dropout in a figures of is given for the ratio between and independentdropout when using minibatch sgd when using to train networks the training time per epoch will presumably be even more than times longer as use requiring additional forward passes through the neural network
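The sub-array trick behind batchwise dropout is easiest to see in the fully connected case. The sketch below is a minimal NumPy illustration (shapes, names, and the toy check are ours, not the authors' code): it contrasts a per-sample independent mask with a single per-minibatch mask, and verifies that slicing the weight matrix reproduces the masked product — the same idea that lets batchwise dropout reuse optimized dense and cuDNN kernels on smaller arrays. Test-time rescaling/averaging is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_forward_independent_dropout(x, W, p, rng):
    """Independent dropout: a fresh Bernoulli decision per sample and per unit.
    x: (batch, n_in), W: (n_in, n_out). Returns pre-activations."""
    mask = rng.random(x.shape) > p            # one decision per (sample, unit)
    return (x * mask) @ W

def fc_forward_batchwise_dropout(x, W, p, rng):
    """Batchwise dropout: one Bernoulli decision per input unit, shared by the
    whole minibatch, so dropped units can simply be sliced out of W."""
    keep = rng.random(x.shape[1]) > p          # one decision per unit
    # Equivalent to multiplying by a shared mask, but done by slicing, which is
    # what allows a smaller dense multiply (or a convolution over a sub-array
    # of the filters) instead of a masked full-size one.
    return x[:, keep] @ W[keep, :]

# Toy check: masking with a shared mask and slicing give the same result.
x = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 4))
keep = rng.random(16) > 0.5
assert np.allclose((x * keep) @ W, x[:, keep] @ W[keep, :])
```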
| 9 |
Matching while Learning
Ramesh Johari (Stanford University, rjohari), Vijay Kamble (Stanford University, vjkamble), and Yash Kanoria (Columbia Business School, ykanoria)
June

Abstract. We consider the problem faced by a service platform that needs to match supply with demand, but also to learn attributes of new arrivals in order to match them better in the future. We introduce a benchmark model with heterogeneous workers and jobs that arrive over time. Job types are known to the platform, but worker types are unknown and must be learned by observing match outcomes. Workers depart after performing a certain number of jobs. The payoff from a match depends on the pair of types, and the goal is to maximize the rate of accumulation of payoff. Our main contribution is a complete characterization of the structure of the optimal policy in the limit that each worker performs many jobs. The platform faces a trade-off for each worker between myopically maximizing payoffs (exploitation) and learning the type of the worker (exploration). This creates a multitude of multi-armed bandit problems, one for each worker, coupled together by the constraint on availability of jobs of different types (capacity constraints). We find that the platform should estimate a shadow price for each job type, and use the payoffs adjusted by these prices, first to determine its learning goals, and then, for each worker, (i) to balance learning with payoffs during the exploration phase, and (ii) to myopically match after it has achieved its learning goals during the exploitation phase.

Keywords: matching, learning, platform, bandit, capacity constraints.

Introduction. This paper considers a central operational challenge faced by platforms that serve as matchmakers between supply and demand. Such platforms face a fundamental trade-off: on the one hand, efficient operation involves making matches that generate the most value (exploitation); on the other hand, the platform must continuously learn about newly arriving participants so that they can be efficiently matched (exploration). In this paper we develop a structurally simple and nearly optimal approach to resolving this trade-off. In the model we consider, there are two groups of participants: workers and jobs. The terminology is inspired by online labor markets (Upwork for remote work, Handy for housecleaning, Thumbtack and TaskRabbit for local tasks, etc.); however, our model can be viewed as a stylized abstraction of many other matching platforms as well. Time is discrete, and new workers and jobs arrive at the beginning of every time period. Workers depart after performing a specified number of jobs. Each time a worker and job are matched, a random payoff is generated and observed by the platform, where the payoff distribution depends on the worker type and the job type. As our emphasis is on the interaction between matching and learning, our model has several features that focus our analysis in this paper. First, we assume that the platform centrally controls matching: at the beginning of each time period, the platform matches each worker in the system to an available job. Second, strategic considerations are not modeled; this remains an interesting direction for future work. Finally, we focus on the goal of maximizing the rate of payoff accumulation. We now describe the learning challenge faced by the platform. In most platforms, more is known about one side of the platform than the other; accordingly, we assume job types are known, while the type of a new worker is unknown. The platform learns about workers' types through the payoffs obtained when they are matched to jobs. However, because the supply of jobs is limited, using jobs to
learn can reduce immediate payoffs as well as deplete the supply of jobs available to the rest of the marketplace thus the presence of capacity constraints forces us to carefully design both exploration and exploitation in the matching algorithm in order to optimize the rate of payoff generation our main contribution in this paper is the development of a matching policy that is nearly payoff optimal our algorithm is divided into two phases in each worker s lifetime exploration identification of the worker type and exploitation optimal matching given the worker s identified type we refer to our policy as deem decentralized for matching to develop intuition for our solution consider a simple example with two types of jobs easy and hard and two types of workers expert and novice experts can do both types of tasks well but novices can only do easy tasks well suppose that there is a limited supply of easy jobs more than the mass of novices available but less than the total mass of novices and experts in particular to maximize payoff the platform must learn enough to match some experts to hard jobs deem has several key features each of which can be understood in the context of this example first deem has a natural decentralization property it determines the choice of job type for a worker based only on that worker s history this decentralization is arguably essential in online platforms where matching is typically carried out on an individual basis rather than centrally in order to accomplish this decentralization it is essential for the algorithm to account for the externality to the rest of the market when a worker is matched to a given job for example if easy jobs are relatively scarce then matching a worker to such a job makes it unavailable to the rest of the market our approach is to price this externality we find shadow prices for the capacity constraints and adjust payoffs downward using these prices second our algorithm design specifies learning goals that ensure an efficient balance between exploration and exploitation in particular in our example we note that there are two kinds of errors possible while exploring misclassifying a novice as an expert and vice versa occasionally mislabeling experts as novices is not catastrophic some experts need to do easy jobs anyway and so the algorithm can account for such errors in the exploitation phase thus relatively less effort can be invested in minimizing this error type however mistakenly labeling novices as experts can be catastrophic in this case novices will be matched to hard jobs in the exploitation this is a reasonable proxy for the goal of a platform that say takes a fraction of the total surplus generated through matches more generally we believe that this is a benchmark problem whose solution informs algorithmic design for settings with other related objectives such as revenue maximization phase causing substantial loss of payoff thus the probability of such errors must be kept very small a major contribution of our work is to precisely identify the correct learning goals in the exploration phase and to then design deem to meet these learning goals while maximizing payoff generation third deem involves a carefully constructed exploitation phase to ensure that capacity constraints are met while maximizing payoffs a naive approach during the exploitation phase would match a worker to any job type that yields the maximum payoff corresponding to his type label it turns out that such an approach leads to significant violations of 
capacity constraints and hence poor performance the reason is that in a generic capacitated problem instance one or more worker types are indifferent between multiple job types and suitable is necessary to achieve good performance in our theoretical development we achieve this by modifying the solution to the static optimization problem with known worker types whereas our practical implementation of deem achieves appropriate via simple but dynamically updated shadow prices our main result theorem shows that deem achieves essentially optimal regret as the number of jobs n performed by each worker during their lifetime grows where regret is the loss in payoff accumulation rate relative to the maximum achievable with known worker types in our setting a lower bound on the regret is c log o for some c that is a function of system parameters deem achieves this level of regret to leading order when c while it achieves a regret of o log log if c situations where c are those in which there is an inherent tension between the goals of learning and payoff maximization to develop intuition consider an expanded version of the example above where each worker can be either an expert or novice programmer as well as an expert or novice graphic designer suppose the supply of jobs is such that if worker types were known only expert graphic designers who are also novice programmers would be matched to graphic design but if we are learning worker types then expert graphic designers must be matched to approximately o log n programming jobs to learn whether they are novice or expert programmers and in turn whether they should be matched to graphic design or programming jobs respectively thus o log average regret per period is incurred relative to the optimal solution with known types deem precisely minimizes the regret incurred while these distinctions are made thus achieving the lower bound on the regret our theory is complemented by a practical heuristic that we call which optimizes performance for small values of n and an implementation and simulation that demonstrates a natural way of translating our work into practice in particular our simulations reveal substantial benefit from jointly managing capacity constraints and learning as we do in deem and the remainder of the paper is organized as follows after discussing related work in section we present our model and outline the optimization problem of interest to the platform in section in section we discuss the three key ideas above in the design of deem and present its formal definition in section we present our main theorem and discuss the optimal regret scaling in section we present a sketch of the proof of the main result in section discuss practical implementation of deem and present the heuristic in section we use simulations to compare the performance of deem and with bandit algorithms we conclude in section all proofs are in the appendices this would be the case if programming jobs are both in high demand and more valuable conditional on successful completion than graphic design jobs related literature a foundational model for investigating the tradeoff is the stochastic bandit mab problem the goal is to find an adaptive policy for choosing among arms with unknown payoff distributions where regret is measured against the expected payoff of the best arm the closest work in this literature to our paper is by agrawal et al in their model they assume the joint vector of arm distributions can only take on one of finitely many values this introduces 
correlation across different arms depending on certain identifiability conditions the optimal regret is either or log in our model the analog is that job types are arms and for each worker we solve a mab problem to identify the true type of a worker from among a finite set of possible worker types our work is also related to recent literature on mab problems with capacity constraints we refer to these broadly as bandits with knapsacks the formulation is the same as the classical mab problem with the modification that every pull of an arm depletes a vector of resources which are limited in supply the formulation subsumes several related problems in revenue management under demand uncertainty and budgeted dynamic procurement there have been a variety of extensions with recently a significant generalization of the problem to a contextual bandit setting with concave rewards and convex constraints there is considerable difference between our model and bandits with knapsacks bandits with knapsacks consider a single mab problem over a fixed time horizon our setting on the other hand can be seen as a system with an ongoing arriving stream of mab problems one per worker these mab problems are coupled together by the capacity constraints on arriving jobs indeed as noted in the introduction a significant structural point for us is to solve these problems in a decentralized manner to ease their implementation in online platforms we conclude by discussing some other directions of work that are related to this paper there are a number of recent pieces of work that consider efficient matching in dynamic twosided matching markets a related class of dynamic resource allocation problems online bipartite matching is also well studied in the computer science community see for a survey similar to the current paper fershtman and pavan also study matching with learning mediated by a central platform relative to our model their work does not have constraints on the number of matches per agent while it does consider agent incentives finally a recent work studies a pure learning problem in a setting similar to ours with capacity constraints on each type of while there are some similarities in the style of analysis that paper focuses exclusively on learning the exact type rather than balancing exploration and exploitation as we do in this paper the model and the optimization problem in this section we first describe our model in particular we describe the primitives of our platform workers and jobs and givea formal specification of the matching process we study we conclude by precisely defining the optimization problem of interest that we solve in this paper preliminaries workers and jobs for convenience we adopt the terminology of workers and jobs to describe the two sides of the market we assume a fixed set of job types j and a fixed set of worker types i a key point is that the model we consider is a continuum model and so the evolution of the system will be described by masses of workers and in particular at each time step a mass i of workers of type i and a mass j of jobs of type j arrive in what follows we model the scenario where type uncertainty exists only for workers the platform will know the types of arriving jobs exactly but will need to learn the types of arriving workers we also assume for now that the arrival rates of jobs and workers are known to the platform later in section we discuss how the platform might account for the possibility that these parameters are unknown matching and the payoff 
matrix if a mass of workers of type i is matched to a mass of jobs of type j we assume that a fraction a i j of this mass of matches generates a reward of per unit mass while a fraction a i j generates a reward of zero per unit mass this formal specification is meant to capture a model in a setting where matches between type i workers and type j jobs generate a bernoulli a i j payoff we do not concern ourselves with the division of payoffs between workers and employers in this paper instead we assume that the platform s goal is to maximize the total rate of payoff we call the matrix a the payoff matrix throughout we assume that no two rows of a are a key assumption in our work is that the platform knows the matrix a in particular we are considering a platform that has enough aggregate information to understand compatibility between different worker and job types however for any given worker newly arriving to the platform the platform does not know the worker s type thus from the perspective of the platform there will be uncertainty in payoffs in each period because although the platform knows that a given mass of workers of type i exist in the platform the identity of the workers of type i is not known we define an empty job type such that all worker types matched to generate zero reward a i for all i we view as representing the possibility that a worker goes unmatched and thus assume that an unbounded capacity of job type is available worker lifetimes we imagine that each arriving worker lives in the system for n time and has the opportunity to be matched to a job in each time step so each job takes one unit of time to complete we assume the platform knows n note that we have i i n as the total mass of workers of type i in the system at each time step for our theoretical analysis we later consider a scaling regime where n and i while i remains fixed in this regime worker lifetimes grow to infinity and arrival rates scale down but the total mass of workers of each type available in each time period remains fixed generalized imbalance throughout our technical development we make a mild structural assumption on the problem instance defined by the tuple a this is captured by the following definition we say that arrival rates i and j satisfy the generalized imbalance condition if there is no pair of nonempty subsets of worker types and job formally this can be seen as a continuum scaling of a discrete system see this would be the case in a platform where the operator takes a fixed percentage of the total payoff generated from a match this mild requirement simply ensures that it is possible in principle to distinguish between each pair of worker types our analysis and results generalize to random worker lifetimes that are across workers of different types with mean n and any distribution such that the lifetime exceeds n with high probability types i j such that the total worker arrival rate of i exactly matches the total job arrival rate of j formally x x i j i j j i the generalized imbalance condition holds note that this condition does not depend on the matrix a worker history to define the state of the system and the resulting matching dynamics we need the notion of a worker history a worker history is a tuple hk jk xk where jm is the job type this worker was matched to at her time step in the system for m k and xm is the corresponding reward obtained note that since workers live for n jobs the histories will have k n we let denote the empty history for k system dynamics our goal is to model 
the following process the operator observes at any point in time the distribution of histories of workers in the platform and also knows the job arrival rate the matching policy of the platform amounts to determining what mass of workers of each type of history will be matched to which type of jobs ultimately for this process to generate high payoffs over time the platform must choose jobs to learn worker types in order to optimize payoffs with this intuition in mind we now give a formal specification of our system dynamics system profile a system profile is a joint measure over worker histories and worker types hk i is the mass of workers in the system with history hk and type i the evolution of the system is a dynamical system where each is a system matching policy to describe the dynamics we assume that the platform uses a matching policy to match the entire mass of workers to jobs in each time step we think of unmatched workers as being matched to the empty job type we assume that any mass of jobs left unmatched in a given period disappears at the end of that period our results do not depend on this assumption suppose that the system starts at time t with no workers in the system before this a matching policy for the system specifies at each time t given a system profile the mass of workers with each history that is matched to jobs of each type in particular let hk denote the fraction of workers with history hk matched to jobs of type j at time p t given a system profile thus j hk for all t hk and note that the matching policy acts on each worker s history not on the true type of each worker this is because the platform is assumed to not know worker types except as learned through the history itself dynamics these features completely determine the evolution of the system profile observe that hk i hk is the total mass of workers of type i with history hk who are the set for which the condition holds is open and dense in where are the strictly positive real numbers the platform can not directly observepthe system profile but can infer it the platform observes the mass of workers with each possible history h i it can then infer hk i s individually by using k hk knowledge of arrival rates i s and the a matrix which allows it to calculate the likelihood of seeing the sequence of outcomes in hk under the worker type i together with bayes rule in what follows we ultimately consider a analysis of the dynamical system and initial conditions will be irrelevant as long as the initial mass of workers is bounded matched to jobs of type j at time t given policy and system profile for all i j and t we have i i hk j i hk i hk a i j k n hk j i hk i hk a i j k n decentralization through who policies note that in general policies may be and may have complex dependence on the system profile we consider a much simpler class of policies that we call who policies these are policies where there exists a such that hk hk j in other words in a who policy the fraction of workers with history hk who are matched to jobs of type j does not depend on either time or on the full system profile thus who policies are decentralized an obvious concern at this point is that a policy can not allocate more jobs of type j than there are we formalize this capacity constraint in below in particular a who policy does not exceed the capacity of any job type in any period if and only if it satisfies let denote the class of who policies for a given n in section in appendix d we establish that it suffices to restrict attention to 
policies in that satisfy remark for any feasible policy there exists a who policy satisfying capacity constraints that achieves a payoff accumulation rate arbitrarily close to that of the former policy in particular who policies satisfying capacity constraints suffice to achieve the highest possible payoff accumulation rate steady state of a who policy first suppose that there are no capacity constraints and consider the system dynamics assuming the system initially starts empty the dynamics yields a unique steady state that can be inductively computed for k i i hk j i hk i hk j a i j k n hk j i hk i hk j a i j k n we refer to the measure as the steady state induced by the policy routing matrix of a who policy if the system is in steady state then at any time period induces a fraction p i j of the mass p of workers of type i that are assigned h i h j h i h j to type j jobs we have i j h i h i we call i j h the routing matrix achieved by the policy this is a row stochastic matrix each row sums to observe that the mass of demand for jobs of type j from workers of type i in any time p period is i i j and the total mass of demand for jobs of type j in any time period is i i j let x n be the set of routing matrices achievable when each worker does n jobs by who policies again we note that capacity constraints are ignored in the definition of x n in appendix d we show that x n is a convex polytope see proposition the optimization problem our paper focuses on maximization of the rate of payoff accumulation subject to the capacity constraints this leads to the following optimization problem x x maximize w n i i j a i j subject to x i i j j j x n the objective is the rate of payoff accumulation per time period expressed in terms of the routing matrix induced by a who policy the constraint is the capacity constraint the system will be stable if and only if the total demand for jobs of type j is not greater than the arrival rate of jobs of type j since x n is a convex polytope this is a linear program albeit a complex one the complexity of this problem is hidden in the complexity of the set x n which includes all possible routing matrices that can be obtained using who policies the remainder of our paper is devoted to solving this problem and characterizing its value by considering an asymptotic regime where n the benchmark known worker types we evaluate our performance relative to a natural benchmark the maximal rate of payoff accumulation possible if worker types are perfectly known upon arrival in this case any stochastic matrix is feasible as a routing matrix let d denote the set of all stochastic matrices x x i j x i j note that any routing matrix in d is implementable by a simple policy under known worker types given a desired routing matrix x route a fraction x i j of workers of type i to jobs of type j thus with known worker types the maximal rate of payoff accumulation is given by the solution to the following optimization problem x x maximize i x i j a i j subject to x i x i j j j x we let v denote the maximal value of the preceding optimization problem and let denote the solution this linear program is a special case of the static planning problem that arises frequently in the operations literature see the problem can also be viewed as a version of the assignment problem due to shapley and shubik in which the resources are divisible regret we evaluate the performance of a given policy in terms of its regret relative to v in particular given n and a who policy satisfying we define the 
regret of as v w n we focus on the asymptotic regime where n and try to find policies that have small regret in this regime this asymptotic regime allows us to identify structural aspects of policies that perform well in appendix d see proposition we show that it is relatively easy to design policies that achieve a vanishing regret and even regret that is within a constant factor of the smallest possible the idea is straightforward informally when n is large policies that explore for a vanishing fraction of worker lifetimes will be able to learn the worker s true type sufficiently well to yield a rate of payoff accumulation such that regret converges to zero in the limit for this reason our analysis focuses on a more refined notion of asymptotic optimality in particular we focus on developing policies that achieve a nearly optimal rate at which the regret v w n approaches zero this is formalized in theorem below a note on terminology note that intuitively who policies have the feature that decisions are taken on the basis of the history of a given worker not on the basis of the system profile as a whole in the sequel we will typically refer to hk j as the probability that a worker of history hk is matched to a job of type we use this terminology to make the presentation more intuitive since the intention is that our algorithms be implemented at the level of each individual worker s history however to formalize all our arguments we emphasize that our proofs translate hk j as the fraction of workers of history hk matched to a job type j this correspondence applies throughout the technical development decentralized for matching deem a policy in this section we present the design of a sequence of policies that achieves a nearly optimal rate of convergence of we refer to our policy design as deem gret v w n decentralized for matching our main result stated in the next section is theorem there we exactly quantify the regret performance of deem an upper bound on its regret and characterize it as nearly optimal a lower bound on the regret of any feasible who policy to begin to understand the challenges involved consider the example in figure in this example there are two types of workers novice and expert with a mass of of each present in steady state there are two types of jobs easy and hard each arriving at rate jobs workers easy hard expert novice figure an example we make several observations regarding this example that inform our subsequent work the benchmark in this example the optimal solution to the benchmark problem with known types routes all novices to easy jobs a mass of experts to easy jobs and a mass of experts to hard jobs of course our problem is that we do not know worker types on arrival capacity constraints affect an optimal who policy s need to learn if easy and hard jobs are in infinite supply then the who policy that matches all workers to easy jobs is optimal however with the finite supply of available easy jobs some workers must do hard jobs but which workers clearly for payoff optimality an optimal policy should aim to match experts to hard jobs but this is only possible if it first learns that a worker is an expert because of the structure of a the type of a worker can only be learnt by matching it to hard jobs those who perform well on these jobs are experts and those who fail are novices minimizing regret requires learning up front assigning workers of unknown type to hard jobs necessarily incurs regret relative to the benchmark indeed novices unknowingly matched to 
hard jobs lead to a regret of per unit mass of such workers in each period minimizing this regret therefore requires that the algorithm not only learn worker types but also do so relatively early in their lifetime so that workers identified as experts can be assigned many hard jobs in our work this leads to a structure where we separate our policy into exploration and exploitation phases the policy first tries to learn a worker s type and then exploits by assigning this worker to jobs while assuming that the learned type is correct the exploration phase will be of length o log n which is short relative to the worker s lifetime some mistakes in the exploration phase are worse than others there are two kinds of mistakes that the policy can make while learning it can mistakenly identify novices as experts and it can mistakenly identify experts and novices these mistakes differ in their impact on regret suppose that at the end of the exploration phase the algorithm misclassifies a novice as an expert this has a dire impact on regret the novice is then assigned to hard jobs in the exploitation phase and as noted above this incurs a regret of per unit mass of workers misclassified this way per unit time thus we must work hard in the exploration phase to avoid such errors on the other hand suppose that at the end of the exploration phase the algorithm misclassifies an expert as a novice this mistake is far less consequential workers misclassified in this way will be assigned to easy jobs but a mass of experts must be assigned to easy jobs even in the benchmark solution with known types therefore as long as this misclassified mass is not too large we can adjust for it in the exploitation phase this discussion highlights the need to precisely identify the learning goals of the algorithm to minimize regret how strongly does each worker type need to be distinguished from others a major contribution of our work is to demonstrate an optimal construction of learning goals for regret minimization as noted above the capacity constraints fundamentally influence the learning goals of the algorithm in the remainder of the section we describe key ideas behind the construction of our policy highlighted by the issues raised in the preceding example we formally describe deem in section we state our main theorem in section key idea use shadow prices as an externality adjustment to payoffs we begin by first noticing an immediate difficulty that arises in using who policies in the presence of capacity constraints who policies are decentralized they act only on the history of the worker as such they can not use aggregate state information about the system that conveys whether capacity constraints are being met or not in order to solve therefore we need to find a way to adjust for capacity constraints despite the fact that our policy acts only at the level of worker histories our key insight is to use shadow prices for the capacity constraints to adjust payoffs we then measure regret with respect to these adjusted payoffs recall that is a linear program let pn be the optimal shadow prices dual variables for the capacity constraints then by standard duality results it follows that the policy that is optimal for is also optimal for the following unconstrained optimization problem x x n i i j a i j pn j thus one may attempt to account for capacity constraints using shadow pn j the challenge here is that the set x n is quite complex and thus characterizing the optimal shadow prices of is not a reasonable path forward 
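To make the externality adjustment concrete, the sketch below (illustrative numbers echoing the expert/novice, easy/hard example; none of them are taken from the paper) solves a known-types benchmark linear program and reads off the dual prices of the capacity constraints with SciPy's HiGHS backend. As the text explains next, it is these benchmark prices, rather than the duals of the harder problem over achievable routing matrices, that are used to adjust payoffs.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance: 2 worker types (expert, novice) x 2 job types (easy, hard).
A = np.array([[0.9, 0.8],     # expert success probabilities on (easy, hard)
              [0.9, 0.1]])    # novice
rho = np.array([0.5, 0.5])    # mass of each worker type present per period
mu = np.array([0.7, 10.0])    # per-period capacity of each job type

I, J = A.shape
# x[i, j] = fraction of type-i workers routed to job type j, flattened row-major.
# Maximize sum_ij rho_i * x_ij * A_ij  <=>  minimize the negative.
c = -(rho[:, None] * A).ravel()

# Capacity constraints: sum_i rho_i * x[i, j] <= mu_j.
A_ub = np.zeros((J, I * J))
for j in range(J):
    A_ub[j, j::J] = rho
# Row-stochasticity: sum_j x[i, j] = 1 for every worker type i.
A_eq = np.zeros((I, I * J))
for i in range(I):
    A_eq[i, i * J:(i + 1) * J] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=mu, A_eq=A_eq, b_eq=np.ones(I),
              bounds=(0, 1), method="highs")
x_star = res.x.reshape(I, J)
# With the HiGHS backend the duals of the inequality constraints are exposed as
# marginals; negate because we minimized the negative payoff.
shadow_prices = -res.ineqlin.marginals
print("routing matrix x*:\n", x_star)
print("shadow prices p*:", shadow_prices)          # here roughly [0.1, 0.0]
print("adjusted payoffs A - p*:\n", A - shadow_prices[None, :])
```

For these numbers the scarce easy job carries a positive price while the abundant hard job's price is zero, so after adjustment the expert row is indifferent between easy and hard jobs while the novice strictly prefers easy jobs — the structure the example above relies on.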
instead we use the optimal shadow prices in the benchmark linear program with known types to adjust payoffs we then measure regret with respect to these adjusted payoffs the practical heuristic we implement uses a different approach to estimate shadow prices see section we let denote the vector of optimal shadow prices for the capacity constraint in the problem with known types using the generalized imbalance condition we show that these prices are uniquely determined see proposition in appendix although j pn j for large n the platform should be able to learn the type of a worker type early in her lifetime leading to small j pn j this motivates an analog of x x n i i j a i j j we develop a algorithm for problem such that constraints on job capacities are not violated and complementary slackness conditions are satisfied if j then the job type j is fully utilized we then show this leads to the upper bound in the main result key idea meet required learning goals while minimizing regret as noted in our discussion of the example in figure we must carefully define the learning goals of the algorithm which worker types need to be distinguished from which others and with what level of confidence a key contribution of our work is to formalize the learning goals of our algorithm in this section we define the learning goals of the algorithm and outline the exploration phase that meets these goals let the set of optimal job types for worker type i be defined by j i arg a i j a standard duality argument demonstrates that in any optimal solution of the benchmark a worker type i is assigned only to jobs in j i j further effort is needed to ensure the policy does not violate capacity constraints and that complementary slackness holds recall that in the example in figure it is far more important not to misclassify a novice as an expert than to misclassify an expert as a novice we formalize this distinction through the following definition definition we say that a type i needs to be strongly distinguished from a type if j i j for each worker type i let str i be the set of all types from which i needs to be strongly distinguished str i j i j in words this means that i needs to be strongly distinguished from if it has at least one optimal job type that is not optimal for whereas it needs to be only weakly distinguished from if all optimal job types for i are also optimal for this definition is most easily understood through the example in figure and our subsequent discussion in particular note that for that example the benchmark shadow prices are easy and hard and thus j novice easy while j expert easy hard thus experts need to be strongly distinguished from novices since hard jobs are optimal for experts but not for novices on the other hand novices need to be only weakly distinguished from experts since easy jobs are optimal for experts as well in the exploration phase of our algorithm our goal is to classify a worker s type as quickly as possible the preceding definition is what we use to formalize the learning goals in this phase in particular consider making an error where the true type is but we misclassify it as i if is not in str i any probability of an error of o for such misclassification error is tolerable as n grows large as in the example in figure we choose log n as the target error probability for this kind of error on the other hand for any str i the optimal target error probability is much smaller in particular the optimal target error probability can be shown to be approximately if we choose 
a larger target we will incur a relatively large expected regret during exploitation due to misclassification if we choose a smaller target the exploration phase is unnecessarily long and we thus incur a relatively large regret in the exploration phase with the learning goals defined the exploration phase of deem operates in one of two subphases either guessing or confirmation as follows at every job allocation opportunity we check whether the posterior probability of the maximum a posteriori map estimate of the worker type is sufficiently high if this probability is low we say the policy is in the guessing subphase of the exploration phase and a job type is chosen at random for the next match on the other hand if it is high in particular greater than log n times the posterior probability of any other worker type then we say that the policy is in the confirmation subphase of the exploration phase in this regime the policy works to confirm the map estimate specifically in the confirmation subphase the policy focuses only on strongly distinguishing the map from all other types in str i the is that this must be done with minimum regret we frame this as an optimization problem see below essentially the goal is to find a distribution over job types that minimizes the expected regret until the confirmation goals are met in the confirmation subphase the policy allocates the worker to jobs according to this distribution until the type is confirmed we conclude by briefly explaining the role of the guessing phase in minimizing regret informally guessing is necessary so that confirmation minimizes regret for the correct worker type with high probability in particular suppose that there are two worker types i and that have the same optimal job types j i j and with a i j a j for all j j i in this case payoff maximization does not require distinguishing i from nevertheless it is possible that the confirmation policies for i and differ without necessarily distinguishing i from in this case i first needs to be distinguished from with probability of error o to achieve optimal regret to leading order concretely if there is no guessing phase and the map is i early in the worker s lifetime the policy will never discover its mistake and ultimately confirm using the wrong policy incurring an additional leading order regret of log key idea optimally allocate in the exploitation phase while meeting capacity constraints when the algorithm completes the exploration phase it enters the exploitation phase in this phase the algorithm aims to match a worker to jobs that maximize the rate of payoff generation given the confirmed type label a naive approach would match a worker labeled type i to any job type in j i since these are the optimal job types for worker type i after externality adjustment this approach turns out to fail spectacularly and generically leads to regret this occurs for any set of fixed shadow prices to see why we need the following fact fact under generalized imbalance as long as there is at least one capacity constraint that is binding in some optimal solution to the benchmark problem with known types there is at least one worker i such that i is supported on multiple job types this fact implies that appropriate between multiple optimal job types is necessary during exploitation for one or more worker types in order to achieve vanishing regret in order to implement appropriate suppose that we assign jobs during the exploitation phase using the routing matrix that solves the benchmark problem in 
this case each worker with confirmed type i is matched to job type j with probability i j however this naive approach needs further modification to overcome two issues first some capacity is being used in the exploration phase and the effective routing matrix during the exploration phase does not match second the exploration phase can end with an incorrectly classified worker type our policy in the exploitation phase chooses a routing matrix y that resembles but addresses the two concerns raised in the preceding paragraph crucially the chosen y should ensure that only job types in j i are assigned with positive probability and satisfy the complementary slackness conditions we show in proposition using fact that such a y indeed exists for an n large enough under the generalized imbalance condition and we show how to compute it note that as y is a fixed routing matrix it can be implemented in a decentralized manner we comment here that y is largely a theoretical device used to obtain the provable regret optimality of our policy in our implementation of deem see section we propose a far simpler solution we use dynamically updated shadow prices to automatically achieve appropriate the shadow prices respond in a manner based on the currently available supply of different job types the price of job type j rises when the available supply falls in particular fluctuations in these shadow prices naturally lead to the necessary tiebreaking for efficient exploitation formal definition of deem based on the discussion above in this section we provide a formal definition of the policy first for each i define the maximal utility u i a i j j then choose i such that u i a i j j p i kl i p i a i arg min j where kl i is the between the distributions bernoulli a i j and bernoulli a j and j is the set of distributions over j the idea is that sampling job types from i allows the policy to distinguish i simultaneously from all str i while incurring the smallest possible regret in appendix b we show that can be written as a small linear program if the optimization problem in has multiple solutions we pick the one that has the largest denominator and hence the largest numerator as well thus maximizing learning rate subject to optimality we choose x i arg max min kl i i i i we discuss details in the appendix b for m n let the job type chosen at opportunity m be jm and the outcome be xm for any i i and j j let l x i j a i j a i j define q i and for k let i l xm i jm denote the likelihood of the observed history until the job under worker type i let mapk arg i i be the map estimate based on the history and define i i i i the ratio of the posterior probabilities of type i and for convenience we refer to i as the prior odds of i relative to and i as the posterior odds of i relative to after k jobs deem is defined as follows phase exploration suppose that i mapk a guessing subphase if i log n choose the next job type uniformly at random in j b confirmation subphase to strongly distinguish i from types in str i if we have i log n but i i n draw the next job type from the distribution i c exit condition for the exploration phase if i log n and i i n then the worker is labeled as being of type i and the policy moves to the exploitation phase the worker is never returned to the exploration phase phase exploitation for every job opportunity for a worker confirmed to be of type i choose a job in j i with probability y i j where y is a routing matrix specified in proposition in appendix a such that system capacity constraints 
are not violated in steady state main result our main result is the following theorem in particular we prove a lower bound on the regret of constructed in the preceding section essentially any policy and show the sequence of policies achieves this lower bound the kl divergence between a bernoulli q and a bernoulli q distribution is defined as q log q log q theorem fix a such that a no two rows of a are identical and b the generalized imbalance condition holds then there is a constant c c a such that lower bound for any n and any who policy that is feasible for v w n c log n o and n is feasible for for each n with upper bound the sequence of policies v w n log log n c log n o o n n the constant c that appears in the theorem depends on the primitives of the problem a it is defined as follows p x u i a i j p j p c i min c i c i i kl i j note that c i captures the regret per unit mass of service opportunities from workers of type i informally instances in which there is a conflict between exploration learning worker type and exploitation maximizing payoffs have larger values of the case c corresponds to instances where the goals of learning and regret minimization are aligned learning does not require regret of log in this case our result establishes that our chosen policies are nearly asymptotically optimal to within o log log on the other hand the instances with c are those instances with a tension between learning and payoffs for these instances our result establishes that our chosen policies n achieve asymptotically optimal regret upto leading order in n the constant c is best understood in terms of the definition of in the exploration phase cf note jobs that for a fixed for workers of true type i the smallest easy workers hard value of the log posterior odds i p log i i at an expected rate of i kl i i expert during confirmation thus when n is large the time taken to confirm i against worker types in str i is app proximately log i kl i hence novice the regret incurred until confirmation is complete per p unit mass of workers of type i is approximately u i a i j p figure an example where p j log i kl i optimizing log regret is unavoidable over results in an expected regret of nearly c i log n that must be incurred until the strong distinguishing goals are met for a unit mass of workers of type i this translates to an expected regret of nearly i c i log n i c i log owing to workers of type i per time unit this reasoning forms the basis of our lower bound formalized in proposition in appendix a now a regret of log is unavoidable when c i for some i to develop some intuition for this case consider the same example as before but with a modified payoff matrix shown in figure it can be shown that in this case a regret of log is unavoidable in the event that the true type of the worker is novice the problem is the following to distinguish novices from experts the policy must allocate workers to hard jobs but hard jobs are strictly suboptimal for novices and so if the true type of the worker is novice some regret is unavoidable in particular to develop intuition for the magnitude of this regret imagine a policy that assigns workers hard jobs for the first k steps leading to k absolute regret per unit mass of workers and based on the realized payoffs estimates the worker type with confidence exp k if the worker is estimated to be a novice the policy can choose to assign only easy jobs to the worker however this means that there will be no further learning about the worker type and the expected 
contribution to absolute regret is about n times the probability that the worker is truly an expert n exp k per unit mass of workers combining we see the total absolute regret is at least k n exp k log n over the lifetime of a unit mass of workers here k log n is needed to achieve log n absolute regret we then divide by n to obtain the regret per unit mass of service opportunities by workers this discussion motivates the following definition definition consider a worker type i suppose that there exists another type such that a i j a j for all j j i and j i j then we say that the ordered pair i is a difficult type pair a similar definition also appears in the modification here is that the sets j i are defined with respect to payoffs to account for capacity constraints the constant c i if and only if there is some other such that i is a difficult type pair in general if none of the job types in j i allow us to distinguish between i and and all the jobs in j i are strictly suboptimal for then any policy that achieves small regret must distinguish between i and and must assign the worker to jobs outside j i to make this distinction this leads to a regret of log n per unit mass of workers of type i over the lifetime of the workers on the other hand if there is no difficult type pair then there is no conflict between learning and regret minimization here one can show that c i for each i and this value is attained by some distribution i that is supported on j i to see this note that if is fully supported on j i for all j j i then the numerator is however if there is no type such that i is a difficult type pair then the denominator is strictly positive and thus c i in this case c and our main result says that our algorithm achieves a regret of o log log this regret basically results from the uniform sampling of the job types during the guessing phase which accounts for o log log fraction of the lifetime of the worker proof sketch the proof of theorem can be found in appendix a here we present a sketch the critical ingredient in the proof is the following relaxed optimization problem in which there are no capacity constraints but capacity violations are charged with prices from the optimization problem with known worker types hx i x x x max i x i j a i j j i x i j j n in fact our proof demonstrates that this regret can be brought down to any o fn such that fn o by choosing a different threshold in the guessing phase lower bound on regret if c if there is at least one difficult pair of worker types cf section there is an upper bound on the performance of any policy in this problem expressed relative to v this result follows directly from v c log n o n where c is precisely the constant appearing in by a standard duality argument we know that w n and hence this bound holds for w n as well see proposition yielding the lower bound on regret on our original problem is feasible for problem upper bound on regret there are two key steps in proving that v c log o and w n with an arbitrary routing matrix first we show that our policy supported on j i for each i i achieves near optimal performance for the single bandit problem formally if with some abuse of notation we let denote the value attained by a policy in problem x i x i j a i j x hx i i i j j j then v c log o o log log w n o log is this is shown proposition thus we have n in problem in the next part of the proof we show that we can design a routing matrix y that such that the following conditions depends on n for the exploitation phase of the policy 
are satisfied: (a) complementary slackness, Σ_i ρ_i y(i, j) = μ_j for all j such that p*_j > 0; and (b) feasibility, Σ_i ρ_i y(i, j) ≤ μ_j for all other j ∈ J (here ρ_i is the total mass of type-i workers present per period and μ_j the per-period arrival rate of type-j jobs). This is shown in the corresponding proposition. We deduce that, with this choice of y in the exploitation phase, the policy is feasible for the original problem, and the complementary slackness property relates its payoff rate W(N) to the value of the relaxed problem, yielding our upper bound on regret. Construction of y: the correct label of the worker at the end of the exploration phase is learned correctly except with probability that vanishes as N grows. This fact, coupled with the generalized imbalance condition (which provides flexibility in modifying the benchmark routing matrix x*, cf. the fact stated earlier), is sufficient to ensure an appropriate and feasible choice of y: small corrections to x* offset the deviations, in terms of capacity utilizations of job types with p*_j > 0, that arise because of the short exploration phase and because of the infrequent cases in which exploitation is based on an incorrect worker label coming out of the exploration phase; the appendix proves this result for a similar policy.

Practical considerations and a heuristic. Our theoretical analysis of DEEM focused on an asymptotic regime where N → ∞. In this section we focus on a number of practical considerations that arise when considering implementation of a policy like DEEM. First, we discuss a practical approach to managing capacity constraints via dynamic shadow prices. Second, we discuss two modifications to the algorithm that improve performance when N is finite, and suggest a modified heuristic based on them. In the next section we simulate both DEEM and this heuristic and evaluate their performance.

Dynamic shadow prices. A key step in making DEEM practical is to use dynamic shadow prices based on imbalances in the market. In our mathematical model we had assumed that the masses of new workers and jobs arrive instantaneously at the beginning of each period, after which they are instantaneously matched; further, each job either gets matched immediately on arrival or disappears at the end of the period. However, in real platforms the arrivals, departures, and matchings of workers and jobs occur sequentially in continuous time. In these settings it is common for platforms to maintain a queue of jobs of each type, which grows when new jobs arrive in continuous time and shrinks when existing jobs are matched. In this scenario, the queue length at any time can be leveraged to compute an instantaneous shadow price for the job type, which can be utilized for externality adjustment of the payoffs. A reasonable approach is to set the shadow price on each job type via a decreasing function of the corresponding queue length. One natural way to do this is as follows. Assume that in practice the arriving jobs accumulate in queues for the different types, each with a finite capacity B; if the capacity is exceeded, jobs are lost. If the queue length of job type j at any instant is q_j, we set the price of j at that instant to p(q_j) = (B − q_j)/B, so the price lies in [0, 1]. Note that p(q_j) changes every time a job is assigned to a worker or a new job arrives, because the queue length changes. We implement an analog of this approach in our simulated marketplace in the next section. Computing the prices in this fashion obviates the need to explicitly compute y in the exploitation phase of our policy; instead, the exploitation phase can be implemented by allocating optimally for each worker given the current prices — still a fully decentralized solution. The natural fluctuation in these prices ensures appropriate tie-breaking in allocation (cf. the fact stated earlier).
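A minimal sketch of this queue-length-based pricing and of the price-based exploitation rule follows; the payoff matrix, queue lengths, and capacity B are illustrative, not values from the paper.

```python
import numpy as np

def shadow_prices(queue_lengths, B):
    """Queue-length-based prices: an empty queue (scarce job type) gets price 1,
    a full queue (abundant job type) gets price 0."""
    q = np.minimum(np.asarray(queue_lengths, dtype=float), B)
    return (B - q) / B

def exploitation_choice(i, A, queue_lengths, B):
    """Pick a job type for a worker already labeled as type i, using the
    externality-adjusted payoffs a(i, j) - p(q_j)."""
    p = shadow_prices(queue_lengths, B)
    return int(np.argmax(A[i] - p))

# Toy usage: the 'easy' queue is nearly empty, so its price is high; the labeled
# expert (row 0) is pushed to 'hard' jobs while the novice still takes 'easy' ones.
A = np.array([[0.9, 0.8],
              [0.9, 0.1]])
print(exploitation_choice(0, A, queue_lengths=[1, 40], B=50))   # -> 1 (hard)
print(exploitation_choice(1, A, queue_lengths=[1, 40], B=50))   # -> 0 (easy)
```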
These prices can be incorporated in the implementation of DEEM in the following way.

Modifying the exploration and exploitation phases. While computing the confirmation sampling distribution for each i ∈ I in the confirmation phase of DEEM, replace p*_j by the instantaneous shadow prices p(q_j) in the corresponding equation. Similarly, in the exploitation phase, instead of explicitly computing the routing matrix y, use these prices to decide assignments in the following manner. Define the sets J(i) as J(i) = argmax_{j ∈ J} [a(i, j) − p(q_j)]. If an assignment has to be made in the exploitation phase for some worker who has already been labeled as being of type i, then a job type j ∈ J(i) is chosen; note that typically J(i) will be a singleton. For the learning goals, the platform can also determine the strong-distinction requirements (see the earlier definition), i.e., the set str(i) for each i, based on the sets J(i) induced by the instantaneous prices defined above. But this approach suffers from the drawback that random fluctuations of the shadow prices around their mean values can result in changes in the sets J(i), and hence in the learning goals, which could be detrimental to the performance of our policy; on the other hand, these fluctuations are essential for appropriate tie-breaking across multiple optimal job types in the exploitation phase. Thus we propose the following modification: we utilize an average of recent prices within a fixed recent window of time, and we modify the definition of J(i) to incorporate a small tolerance, so that the set str(i) remains unaffected by the fluctuations in the prices. To be precise, for a window size w, let p̄_j be the unweighted average of the queue-length-based prices seen over the past w epochs of changes in the price (again, note that p(q_j) changes every time a job is assigned to a worker and also when new jobs arrive). Next, for a tolerance ε, we define J̄(i) = { j ∈ J : a(i, j) − p̄_j ≥ max_{j′} [a(i, j′) − p̄_{j′}] − ε }. Then the set str(i) for each i (see the earlier definition) is defined based on J̄(i).

Improving performance in the finite-N regime. We propose two changes that improve performance in the finite-N regime. First, recall that if a worker type i has an optimal job type j that is not optimal for some other worker type i′, then DEEM tries to achieve a very small probability of misclassifying a worker of type i′ as type i. For small N, however, we can do better: the desired probability of error should explicitly depend on how much regret type i′ incurs by performing job j; if this regret is very small, then it isn't worth trying to make this distinction with high precision. In particular, for each i′ ∈ str(i), define R(i′, i) = max_{j ∈ J(i)} [u(i′) − (a(i′, j) − p*_j)]; R(i′, i) is the highest regret incurred if type i′ is matched to a suboptimal job that is optimal for type i. A reasonable approach is to aim for a probability of misclassifying i′ as i that is 1/R(i′, i) times the original target, thus accounting for the fact that if R(i′, i) is small then we can tolerate a higher probability of error. The second change we propose is to explicitly incorporate the posterior into the exploration phase. Recall that in DEEM we guess and then confirm in the exploration phase, and guessing is not optimized but rather involves exploration uniformly at random. When N is finite, we can gain by instead leveraging the posterior at each round to appropriately allocate confirmation effort across the different types, until the learning goals are met for some type i. In principle, this approach can subsume the guessing and confirmation phases into a uniformly defined exploration phase; the challenge is to precisely describe how the posterior is used to guide the exploration phase. In practice, we can continue to benefit from learning during the exploitation phase: instead of optimizing the payoff for the confirmed worker label, we can optimize for the current MAP estimate, thus accounting for the possibility that we may have confirmed
incorrectly clearly doing so can only improve performance a practical heuristic for finite n in this subsection we incorporate the two suggestions of the preceding subsection into a formal heuristic we refer to as it will be convenient to define r i for all str i and define i as follows if str i then i i i if str i then i i i if i i log n and i n otherwise next after k matches for each type i define lk i i i n lk i is the set of types that i remains to be distinguished from after k opportunities in case the true worker type is i effort should ideally be directed towards these distinctions in order to speed up confirmation next define the posterior probability of the worker being of type i after opportunity k as i i gk i p i i then is defined as follows phase exploration is in this phase as long as there is no i such that i n after k allocations choose a job from a distribution that satisfies p p j g i u i a i j p j k i p a arg min j i i kl i i log r i i log n can be computed as a solution to a linear program as shown in appendix b exit from exploration if at some opportunity k there is a worker type i such that i i n then label the worker as being of type i and enter the exploitation phase phase exploitation exploitation is the same as defined in deem shares the same structure as deem but with changes to the exploration phase to optimize learning at finite n as we will see in our simulations these optimizations allow to substantially outperform deem at small n although an exact analysis is beyond the scope of this work we conjecture that inherits the asymptotic performance bounds that hold for deem informally consider the periods in where i log n for the map estimate i for some type i these periods are analogous to the guessing phase in deem similar to deem one can argue that this phase accounts for at most o log log fraction of the worker lifetime the stages where i log n for all i but i n for some are similarly analogous to the confirmation subphase of deem here we can informally argue that on the event that i is the true type the posterior distribution quickly concentrates sufficiently on i and the policy defined by the adjustment in the denominator precisely accounts for differences in regret incurred in making each distinction asymptotically achieves the same regret until confirmation in the leading order term as the stationary randomized policy i defined in deem to see this observe that as gk i for some i the objective function in converges to the objective function for computing i in modulo the log r i log n factors that simply capture the fact that the learning goals have been adjusted for small n just like deem can be analogously implemented using shadow prices instead of j with the sets str i computed using smoothed prices simulations in this section we simulate deem and in a market environment with shadow prices we compare performance of these policies against a greedy policy as well as benchmark mab approaches we consider instances with types of workers and types of jobs we assume that n and i for each i so that i we generated instances where for each instance independently j is sampled from a uniform distribution on and each entry of the expected payoff matrix is sampled from a uniform distribution on given an instance n a our simulated marketplace is described as follows arrival process time is discrete t t where we assume t at the beginning of each time period t mt i number of workers of type i and lt j jobs of type j arrive such that i i and i i are sequences for a scaling 
constant we assumed that mt i i for all t recall that i for all i in all our instances mt i is deterministic we generated lt j from a binomial distribution with mean j j each worker stays in the system for n periods and then leaves each job requires one period to perform queues we assume that the arriving jobs accumulate in queues for the different types each with a finite buffer of capacity b where we choose b if the buffer capacity is exceeded for some job type then the remaining jobs are lost matching process in the beginning of each period once all the new workers and jobs have arrived the platform sequentially considers each worker in the and generates an assignment based on the history of the worker and the chosen policy if a job of the required type is unavailable then the worker remains unmatched for each match a random payoff is realized drawn from the distribution specified by a and the tuple is added to the history of the worker prices the platform maintains prices for the jobs in the following way if the queue length of job type j at any instant is q j the price of j at that instant is set to be pq j b q j the prices thus change when either new jobs arrive at the beginning of each period or a job gets matched to a worker a remark on the choice of instances in all our test instances all the entries of the expected payoff matrix a are distinct we conjecture that this would typically be the case in many settings in practice exact indistinguishability of different worker types using a particular a binomial distribution has two parameters n p where n is the number of trials and p is the probability of success at each trial we chose n and p j for generating each lt j note that since j we have p this consists of the new workers and all the workers who have arrived in the past n periods or from the beginning of time if t n policy deem deem performance ratio avg perf ratio std error table average performance ratios of different policies across instances along with standard errors figure the empirical cdf of the performance ratios of the different policies job type wouldn t be commonly encountered for such instances as we discussed in section there is no conflict between learning and regret minimization in the confirmation subphase of deem and it incurs a regret of o log log where the leading order term results entirely from the regret incurred due to uniform sampling of the job types while guessing in fact in this case we can show that a greedy policy that maximizes the payoff for the map estimate throughout the exploration phase and enters exploitation after strong distinction requirements are met for some worker type incurs a regret of o it would thus appear that this greedy policy which is attractive for its simplicity would be a reasonable solution in such cases however our simulations show that when n is small can lead to significant gains over the greedy approach we discuss this result further below results we implemented five policies deem and versions of ucb thompson sampling ts and greedy and compared their performance all algorithms measure payoffs adjusted by the shadow prices in this way they all effectively account for capacity constraints we have already described this implementation for the deem variants earlier greedy simply chooses the job type that maximizes the instantaneous shadow price adjusted payoff for the map estimate throughout the lifetime of the ucb and ts are well known algorithms for the standard stochastic bandit problem and the details of their 
implementation in the presence of shadow prices which we will denote and can be found in appendix figure shows the cumulative distribution function over the instances of the ratio of the payoff generation rate attained by a policy and the optimal payoff generation rate if the worker types are known for the five candidate policies the average of these ratios over the sample space for each policy is given in table as one can observe significantly outperforms deem and deem and perform considerably better than ucb on average presumably benefiting from the knowledge of the informally since every job can make every possible distinction between worker types the probability that the true worker type is not identified as the map estimate at opportunity t decays as exp where is an instance dependent constant thus the total expected regret over the lifetime of a worker is bounded as o in expectation with shadow prices during exploitation is rendered unnecessary moreover we allow the algorithm to continue to benefit from learning during exploitation by optimizing for the current map estimate rather than for the confirmed type thus the distinction between exploration and exploitation disappears pected payoff matrix a in contrast to both deem and actively experiment in order to learn quickly deem experiments during its guessing phase where it uniformly samples job types and experiments due to sampling from the posterior see appendix c for details especially in the early stages when the posterior is not sufficiently concentrated while this experimentation is desirable neither of them efficiently trade off between payoff maximization and learning resulting in a degraded performance in comparison to on the other hand suffers from excessive exploitation resulting in performance which although better than deem and is still significantly worse than we now focus on this latter difference we had discussed in our remark earlier that in instances without exactly indistinguishable type pairs is expected to perform reasonably well however in our simulations we see that on an average across all instances results in about reduction in regret as compared to in order to gain intuition for this gain observe that although exactly indistinguishable type pairs are rarely encountered it could frequently be the case that two type pairs i and have their expected payoffs a i j and a j under some job type j close enough that practically it would take too long to distinguish them with a reasonably small probability of error this results in approximately difficult type pairs two types i and that have different optimal job types where none of the optimal job types for i is able to distinguish between i and reasonably quickly in these situations if at any point the map estimate under the greedy policy is i when the true type is then exploiting for i may not allow the policy to recover from its bad estimate within a reasonable number of jobs thus incurring high regret there is high probability of encountering such a situation in the early stages of the algorithm when the confidence in the map estimate is not sufficiently high this is where s approach of appropriately allocating confirmation efforts depending on the posterior results in significant gains in performance over the greedy approach in particular there are situations where will appropriately prioritize learning and will actively explore instead of simply choosing the optimal job type for the map estimate for example suppose there are only two types i and where i is close 
to being a difficult pair the learning rate offered by the optimal job type for i towards the i distinction is close to in this case even if the current map estimate is i with high confidence although i n so that we are still in the exploration phase instead of choosing the optimal job type for i may choose some other job type that will quickly distinguish i from thus we expect to outperform more significantly in situations where there are approximately difficult type pairs in order to verify that this is indeed the case first we formally define a simple notion of approximate indistinguishability and difficulty we say that the type pair i is using a job type j if kl i otherwise we say that it is we say that type pair i is if kl i for all j j i and j i j for we picked those instances out of the in which there is at least one pair i such that i is and there is some j such that kl i instances with at least one type pair such that there exists a job type under which this pair is we will call such instances instances these are precisely the instances where measured exploration in cases where the map estimate is a worker type that forms a type pair with some other type can lead to significant gains in note that as increases the set of instances that satisfy note that if kl i the job j can distinguish i from with a misclassification error of for sample a b number of instances avg regret reduction table s percentage reduction in regret relative to greedy on average in sample a consisting of instances and in sample b consisting of instances that aren t these conditions grows progressively larger a instance is also a instance for we next considered two sets of samples sample a is the set of instances and sample b is the set of instances that are not based on the discussion above we should expect a substantial reduction of regret in the instances relative to those instances that are not indeed consistent with our intuition a one tailed two sample showed that the mean percentage reduction of regret in sample a is larger than that in sample b with a of the sample average percentage reduction of regret in the two samples is given in table conclusion this work suggests a novel and practical algorithm for learning while matching applicable across a range of online matching platforms several directions of generalization remain open for future work first while we consider a model a richer model of types would admit a wider range of applications workers and jobs may be characterized by features in a space with compatibility determined by the inner product between feature vectors second while our model includes only uncertainty in general a market will include uncertainty both supply and demand will exhibit type uncertainty we expect that a similar approach using externality prices to first set learning objectives and then achieve them while incurring minimum regret should be applicable even in these more general settings third recall that we assumed the expected surplus from a match between a worker type and a job type the a matrix is known to the platform this reflects a first order concern of many platforms where aggregate knowledge is available but learning individual user types quickly is challenging nevertheless it may also be of interest to study how a can be efficiently learned by the platform this direction may be related to issues addressed by the literature on a single bandit under capacity constraints we conclude by noting that our model ignores strategic behavior by participants a simple 
extension might be to presume that workers are less likely to return after several bad experiences this would dramatically alter the model forcing the policy to become more conservative the modeling and analysis of these and other strategic behaviors remain important challenges references rajeev agrawal demosthenis teneketzis and venkatachalam anantharam asymptotically efficient adaptive allocation schemes for controlled iid processes finite parameter space strong distinction in at most about jobs on average which is reasonably quick automatic control ieee transactions on shipra agrawal and nikhil r devanur bandits with concave rewards and convex knapsacks in proceedings of the fifteenth acm conference on economics and computation pages acm shipra agrawal and nikhil r devanur linear contextual bandits with global constraints and objective arxiv preprint shipra agrawal and navin goyal analysis of thompson sampling for the bandit problem arxiv preprint shipra agrawal nikhil r devanur and lihong li contextual bandits with global constraints and objective arxiv preprint mohammad akbarpour shengwu li and shayan oveis gharan dynamic matching market design available at ssrn ross anderson itai ashlagi david gamarnik and yash kanoria a dynamic model of barter exchange in proceedings of the annual symposium on discrete algorithms pages siam baris ata and sunil kumar heavy traffic analysis of open processing networks with complete resource pooling asymptotic optimality of discrete review policies the annals of applied probability audibert and munos introduction to bandits algorithms and theory in icml peter auer nicolo and paul fischer analysis of the multiarmed bandit problem machine learning moshe babaioff shaddin dughmi robert kleinberg and aleksandrs slivkins dynamic pricing with limited supply acm transactions on economics and computation mariagiovanna baccara sangmok lee and leeat yariv optimal dynamic matching available at ssrn ashwinkumar badanidiyuru robert kleinberg and yaron singer learning on a budget posted price mechanisms for online procurement in proceedings of the acm conference on electronic commerce pages acm ashwinkumar badanidiyuru robert kleinberg and aleksandrs slivkins bandits with knapsacks in foundations of computer science focs ieee annual symposium on pages ieee ashwinkumar badanidiyuru john langford and aleksandrs slivkins resourceful contextual bandits in proceedings of the conference on learning theory pages omar besbes and assaf zeevi dynamic pricing without knowing the demand function risk bounds and algorithms operations research omar besbes and assaf zeevi blind network revenue management operations research bubeck and nicolo regret analysis of stochastic and nonstochastic bandit problems machine learning jim g dai on positive harris recurrence of multiclass queueing networks a unified approach via fluid limit models the annals of applied probability pages ettore damiano and ricky lam stability in dynamic matching markets games and economic behavior sanmay das and emir kamenica bandits and the dating market in proceedings of the international joint conference on artificial intelligence pages morgan kaufmann publishers daniel fershtman and alessandro pavan dynamic matching experimentation and cross subsidization technical report citeseer john gittins kevin glazebrook and richard weber bandit allocation indices john wiley sons ming hu and yun zhou dynamic matching in a market available at ssrn sangram v kadam and maciej h kotowski matching technical report harvard 
university john kennedy school of government emilie kaufmann nathaniel korda and munos thompson sampling an asymptotically optimal analysis in algorithmic learning theory pages springer morimitsu kurino credibility efficiency and stability a theory of dynamic matching markets tze leung lai and herbert robbins asymptotically efficient adaptive allocation rules advances in applied mathematics constantinos maglaras and assaf zeevi pricing and capacity sizing for systems with shared resources approximate solutions and scaling relations management science constantinos maglaras and assaf zeevi pricing and design of differentiated services approximate analysis and structural insights operations research laurent massoulie and kuang xu on the capacity of information processing systems unpublished aranyak mehta online matching and ad allocation theoretical computer science daniel russo and benjamin van roy learning to optimize via posterior sampling mathematics of operations research denis and assaf zeevi optimal dynamic assortment planning with demand learning manufacturing service operations management lloyd s shapley and martin shubik the assignment game i the core international journal of game theory adish singla and andreas krause truthful incentives in crowdsourcing tasks using regret minimization mechanisms in proceedings of the international conference on world wide web pages international world wide web conferences steering committee zizhuo wang shiming deng and yinyu ye close the gaps a algorithm for revenue management problems operations research appendices a proof of theorem for the rest of this section let c be the quantity defined in we present it again for the convenience of the reader p x u i a i j p j p c i min i c i i kl i i j recall problem we will first show the following lower bound on the difference between v and w n which follows directly from agrawal et al proposition lim sup n n v w n log n proof consider the following relaxed problem x x x x max i x i j a i j j i x i j j n by a standard duality argument we know that w n the optimal policy in this problem is a solution to x i i j a i j j then from theorem in agrawal et al we know that lim sup n n v log n the result then follows from the fact that w n let be the value attained by deem in optimization problem same as assuming that the routing matrix y in the exploitation phase is such that y i is supported on j i we will prove an upper bound on the difference between v and note that the difference in these values of the two problems is the same as the difference in x x i i j a i j j and p i u i following is the result proposition consider the sequence of policies n n such that the routing matrix y used in the exploitation phase satisfies y i j i then lim sup n n v log n further suppose that there are no difficult type pairs then lim sup n n v k log log n where k k a is some constant in order to prove this proposition we need the following result that follows from theorem in lemma let be random variables where xi is the outcome of choosing a job type j j according to a distribution j suppose i i and b i i are such that x kl i for each b let k i lim sup n i then ei inf k k i f n p log f n kl i i a for some k n i a for any a next we also need the following result lemma let be random variables for each j k such that m and e xij mj let a b and k j be such that a k j b for each j let p snj k j xij and let b k let e be the event snj a for some j before snj b for all j b let t inf n snj a for some j then e t g for some g that does not 
depend on a b or k j for any j proof define k j a z j if we define ej snj a for some n and tj inf n snj a pk p then we have e ej and thus we have e t k e tj now we e t have e tj x x x x x np t n n x np xij n exp nmj zj n exp mj zj n exp g mj m where the second inequality results from the hoeffding bound taking g proves the result pk j g m m proof of proposition let x denote the type of the worker let r i denote the expected total regret over the lifetime of a worker on the event x i defined as x r i n max a i j j n i j a i j j here n i j is the expected total number of times a job of type j is allotted to a worker of type i under the policy n we will refer to the above quantity as just regret for the rest of the proof all the expectations are on the event x i the proof will utilize the fact that the log of the ratio of the posteriors log i for any i and is a random walk such that if is the probability distribution over job types chosen at opportunity k then log i log i log p xk i jk p xk jk p xk i jk where the random variables log p x k are independent random variables with a finite k i jk p support since xk and jk take finite values and with mean j kl i note here that if p k j kl i i then since kl i i it must be that kl i i for all j such that p xk i jk and in this case we must have a i j a i j for all such j thus log p x k i jk the if the drift of the random walk is at some k then the random walk has stopped recall that log i log i our goal is to compute an upper bound on r i to do so we first compute the expected regret incurred till the end of the exploration phase in our algorithm denote this by re i below we will find an upper bound on this regret assuming that the worker performs an unbounded number of jobs clearly the same bound holds on the expected regret until the end of exploration phase if the worker leaves after n jobs our strategy is as follows we will decompose the regret till the end of exploration into the regret incurred till the first time one of the following two events occurs event a log i log log n or i log n and event b log i log log n or i log n followed by the residual regret which will depend on which event occurred first note that one of these two events will occur with probability we will compute two different upper bounds depending on two different regimes of initial posterior distributions of the different types note that the posterior probabilities of the different types i under the observed history is a sufficient statistic at any opportunity under our policy first suppose that i is highest expected regret incurred over all possible starting posteriors that a do not satisfy the conditions of both a and b and b such that log i log i let be the set of starting posteriors that satisfy these conditions next suppose that i is the highest expected regret incurred where the supremum is taken over all possible starting posteriors that a do not satisfy the conditions of both a and b and b such that log i log i let be the set of posteriors that satisfy these conditions clearly re i i let g i denote the maximum expected regret incurred by the algorithm till one of a or b occurs where the maximum is taken over all possible starting posteriors that do not satisfy the conditions of both a and b for convenience we denote a b as the event that a occurs before b and vice versa similarly for any two events thus we have i g i sup p a e residual sup p b e residual and i g i sup p a e residual sup p b e residual first let us find a bound on g i this is easy because g i e inf k 
i log n o log log n from lemma since if neither condition a nor b is satisfied then the policy in the guessing phase and thus all job types are utilized with positive probability and hence the condition in the lemma of the requirement of a positive learning rate for each distinction is satisfied also from the second statement in lemma since the posteriors in are such that i i we have that p b p b ever occurs o log n finally we have p b w we thus have i o log log n sup e residual sup e residual and log n i o log log n sup e residual w sup e residual next consider suplr e residual lr this depends on which of the following two events happens next event log i log log n or i log n event i gets confirmed i log i log n or i i n again conditional on a one of the two events will occur with probability we have sup e residual lr sup e residual lr p lr lr lr e residual lr p lr now from lemma it follows that e residual lr p lr e residual regret i lr m i p lr for some constant m that does not depend on lr or n to see this note that is the event that starting from some values between log log n and log n some random walk i for i crosses the lower threshold log log n before all the random walks i for each str i cross the upper threshold log n now between these two thresholds the job distribution equals i for all hence the drift for any of the random walks i for each str i is strictly positive and finite further as we argued earlier if the drift for any of these random walks is then that random walk has stopped and such random walks can be ignored thus the conditions of lemma are satisfied and hence e time till i lr g since the regret per unit time is bounded the deduction follows moving on we have e residual lr e inf k min i n i x u i a i j j thus we have sup e residual lr o sup p lr i lr lr p lr e inf k min i n i x u i a i j j thus we have sup e residual lr o qk i lr qk e inf k min i n i x u i a i j j for some qk where qk since suplr p lr next consider suplr e residual lr this depends on which of the following two events occurs next event b log i log log n or i log n event b some i gets confirmed log log n or n again conditional on b one of the two events will occur with probability let k i be the maximum expected regret incurred till either b or b occurs given that b has occurred and the starting likelihoods were in lr note that if b b then the exploration phase ends and hence there is no residual regret although note that if is such that i str then p b b lr o from the second statement in lemma then we have sup e residual lr k i sup p b b lr i lr lr now we can show that if there is a type such that i i str then k i o log n where as if there is no such type then k i o we first show this let t i be the maximum expected time taken till either b or b occurs given that b has occurred and the starting likelihoods were in lr clearly k i t i since the price adjusted payoffs lie in now let be the time spent after b has occurred before b or b occurs while either a algorithm is in the guessing phase or b the algorithm is in the confirmation phase for some guessed type such that j for some j such that kl i under this case we will say that the algorithm is in state and let be the event that the algorithm is in state at time next let be the time spent after b has occurred before b or b occurs while the algorithm is in the confirmation phase for some guessed type such that j for all j such that kl i clearly this can happen only for such that i i str thus if such an doesn t exist then under this case we will say that 
the algorithm is in state and let be the event that the algorithm is in state at time now we clearly have t i supr e lr supr e lr let i log i then observe that e i i lr m for some m that depends only on the primitives of the problem and e i i lr when the algorithm is in state the drift in i is strictly positive where as when the algorithm is in state then i does not change now consider e lr let k be the opportunity that b occurred for the first time then clearly i log log n where is bounded by some constant m depending only on the problem instance thus p lr p i i lr for some k k such that the algorithm has been in state t times at opportunity k thus our observation above implies that p lr exp for some c by a standard application of a concentration inequality thus e lr o next consider e lr consider the successive returns of the algorithm to state conditional on the algorithm having entered state the expected time spent in that state is bounded by the expected time till the guessed type is confirmed which is o log n from lemma and the conditional probability that gets confirmed is some q thus the total expected number of returns to state is bounded by thus e lr o log n as well thus k i o log n and we have sup e residual lr o log n i lr and thus we finally have o log n i log n x e inf k min i n u i a i j j i o log log n i i i o log log n i wo log n i x e inf k min i n u i a i j j i combining the above two equations we deduce that o log log n log n x e inf k min i n u i a i j j re i i i o log log n o e inf k min i n i x u i a i j j now we observed earlier that p gets confirmed x i if str i thus the regret in the exploitation phase is in the worst case of order o n with probability and otherwise thus the total expected regret in the exploitation phase is o thus x r i o log log n e inf k min i n u i a i j j i thus lemma implies the result note that if there are no difficult type pairs then a i j j p u i next we prove that for a large enough n one can choose a routing matrix y in the exploitation phase of deem that will ensure that matches optimize payoffs and such that the capacity and complementary slackness conditions are satisfied proposition suppose that the generalized imbalance condition is satisfied consider any optimal routing matrix that is an optimal solution to problem then in the policy n for the n problem for any n large enough one can choose a routing matrix y such that y i j i and that satisfies p p i i j j for any j such that i i j j for any other p i i j j and we remark that the y we construct satisfies ky k o in order to prove this proposition we will need the following lemma lemma suppose that the generalized imbalance condition p is satisfied consider any feasible routing matrix x i j consider any job j such that i x i j j then there is a path on the complete bipartite graph between worker types i and job types j with the following properties one end point is job j the other end point is a job type whose capacity is it is permitted to be for every job type on the path in between they are operating at jobs are being served all worker types are fully utilized by definition since we formally consider an unassigned worker as being assigned to job type for every undirected edge on the path there is a positive rate of jobs routed on that edge in x proof consider a graph with jobs representing nodes on one side and workers on the other there is an edge between a job j and a worker i if x i j consider the connected component of job type j in this graph suppose it includes no job type 
that is underutilized then the arrival rate of jobs from the set of workers in the connected component exactly matches the total effective service rate of the sellers in connected component but this is a contradiction since generalized imbalance holds hence there exists an underutilized job type j that can be reached from j take any path from j to j traverse it starting from j and terminate it the first time it hits any underutilized job type proof of proposition recall that for a given routing matrix y i j i j is the resulting fraction of jobs of type j directed to worker type i in the course of this proof we will suppress the subscript clearly there exist i j for each i i i j j such that we have x x i j i j i j y i j i j y j i the s depend on the guessing and confirmation phases but not on y in particular arises from the overall routing contribution of the guessing and confirmation phases and s arise from the small likelihood that a worker who is confirmed as type i is actually some other type a key fact that we will use is that all s are uniformly bounded by o p p let s i i j j and j i i j j now we want to find a y such that y i j i for all i i call i j a permissible edge in the bipartite graph between workers and jobs if j j i and such that for each j we also have j ky k o note that the two p bullets together willpimply the proposition since k o from eq and this leads to i x i j i i j o j for all j j for large enough n the requirement in the first bullet can be written as a set of linear equations using eq here we write y and later also as a column vector with elements by j here we have o and matrix b can be written as b where has s in columns corresponding to dimensions s and s everywhere else and k o expressing y as y x z we are left with the following equation for z bz using the fact that j by definitions of and we will look for a solution to this underdetermined set of equations with a specific structure we want z to be a linear combination of flows along paths coming from lemma one path for each j each can be written as a column vector with s on the odd edges including the edge incident on j and s on the even edges let be the path matrix then z with the desired structure can be expressed as where is the vector of flows along each of the paths now note that bz i here we deduced i from the fact that is a path which has j as one end point and a worker or else a job not in as the other end point our system of equations reduces to i since k o the coefficient matrix is extremely well behaved being o different from the identity and we deduce that this system of equations has a unique solution that satisfies k o this yields us z that is also of size o and supported on permissible edges since each of the paths is supported on permissible edges lemma thus we finally obtain y z possessing all the desired properties notice that the permissible edges on which y differs from had strictly positive values in by lemma and hence this is also the case in y for large enough n finally we show that with the choice of y constructed in proposition in the exploitation phase the sequence of policies n asymptotically achieve the required upper bound on regret proposition suppose that the generalized imbalance condition is satisfied consider the sequence of policies n n with the routing matrix y proposed in proposition let w n be the value attained by this policy in optimization problem then lim sup n n v w n log n further suppose that there are no difficult type pairs then lim sup n n v w n k log log n 
where k k a is some constant proof from proposition it follows that the policy is feasible in problem and further x x x x i i j a i j j i i j j x x i i j a i j p where the second equality follows from the fact that if j then i x p i j j by complementary slackness and hence from proposition we obtain that i i j j as well for these j thus we have a policy that is feasible and that gives a rate of accumulation of payoff in problem thus the result follows from proposition b computation of the policy in the confirmation subphase denoting i lem is the same as p kl i as h where h is the optimization x u i a i j j h x kl i for all str i h min x and for all h h h h now redefine h and h to obtain the linear program min x u i a i j j x kl i for all str i for all j p p where at any optimal solution we have and thus i j note that a feasible solution exists to this linear program as long as kl i for some j for each when there are multiple solutions we choosepthe solution with the largest learning rate we choose a solution with the smallest one way to accomplish this is to p p modify the objective to minimize u i a i j j for some small for small can simply evaluate all the finite extreme points of the constrained p problems we for all str i for all j all the extreme points kl i i set j j such that for all j this is sufficient because we can show that there always exists a finite solution to the linear program to see this note that x i i kl i kl i i is feasible and finite further for any solution such that can be reduced to without loss in objective while maintaining feasibility for the practical heuristic can be computed as a solution to the following optimization problem x x min gk i u i a i j j h i x kl i gk i log r i log n for all i i lk i h x and for all h h h h we can again redefine h and h to obtain a linear program c practical implementation of other policies the upper confidence bound ucb algorithm is a popular bandit algorithm that embodies the well known approach of optimism in the face of uncertainty to solving these problems in its classical implementation one keeps track of highprobability confidence intervals for the expected payoffs of each arm and at each step chooses an arm that has the highest upper confidence bound the highest upper boundary of the confidence interval to be precise if rj t is the average reward seen for some arm j that has been pulled nj times until time t then the upper confidence bound for the mean reward for this arm is given by q uj t rj t log the algorithm chooses arm j arg maxj t in our context the arms are the job types and if k jobs have already been allotted to a worker and k is the average payoff obtained from past assignments of job j and nj is the number of these assignments we will define q uj k rj k log pq j where pq j is the current queue length based price for job j the algorithm then chooses job type j arg to be assigned to the worker next note that this algorithm does not require the knowledge of the instance primitives n a thompson sampling thompson sampling is another popular bandit algorithm employing a bayesian approach to the problem of arm selection the description of the algorithm is simple starting with a prior at every step select an arm with a probability equal to the posterior probability of that arm being optimal these posterior probabilities are updated based on observations made at each step one can incorporate information about correlation between the rewards of the different arms in computing these posteriors which makes it a versatile 
algorithm that exploits the reward structure in multiple settings it is known to give asymptotically tight regret guarantees in many bandit problems of interest in our simulations a version of ts is implemented as follows the prior probability of the worker being of type i is p i with this as the prior depending on i each worker s history a posterior distribution of the type of the worker is computed using the knowledge of the expected payoff matrix a then a worker type is sampled from this distribution suppose this type is i then a job type j arg maxj a i j pq j is assigned to the worker in contrast to the algorithm thompson sampling does utilize the knowledge of the expected payoff matrix a as well as the arrival rates the latter to construct a starting prior and the former for the posterior updates d other proofs sufficiency of policies we show that there is a who policy that achieves a rate of payoff accumulation that is arbitrarily close to the maximum possible we will think of n as being fixed throughout this section suppose the system starts at time t with no workers already present before the arrivals thereafter occur as described in section consider any arbitrary time varying policy and let t i j denote the derived quantity representing the fraction of workers of type i who are assigned jobs on type j in period t under then the largest possible rate of payoff accumulation under policy over long horizons is v limsupt vt where vt t t x x i x t i j a i j note that we have ignored the effect of less than i workers of type i being present for the first n periods but this does not change the limiting value v also note that randomization in can not increase the achievable value of v since one can always do as well by picking the most favorable sample path claim fix any policy and any then there is a who policy that achieves a steady state rate of payoff accumulation exceeding v proof we suppress dependence on by definition of v we know that there exists an increasing sequence of times such that vti v for all i we will construct a suitable who policy by using a sufficiently large time in this sequence let hk be the measure of workers in the system with history hk just before the start of time t and abusing notation let hk j be the measure of such workers who are assigned to job type j at time since the policy can not assign more jobs than have arrived in any period we have n x x hk j j for all t hk fix t which we think of as a large member of the sequence above the average measure of workers with history hk who are present is t hk hk t for all hk and k n the average measure of such workers who are assigned job j is similarly defined and denoted by hk j we immediately have that n x x hk j j hk by averaging eq over times until t now consider a worker with history hk assigned a job of type j using the known a matrix and arrival rates we can infer the posterior distribution of the worker type based on hk and hence the likelihood of the job of type j being successfully completed let p hk j denote the probability of success then the distribution of for the worker is simply given by hk j bernoulli p hk j the analysis would be very similar and produce the same results if the starting state is an arbitrary one with a bounded mass of workers already present barring the edge effect at time t caused by workers whose history was hk at time t this allows us to uniquely determine based on hk j s in particular for any if t i n we have that hk j hk j p hk j hk j hk j p hk j here a b represents the bound 
note that we have vt n x x hk j p hk j hk we are now ready to define our who policy for every hk such that hk this policy will attempt to assign a fraction hk j hk of workers with history hk to jobs of type j ignore capacity constraints for the present we will find that the capacity constraints will be almost satisfied leave the choice of for later we will choose small and then choose so as to achieve the desired value of below workers with rare histories histories such that hk will not be assigned jobs under note that the definition of rare histories refers to frequency of occurrence under then this uniquely specifies as well as the steady state mix of workers before any time in particular the steady state mass hk under of workers with history hk that are not rare is bounded as hk hk hk using eq and the fact that all subhistories of hk are also not rare it follows that hk hk where max exp n for all histories including rare histories using k n and hk violation of the constraint under is given by n x n x x x hk j j hk j j hk hk using eq and eq and the fact that there are n possible histories it follows that the sum of capacity constraint violations across j j is bounded by n pick an arbitrary set of workers to go unmatched to get rid of any capacity violations this can be done while remaining within the class of who policies in worst case this will cause payoff loss of for each period remaining in the worker s lifetime thus the loss caused by the need to remedy capacity violations is bounded by n n per period p ignoring capacity violations the steady state rate of accumulation of payoff under is n x x hk hk j p hk j n x x hk j p hk j vt hk where again using eq and the fact that there are p n possible histories let v denote the true steady state rate of accumulation of payoff under when capacity constraints are considered combining the above we deduce that v vt the time t will be chosen as a member of the sequence defined at the beginning of the proof ensuring vt v hence it will suffice to show v vt hence it suffices to have which can achieved using n and log and t a member of the sequence satisfying t i n this yields the required bound of v v uniqueness of prices under generalized imbalance proposition under the generalized imbalance condition the job shadow prices are uniquely determined proof of proposition the dual to problem can be written as x x minimize j p j i v i subject to p j v i a i j i j j p j j v i i the dual variables are p v where job prices p p j and worker values v v i we will prove the result by contradiction suppose there are multiple dual optima let d be the set of dual optima let j be the set of jobs such that the prices of those jobs take multiple values in formally j j j p j takes multiple values in d similarly let i be the set of workers such that the prices of those workers take multiple values in formally i i i v i takes multiple values in d for each j j we immediately deduce that there exists a dual optimum with p j and hence the capacity constraint of job type j is tight in all primal we deduce that for each i i worker type i is assigned a job in all periods x i j by assumption we have x x i j suppose the left hand side is larger than the right the complementary case can be dealt with similarly take any primal optimum the jobs in j do not have enough capacity to serve all workers in i hence there must be some worker i i and a job s j such that i s since s j we must have that p j has a unique optimum value in call this value j let the largest and smallest values 
of v i in d be v max i and v min i by complementary slackness we know that v max i j a i j v min i j v max i v min i but since i i we must have v max i v min i thus we have obtained a contradiction the proof of the next proposition shows that a very simple learn then exploit strategy achieves a regret of o log this follows from the fact that under an identifiability condition the sequence of sets x n converges to the set d in an appropriately defined distance proposition suppose that no two rows in a are identical then inf n o lognn proof it is clear that xn we will find an inner approximation n to x n such that n x n and n converges to d in an appropriate sense as n goes to infinity to define this approximation suppose that in the learning problem corresponding to a fixed n one starts off with a exploration phase of a fixed length o log n where each job j is presented to the worker os number of times where os o log n fixed a priori so that after this phase the type of the worker becomes known with a probability of error at most o this will then allow us to relate the problem to the problem in which the user type is known suppose after this phase the probability that a worker of type i is correctly identified is p i and the probability that she is as some other type is p i note that since no two rows in a are identical p i o for all i let d i j denote the expected number of times a worker that has been identified as being of type i correctly or incorrectly is directed towards job j after the exploration phase from job os till the n th job let j d i j then we can see that one can attain all x in the following set d i x n j x d i x ib s x os x i j p i d i j p i i d i s n i j and since p i o we can express the above set as now since d i x j j o log n n x d i x i j x i j d i n this in turn is the same as log n x n x i j o x i j n note that by construction n xn but we can now see that n converges to d in the sense that log n sup inf kx yk o n n and hence log n sup inf kx yk o n n as well proposition the set x n is a convex polytope proof for the purpose of this proof let x n n x x x n n we will show that x is a polytope from which the result will follow we will prove this using n an induction argument we will represent each point in x as a matrix x i j let worker types in i be labeled as and let job types in j be labeled as now clearly x which is a convex polytope we will show that if x n is n a convex polytope then x is one as well and hence the result will follow to do so we decompose the assignment problem with n jobs into the first job and the remaining n jobs a policy in the n jobs problem is a choice of a randomization over the jobs in j for the first job and depending on whether a reward was obtained or not with the chosen job a choice n of a point in x to be achieved for the remaining n jobs each such policy gives a point in the n n x suppose that j is the randomization chosen for job and let r j x n and r j x be the points chosen to be achieved from job onwards depending on the job j that was chosen and whether a reward was obtained or not r is a mapping from n j to the set x then this policy achieves the following point in the n jobs problem x j diag a j r j diag j r j where a j a i j diag a j and a j a j diag j a j a j and thus we have n x x j diag a j r j diag j r j n j r x let be the matrix with ones along column corresponding to job type j and all other entries then the set n j s diag a j r j diag j r j r s x is a convex polytope being a linear combination of two convex polytopes 
followed by an affine n shift it is easy to see that x is just a convex combination of the polytopes j s for j j n is a convex polytope as well and hence x
| 8 |
using matching to detect infeasibility of some integer programs mar abstract a novel matching based heuristic algorithm designed to detect specially formulated infeasible ips is presented. the algorithm s input is a set of nested doubly stochastic subsystems and a set e of instance defining variables set at zero level. the algorithm deduces additional variables at zero level until either a constraint is violated (the ip is infeasible) or no more variables can be deduced zero (the ip is undecided). all feasible ips, and all infeasible ips not detected infeasible, are undecided. we successfully apply the algorithm to a small set of specially formulated infeasible ip instances of the hamilton cycle decision problem. we show how to model both the graph and subgraph isomorphism decision problems for input to the algorithm. increased levels of nested doubly stochastic subsystems can be implemented dynamically. the algorithm is designed for parallel processing and for inclusion of techniques in addition to matching. key words integer program matching permutations decision problem msc subject classifications introduction we present a novel matching based heuristic algorithm designed to detect specially formulated infeasible ips. it either detects an infeasible ip or exits undecided; it does not solve an ip. we call it the triple overlay matching based closure algorithm. the algorithm input to the algorithm is an ip whose constraints are a set of nested doubly stochastic boolean subsystems together with a set e of instance defining variables set at zero level. the ip s solution set is a subset of the set of n! nxn permutation matrices p, written as block permutation matrices q, each with block structure. the algorithm is a polynomial time search that deduces additional variables at zero level via matching until either a constraint is violated, in which case the ip is infeasible, or we can go no further, in which case the ip is undecided. if the ip is decided infeasible, a set of variables deduced to be at zero level can be used to test and display a set of violated constraints. if the ip is undecided, additional variables deduced zero can be added to e and nothing more can be concluded. while some infeasible ips may fail to be detected infeasible, not yet found feasible ips can only fall in the undecided category. in section we present the generic ip required as input to the algorithm, and we view the set of all solutions of the ip as a block permutation matrix q whose components are variables; each nxn block u i corresponds to an nxn permutation matrix p, where block u i contains pu i in position u i . an instance is modelled by setting certain variables of q to zero level. in sections and we present the algorithm, an application (a matching model of the hamilton cycle decision problem, hcp), empirical results and two conjectures. in section we present generalizations of the algorithm, matching models for both the graph and subgraph isomorphism decision problems, and other uses. we also propose more development; its success, effectiveness and practicality can then be evaluated in comparison to other algorithms. we invite researchers to collaborate with us; contact the corresponding author for fortran code. the ideas presented in this paper originated from a polyhedral model of cycles not in graphs. at that time we thought about how to recognize the birkhoff polytope as an image of a solution set of a compact formulation for graphs. we ve accomplished part of that goal in this paper, that is, the convex hull of all excluded permutations for infeasible ips is the birkhoff polytope, and it is easy to build a compact formulation from it.
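as a concrete illustration of how a set e of zero level pairs excludes permutation matrices, the following is a minimal python sketch. it is ours and for exposition only, not the authors fortran code; the pair encoding ((u, i), (v, j)) and the helper names are assumptions made for the example.

import numpy as np
from itertools import permutations

def perm_matrix(perm):
    # build the nxn permutation matrix p with p[u, perm[u]] = 1
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    for u, i in enumerate(perm):
        P[u, i] = 1
    return P

def is_excluded(P, E):
    # a pair ((u, i), (v, j)) in e fixes the product p[u,i]*p[v,j] at zero level,
    # so p is excluded exactly when it sets both components of some pair to 1
    return any(P[u, i] == 1 and P[v, j] == 1 for (u, i), (v, j) in E)

# tiny hypothetical instance: n = 3, exclude every permutation using both p[0,0] and p[1,1]
E = {((0, 0), (1, 1))}
surviving = [p for p in permutations(range(3)) if not is_excluded(perm_matrix(p), E)]
print(surviving)  # the instance is feasible iff this list is nonempty

for the infeasible instances targeted by the algorithm, every one of the n! permutation matrices is excluded by at least one pair in e, which is what the closure procedure later tries to certify without enumeration.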
in this paper, graphs ranging over a range of vertex counts are correctly decided as infeasible ips; none failed that are not reported, although counterexamples surely exist. we believe there is an insightful theory to be discovered that explains these early successes. about specially constructed ips and terminology imagine an integer program modelled such that p is a solution if and only if the integer program is feasible. matching: also imagine an arbitrary set of instance defining constraints of the form pu i pv j ; it is not obvious how to apply matching to help in its solution. now imagine that we create a compact formulation whose solution set is isomorphic (equal under an orthogonal projection), where we convert each linear constraint into all of its instantiated discrete states via creation of a set of discrete variables; then it becomes easy to exploit matching, hence the algorithm. (author note: university of guelph, canada, email gismondi, corresponding author; kelowna, british columbia, canada, email ted; ted and i dedicate this paper to the late pal fischer, friend, colleague and mentor.) code the ip above so that each of the instance defining constraints is a set of two distinct components of p, pu i pv j , interchangeably playing the role of a variable, for which pu i pv j = 0 if and only if pu i = pv j = 0, or pu i = 1 and pv j = 0, or pu i = 0 and pv j = 1. create an instance of the ip by creating an instance of exclusion set e whose elements are the set of these pu i pv j . if there exists p satisfying pu i pv j = 0 for all pu i pv j in e, then p is a solution of the ip; otherwise p satisfies pu i pv j = 1 for at least one pu i pv j in e and p is excluded from the solution set of the ip. we view elements of e as coding precisely the set of permutation matrices excluded from the solution set of the ip, that is, e excludes the union of sets of p, each set satisfying pu i = pv j = 1 for each pu i pv j in e. an example of the modelling technique needed to create e is presented in section , originally presented in earlier work. we exclude these permutation matrices by setting pu i pv j = 0 for each pu i pv j in e. the complement of exclusion set e with respect to all pu i pv j is called available set v. the ip is feasible if and only if there exists p whose n distinct pairs of components pu i pv j , satisfying pu i = pv j = 1 and defining p, are in v. p is said to be covered by v if there exists a subset of n pu i pv j in v such that the pu i pv j define p and each pu i pv j participates in p s cover. definition (clos e, closed exclusion set): clos e is the set of all pu i pv j not participating in any cover of any p. note that pu i pv j in e is code for the set of permutation matrices for which pu i = pv j = 1. clearly, if clos e is such that all n! permutation matrices are accounted for, then there is no p covered by v (v is empty). definition (open v, open available set): open v is the complement of clos e with respect to all pu i pv j , the set of all pu i pv j participating in a cover of at least one p. theorem: the ip is infeasible if and only if open v is empty. system (1): sum over i of pu i = 1, u = 1, ..., n; sum over u of pu i = 1, i = 1, ..., n; for all u, i = 1, ..., n: sum over j of pu i pv j = pu i for each v = 1, ..., n, v ≠ u, and sum over v of pu i pv j = pu i for each j = 1, ..., n, j ≠ i; pu i pv j = 0 for each pu i pv j in e; assign pu i pv j ≤ pu i , with pu i pv j in {0, 1}. visualize system (1) in the form of permutation matrix q, blocks of p: block u i contains pu i in position u i , the remaining entries in row u and column i being zero, and the rest of the entries in block u i have the form pu i pv j , v ≠ u, j ≠ i. it is assumed variables in q have been initialized by e. henceforth we present the algorithm in terms of matrix q. see figure : an example of the general form of matrix q for n = . for e empty, the set of n! nxn permutation matrices, each written as a q matrix in block form, is the set of integer extrema of the solution set of system (1). see figure : an example of an integer solution to system (1) in matrix q form for n = . fig : general form of matrix q, n = ; an integer solution of system (1) exists if and only if it is an nxn permutation matrix in block form. fig : an integer solution to system (1) in matrix q form, n = .
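to make the block structure of q and the definitions of cover and open v concrete, here is a small brute force sketch for tiny n. again this is ours and purely illustrative; it assumes the block indexing read from the text (block (u, i) holds pu i pv j at within block position (v, j)) and it enumerates permutations, which is exactly what the matching based algorithm is designed to avoid.

import numpy as np
from itertools import permutations

def q_block_matrix(P):
    # under the indexing above, q[u*n + v, i*n + j] = p[u,i] * p[v,j],
    # which for a permutation matrix p is the kronecker product of p with itself
    return np.kron(P, P)

def open_v(n, E):
    # brute force over all n! permutations: collect the ordered pairs ((u, i), (v, j))
    # that participate in a cover of at least one p not excluded by e
    live = set()
    for perm in permutations(range(n)):
        pairs = {((u, perm[u]), (v, perm[v])) for u in range(n) for v in range(n) if u != v}
        if not any(pair in E for pair in pairs):  # this p is covered by v
            live |= pairs
    return live  # the ip is infeasible iff this set is empty

# example: Q = q_block_matrix(perm_matrix((1, 2, 0))) gives the 9 x 9 block form for n = 3

the theorem above then reads: open_v(n, E) is empty exactly when the instance defined by e is infeasible; the closure procedure described next replaces the enumeration with repeated bipartite matching tests on q.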
matrices each written as a q matrix in block form is the set of integer extrema of the solution set of system see figure an example of an integer solution to system in matrix q form for n fig general form of matrix q n an integer solution of system exists if and only if it s an nxn permutation matrix in block form fig an integer solution to system in matrix q form n about triple overlay matching based closure we first present an overview of the algorithm followed by the formal algorithm let e be given encode q and create v overview of the triple overlay matching based closure algorithm rather that search for the existence of p covered by v we attempt to shrink v so that pu i pv j v if and only if pu i pv j participates in a cover of at least one p the algorithm deduces which pu i pv j v do not participate in any cover of any p removes them from v and adds them to its success depends upon whether or not it s true that for infeasible ips when we initialize q via e it s sufficient to deduce open v while it s impossible for a feasible ip to yield open v infeasible ips cause the algorithm to either deduce infeasibility or exit undecided we say undecided because although we deduce some of these pu i pv j v that do not participate in any cover of any p it s not known if we deduce all of these pu i pv j brief details about how the algorithm deduces variables at zero level in every solution of the ip now follow the algorithm systematically tests a set of necessary conditions assuming a feasible ip each time a qu i v j is set at unit level that is if pu i pv j blocks u i and v j are assumed to cover a match a necessary condition for the existence of a block permutation matrix solution of the ip but rather than test for a match covered by these two blocks we exhaust all choices of a third variable common to these blocks set at unit level and test for the existence of a match covered by all three blocks after exhausting all possible choices of a variable if no match exists the given qu i v j variable is deduced zero otherwise we conclude nothing in both cases we continue on to the next variable not yet deduced zero eventually no more variables can be deduced zero none of the constraints appear violated and the ip is undecided or enough of the variables are deduced zero such that a constraint is violated and the ip is infeasible the triple overlay matching based closure algorithm interchangeably associate matrix q with a matrix that has entries at zero level where matrix q has entries at zero level and unit entries where matrix q has pu i or pu i pv j entries we ll now reference variables qu i u i and qu i v j a unit entry in the uth row and ith column of block u i represents variable pu i the remaining unit entries in the v th row and j th column of block u i with u v and i j can be regarded as representing pv j variables which is what they really do represent in the case of a solution and they can also be regarded as representing pu i pv j variables we think of this associated matrix in terms of patterns in q that cover block permutation matrices and then we ll exploit matching definition match is a logical function input is an nxn matrix row labels are viewed as vertices through n set a and column labels are viewed as vertices n through set b match in earlier work we create an equivalence class the set of all possible v s none of which cover any p whose class representative is hence the term triple overlay every variable not deduced zero participates in a match in an overlay of three blocks of q 
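The closure test sketched above is built from exactly two primitives, match and overlay, which are defined formally just below. A minimal sketch of both follows, assuming each block of Q is held as an n x n 0/1 numpy array; the names follow the text, but the array representation and the augmenting-path matching routine are illustrative choices, not the authors' Fortran implementation.

import numpy as np

def overlay(a, b):
    # binary AND of two n x n 0/1 block matrices
    return a & b

def match(m):
    # True iff the n x n 0/1 matrix m admits a perfect bipartite matching,
    # with rows read as vertex set A and columns as vertex set B
    # (augmenting-path search, one row at a time)
    n = m.shape[0]
    owner_of_col = [-1] * n

    def try_row(u, seen):
        for v in range(n):
            if m[u, v] and not seen[v]:
                seen[v] = True
                if owner_of_col[v] == -1 or try_row(owner_of_col[v], seen):
                    owner_of_col[v] = u
                    return True
        return False

    return all(try_row(u, [False] * n) for u in range(n))

With these two helpers, the triple-overlay test for a candidate variable amounts to checking match(overlay(overlay(block_ui, block_vj), block_wk)) over all admissible choices of the third block w, k, as described in the overview above.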
there exists quadrupal quintuple overlay through to exhaustion where the algorithm tests factorial numbers of n n is sufficient overlays for a match returns true if there exists a match between a and b otherwise match returns false definition overlay is a binary and function applied to two nxn matrices its output is a matrix we loosely use the terms double and triple overlay in place of overlay and overlay overlay etc definition check rowscolumns q is a routine that returns true if a row or column in matrix q is all in which case the algorithm terminates and the graph is deduced infeasible otherwise check rowscolumns q returns false in our fortran implementation of the algorithm before testing for termination we also implement boolean closure within and between blocks in q this efficiently deduces some of the components of q to be at zero level and we note significant speed increases note that boolean closure in check rowscolumns q can be replaced by lp temporarily set a nonzero component of matrix q to unit level and check for infeasibility subject to doubly stochastic constraints of matrix infeasibility implies that the component can be set to zero level whenever the algorithm exits undecided then for every qu i v j there exists a match in a triple overlay of blocks u i v j and at least one w k block the ip is then not deduced infeasible and we call the corresponding matrix q the triple overlay closure of the ip otherwise the algorithm exits and the ip is deduced infeasible open v is deduced to be empty input open v v q output open v decision if check rowscolumns q exit open v infeasible continue triple closure oldq q for u i n and qu i u i do if q u i then qu i u i for v j n and u v and i j and qu i v j do qu i v j qv j u i open v open v pu i pv j pv j pu i end if check rowscolumns q exit open v infeasible next i end for v j n and u v and i j and q u i v j do if overlay q u i q v j then qu i v j qv j u i open v open v pu i pv j pv j pu i next j end doubleoverlay overlay q u i q v j triple closure for w k n and u w v i k j and doubleoverlayw k do if overlay doubleoverlay q w k then doubleoverlay w k triple closure end end if doubleoverlay then qu i v j qv j u i open v open v pu i pv j pv j pu i end end end if oldq q continue triple closure exit open v undecided algorithm the triple overlay matching based closure algorithm application to the hcp let g be an n vertex graph also referenced by its adjacency matrix we model the hcp for simple connected graphs as do others called the background information and classification of graphs the is a well known decision problem and is g is edge if g is hamiltonian and since graphs are either or it follows that if g is then g is these graphs were initially studied by peter tait in the named snarks by martin gardner in tait conjectured that every planar graph has a hamilton cycle later disproved by tutte in via construction of a vertex counterexample this was a significant conjecture and had it been true it implied the famous theorem these ideas are summarized in the figure below all simple connected graphs hamiltonian graphs graphs tutte s counterexample snarks fig classification of simple connected graphs the matching model of the hcp regard paths of length n that start and stop at the same vertex and pass through every vertex as directed graphs on n vertices for undirected graphs every cycle is accompanied by a companion cycle no matter that g is hamiltonian or nonhamiltonian assign vertex n as the origin and terminal vertex for all cycles and 
assign each directed hamilton cycle to be in correspondence with each nxn permutation matrix p where pu i if and only if the ith arc in a cycle enters vertex u we encode each cycle as a permutation of vertex labels for example the path sequence is code for the first arc enters vertex the second arc enters vertex and so on since for all cycles by definition it s sufficient to code cycles as nxn permutation matrices note that an arc is directed and an edge is undirected the pair of arcs u i i u is the edge u i unless otherwise stated all graphs are simple connected and we next encode graph instance g by examining g s adjacency matrix adding to e all pairs of components of p pu i pv j that encode paths of length j i j i from vertex u to vertex v in cycles not in this encodes precisely the set of cycles not in g every cycle not in g uses at least one arc not in see the algorithm how to initialize exclusion set e below and recall that g is connected for arc u v not in g we can assign pu i pv but we also compute additional pu i pv whenever it s possible to account for no paths of length m in g from vertex u to vertex we do this by implementing dijkstras algorithm with equally weighted arcs to find minimal length paths between all pairs of vertices coded to return m n if no path exists we account for all paths of length one not in g arcs not in g and all paths of length two not in g by temporarily deleting the arc between adjacent vertices begin as follows if u is adjacent to v then temporarily delete arc u v and apply dijkstras algorithm to discover a minimal path of length m a simple no paths of length k can exist k m and pu i pv are discovered that for k and u not adjacent to v correspond with arcs in cycles not in g and for k correspond with paths of length k in cycles not in accounting for all arcs not in g is sufficient to model precisely all cycles not in g and we account for paths in cycles not in g to bolster two special cases arise case last arc in cycle recall that every n arc in a cycle enters vertex n by definition therefore observe arcs u n not in g temporarily deleted or otherwise noting how corresponding sets of cycles not in g can be encoded by permutation matrices for which the nth arc in a cycle enters vertex u pu n this is the case for and u not adjacent to v when dijkstras algorithm returns m if dijkstras algorithm returns m then again for and if u is not adjacent to v set pu n and for no paths of length two exist and these sets of cycles not in g can be encoded by permutation matrices for which the n arc in a cycle enters vertex u pu continuing in this way encode all possible n k th arcs in cycles not in g in paths of length k not in g to enter vertex u pu k m case first arc in cycle recall every first arc in every cycle exits vertex n observe and code all arcs n v in cycles not in g in paths of length k not in g by coding all possible k th arcs to enter vertex v pv k k m for the general case an exclusion set can be constructed by noting that a cycle not in g uses at least one arc not in g u v the complete set of permutation matrices corresponding to these cycles not in g are characterized by pu l pv l n added to by indexing l arc u v can play the role of sequence positions in disjoint sets of cycles not in considering all o arcs not in g each playing the role of all o n possible sequence positions it s possible to construct the set of permutation matrices corresponding to the set of cycles not in g accounted by the union of o pu l pv added to we generalize this idea via 
dijkstras algorithm and account for some sets of paths of length k not in recall that g is strongly connected but if an arc is temporarily deleted it s possible for no path to exist between a given pair of vertices this useful information indicates that an arc is essential under the assumption of the existence of a hamilton cycle that uses this arc in case this implies that a particular pu n is necessary and by integrality must be at unit level in every assignment of variables assuming the graph is hamiltonian until deduced otherwise if ever thus all other p s in the same row and column can be set at zero level this is accounted for when we initialize recall that m n in the case that dijkstras algorithm returns no minimal path the k loop appends the necessary set of pv j pu to e effectively setting variables in blocks through u n at zero level when implemented in the algorithm pu n must attain unit level via double stochastity and this implies that the other p s in the same column are deduced to be at zero level similarily for case in the general case it s also possible for no path to exist between a given pair of vertices u v when an arc is temporarily deleted under the assumption of the existence of a hamilton cycle this arc is essential and can play the role of sequence position through and so in each case all complementary row and column pu i pv j are assigned to when implemented a single pu i pv j variable remains in each row and therefore is equated with that block s pu i variable via scaled double stochastity within the block rows and columns in the block sum to pu i complementary pu i pv j variables in the corresponding column are therefore set in each block thus essential arcs also contribute to new information by adding their complementary row column pu i pv j to finally encode e into matrix q assign qu i v j for each pu i pv j e and then create v input arc adjacency matrix for g output e e case for u n do arc g u n g u n m dijkstrasalgorithm g u n for k arc arc m do e e pv j pu v n v u j n j n k end g u n arc end case for v n do arc g n v g n v m dijkstrasalgorithm g n v for k arc arc m do e e pv k pu i u n u v i n i k end g n v arc end general case for u n do for v n v u do arc g u v g u v m dijkstrasalgorithm g u v for k arc arc m do e e pu l pv l n k end g u v arc end end exit e algorithm how to initialize exclusion set e empirical results and two conjectures table below lists some details of applications all graphs of the algorithm table below lists some details of applications mostly graphs of an earlier version of the matching based closure algorithm called the a subset from over applications for both algorithms all of the graphs are decided and no application of either algorithm to any other graphs failed that are not reported empirical results in both tables heading pu i is the count of pu i variables and the size of initial available set v the number of qu i v j components in q after initializing e before implementing the algorithm note pu i qu i u i we only count qu i v j i j distinct qu i v j in table heading v refers to an upper bound on v for selected graphs each modified to include the cycle n n simply to observe open v two of these graphs are also hypohamiltonian the count in parentheses is an upper bound on v after removing a vertex and the wca two conjectures distinct pu i pv j e exists conjecture polynomial sized proof of membership of all n for all simple connected graphs conjecture triple overlay matching based closure deduces open v for all simple connected 
graphs the wca is a closure exhausting the middle v j loop before returning to label continue triple closure it s followed by triple closure also applied exhausting the interior w k loop before returning to label triple closure many more applications of boolean closure across all of q at many more intermediate steps are also implemented unlike triple overlay matching based closure as we have presented although these checks can also be included block overlays are also restricted to be of the form q u i and q v j i j in this way we can solve problems in the vertex range the wca is designed to be parallelized and the fortran code is written for distributed computing table applications of the triple overlay matching based closure algorithm name of graph vertices all graphs pu i petersen snark flower snarks tietzs snark blanusa snarks house of graphs a loupekine snark a goldberg snark house of graphs jan goedgebeur snark snarks house of graphs a double star snark table applications of a matching based closure algorithm wca name of graph petersen snark herschel graph a kleetope matteo coxeter house of graphs snark zamfirescu snark a hypohamiltonian a grinberg graph szekeres snark watkins snark thomassen meredith a flower snark a goldberg snark vertices edges pu i v no not run yet not run yet not run yet not run yet not run yet not run yet not run yet not run yet not run yet simple connected and hypohamiltonian confirmed existence of open v after removing a vertex and the wca historical note ignoring the planarity condition on tait s conjecture the matteo graph is the smallest counterexample while the graph is the smallest planar counterexample to tait s conjecture tutte s graph is a larger counterexample we also note that the georges graph is the smallest counterexample to tutte s conjecture and horton s graph was the first counterexample to tutte s conjecture discussion about practical generalizations of the algorithm the algorithm can be designed to invoke arbitrary levels of overlay adaptive strategies that change the level of overlay if more depth is desired needed to deduce variables at zero level but in order to make use of increased overlay it s necessary to add more variables to retain information about tests for matching for example if we create a quadrupal overlay version of the algorithm we then introduce pu i pv j pw k variables and redefine system and matrix q in terms of triply nested birkhoff polyhedra see the discussion in for a description of these polyhedra as feasible regions of lp formulations relaxed ips there exists a sequence of feasible regions in correspondence with increasing levels of nested birkhoff polyhedra whose end feasible region is the convex hull of the set of integer extrema of system see for a discussion of inequalities the term closure has so far been reserved for deducing variables added to e by invoking the algorithm but other polynomial time techniques can be used to deduce variables at zero level for example prior to matching we could implement lp and maximize each variable in system and if its maximum is less than unit level the variable can be set zero in our implementation we use boolean closure see for more details we also note there exist entire conferences devoted to matching under preferences perhaps many more innovative heuristics exist and can be included in the algorithm the algorithm is designed for parallel processing each qu i v j variable not yet deduced zero can be tested independent of the others by making a copy of matrix q and 
implementing the algorithm if an independent process deduces a qu i v j variable at zero level simply update the corresponding qu i v j variable in each q across all processes for some applications there exist model specific dependencies between variables undirected hcp implies pu i pv j if and only if pu pv in this way we account for companion cycles about study of the algorithm exclusion set e is the focus of study we propose to classify different e by the pattern that remains in matrix q after exit from the algorithm up to isomorphism q covers the set of all possible solutions to the ip it would be useful to know what kinds of e cause the algorithm to generate q as a minimal cover since it then follows that the algorithm would decide feasibility of the ip even if there exist classes of e for which infeasible ips provably exit the algorithm infeasible no matter that q is or is not a minimal cover it still follows that the algorithm decides feasibility of the ip we plan to investigate counterexamples via the matching model for hcp graph not fails an earlier version of the algorithm we will convert and study it as instance of two more matching model applications for input to the algorithm we now present two more matching models as applications for the q s components no longer have the interpretation as sequenced arcs in a cycle instead let q be an block permutation matrix whose blocks are mxm permutation matrices p we note from that f is a subgraph of g if and only if there exists permutation matrix p such that p t gp covers f and we add if and only if covers f where f and g are column vectors of adjacency matrices f and g formatted as f f f m f f f m m and g g g m g g g m m we now model both the graph and subgraph isomorphism decision problems as matching models the single difference being that in the case of graph isomorphism more information appears to be added to first note that q g covers f means q g is required to place ones in the same positions as those of so for each of these equations a subset of row components sum to one implying that the complement row components must therefore all be set at zero level add them to this completes the subgraph isomorphism matching model and only part of the graph isomorphism model for graph isomorphism cover means equality the remaining equations to be satisfied are those for which q g is required to place zeroes in the same positions as those of so for each of these equations a subset of row components sum to zero implying that these row components must therefore all be set at zero level add them to this completes the graph isomorphism matching model other applications of the algorithm we originally intended for the algorithm to decide feasibility of a matching model when it decides infeasibility the algorithm has served its purpose otherwise it s not known if the model is feasible or infeasible we note that open v is a refined cover of possible solutions to the ip and we believe that this is useful we propose that the algorithm can be developed as see for more information about these modelling techniques part of other search based algorithms either to provide refined information prior to a search or incorporated and updated alongside a search based algorithm to provide more information during a search there is one last thought about an academic use for the algorithm suppose we are given a correctly guessed infeasible ip and the algorithm exits undecided we can attribute the failure to e as lacking the necessary right kind of pu i pv j that 
could induce closure we could then theoretically augment e with additional pu i pv j until we deduce infeasibility and discover extra information needed to generate open v so for application when the algorithm gets stuck and open v simply augment e with additional pu i pv j open v and test if open v becomes empty while it might be difficult to guess minimal sized sets of additional pu i pv j if they can be guessed we will then have articulated what critical information is needed to solve the problem of course it s not known if these additional pu i pv j can be efficiently computed or validated as members in see conjecture acknowledgements and dedication thank you to adrian lee for preparing and running some of the examples presented in tables and nicholas swart for testing and implementing graphs in catherine bell for suggestions and contributions early on in this project we dedicate this paper to the late pal fischer for ted pal was a colleague and friend for myself gismondi pal taught me analysis an understanding of convex polyhedra and later became a colleague ted and i both already miss him very much references brinkmann coolsaet goedgebeur and melot house of graphs a database of interesting graphs discrete applied mathematics available at http pp demers and gismondi enumerating facets of util pp ejov haythorpe and rossomakhine a conversion of hcp to australasian journal of combinatorics pp mathematics stack exchange string sspp wc g w o i ag bo g retrieved may http filar haythorpe and rossomakhine a new heuristic for detecting in cubic graphs computers operations research pp gary johnson and tarjan the planar hamilton circuit problem is siam j pp gismondi subgraph isomorphism and the hamilton tour decision problem using a linearized form of p gp t util pp modelling decision problems via birkhoff polyhedra journal of algorithms and computation pp gismondi and swart a model of the tour decision problem math prog ser a pp haythorpe fhcp challenge set retrieved july http microsoft research lab new england cambridge ma usa https swart gismondi swart bell and lee deciding graph via a closure algorithm journal of algorithms and computation pp wolfram math world graph retrieved may http
| 8 |
nov learning hierarchical information flow with recurrent neural modules danijar hafner google brain mail alex irpan google brain alexirpan james davidson google brain jcdavidson nicolas heess google deepmind heess abstract we propose thalnet a deep learning model inspired by neocortical communication via the thalamus our model consists of recurrent neural modules that send features through a routing center endowing the modules with the flexibility to share features over multiple time steps we show that our model learns to route information hierarchically processing input data by a chain of modules we observe common architectures such as feed forward neural networks and skip connections emerging as special cases of our architecture while novel connectivity patterns are learned for the compression task our model outperforms standard recurrent neural networks on several sequential benchmarks introduction deep learning models make use of modular building blocks such as fully connected layers convolutional layers and recurrent layers researchers often combine them in strictly layered or ways instead of prescribing this connectivity a priori our method learns how to route information as part of learning to solve the task we achieve this using recurrent modules that communicate via a routing center that is inspired by the thalamus warren mcculloch and walter pitts invented the perceptron in as the first mathematical model of neural information processing laying the groundwork for modern research on artificial neural networks since then researchers have continued looking for inspiration from neuroscience to identify new deep learning architectures while some of these efforts have been directed at learning biologically plausible mechanisms in an attempt to explain brain behavior our interest is to achieve a flexible learning model in the neocortex communication between areas can be broadly classified into two pathways direct communication and communication via the thalamus in our model we borrow this latter notion of a centralized routing system to connect specializing neural modules in our experiments the presented model learns to form connection patterns that process input hierarchically including skip connections as known from resnet highway networks and densenet and feedback connections which are known to both play an important role in the neocortex and improve deep learning the learned connectivity structure is adapted to the task allowing the model to computational width and depth in this paper we study these properties with the goal of building an understanding of the interactions between recurrent neural modules work done during an internship with google brain conference on neural information processing systems nips long beach ca usa a module f receives the task input f can be used for side computation f is trained on an auxiliary task and f produces the output for the main task b computation of modules unrolled in time one possible path of hierarchical information flow is highlighted in green we show that our model learns hierarchical information flow skip connections and feedback connections in section figure several modules share their learned features via a routing center dashed lines are used for dynamic reading only we define both static and dynamic reading mechanisms in section section defines our computational model we point out two critical design axes which we explore experimentally in the supplementary material in section we compare the performance of our model on three 
sequential tasks and show that it consistently outperforms recurrent networks in section we apply the best performing design to a language modeling task where we observe that the model automatically learns hierarchical connectivity patterns thalamus gated recurrent modules we find inspiration for our work in the neurological structure of the neocortex areas of the neocortex communicate via two principal pathways the comprises direct connections between nuclei and the comprises connections relayed via the thalamus inspired by this second pathway we develop a sequential deep learning model in which modules communicate via a routing center we name the proposed model thalnet model definition our system comprises a tuple of computation modules f f f i that route their respective features into a shared center vector an example instance of our thalnet model is shown in figure at every time step t each module f i reads from the center vector via a context input cit and an optional task input xit the features f i cit xit that each module produces are directed into the center output modules additionally produce task output from their feature vector as a function oi y i all modules send their features to the routing center where they are merged to a single feature vector m in our experiments we simply implement m as the concatenation of all at the next time step the center vector is then read selectively by each module using a reading mechanism to obtain the context input ri this reading mechanism allows modules to read individual features allowing for complex and selective reuse of information between modules the initial center vector is the zero vector in practice we experiment with both feed forward and recurrent implementations of the modules f i for simplicity we omit the hidden state used in recurrent modules in our notation the reading mechanism is conditioned on both and separately as the merging does not preserve in the general case y x c figure the thalnet model from the perspective of a single module in this example the module receives input xi and produces features to the center and output y i its context input ci is determined as a linear mapping of the center features from the previous time step in practice we apply weight normalization to encourage interpretable weight matrices analyzed in section in summary thalnet is governed by the following equations module features f i cit xit module output yti center features i read context input i o m ri the choice of input and output modules depends on the task at hand in a simple scenario single task there is exactly one input module receiving task input some number of side modules and exactly one output module producing predictions the output modules get trained using appropriate loss functions with their gradients flowing backwards through the fully differentiable routing center into all modules modules can operate in parallel as reads target the center vector from the previous time step an unrolling of the process can be seen in figure this figure illustrates the ability to arbitrarily route between modules between time steps this suggest a sequential nature of our model even though application to static input is possible by allowing observing the input for multiple time steps we hypothesize that modules will use the center to route information through a chain of modules before producing the final output see section for tasks that require producing an output at every time step we repeat input frames to allow the model to process through 
multiple modules first before producing an output this is because communication between modules always spans a time reading mechanisms we now discuss implementations of the reading mechanism ri and modules f i ci xi as defined in section we draw a distinction between static and dynamic reading mechanisms for thalnet for static reading ri is conditioned on independent parameters for dynamic reading ri is conditioned on the current corresponding module state allowing the model to adapt its connectivity within a single sequence we investigate the following reading mechanisms linear mapping in its simplest form static reading consists of a fully connected layer r w with weights w as illustrated in figure this approach performs reasonably well but can exhibit unstable learning dynamics and learns noisy weight matrices that are hard to interpret regularizing weights using or penalties does not help here since it can cause side modules to not get read from anymore weight normalization we found linear mappings with weight normalization paw rameterization to be effective for this the context input is computed as r with scaling factor r weights w and the euclidean matrix norm please refer to graves for a study of a similar approach normalization results in interpretable weights since increasing one weight pushes other less important weights closer to zero as demonstrated in section fast softmax to achieve dynamic routing we condition the reading weight matrix on the current module features this can be seen as a form of fast weights providing a biologically plausible method for attention we then apply softmax normalization to the computed weights so that each element of the context is computed as a weighted average over center elements rather than just a weighted sum specifically r j e w j e w jk with weights w and biases b while this allows for a different connectivity pattern at each time step it introduces learned parameters per module fast gaussian as a compact parameterization for dynamic routing we consider choosing each context element as a gaussian weighted average of with only mean and variance vectors learned conditioned on the context input is computed as r j f w b j u d j with weights w u biases b d and the gaussian density function f the density is evaluated for each index in based on its distance from the mean this reading mechanism only requires parameters per module and thus makes dynamic reading more practical reading mechanisms could also select between modules on a high level instead of individual feature elements we do not explore this direction since it seems less biologically plausible moreover we demonstrate that such knowledge about feature boundaries is not necessary and hierarchical information flow emerges when using routing see figure theoretically this also allows our model to perform a wider class of computations performance comparison we investigate the properties and performance of our model on several benchmark tasks first we compare reading mechanisms and module designs on a simple sequential task to obtain a good configuration for the later experiments please refer to the supplementary material for the precise experiment description and results we find that the weight normalized reading mechanism provides best performance and stability during training we will use thalnet models with four modules of configuration for all experiments in this section to explore the performance of thalnet we now conduct experiments on three sequential tasks of increasing difficulty 
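Before turning to the individual tasks, a toy numpy sketch of one ThalNet time step as defined in the model section, using weight-normalised reading. Simple fully connected modules stand in for the GRU-based modules used in the experiments, the per-module output heads are omitted, and all names and dimensions are illustrative assumptions rather than the published configuration.

import numpy as np

rng = np.random.default_rng(0)

def weight_norm_read(center, V, g):
    # static reading as a weight-normalised linear map: each context element
    # is a scaled, unit-norm row of V dotted with the previous center vector
    W = g[:, None] * V / np.linalg.norm(V, axis=1, keepdims=True)
    return W @ center

class Module:
    # f(c, x) -> features; a single tanh layer here, purely for illustration
    def __init__(self, in_dim, feat_dim, center_dim, ctx_dim):
        self.Wf = rng.normal(0.0, 0.1, (feat_dim, in_dim + ctx_dim))
        self.V = rng.normal(0.0, 0.1, (ctx_dim, center_dim))  # reading weights
        self.g = np.ones(ctx_dim)                             # reading scales

    def step(self, x, center):
        c = weight_norm_read(center, self.V, self.g)          # context input
        h = c if x is None else np.concatenate([x, c])
        return np.tanh(self.Wf @ h)                           # features sent to the center

def thalnet_step(modules, inputs, center):
    # all modules read the previous center in parallel; the new center is
    # simply the concatenation of their freshly computed features
    feats = [m.step(x, center) for m, x in zip(modules, inputs)]
    return np.concatenate(feats)

# four modules, the first receiving the task input, run on a toy sequence
feat, ctx, n_mod = 8, 12, 4
mods = [Module(3, feat, n_mod * feat, ctx)] + \
       [Module(0, feat, n_mod * feat, ctx) for _ in range(n_mod - 1)]
center = np.zeros(n_mod * feat)
for x_t in np.ones((5, 3)):
    center = thalnet_step(mods, [x_t, None, None, None], center)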
sequential permuted mnist we use images from the mnist data set the pixels of every image by a fixed random permutation and show them to the model as a sequence of rows the model outputs its prediction of the handwritten digit at the last time step so that it must integrate and remember observed information from previous rows this delayed prediction combined with the permutation of pixels makes the task harder than the static image classification task with a recurrent neural network achieving test error we use the standard split of training images and testing images sequential in a similar spirit we use the data set and feed images to the model row by row we flatten the color channels of every row so that the model observes a vector of elements at every time step the classification is given after observing the last row of the image this task is more difficult than the mnist task as the image show more complex and often ambiguous objects the data set contains training images and testing images language modeling this text corpus consisting of the first bytes of the english wikipedia is commonly used as a language modeling benchmark for sequential models at every time step the model observes one byte usually corresponding to character encoded as a vector of length the task it to predict the distribution of the next character in the sequence performance is measured in bits per character bpc pn computed as p xi following cooijmans et al we train on the first and evaluate performance on the following of the corpus for the two image classification tasks we compare variations of our model to a stacked gated recurrent unit gru network of layers as baseline the variations we compare are different sequential testing sequential permuted mnist testing epochs thalnet thalnet thalnet gru gru baseline thalnet thalnet ff sequential permuted mnist training bits per character bpc thalnet thalnet thalnet ff thalnet thalnet gru gru baseline epochs sequential training thalnet thalnet thalnet thalnet ff thalnet gru gru baseline epochs accuracy gru baseline thalnet gru thalnet thalnet thalnet thalnet ff epochs gru step gru steps thalnet steps epochs language modeling training bits per character bpc accuracy language modeling evaluation accuracy accuracy thalnet steps gru step gru steps epochs figure performance on the permuted sequential mnist sequential cifar and language modeling tasks the stacked gru baseline reaches higher training accuracy on cifar but fails to generalize well on both tasks thalnet clearly outperforms the baseline in testing accuracy on cifar we see how recurrency within the modules speeds up training the same pattern is shows for the experiment where thalnet using parameters matches the performance of the baseline with parameters the step number or refers to repeated inputs as discussed in section we had to smooth the graphs using a running average since the models were evaluated on testing batches on a rolling basis choices of layers and gru layers for implementing the modules f i ci xi we test with two fully connected layers ff a gru layer gru fully connected followed by gru gru followed by fully connected and a gru sandwiched between fully connected layers for all models we pick the largest layer sizes such that the number of parameters does not exceed training is performed for epochs on batches of size using rmsprop with a learning rate of for language modeling we simulate thalnet for steps per token as described in section to allow the output module to read information about the 
current input before making its prediction note that on this task our model uses only half of its capacity directly since its side modules can only integrate dependencies from previous time steps we run the baseline once without extra steps and once with steps per token allowing it to apply its full capacity once and twice on each token respectively this makes the comparison a bit difficult but only by favouring the baseline this suggests that architectural modifications such as explicit between modules could further improve performance the task requires larger models we train thalnet with modules of a size feed forward layer and a size gru layer each totaling in million model parameters we compare to a standard baseline in language modeling a single gru with units totaling in million parameters we train on batches of sequences each containing bytes using the adam optimizer with a default learning rate of we scale down gradients exceeding a norm of results for epochs of training are shown in figure the training took about days for thalnet with steps per token days for the baseline with steps per token and days for the baseline without extra steps figure shows the training and testing and training curves for the three tasks described in this section thalnet outperforms standard gru networks in all three tasks interestingly thalnet experiences a note that the modules require some amount of local structure to allow them to specialize implementing the modules as a single fully connected layer recovers a standard recurrent neural network with one large layer much smaller gap between training and testing performance than our baseline a trend we observed across all experimental results on the task thalnet scores bpc using parameters while our gru baseline scores bpc using parameters lower is better our model thus slightly improves on the baseline while using fewer parameters this result places thalnet in between the baseline and regularization methods designed for language modeling which can also be applied to our model the baseline performance is consistent with published results of lstms with similar number of parameters we hypothesize the information bottleneck at the reading mechanism acting as an implicit regularizer that encourages generalization compared to using one large rnn that has a lot of freedom of modeling the mapping thalnet imposes local structure to how the mapping can be implemented in particular it encourages the model to decompose into several modules that have stronger than thus to some extend every module needs to learn a computation hierarchical connectivity patterns using its routing center our model is able to learn its structure as part of learning to solve the task in this section we explore the emergent connectivity patterns we show that our model learns to route features in hierarchical ways as hypothesized including skip connections and feedback connections for this purpose we choose the corpus a language modeling benchmark consisting of the first bytes of wikipedia preprocessed for the hutter prize the model observes one encoded byte per time step and is trained to predict its future input at the next time step we use comparably small models to be able to run experiments quickly comparing thalnet models of modules with layer sizes and both experiments use weight normalized reading our focus here is on exploring learned connectivity patterns we show competitive results on the task using larger models in section we simulate two sub time steps to allow for the output 
module to receive information of the current input frame as discussed in section models are trained for epochs on batches of size containing sequences of length using rmsprop with a learning rate of in general we observe different random seeds converging to similar connectivity patterns with recurring elements trained reading weights figure shows trained reading weights for various reading mechanisms along with their connectivity graphs that were manually each image represents a reading weight matrix for the modules to top to bottom each pixel row shows the weight factors that get multiplied with to produce a single element of the context vector of that module the weight matrices thus has dimensions of white pixels represent large magnitudes suggesting focus on features at those positions the weight matrices of weight normalized reading clearly resemble the boundaries of the four concatenated module features in the center vector even though the model has no notion of the origin and ordering of elements in the center vector a similar structure emerges with fast softmax reading these weight matrices are sparser than the weights from weight normalization over the course of a sequence we observe some weights staying constant while others change their magnitudes at each time step this suggests that optimal connectivity might include both static and dynamic elements however this reading mechanism leads to less stable training this problem could potentially alleviated by normalizing the fast weight matrix with fast gaussian reading we see that the distributions occasionally tighten on specific features in the first and last modules the modules that receive input and emit output the other modules learn large variance parameters effectively spanning all center features this could potentially be addressed by reading using mixtures of gaussians for each context element instead we generally find that weight normalized and fast softmax reading select features with in a more targeted way developing formal measurements for this deduction process seems beneficial in the future skip connection skip connection feedback connection x y x skip connection a weight normalization y feedback connection b fast softmax c fast gaussian figure reading weights learned by different reading mechanisms with modules on the language modeling task alongside manually deducted connectivity graphs we plot the weight matrices that produce the context inputs to the four modules top to bottom the top images show focus of the input modules followed by side modules and output modules at the bottom each pixel row gets multiplied with the center vector to produce one scalar element of the context input ci we visualize the magnitude of weights between the to the percentile we do not include the connectivity graph for fast gaussian reading as its reading weights are not clearly structured commonly learned structures the top row in figure shows manually deducted connectivity graphs between modules arrows represent the main direction of information flow in the model for example the two incoming arrows to module in figure indicate that module mainly attends to features produced by modules and we infer the connections from the larger weight magnitudes in the first and third quarters of the reading weights for module bottom row a typical pattern that emerges during the experiments can be seen in the connectivity graphs of both weight normalized and fast softmax reading figures and namely the output module reads features directly from the 
input module this direction connection is established early on during training likely because this is the most direct gradient path from output to input later on the side modules develop useful features to support the input and output modules in another pattern one module reads from all other modules and combines their information in figure module takes this role reading from modules and distributing these features via the input module in additional experiments with more than four modules we observed this pattern to emerge predominantly this connection pattern provides a more efficient way of information sharing than all modules both connectivity graphs in figure include hierarchical computation paths through the modules they include learn skip connections which are known to improve gradient flow from popular models such as resnet highway networks and densenet furthermore the connectivity graphs contain backward connections creating feedback loops over two or more modules feedback connections are known to play a critical role in the neocortex which inspired our work related work we describe a recurrent mixture of experts model that learns to dynamically pass information between the modules related approaches can be found in various recurrent and methods as outlined in this section modular neural networks thalnet consists of several recurrent modules that interact and exploit each other modularity is a common property of existing neural models learn a matrix of tasks and robot bodies to improve both multitask and transfer learning learn modules modules specific to objects present in the scene which are selected by an object classifier these approaches specify modules corresponding to a specific task or variable manually in contrast our model automatically discovers and exploits the inherent modularity of the task and does not require a correspondence of modules to task variables the column bundle model consists of a central column and several around it while not applied to temporal data we observe a structural similarity between our modules and the in the case where weights are shared among layers of the which the authors mention as a possibility learned computation paths we learn the connectivity between modules alongside the task there are various methods in the context that also connectivity between modules fernando et al learn paths through multiple layers of experts using an evolutionary approach rusu et al learn adapter connections to connect to fixed previously trained experts and exploit their information these approaches focus on architectures the recurrency in our approach allows for complex and flexible computational paths moreover we learn interpretable weight matrices that can be examined directly without performing costly sensitivity analysis the neural programmer interpreted presented by reed and de freitas is related to our dynamic gating mechanisms in their work a network recursively calls itself in a parameterized way to perform computations in comparison our model allows for parallel computation between modules and for unrestricted connectivity patterns between modules memory augmented rnns the center vector in our model can be interpreted as an external memory with multiple recurrent controllers operating on it preceding work proposes recurrent neural networks operating on external memory structures the neural turing machine proposed by graves et al and work investigate differentiable ways to address a memory for reading and writing in the thalnet model we use multiple 
recurrent controllers accessing the center vector moreover our center vector is recomputed at each time step and thus should not be confused with a persistent memory as is typical for model with external memory conclusion we presented thalnet a recurrent modular framework that learns to pass information between neural modules in a hierarchical way experiments on sequential and permuted variants of mnist and are a promising sign of the viability of this approach in these experiments thalnet learns novel connectivity patterns that include hierarchical paths skip connections and feedback connections in our current implementation we assume the center features to be a vector introducing a matrix shape for the center features would open up ways to integrate convolutional modules and similaritybased attention mechanisms for reading from the center while matrix shaped features are easily interpretable for visual input it is less clear how this structure will be leveraged for other modalities a further direction of future work is to apply our paradigm to tasks with multiple modalities for inputs and outputs it seems natural to either have a separate input module for each modality or to have multiple output modules that can all share information through the center we believe this could be used to hint specialization into specific patterns and create more controllable connectivity patterns between modules similarly we an interesting direction is to explore the proposed model can be leveraged to learn and remember a sequence of tasks we believe modular computation in neural networks will become more important as researchers approach more complex tasks and employ deep learning to rich domains our work provides a step in the direction of automatically organizing neural modules that leverage each other in order to solve a wide range of tasks in a complex world references andreas rohrbach darrell and klein neural module networks in ieee conference on computer vision and pattern recognition pages ba hinton mnih j leibo and ionescu using fast weights to attend to the recent past in advances in neural information processing systems pages cho van bahdanau and bengio on the properties of neural machine translation approaches syntax semantics and structure in statistical translation page cooijmans ballas laurent and courville recurrent batch normalization arxiv preprint devin gupta darrell abbeel and levine learning modular neural network policies for and transfer arxiv preprint fernando banarse blundell zwols ha a rusu pritzel and wierstra pathnet evolution channels gradient descent in super neural networks arxiv preprint gilbert and sigman brain states influences in sensory processing neuron graves adaptive computation time for recurrent neural networks arxiv preprint graves wayne and danihelka neural turing machines arxiv preprint graves wayne reynolds harley danihelka colmenarejo grefenstette ramalho agapiou et al hybrid computing using a neural network with dynamic external memory nature hawkins and george hierarchical temporal memory concepts theory and terminology technical report numenta he zhang ren and j sun deep residual learning for image recognition in ieee conference on computer vision and pattern recognition pages hinton krizhevsky and wang transforming artificial neural networks and machine learning icann pages hochreiter and schmidhuber long memory neural computation huang liu weinberger and van der maaten densely connected convolutional networks arxiv preprint jacobs jordan and barto task 
decomposition through competition in a modular connectionist architecture the what and where vision tasks cognitive science kingma and ba adam a method for stochastic optimization in international conference on learning representations kirkpatrick pascanu rabinowitz veness desjardins a rusu milan quan ramalho et al overcoming catastrophic forgetting in neural networks proceedings of the national academy of sciences page krizhevsky learning multiple layers of features from tiny images krueger maharaj pezeshki ballas ke goyal bengio larochelle courville et al zoneout regularizing rnns by randomly preserving hidden activations arxiv preprint lecun and cortes the mnist database of handwritten digits lillicrap cownden tweed and akerman random synaptic feedback weights support error backpropagation for deep learning nature communications mahoney about the test data http mcculloch and pitts a logical calculus of the ideas immanent in nervous activity the bulletin of mathematical biophysics pham tran and venkatesh one size fits many column bundle for learning arxiv preprint reed and de freitas neural in international conference on learning representations a rusu rabinowitz desjardins soyer kirkpatrick kavukcuoglu pascanu and hadsell progressive neural networks arxiv preprint salimans and kingma weight normalization a simple reparameterization to accelerate training of deep neural networks in advances in neural information processing systems pages schmidhuber learning to control memories an alternative to dynamic recurrent networks neural computation shazeer mirhoseini maziarz davis q le hinton and j dean outrageously large neural networks the layer arxiv preprint sherman thalamus plays a central role in ongoing cortical functioning nature neuroscience srivastava greff and schmidhuber highway networks arxiv preprint tieleman and hinton lecture divide the gradient by a running average of its recent magnitude coursera neural networks for machine learning zenke poole and ganguli improved multitask learning through synaptic intelligence arxiv preprint supplementary material for learning hierarchical information flow with recurrent neural modules a module designs and reading mechanisms sequential mnist testing thalnet thalnet thalnet gru baseline thalnet gru thalnet ff epochs accuracy accuracy sequential mnist testing a module designs thalnet weight norm thalnet linear gru baseline thalnet fast softmax epochs b reading mechanisms figure test performance on the sequential mnist task grouped by module design left and reading mechanism right plots show the top median and bottom accuracy over the other design choices recurrent modules train faster than pure fully connected modules and weight normalized reading is both stable and performs best modules perform similarly to while limiting the size of the center we use a sequential variant of mnist to compare the reading mechanisms described in section along with implementations of the module function in sequential mnist the model observes handwritten digits of pixels from top to bottom one row per time step the prediction is given at the last time step so that the model has to integrate and remember observed information over the sequence this makes the task more challenging than in the static setting with a recurrent network achieving error on this task to implement the modules f i ci xi we test various combinations of fully connected and recurrent layers of gated recurrent units gru modules require some amount of local structure to allow them to we test with 
two fully connected layers ff a gru layer gru fully connected followed by gru gru followed by fully connected and a gru sandwiched between fully connected layers in addition we compare performance to a stacked gru baseline with layers for all models we pick the largest layer sizes such that the number of parameters does not exceed we train for epochs on batches of size using rmsprop with a learning rate of figure shows the test accuracy of module designs and reading mechanisms thalnet outperforms the stacked gru baseline in most configurations we assume that the structure imposed by our model acts as a regularizer we perform a further performance comparison in section results for module designs are shown in figure in the appendix we observe a benefit of recurrent modules as they exhibit faster and more stable training than fully connected modules this could be explained by the fact that pure fully connected modules have to learn to use the routing center to store information over time which is a long feedback loop having a fully connected layer before the recurrent layer also significantly improves performance a fully connected layer after the gru let us produce compact feature vectors that scale better to large modules although we find to be beneficial in later experiments section implementing the modules as a single fully connected layer recovers a standard recurrent neural network with one large layer results for the reading mechanisms area shown in figure the reading mechanism only has a small impact on the model performance we find weight normalized reading to yield more stable performance than linear or fast softmax reading for all further experiments we use weight normalized reading due to both its stability and predictive performance we do not include results for fast gaussian reading here as it performed below the performance range of the other methods b interpretation as recurrent mixture of experts thalnet can route information from the input to the output over multiple time steps this enables it to trade off shallow and deep computation paths to understand this we view thalnet as a smooth mixture of experts model where the modules f f f i are the recurrent experts each module outputs its features to the center vector a linear combination of is read at the next time step which effectively performs a mixing of expert outputs compared to the recurrent mixture of experts model presented by shazeer et al our model can recurrently route information through the mixture of multiple times increasing the number of mixture compounds to highlight two extreme cases the modules could read from identical locations in the center in this case the model does a wide and shallow computation over time step analogous to graves in the other extreme each module reads from a different module recovering a hierarchy of recurrent layers this gives a deep but narrow computation stretched over multiple time steps in between there exist a spectrum of complex patterns of information flow with differing and dynamic computation depths this is comparable to densenet which also blends information from paths of different computational depth although in a purely model using modules our model could still leverage the recurrence between the modules and the center to store information over time however this bounds the number of distinct computation steps that thalnet could apply to an input using recurrent modules the computation steps can change over time increasing the flexibility of the model recurrent modules give 
a stronger prior for using feedback and show improved performance in our experiments.

C. Comparison to long short-term memory. When viewing the equations in the model definition section, one might ask how our model compares to long short-term memory (LSTM). However, there exists only a limited similarity between the two models; empirically, we observed that LSTMs performed similarly to our GRU baselines when given the same parameter budget. The LSTM's context vector c_t is processed within a single layer, while ThalNet's routing center is shared among all modules. The LSTM's hidden output h_t is a better candidate for comparison with ThalNet's center features, which allows us to relate the recurrent weight matrix of an LSTM layer to the linear version of our reading mechanism. We could relate each ThalNet module to a set of multiple LSTM units; however, LSTM units perform separate scalar computations, while our modules can learn complex interactions between multiple features at each time step. Alternatively, we could see LSTM units as very small ThalNet modules, each reading exactly four context elements, namely for the input and the three gates. However, the computational capacity and local structure of individual LSTM units is not comparable to that of the ThalNet modules used in our work.
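To make the routing pattern concrete, the sketch below runs one ThalNet-style layer for a few time steps in plain NumPy. It is a minimal illustration under assumptions of our own, not the authors' implementation: the modules here are simple fully connected maps rather than GRUs, the sizes are arbitrary, and the L1-normalized reading weights are only a stand-in for the weight-normalized reading mechanism described above. Only the overall pattern follows the text: every module writes its features into a shared center vector and reads a learned linear combination of the center at the next step, with module 0 additionally receiving the external input.

```python
import numpy as np

rng = np.random.default_rng(0)


def weight_normalized_read(center, raw_weights):
    # Read a context vector as a linear combination of center features.
    # Rows are normalized to unit L1 norm; this is only a stand-in for the
    # paper's "weight normalized" reading mechanism.
    w = np.abs(raw_weights)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)
    return w @ center


class Module:
    # A fully connected module: reads a context vector from the center and
    # maps it (plus an optional side input) to a feature vector.
    def __init__(self, in_size, read_size, feature_size, center_size):
        self.read_w = rng.normal(0.0, 0.1, (read_size, center_size))
        self.w = rng.normal(0.0, 0.1, (feature_size, read_size + in_size))
        self.b = np.zeros(feature_size)

    def step(self, center, side_input=np.empty(0)):
        context = weight_normalized_read(center, self.read_w)
        h = np.concatenate([context, side_input])
        return np.tanh(self.w @ h + self.b)


def thalnet_step(modules, center, x):
    # One routing step: module 0 also sees the external input x; the
    # concatenated module features form the next center vector.
    features = []
    for i, m in enumerate(modules):
        side = x if i == 0 else np.empty(0)
        features.append(m.step(center, side))
    return np.concatenate(features)


if __name__ == "__main__":
    center_size, read_size, feat, x_size = 32, 16, 8, 10
    modules = [Module(x_size if i == 0 else 0, read_size, feat, center_size)
               for i in range(4)]
    center = np.zeros(center_size)
    for t in range(5):                       # unroll a few time steps
        x = rng.normal(size=x_size)
        center = thalnet_step(modules, center, x)
    output = center[-feat:]                  # last module's features as output
    print(output.shape)                      # (8,)
```

In this sketch the two extremes discussed above fall out of the reading weights alone: if every module reads the same locations of the center, the computation is wide and shallow, while if each module reads only its predecessor's slice, a stacked hierarchy of recurrent layers is recovered.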
towards efficient abstractions for concurrent may carlo and vasileios trinity college dublin ireland spaccasc abstract consensus is an often occurring problem in concurrent and distributed programming we present a programming language with simple semantics and support for consensus in the form of communicating transactions we motivate the need for such a construct with a characteristic example of generalized consensus which can be naturally encoded in our language we then focus on the challenges in achieving an implementation that can efficiently run such programs we setup an architecture to evaluate different implementation alternatives and use it to experimentally evaluate runtime heuristics this is the basis for a research project on realistic programming language support for consensus keywords concurrent programming consensus communicating transactions introduction achieving consensus between concurrent processes is a ubiquitous problem in multicore and distributed programming among the classic instances of consensus is leader election and synchronous communication programming language support for consensus however has been limited for example cml s communication primitives provide a programming language abstraction to implement consensus however they can not be used to abstractly implement consensus between three or more processes thm needs to be implemented in a basis let us consider a hypothetical scenario of generalized consensus which we will call the saturday night out sno problem in this scenario a number of friends are seeking partners for various activities on saturday night each has a list of desired activities to attend in a certain order and will only agree for a night out if there is a partner for each activity alice for example is looking for company to go out for dinner and then a movie not necessarily with the same person to find partners for these events in this order she may attempt to synchronize on the handshake channels dinner and movie student project paper primarily the work of the first author supported by msr mrl supported by sfi project sfi def alice sync dinner sync movie here sync is a synchronization operator similar to csp synchronization bob on the other hand wants to go for dinner and then for dancing def bob sync dinner sync dancing alice and bob can agree on dinner but they need partners for a movie and dancing respectively to commit to the night out their agreement is tentative let carol be another friend in this group who is only interested in dancing def carol sync dancing once bob and carol agree on dancing they are both happy to commit to going out however alice has no movie partner and she can still cancel her agreement with bob if this happens bob and carol need to be notified to cancel their agreement and everyone starts over their search of partners an implementation of the sno scenario between concurrent processes would need to have a specialized way of reversing the effect of this synchronization suppose david is also a participant in this set of friends def david sync dancing sync movie after the partial agreement between alice bob and carol is canceled david together with the first two can synchronize on dinner dancing and movie and agree to go out leaving carol at home notice that when alice raised an objection to the agreement that was forming between her bob and carol all three participants were forced to restart if however carol was taken out of the agreement even after she and bob were happy to commit their plans david would have been 
able to take carol s place and the work of alice and bob until the point when carol joined in would not need to be repeated programming sno between an arbitrary number of processes which can form multiple agreement groups in cml is complicated especially if we consider that the participants are allowed to perform arbitrary computations between synchronizations affecting control flow and can communicate with other parties not directly involved in the sno for example bob may want to go dancing only if he can agree with the babysitter to stay late def bob sync dinner if babysitter then sync dancing in this case bob s computation has outside of the sno group of processes to implement this would require code for dealing with the sno protocol to be written in the babysitter or any other process breaking any potential modular implementation this paper shows that communicating transactions a recently proposed mechanism for automatic error recovery in ccs processes is a useful mechanism for modularly implementing the sno and other generalized consensus scenarios previous work on communicating transactions focused on behavioral theory with a unit bool int a a a a a chan v x true false n v v fun f x e c e v e e e e op e let x e in e if e then e else e send e e recv e newchana spawn e atomic j e e k commit k p e p k p j p p k co k op fst snd add sub mul leq e e e v e e e v e op e let x e in e if e then else send e e send v e recv e spawn e where n n x var c chan k k fig tcml syntax respect to safety and liveness however the effectiveness of this construct in a pragmatic programming language has yet to be proven one of the main milestones to achieve on this direction is the invention of efficient runtime implementations of communicating transactions here we describe the challenges and our first results in a recently started project to investigate this research direction in particular we equip a simple concurrent functional language with communicating transactions and use it to discuss the challenges in making an efficient implementation of such languages sect we also use this language to give a modular implementation of consensus scenarios such as the sno example the simple operational semantics of this language allows for the communication of sno processes with arbitrary other processes such as the babysitter process without the need to add code for the sno protocol in those processes moreover the more efficient partially aborting strategy discussed above is captured in this semantics our semantics of this language is allowing different runtime scheduling strategies of processes some more efficient than others to study their relative efficiency we have developed a skeleton implementation of the language which allows us to plug in and evaluate such runtime strategies sect we describe several such strategies sect and report the results of our evaluations sect finally we summarize related work in this area and the future directions of this project sect the tcml language we study tcml a language combining a with and communicating transactions for this language we use the abstract syntax shown in fig and the usual abbreviations from the and values in tcml are either constants of base type unit bool and int pairs of values of type a a recursive functions a a and channels carrying values of type a a chan a simple type system with appropriate progress if true then else if false then else let x v in e op v fun f x e e op v e fun f x step e e spawn e spawn v newchan e newchana atomic e atomic j k commit e 
commit k e v k e c j e e k co k k e let op app if e if c fc e fig sequential reductions and preservation theorems can be found in an accompanying technical report and is omitted here source tcml programs are expressions in the functional core of the language ranged over by e whereas running programs are processes derived from the syntax of p besides standard lambda calculus expressions the functional core contains the constructs send c e and recv c to synchronously send and receive a value on channel c respectively and newchana to create a new channel of type chan a the constructs spawn and atomic when executed respectively spawn a new process and transaction commit k commits transaction will shortly describe these constructs in detail a simple running process can be just an expression it can also be constructed by the parallel composition of p and q p k q we treat free channels as in the considering them to be global thus if a channel c is free in both p and q it can be used for communication between these processes the construct encodes restriction of the scope of c to process p we use the barendregt convention for bound variables and channels and identify terms up to alpha conversion moreover we write fc p for the free channels in process p we write j k for the process encoding a communicating transaction this can be thought of as the process the default of the transaction which runs until the transaction commits if however the transaction aborts then is discarded and the entire transaction is replaced by its alternative process intuitively is the continuation of the transaction in the case of an abort as we will explain commits are asynchronous requiring the addition of process co k to the language the name k of the transaction is bound in thus only the default of the transaction can potentially spawn a co the ftn p gives us the free transaction names in p processes with no free variables can reduce using transitions of the form p q these transitions for the functional part of the language are shown in fig and are defined in terms of reductions e where e is a redex and eager evaluation contexts e whose grammar is given in fig due to a unique decomposition lemma an expression e can be decomposed to an evaluation context and a redex expression in only one way here we use e for the standard substitution and op v for a returning the result of the operator op on v when this is defined rule step lifts functional reductions to process reductions the rest of the reduction rules of fig deal with the concurrent and transactional of expressions rule spawn reduces a spawn v expression at evaluation position to the unit value creating a new process running the application v the type system of the language guarantees that value v here is a thunk with this rule we can derive the reductions spawn send c recv c send c k recv c send c k recv c the resulting processes of these reductions can then communicate on channel as we previously mentioned the free channel c can also be used to communicate with any other parallel process rule newchan gives processes the ability to create new locally scoped channels thus the following expression will result in an input and an output process that can only communicate with each other let x newchanint in spawn send x recv x spawn send c recv c send c k recv c rule atomic starts a new transaction in the current process engulfing the entire process in it and storing the abort continuation in the alternative of the transaction rule commit spawns an asynchronous commit 
transactions can be arbitrarily nested thus we can write atomic j spawn recv c commit k k atomic j recv d commit l k which reduces to j recv c commit k k j recv d commit l k atomic j recv d commit l k k this process will commit the after an input on channel c and the inner after an input on as we will see if the k transaction aborts then the inner will be discarded even if it has performed the input on d and the resulting process the alternative of k will restart l atomic j recv d commit l k the effect of this abort will be the rollback of the communication on d reverting the program to a consistent state process and transactional reductions are handled by the rules of fig the first four rules sync eq par and chan are direct adaptations of the reduction rules of the which allow parallel processes to communicate and propagate reductions over parallel and restriction these rules use an omitted recv c k send c v v k eq p p q p q par k k chan p p emb step p p j p k j p k sync k j k j k k k abort co co k k j k j k fig concurrent and transactional reductions omitting symmetric rules structural equivalence to identify terms up to the reordering of parallel processes and the extrusion of the scope of restricted channels in the spirit of the semantics rule step propagates reductions of default processes over their respective transactions the remaining rules are taken from transccs rule emb encodes the embedding of a process in a parallel transaction j this enables the communication of with the default of it also keeps the current continuation of in the alternative of k in case the aborts to illustrate the mechanics of the embed rule let us consider the above nested transaction running in parallel with the process p send d send c j recv c commit k k j recv d commit l k atomic j recv d commit l k k k p after two embedding transitions we will have j recv c commit k k j p k recv d commit l p k k p k k p k k now p can communicate on d with the inner transaction j recv c commit k k j send c k commit l p k k next there are at least two options either commit l spawns a co l process which causes the commit of the or the input on d is embedded in the let us assume that the latter occurs j j recv c commit k k send c k commit l recv c commit k k p k k p k k j j co k k co l k k the transactions are now ready to commit from the to the using rule commit commits are necessary to guarantee that all transactions that have communicated have reached an agreement to commit this also has the important consequence of making the following three processes behaviorally indistinguishable j k k j k j k j k k j k k j j k k j k k k therefore an implementation of tcml when dealing with the first of the three processes can pick any of the alternative mutual embeddings of the k and l transactions without affecting the observable outcomes of the program in fact when one of the transactions has no possibility of committing or when the two transactions never communicate an implementation can decide never to embed the two transactions in this is crucial in creating implementations that will only embed processes and other transactions only when necessary for communication and pick the most efficient of the available embeddings the development of implementations with efficient embedding strategies is one of the main challenges of our project for scaling communicating transactions to pragmatic programming languages similarly aborts are entirely abort and are left to the discretion of the underlying implementation thus in the above 
example any transaction can abort at any stage discarding part of the computation in such examples there is usually a multitude of transactions that can be aborted and in cases where a forward reduction is not possible due to deadlock aborts are necessary making the tcml programmer in charge of aborts as we do with commits is not desirable since the purpose of communicating transactions is to lift the burden of manual error prediction and handling minimizing aborts and automatically picking the aborts that will undo the fewer computation steps while still rewinding the program back enough to reach a successful outcome is another major challenge in our project the sno scenario can be simply implemented in tcml using restarting transactions a restarting transaction uses recursion to an identical transaction in the case of an abort atomicrec k j e k def fun r atomic j e r k a transactional implementation of the sno participants we discussed in the introduction simply wraps their code in restating transactions let alice atomicrec k j sync dinner sync movie commit k k in let bob atomicrec k j sync dinner sync dancing commit k k in let carol atomicrec k j sync dancing commit k k in let david atomicrec k j sync dancing sync movie commit k k in spawn alice spawn bob spawn carol spawn david here dinner dancing and movie are implementations of csp synchronization channels and sync a function to synchronize on these channels compared transaction trie sched gath abort embed commit notif ack en fig tcml runtime architecture to a potential implementation of sno in cml the simplicity of the above code is evident the version of bob communicating with the babysitter is just as simple however as we discuss in sect this simplicity comes with a severe performance penalty at least for straightforward implementations of tcml in essence the above code asks from the underlying transactional implementation to solve an satisfiability problem leveraging existing useful heuristics for such problems is something we intend to pursue in future work in the following sections we describe an implementation where these transactional scheduling decisions can be plugged in and a number of heuristic transactional schedulers we have developed and evaluated our work shows that although more advanced heuristics bring measurable performance benefits the exponential number of runtime choices require the development of innovative compilation and execution techniques to make communicating transactions a realistic solution for programmers an extensible implementation architecture we have developed an interpreter for the tcml reduction semantics in concurrent haskell to which we can different decisions about the transitions of our semantics here we briefly explain the runtime architecture of this interpreter shown in fig the main haskell threads are shown as round nodes in the figure each concurrent functional expression ei is interpreted in its own thread according to the sequential reduction rules in fig of the previous section in an expression will be generally handled by the interpreting thread creating new channels spawning new threads and starting new transactions except for new channel creation the evaluation of all other in an expression will cause a notification shown as dashed arrows in fig to be sent to the gatherer process this process is responsible for maintaining a global view of the state of the running program in a trie this essentially represents the transactional structure of the program the logical nesting of 
transactions and processes inside running transactions data ttrie ttrie threads children set threadid map transactionid ttrie a ttrie node represents a transaction or the of the program the main information stored in such a node is the set of threads threads and transactions children running in that transactional level each child transaction has its own associated ttrie node an invariant of the is that each thread and transaction identifier appears only once in it for example the complex program we saw on page j recv c commit k k j recv d commit l k atomic j recv d commit l k k k p tidp will have an associated trie ttrie threads tidp children k ttrie threads children l ttrie threads children the last ingredient of the runtime implementation is the scheduler thread sched in fig this makes decisions about the commit embed and abort transitions to be performed by the expression threads based on the information in the trie once such a decision is made by the scheduler appropriate signals implemented using haskell asynchronous exceptions are sent to the running threads shown as dotted lines in fig our implementation is parametric to the precise algorithm that makes scheduler decisions and in the following section we describe a number of such algorithms we have tried and evaluated a scheduler signal received by a thread will cause the update of the local transactional state of the thread affecting the future execution of the thread the local state of a thread is an object of the tprocess data tprocess tp expr expression ctx context tr alternative data alternative a tname transactionid pr tprocess the local state maintains the expression expr and evaluation context ctx currently interpreted by the thread and a list of alternative processes represented by objects of the alternative this list contains the continuations stored when the thread was embedded in transactions the nesting of transactions in this list mirrors the transactional nesting in the global trie and is thus compatible with the transactional nesting of other expression threads let us go back to the example of page j recv c commit k k j recv d commit l k atomic j recv d commit l k k k p tidp where p send d send c when p is embedded in both k and l the thread evaluating p will have the local state object tp expr p tr a tname l pr p a tname k pr p recording the fact that the thread running p is part of the which in turn is inside the if either of these transactions aborts then the thread will rollback to p and the list of alternatives will be appropriately updated the aborted transaction will be removed once a transactional reconfiguration is performed by a thread an acknowledgment is sent back to the gatherer who as we discussed is responsible for updating the global transactional structure in the trie this closes a cycle of transactional reconfigurations initiated from the process by starting a new transaction or thread or the scheduler by issuing a commit embed or abort what we described so far is a simple architecture for an interpreter of tcml various improvements are possible addressing the message bottleneck in the gatherer but are beyond the scope of this paper in the following section we discuss various policies for the scheduler which we then evaluate experimentally transactional scheduling policies our goal here is to investigate schedulers that make decisions on transactional reconfiguration based only on runtime heuristics we are currently working on more advanced schedulers including schedulers that take advantage of 
static information extracted from the program which we leave for future work an important consideration when designing a scheduler is adequacy chap sec for a given program an adequate scheduler is able to produce all outcomes that the operational semantics can produce for that program however this does not mean that the scheduler should be able to produce all traces of the semantics many of these traces will simply abort and restart the same computations over and over again previous work on the behavioral theory of communicating transactions has shown that all program outcomes can be reached with traces that never restart a computation thus a goals of our schedulers is to minimize by minimizing the number of aborts moreover as we discussed at the end of sect many of the exponential number of embeddings can be avoided without altering the observable behavior of a program this can be done by embedding a process inside a transaction only when this embedding is necessary to enable communication between the process and the transaction we take advantage of this in a communicationdriven scheduler we describe in this section even after reducing the number of possible choices faced by the scheduler in most cases we are still left with a multitude of alternative transactional reconfiguration options some of these are more likely to lead to efficient traces than other however to preserve adequacy we can not exclude any of these options since the scheduler has no way to foresee their outcomes in these cases we assign different probabilities to the available choices based on heuristics this leads to measurable performance improvements without violating adequacy of course some program outcomes might be more likely to appear than others this approach is trading measurable fairness for performance improvement however the probabilistic approach is theoretically fair every finite trace leading to a program outcome has a probability diverging traces due to sequential reductions also have probability to occur the only traces with zero probability are those in the reduction semantics that have an infinite number of reductions intuitively these are unfair traces that abort and restart transactions ad infinitum even if other options are possible random scheduler r the very first scheduler we consider is the random scheduler whose policy is to simply at each point select one of all the nondeterministic choices with equal probability without excluding any of these choices with this scheduler any abort embed or commit actions are equally likely to happen although this naive scheduler is not particularly efficient as one would expect it is an obviously adequate and fair scheduler according to the discussion above if a reduction transition is available infinitely often scheduler r will eventually select it this scheduler leaves much room for improvement suppose that a transaction k is ready to commit j p k co k q k since r makes no distinction between the choices of committing and aborting k it will often unnecessarily abort all processes embedded in this transaction will have to roll back and if k was a transaction that restarts the transaction will also this results to a considerable performance penalty similarly scheduler r might preemptively abort a transaction that could have have committed given enough time and embeddings for the purpose of communication staged scheduler s the staged scheduler partially addresses these issues by prioritizing its available choices whenever a transaction is ready to commit 
scheduler s will always decide to send a commit signal to that transaction before aborting it or embedding another process in it this does not violate adequacy before continuing with the algorithm of s let us examine the adequacy of prioritizing commits over other transactional actions with an example example consider the following program in which k is ready to commit j p k co k q k k r if embedding r in k leads to a program outcome then that outcome can also be reached after committing k from the residual p k alternatively a program outcome could be reachable by aborting k from the process q k r however the co k was spawned from one of the previous states of the program in the current trace in that state transaction k necessarily had the form j p k e commit k q in that state the abort of k was enabled therefore the staged interpreter indeed allows a trace leading to the program state q k r from which the outcome in question is reachable if no commit is possible for a transaction the staged interpreter prioritizes embeds into that transaction over aborting the transaction this is again an adequate decision because the transactions that can take an abort reduction before an embed step have an equivalent abort reduction after that step when no commit nor embed options are available for a transaction the staged interpreter lets the transaction run with probability giving more chances to make progress in the current trace and with probability it aborts numbers have been with a number of experiments the benefit of the heuristic implemented in this scheduler is that it minimizes unnecessary aborts improving performance its drawback is that it does not abort transactions often thus program outcomes reachable only from transactional alternatives are less likely to appear moreover this scheduler does not avoid unnecessary embeddings scheduler cd to avoid spurious embeddings scheduler cd improves over r by performing an embed transition only if it is necessary for an imminent communication for example in the following program state the embedding of the process into k will never be chosen j e recv c q k k send c v however after that process reduces to an output its embedding into k will be enabled because of the equivalence j p q k k r j p k r q k r k which we previously discussed this scheduler is adequate for the implementation of this scheduler we augment the information stored in the trie sect with the channel which each thread is waiting to communicate on if any as we will see in sect this heuristic significantly boosts performance because it greatly reduces the exponential number of embedding choices scheduler da the final scheduler we report is da which adds a minor improvement upon scheduler cd this scheduler keeps a timer for each running transaction k in the transaction trie this timer is reset whenever a communication or transactional operation happens inside transaction k will only be considered for an abort when this timer expires this strategy benefits longrunning transactions that perform multiple communications before committing the cd scheduler is obviously adequate because it only adds time delays evaluation of the interpreters we now report the experimental evaluation of interpreters using the preceding scheduling policies the interpreters were compiled with ghc and the experiments were performed on a windows machine with intel r coretm ghz processor and of ram we run several versions of two programs sno example committed rendezvous number of concurrent processes r s cd ta id fig 
experimental results the rendezvous in which a number of processes compete to synchronize on a channel with two other processes forming groups of three which then exchange values this is a standard example of agreement in the tcml implementation of this example each process nondeterministically chooses between being a leader or follower within a communicating transaction if a leader and two followers communicate they can all exchange values and commit any other situation leads to deadlock and eventually to an abort of some of the transactions involved the sno example of the introduction as implemented in sect with multiple instances of the alice bob carol and david processes to test the scalability of our schedulers we tested a number of versions of the above programs each with a different number of competing parallel processes each process in these programs continuously performs or sno cycles and our interpreters are instrumented to measure the number of operations in a given time from which we compute the mean throughput of successful or sno operations the results are shown in fig each graph in the figure contains the mean throughput of operations in logarithmic scale as a function of the number of competing concurrent tcml processes the graphs contain runs with each scheduler we discussed random r staged s cd and with timed aborts ta as well as with an ideal program id the ideal program in the case of is similar to the tcml implementation the ideal version of the sno is running a simpler instance of the scenario without any carol instance has no deadlocks and therefore needs no error handling ideal programs give us a performance upper bound as predictable the random scheduler r s performance is the worst in many cases r could not perform any operations in the window of measurements the other schedulers perform better than r by an order of magnitude even just prioritizing the transactional reconfiguration choices significantly cuts down the exponential number of inefficient traces however none of the schedulers scale to programs with more processes their performance deteriorates exponentially in fact when we go from the cd to the timedaborts ta scheduler we see worst throughput in larger process pools this is because with many competing processes there is more possibility to enter a path to deadlock in these cases the results suggest that it is better to abort early the upper bound in the performance as shown by the throughput of id is one order of magnitude above that of the best interpreter when there are few concurrent processes and within the range of our experiments two orders when there are many concurrent processes the performance of id is increasing with more processes due to better utilization of the processor cores it is clear that in order to achieve a pragmatic implementation of tcml we need to address the exponential nature in consensus scenarios as the ones we tested here our exploration of purely runtime heuristics shows that performance can be improved but we need to turn to a different approach to close the gap between ideal implementations and abstract tcml implementations conclusions and future work consensus is an often occurring problem in concurrent and distributed programming the need for developing programming language support for consensus has already been identified in previous work on transactional events te communicating memory transactions cmt transactors and cjoin these approaches propose forms of restarting communicating transactions similar to those 
described in sect te cmt and transactors can be used to implement the instance of the saturday night out sno example in this paper te extends cml events with a transactional sequencing operator transactional communication is resolved at runtime by search threads which exhaustively explore all possibilities of synchronization avoiding runtime aborts cmt extends stm with asynchronous communication maintaining a directed dependency graph mirroring communication between transactions stm abort triggers cascading aborts to transactions that have received values from aborting transactions transactors extend actor semantics with primitives enabling the composition of systems with consistent distributed state via distributed checkpointing the cjoin calculus extends the join calculus with isolated transactions which can be merged merging and aborting are managed by the programmer offering a manual alternative to tcml s nondeterministic transactional operations it is unclear to us how to write a straightforward implementation of the sno example in cjoin reference implementations have been developed for te cmt and cjoin the discovery of efficient implementations for communicating transactions could be equally beneficial for all approaches stabilizers add transactional support for in the presence of transient faults but do not directly address concensus scenarios such as the sno example this paper presented tcml a simple functional language with support for consensus via communicating transactions this is a construct with a robust behavioral theory supporting its use as a programming language abstraction for automatic error recovery tcml has a simple operational semantics and can simplify the programming of advanced consensus scenarios we introduced such an example sno which has a natural encoding in tcml the usefulness of communicating transactions in applications however depends on the invention of efficient implementations this paper described the obstacles we need to overcome and our first results in a recently started project on developing such implementations we gave a framework to develop and evaluate current and future runtime schedulers of communicating transactions and used it to examine schedulers which are based solely on runtime heuristics we have found that some heuristics improve upon the performance of a naive randomized implementation but do not scale to programs with significant contention where an exponential number of alternative computation paths lead to necessary rollbacks it is clear that purely dynamic strategies do not lead to sustainable performance improvements in future work we intend to pursue a direction based on the extraction of information from the source code which will guide the language runtime this information will include an abstract model of the communication behavior of processes that can be used to predict with high probability their future communication pattern a promising approach to achieve this is the development of technology in type and effect systems and static analysis although the scheduling of communicating transactions is theoretically computationally expensive realistic performance in many programming scenarios could be achievable references bruni melgratti montanari cjoin join with communicating transactions to appear in mscs de vries koutavas hennessy liveness of communicating transactions pp aplas donnelly fluet transactional events pp icfp field varela transactors a programming model for maintaining globally consistent distributed state in 
unreliable environments popl harris marlow jones herlihy composable memory transactions commun acm pp herlihy shavit the art of multiprocessor programming kaufmann jones gordon finne concurrent haskell popl kshemkalyani singhal distributed computing principles algorithms and systems cambridge university press lesani palsberg communicating memory transactions ppopp marlow jones moran reppy asynchronous exceptions in haskell pldi reppy concurrent programming in ml cambridge university press spaccasassi transactional concurrent ml tech de vries koutavas hennessy communicating transactions pp concur ziarek schatz jagannathan stabilizers a modular checkpointing abstraction for concurrent functional programs icfp
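As a recap of the transactional scheduling policies compared in the preceding sections (random, staged, communication-driven, and timed aborts), the following is a minimal, language-agnostic sketch written in Python; the reference interpreter itself is in Concurrent Haskell and is not reproduced here. All names (`Transaction`, `can_commit`, `pending_channels`) are placeholders invented for the illustration, and the probability and timeout values are unspecified tunables standing in for the experimentally chosen ones mentioned in the text.

```python
import random
import time
from dataclasses import dataclass, field


@dataclass
class Transaction:
    # Hypothetical runtime view of one communicating transaction, distilled
    # from the information the gatherer keeps in the transaction trie.
    name: str
    can_commit: bool = False                              # a co k was spawned inside it
    pending_channels: set = field(default_factory=set)    # channels its threads wait on
    last_activity: float = field(default_factory=time.monotonic)


def embeddable_threads(txn, outside_threads):
    # Communication-driven rule: only embed a thread whose imminent
    # communication is on a channel that some thread inside the
    # transaction is also waiting on.
    return [t for (t, chan) in outside_threads if chan in txn.pending_channels]


def schedule(txn, outside_threads, run_bias=0.9, abort_timeout=1.0):
    """Pick the next transactional action for txn.

    Staged priorities: commit > embed > run/abort. Aborts are additionally
    delayed until the transaction has been quiet for abort_timeout seconds
    (the timed-aborts variant), and even then happen only with small
    probability. All numeric parameters are illustrative.
    """
    if txn.can_commit:
        return ("commit", txn.name)
    candidates = embeddable_threads(txn, outside_threads)
    if candidates:
        return ("embed", random.choice(candidates))
    quiet_for = time.monotonic() - txn.last_activity
    if quiet_for > abort_timeout and random.random() > run_bias:
        return ("abort", txn.name)
    return ("run", txn.name)


if __name__ == "__main__":
    k = Transaction("k", pending_channels={"dinner", "movie"})
    waiting = [("bob", "dinner"), ("carol", "dancing")]
    print(schedule(k, waiting))   # embeds bob: he waits on a shared channel
```

The point of the sketch is only the ordering of the decisions; as discussed above, an adequate scheduler must still leave every semantic outcome reachable, which is why aborts are delayed and made improbable rather than forbidden.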
network of recurrent neural networks wang oct school of software beijing jiaotong university beijing china oujago abstract we describe a class of systems theory based neural networks called network of recurrent neural networks nor which introduces a new structure level to rnn related models in nor rnns are viewed as the neurons and are used to build the layers more specifically we propose several methodologies to design different nor topologies according to the theory of system evolution then we carry experiments on three different tasks to evaluate our implementations experimental results show our models outperform simple rnn remarkably under the same number of parameters and sometimes achieve even better results than gru and lstm introduction in recent years recurrent neural networks rnns elman have been widely used in natural language processing nlp traditionally rnns are directly used to build the final models in this paper we propose a novel idea called network of recurrent neural networks nor which utilizes existing basic rnn layers to make the structure design of the layers from a standpoint of systems theory von bertalanffy von bertalanffy a recurrent neural network is a group or an organization made up of a number of interacting parts and it actually is viewed as a complex system or a complexity dialectically every system is relative it is not only the system of its parts but also the part of a larger system in nor structures rnn is viewed as the neuron and several neurons are used to build the layers rather than directly used to construct the whole models conventionally there are three levels of structure in deep neural networks dnns neurons layers and whole nets or called models from a perspective of systems theory at each level of such increasing complexity novel features that do not exist at lower levels emerge lehn for example at the neurons level single neuron is simple and its generalization capability is very poor but when a certain number of such neurons are accumulated into a certain elaborate structure by certain ingenious combinations the layers at the higher level begin to get the unprecedented ability of classification and feature learning more importantly copyright c association for the advancement of artificial intelligence all rights reserved such new gained capability or property is deducible from but not reducible to constituent neurons of lower levels it s not a property of the simple superposition of all constituent neurons and the whole is greater than the sum of the parts in systems theory such kind of phenomenon is known as whole emergence wierzbicki whole emergence often comes from the evolution of the system arthur and others in which a system develops from the lower level to the higher level from simplicity to complexity in this paper the motivation of nor structures is to introduce a new structure level to networks by transferring traditional rnn from the system to the agent and from the outer dimension to the inner dimension fromm in brian arthur arthur and others has identified three mechanisms by which complexity tends to grow as systems evolve mechanism increase in diversity the agent in the system seem to be a new instance of agent class type or species as a result the system seems to have new external agent types or capabilities mechanism increase in structural sophistication the individual system steadily accumulates increasing numbers of new systems or parts thus newly formed system seems to have new internal subsystems or capabilities mechanism 
increase by capturing software the system capture simpler elements and learns to program these as software to be used as its own ends in this paper with the guidance of first two mechanisms we introduce two methodologies to nor structures design which are named as aggregation and specialization aggregation and specialization are natural operations for increasing complexity in complex systems fromm the former is related to arthur s second mechanism in which traditional rnns are aggregated and accumulated into a highlevel layer in accordance with a specific structure and the latter is related to arthur s first mechanism in which the rnn agent in a layer is specialized as the rnn agent that performs a specific function we make several implementations and carry out experiments on three different tasks including sentiment classification question type classification and named entity recognition experimental results show that our models outperform constitute simple rnn remarkably with the same ber of parameters and achieve even better results than gru and lstm sometimes background systems theory systems theory was originally proposed by biologist ludwig von bertalanffy von bertalanffy von bertalanffy for biological phenomena in biology systems there are several different levels which begin with the smallest units of life and reach to the largest and most extensive category molecule cell tissue organ organ system organization etc traditionally a system could be decomposed into its individual components so that each component could be analyzed as an independent entity and components could be added in a linear fashion to describe the totality of the system walonick however von bertalanffy argued that we can not fully comprehend a phenomenon by simply breaking it down into elementary parts and then reforming it we instead need to apply a global and systematic perspective to underline its functionality mele pels and polese because a system is characterized by the interactions of its components and the nonlinearity of those interactions walonick whole emergence in systems theory the phenomenon the whole is irreducible to its parts is known as emergence or whole emergence wierzbicki emergence can be qualitatively described as the whole is greater than the sum of the parts upton janeka and ferraro or it can also be quantitatively expressed as n x w pi arbitrary sequences of inputs formally given a sequence of vectors xt the equation of simple rnn elman is ht f w xt u where w and u are parameter matrices and f denotes a nonlinearity function such as tanh or relu for simplicity the neuron biases are omitted from the equation actually rnns can behave chaotically there have been some works analysing rnns theoretically or experimentally from the perspective of systems theory sontag provided an exposition of research regarding systemtheoretic aspects of rnns with sigmoid activation functions bertschinger and analyzed the computation at the edge of chaos in rnns and calculated the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place pascanu mikolov and bengio employed a dynamical systems perspective to understand the exploding gradients and vanishing gradients problems in rnns in this paper we obtain methodologies from systems theory to conduct structure designs of rnn related models network of recurrent neural networks overall architecture i where w is the whole of the system and consists of n parts and pi is the part in philip anderson highlighted the idea of 
emergence in has article more is different anderson in which he stated that a change of scale very often causes a qualitative change in the behavior of the system for example in human brains when one examines a single neuron there is nothing that suggests conscious but a collection of millions of neurons is clearly able to produce wonderful consciousness the mechanisms behind the emergence of complexity can be used to design neural network structures one of the widely accepted reasons is the repeated application and combination of two complementary forces or operations stretching and folding in physics term thompson and stewart splitting and merging in computer science term hannebauer or specialization and cooperation in sociology term merging or aggregating of agents means generally a number of agents is aggregated or conglomerated into a single agent splitting or specializing means the agents are clearly separated from each other and each agent is constrained to a certain class or role fromm recurrent neural networks at the edge of chaos recurrent neural networks rnns werbos elman are a class of deep neural networks that possess internal memory due to recurrent connections between units which makes them be able to process figure overview of nor structure as the illustration shown in figure nor architecture is a structure we summarize nor architecture as four components i m s and o in which component i input and o output control the head and tail of nor layer component s subnetworks is in charge of the spatial extension and component m memories is responsible for the temporal extension of the whole structure we describe each component as follows component i component i controls the head of nor architecture it does data preprocessing tasks and distributes processed input data to subnetworks at each t the form of upcoming input data xt may be various such as one single vector or several vectors with the multigranularity information even the feature vectors with noise one single vector may be the simplest situation and the common solution is copying this vector into n duplicates and feed each of them into one single subnetwork in the component in this paper the copying method meets our needs a layer b another layer c layer d layer figure the sectional views of nor layers at one i means the component i o means the component o and r means rnn neuron and we formalize it as xit c xt in which c means copy function and xit will be fed into subnetwork component m component m manages all memories over the whole layer not only internal but also external memories weston chopra and bordes but in this paper component m only considers internal memory and do not apply any extra processing to the individual memory of each rnn neuron that is mjt i where i means identity function the superscript j is the identifier of rnn neuron mjt is the memory of jth rnn neuron at t and is the transformation output of rnn neuron at t component s component s is made up of several different or same subnetworks interaction may exist in these subnetworks the responsibility of component s is to manage the logic of each subnetwork and handle the interaction between them suppose component s has n and m component s receives n inputs and produces m outputs the output is generated by necessary inputs and memories skt f x m where skt is the output at t x and m are needed inputs and memories and f is the nonlinear function which can be rnn or rnn etc component o to form a layer we need a certain amount of neurons so one of the nor 
properties is multiple rnns a natural approach to integrate multiple rnn neurons signals is collecting all outputs first and then using a mlp layer to measure the weights of each outputs traditional neuron outputs a single real value so the collection method is directly arranging them into a vector but rnn neurons is different for each of them outputs a vector not a value a simple method is concatenating all vectors and then connecting the notation of the subnetwork is different from the neuron for one subnetwork may be composed of several neurons we use superscript i as the identifier of the subnetwork so the input data of subnetwork at t is denoted as xit in this paper we just use simple rnn elman applied with relu activation as our basic rnn neuron thus the memory at this is just the output at last the concatenated vector to the next mlp another is pooling each rnn output vector into a real value then arranging all these real values into a vector which seems same as traditional neurons in this paper the former solution is used and formalized as st sm t ot r wm lp st where st is the concatenated vector wm lp is the weight of mlp and r means the relu activation function of mlp methodology i aggregation any operation with changing a boundary can cause a emergence of complexity the natural boundary is the agent itself and sudden emergence of complexity is possible at this boundary if complexity is transfered from the agent to the system or vice versa from the system to the agent there are two basic operations aggregation and specialization that can be used to transfer complexity between different dimensions fromm according to arthur s second mechanism internal complexity can be increased by aggregation and composition of which means a number of rnn agents is conglomerated into a single big system in this way aggregation and composition transfer traditional rnn from the outer to the inner dimension from the system to the agent for the selected rnns are accumulated to become a part of a larger group for a concrete nor layer suppose it is composed of n subnetworks and subnetwork is made up of k i rnn neurons then at the t given the input xt the operation flow is as follows component i copy xt into n duplications using equation then we get xnt component m deliver the memory of each rnn neuron from the last to the current using equation then we get memories in t mt first subnetwork and memories mt mt in second subnetwork etc component s for each subnetwork i take advantage of i ki the input xit and memories to get the nont mt linear transformation output i i k sit f xit t mt then we get snt component o concatenate all outputs by equation and use a mlp function to determine how much signals in each subnetwork to flow through the component o by equation obviously the number the type and the interaction of the aggregated rnns determine the internal structure or inner complexity of the newly formed layer system thus we propose three kinds of topologies of nor aggregation method in systems theory the natural description of complex system is the system created by replication and adaptation replication means to copy and reproduce a new rnn agent and adaptation means they are not totally same and some changes on weights or somewhere else by variation can increase the diversity of the system as shown in figure a there is a nor layer called manor composed of four parallel rnns figure shows this layer being unrolled into a full network each subnetwork of layer is a rnn thus at t the subnetwork of component 
s in is calculated as sit oit r wi xit ui mit where r means relu activation function wi and ui are parameters of corresponding rnn neuron oit is the nonlinear transformation output and will be delivered to next to be used as and sit is the output of subnetwork which is equal to oit figure the unfolding of in three introduce new agent type to the system and can learn sequence dependencies in different timescales figure c shows a nor layer made up of four subnetworks in which two of them are rnns and the others are rnns two kinds of timescale dependencies are learned in component s which are formalized as follows r r r t r wt ot ut mt r t r wt ot ut mt the above mentioned aggregation and composition operation lead to big rnn while in turn they can also be combined to form even bigger group such repeated aggregation and high accumulation makes the fractal and structure come into being figure the unfolding of in three the nonlinear function in equation of each subnetwork may be more complex for example figure b shows a nor layer made up of three rnns at t the subnetwork in component s is calculated as i t r wt xt ut mt sit t r xit t the combination of multiple rnns in a nor layer makes it somewhat like an ensemble and empirically diversity among the members of a group of agents is deemed to be a key issue in ensemble structure kuncheva and whitaker one way to increase the diversity is to use the topology which figure the unfolding of in three as shown in figure d we also use three paths but after each path first learns its own intermediate tation the second layers gather all intermediate representations of three paths to learn abstract features in this way different paths do not learn and train independently the connections among each other helps the model easy to share informations thus it becomes possible that the whole model learns and trains to be an organic rather than parallel independent structure we formalize the cooperation of component s as follows r t t t t t t r t r wt xt ut mt r t ot ot r t ot ot r t ot ot t ut mt t figure gate specialization methodology ii specialization we have mentioned that the emergence of complexity is usually connected to a transfer of complexity a transfer at the boundary of the system aggregation and composition transfer complexity from the system to the agent and from the outer dimension to the inner dimension another way to be used to cross the agent boundary is the specialization or inheritance which transfer complexity from the agent to the system and from the inner dimension to the outer dimension fromm specialization is related to arthur s first mechanism it increases structural sophistication outside of the agent by adding new agent forms through inheritance and specialization objects become objects of a certain class and agents become agents of a certain type and the more such an agent becomes a particular class or type the more it needs to delegate special tasks that it can not handle alone to other agents fromm the effect of specialization is the emergence of delegation and division of labor in the newly formed groups thus the formalization of output in component s can be rewritten as the following skt g x m x m fl x m where fl is the specialized agent function g means the cooperation of all specialized agents and l is the number of specialized agents equation denotes the function f in equation is implemented by the separated operations fl and we see gate mechanism is one of the specialization methods as shown in figure a general rnn agent is 
separated into two specialized rnn agents one is for gate duty and the other is for generalization duty a concrete is shown in in the original nor layer each rnn agent is specialized as one generalization specific rnn and one gate specific rnn figure the sectional views of layer at one we formalize it as r r where denotes the sigmoid activation and multiplication denotes relationship with lstm and gru we see long shortterm memory lstm hochreiter and schmidhuber and gated recurrent unit gru chung et al as two special cases of network of recurrent neural networks take lstm for example at t given input xt and previous memory cell and hidden state the transition equations of standard lstm can be expressed as the following i wi xt ui f wi xt uf o wo xt uo g tanh wg xt ug ct f g i st tanh ct o from the perspective of nor network of recurrent neural networks lstm is made up of four rnns in which three task sentiment classification question classification named entity recognition of params k k k k k k k k k irnn gru lstm table number of hidden neurons for rnn gru lstm and for each network size specified in terms of the number of parameters weights of four i f o rnns are specialized for gate tasks to control how much of informations let through in different parts moreover there is only a shared memory which can be accessed by each rnn cell in lstm while in turn lstm and gru can also be combined to form even bigger group experiments in order to evaluate the performance of the presented model structures we design experiments on the following tasks sentiment classification question type classification and named entity recognition we compare all models under the comparable parameter numbers to validate the capacity of better utilizing the parametric space in order to verify the effectiveness and universality of the experiments we conduct three comparative tests under total parameters of different orders of magnitude see table every experiment is repeated times with different random initializations and then we report the mean results it s worthy noting that our aim here is to compare the model performance under the same settings not to achieve best performance for one single model le jaitly and hinton showed that when initializing the recurrent weight matrix to be the identity matrix and biases to be zero simple rnn composed of relu activation function named as irnn can be comparable with even outperform lstm in our experiments all basic rnn neurons are simple rnns applied with relu function we also keep the number of the hidden units same over all rnn neurons in a nor model obviously our baseline model is a single giant simple rnn elman applied with relu activation at the same time two improved rnns gru chung et al and lstm hochreiter and schmidhuber have been widely and successfully used in nlp in recent years so we also choose them as our baseline models the glove and google news were obtained for the word embeddings during training we fix all word embeddings and learn only the other parameters in all models the embeddings for words are set to zero vectors we pad or crop the input sentences to a fixed length the https https trainings are done through stochastic gradient optimizer descent over shuffled with the optimizer adam kingma and ba all models are regularized by using dropout srivastava et al method at the same time in order to avoid overfitting early stopping is applied to prevent unnecessary computation when training more details on setting can be found in our codes which are publicly 
available at sentiment classification we evaluate our models on the task of sentiment classification on the popular stanford sentiment treebank sst benchmark socher et al which consists of movie reviews and is split into train dev and test sst provides detailed annotation and all sentences along with the phrases are annotated with labels very positive positive neural negative and very negative in our experiments we only use the annotation one of our goals is to avoid expensive phraselevel annotation like qian huang and zhu another is in practice annotation is hard to provide all models use the same architecture embedding layer dropout layer layer layer layer dropout layer softmax layer the first layer is the word embedding layer next are layers as the feature transformation layer then a layer all transformed feature vectors by selecting the max value in each position to get sentence representation finally a softmax layer is used as output layer to get the final result to benefit from the regularization two dropout layers with rate of are added after embedding layer and before softmax layer the initial learning rates of all models are set to we use public available glove vectors to initialize word embeddings three different network sizes are tested for each architecture such that the number of parameters are roughly k k and k see table we set the minibatch size as finally we use the criterion as loss function the results of the experiments are shown in table it is obvious that nor models get superior performances compared with irnn baseline especially when the network size is big enough all models improve with network size grows among all nor models gets the best results model irnn gru lstm params params params table accuracy comparison over different experiments on sst corpus however we find that lstm and gru get much better results in three comparative tests which consists of sentences in the training set sentences in the validation set and sentences in the test set model irnn gru lstm params params params table comparison over different experiments on corpus question type classification question classification is an important step in a question answering system which classifies a question into a specific type for this task we use trec li and roth benchmark which divides all questions into categories location human entity abbreviation description and numeric terc provides labeled questions in the training set and questions in the test we randomly select of the training data as the validation set model irnn gru lstm params params params table accuracy comparison over different experiments on trec corpus all network types use the same architecture embedding layer dropout layer layer layer dropout layer softmax layer dropout rates are set to three hidden layer sizes are chosen such that the total number of parameters for the whole model is roughly k k k see table all networks use a learning rate of and are trained to minimize the cross entropy error table shows the accuracy of the different networks on the question type classification task here again nor models get better results than baseline irnn model among all nor models also gets the best result in this dataset we find the performances of lstm and gru are even not comparable with irnn which proves the validity of results in le jaitly and hinton named entity recognition named entity recognition ner is a classic nlp task which tries to identity the proper names of persons organizations locations or other entities in the given text we 
experiment on dataset tjong kim sang and de meulder recently popular ner models are based on bidirectional lstm combined with conditional random fields crf named as lample et al the networks can effectively use past and future features via a layer and sentence level tag information via a crf layer in our experiments we also adapt this architecture by replacing lstm with nors or other variation of rnns so the universal architecture of all tested models is embedding layer dropout layer layer crf layer three hidden layer sizes are chosen such that the total number of parameters for the whole network is roughly k k and k see table we apply dropout after embedding layer initial learning rate is set to and every epoch it is reduced by factor the size of each minibatch is we train all networks for epochs and early stop the training when there are epochs no improvement on validation set our results are summarized in the table not surprisingly all nors perform much better than giant single rnnrelu model as we can see gru performs the worst followed by irnn compared to gru and irnn lstm performs very well especially when network size grows up at the same time all nor models get superior performances than irnn gru and lstm among them model get best results conclusion in conclusion we introduced a novel kind of systems theory based neural networks called network of recurrent neural network nor which views existing rnns for example simple rnn gru lstm as neurons and then utilizes rnn neurons to design layers then we proposed several methodologies to design different nor topologies according to the evolution of systems theory arthur and others we conducted experiments on three kinds of tasks including sentiment classification question type classification and named entity recognition to evaluate our proposed models experimental results demonstrated that nor models get superior performances compared with single giant rnn models and sometimes their performances even exceed gru and lstm references anderson anderson more is different science arthur and others arthur et al on the evolution of complexity technical report bertschinger and bertschinger and computation at the edge of chaos in recurrent neural networks neural computation chung et al chung gulcehre cho and bengio y empirical evaluation of gated recurrent neural networks on sequence modeling arxiv preprint elman elman finding structure in time cognitive science fromm fromm j the emergence of complexity kassel university press kassel hannebauer hannebauer autonomous dynamic reconfiguration in systems improving the quality and efficiency of collaborative problem solving hochreiter and schmidhuber hochreiter and schmidhuber j long memory neural computation kingma and ba kingma and ba j adam a method for stochastic optimization arxiv preprint kuncheva and whitaker kuncheva and whitaker j measures of diversity in classifier ensembles and their relationship with the ensemble accuracy machine learning lample et al lample ballesteros subramanian kawakami and dyer neural architectures for named entity recognition arxiv preprint le jaitly and hinton le jaitly and hinton a simple way to initialize recurrent networks of rectified linear units arxiv preprint lehn lehn toward complex matter supramolecular chemistry and proceedings of the national academy of sciences li and roth li and roth learning question classifiers in proceedings of the international conference on computational association for computational linguistics mele pels and polese mele pels and 
polese a brief review of systems theories and their managerial applications service science pascanu mikolov and bengio pascanu mikolov and bengio y on the difficulty of training recurrent neural networks in icml qian huang and zhu qian huang and zhu x linguistically regularized lstms for sentiment classification arxiv preprint socher et al socher perelygin wu chuang manning ng potts et al recursive deep models for semantic compositionality over a sentiment treebank in proceedings of the conference on empirical methods in natural language processing emnlp volume citeseer sontag sontag recurrent neural networks some aspects in dealing with complexity a neural network approach citeseer srivastava et al srivastava hinton krizhevsky sutskever and salakhutdinov dropout a simple way to prevent neural networks from overfitting journal of machine learning research thompson and stewart thompson and stewart b nonlinear dynamics and chaos john wiley sons tjong kim sang and de meulder tjong kim sang and de meulder introduction to the shared task named entity recognition in proceedings of the seventh conference on natural language learning at association for computational linguistics upton janeka and ferraro upton janeka and ferraro the whole is more than the sum of its parts aristotle metaphysical journal of craniofacial surgery von bertalanffy von bertalanffy general system theory new york von bertalanffy von bertalanffy the history and status of general systems theory academy of management journal walonick walonick general systems theory information on http statpac htm werbos werbos j generalization of backpropagation with application to a recurrent gas market model neural networks weston chopra and bordes weston chopra and bordes a memory networks arxiv preprint wierzbicki wierzbicki systems theory theory of chaos emergence in technen elements of recent history of information technologies with epistemological conclusions springer
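The subnetwork recurrence and the gate-specialization step described in the article above lend themselves to a short illustration. Below is a minimal NumPy sketch, not the authors' released code: the names (ReluRNNCell, nor_gate_step), the layer sizes, and the choice of a two-subnetwork NOR layer are assumptions made for the example; only the update rule s_t = ReLU(W x_t + U m_{t-1}) and the sigmoid-gated elementwise product come from the text.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ReluRNNCell:
    """One RNN 'neuron' in the NOR view: s_t = ReLU(W x_t + U m_{t-1} + b)."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.W = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        # IRNN-style initialization (identity recurrent matrix, zero bias),
        # matching the simple ReLU RNNs used as building blocks in the experiments.
        self.U = np.eye(hidden_dim)
        self.b = np.zeros(hidden_dim)

    def step(self, x_t, m_prev):
        return relu(self.W @ x_t + self.U @ m_prev + self.b)

def nor_gate_step(x_t, m_prev, content_cell, gate_cell):
    """Gate specialization: one sub-RNN produces content, the other a sigmoid
    gate, and the layer output is their elementwise product."""
    content = content_cell.step(x_t, m_prev)
    gate = sigmoid(gate_cell.W @ x_t + gate_cell.U @ m_prev + gate_cell.b)
    return gate * content

# Toy run of a two-subnetwork NOR layer over a short sequence.
rng = np.random.default_rng(0)
content_cell = ReluRNNCell(input_dim=8, hidden_dim=16, rng=rng)
gate_cell = ReluRNNCell(input_dim=8, hidden_dim=16, rng=rng)
m = np.zeros(16)
for _ in range(5):
    m = nor_gate_step(rng.normal(size=8), m, content_cell, gate_cell)
print(m.shape)  # (16,)
```

The larger NOR topologies in the article (multiple paths with shared intermediate representations) would stack several such cells and mix their outputs; this sketch only shows the single aggregation and gating step.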
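The reading of an LSTM cell as four sub-RNNs, three of them (i, f, o) specialized as gates and one (g) as the content network, all sharing a single memory cell, can likewise be written out. The step below is the standard LSTM update arranged to make that decomposition visible; variable names and weight shapes are mine, chosen for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step viewed as four sub-RNNs sharing the memory cell c.

    params maps each sub-network key ('i', 'f', 'o', 'g') to a (W, U, b) triple.
    """
    W_i, U_i, b_i = params['i']   # input gate sub-RNN
    W_f, U_f, b_f = params['f']   # forget gate sub-RNN
    W_o, U_o, b_o = params['o']   # output gate sub-RNN
    W_g, U_g, b_g = params['g']   # content (generalization) sub-RNN

    i = sigmoid(W_i @ x_t + U_i @ h_prev + b_i)
    f = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)
    o = sigmoid(W_o @ x_t + U_o @ h_prev + b_o)
    g = np.tanh(W_g @ x_t + U_g @ h_prev + b_g)

    c = f * c_prev + i * g        # the shared memory all four sub-RNNs act on
    h = o * np.tanh(c)
    return h, c

# Toy usage with random parameters.
rng = np.random.default_rng(1)
d_in, d_h = 8, 16

def make_subnet():
    return (rng.normal(0.0, 0.1, (d_h, d_in)),
            rng.normal(0.0, 0.1, (d_h, d_h)),
            np.zeros(d_h))

params = {k: make_subnet() for k in 'ifog'}
h, c = np.zeros(d_h), np.zeros(d_h)
for _ in range(4):
    h, c = lstm_step(rng.normal(size=d_in), h, c, params)
print(h.shape, c.shape)
```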
| 9 |
on the second cohomology of nilpotent orbits in exceptional lie algebras nov pralay chatterjee and chandan maity abstract in bc the second de rham cohomology groups of nilpotent orbits in all the complex simple lie algebras are described in this paper we consider exceptional lie algebras and compute the dimensions of the second cohomology groups for most of the nilpotent orbits for the rest of cases of nilpotent orbits which are not covered in the above computations we obtain upper bounds for the dimensions of the second cohomology groups introduction let g be a connected real simple lie group with lie algebra an element x g is called nilpotent if ad x g g is a nilpotent operator let ox ad g x g g be the corresponding nilpotent orbit under the adjoint action of g on such nilpotent orbits form a rich class of homogeneous spaces and they are studied at the interface of several disciplines in mathematics such as lie theory symplectic geometry representation theory algebraic geometry various topological aspects of such orbits have drawn attention over the years see cm m and references therein for an account in bc proposition for a large class of semisimple lie groups a criterion is given for the exactness of the two form on arbitrary adjoint orbits which in turn led the authors asking the natural question of describing the full second cohomology groups of such orbits towards this in bc the second cohomology groups of nilpotent orbits in all the complex simple lie algebras under the adjoint actions of the corresponding complex groups are computed in this paper we continue the program of studying the second cohomology groups of nilpotent orbits which was initiated in bc we compute the second cohomology groups for most of the nilpotent orbits in exceptional lie algebras and for the rest of the nilpotent orbits in exceptional lie algebras we give upper bounds of the dimensions of second cohomology groups see theorems in particular our computations yield that the second cohomologies vanish for all the nilpotent orbits in and notation and background in this section we fix some general notation and mention a basic result which will be used in this paper a few specialized notation are defined as and when they occur later the center of a lie algebra g is denoted by z g we denote lie groups by capital letters and unless mentioned otherwise we denote their lie algebras by the corresponding lower case german letters sometimes for convenience the lie algebra of a lie group g is also denoted by lie g the connected component of a lie group g containing the identity element is denoted by for a subgroup h of g and a subset s of g the subgroup of h that fixes s point wise is called the centralizer of s in h and is denoted by zh s similarly for a lie subalgebra h g and a subset mathematics subject classification key words and phrases nilpotent orbits exceptional lie algebras second cohomology chatterjee and maity s g by zh s we will denote the subalgebra consisting elements of h that commute with every element of if g is a lie group with lie algebra g then it is immediate that the coadjoint action of on z k is trivial in particular one obtains a natural action of on z k we denote by z g the space of fixed points of z g under the action of for a real semisimple lie group g an element x g is called nilpotent if ad x g g is a nilpotent operator a nilpotent orbit is an orbit of a nilpotent element in g under the adjoint representation of g for a nilpotent element x g the corresponding nilpotent orbit ad g x is 
denoted by ox for a g be a lie algebra over r a subset x h y g is said to be r if x h x h y and x y it is immediate that if x h y g is a r triple then spanr x h y is a of g which is isomorphic to the lie algebra r we now recall the theorem see cm theorem which ensures that if x g is a nilpotent element in a real semisimple lie algebra g then there exist h y in g such that x h y g is a r to facilitate the computations in we need the following result theorem let g be an algebraic group defined over r which is let x lie g r x be a nilpotent element and ox be the orbit of x under the adjoint action of the identity component g r on lie g r let x h y be a r in lie g r let k be a maximal compact subgroup in zg r x h y and m be a maximal compact subgroup in g r containing then h ox r z k m m in particular dimr h ox r dimr z k the above theorem follows from cm lemma and a description of the second cohomology groups of homogeneous spaces which generalizes bc theorem the details of the proof of theorem and the generalization of bc theorem mentioned above will appear elsewhere the second cohomology groups of nilpotent orbits in this section we study the second cohomology of the nilpotent orbits in noncomplex exceptional lie algebras over the results in this section depend on the results of tables tables and k tables we refer to cm chapter and for the generalities required in this section we begin by recalling the parametrization of nilpotent orbits in this parametrization of nilpotent orbits in exceptional lie algebras we follow the parametrization of nilpotent orbits in exceptional lie algebras as given in tables and tables we consider the nilpotent orbits in g under the action of int g where g is a real exceptional lie algebra we fix a semisimple algebraic group g defined over r such that g lie g r here g r denotes the associated real semisimple lie group of the of let g c be the associated complex semisimple lie group consisting of the of it is easy to see that orbits in g under the action of int g are the same as the orbits in g under the action of g r thus in this for a nilpotent element x g we set ox ad g x g g r let g m p be a cartan decomposition and be the corresponding cartan involution let gc be the lie algebra of g c then gc can be identified with the complexification of let mc pc be the of m and p in gc respectively then gc mc pc let mc be the connected subgroup of g c with lie on the nilpotent orbits in lie algebras algebra mc recall that if g is as above and g is different from both and then g is of inner type or equivalently rank mc rank gc when g is of inner type the nilpotent orbits are parametrized by a finite sequence of integers of length l where l rank mc rank gc when g is not of inner type that is when g is either or then the nilpotent orbits are parametrized by a finite sequence of integers of length let x g be a nonzero nilpotent element and x h y g be a r then e h e ye in g such that h e e x h y is g r to another r x e e e e e e e e e e x set e h i x y f h i x y and h i x y then e h f is a r and e f pc and h mc the r e h f is then called a pc triple associated to x parametrization in exceptional lie algebras of inner type we now recall from column tables the parametrization of nilpotent orbits in g when g is an exceptional lie algebra of inner type let hc mc be a cartan subalgebra of mc such that hc m is a cartan subalgebra of as g is of inner type hc is a cartan subalgebra of gc set h hc im let r be the root systems of gc hc mc hc respectively let b be a basis of let be b 
where is the negative of the highest root of r b then there exists an unique basis of say such that be let be the closed weyl chamber of in h corresponding to the basis let be the rank of mc mc then either l or l if l we set if l in this case we have b we set b clearly we enumerate as in and table iv let x g be a nonzero nilpotent element and e h f be a pc triple in gc associated to x then ad mc h is a singleton set say the element is called the characteristic of the orbit ad mc e as it determines the orbit mc e uniquely consider the map from the set of nilpotent orbits in g to the set of integer sequences of length l which assigns the sequence to each nilpotent orbits ox in view of the theorem cf cm theorem this gives a bijection between the set of nilpotent orbits in g and the set of finite sequences of the form as above we use this parametrization while dealing with nilpotent orbits in exceptional lie algebras of inner type parametrization in or we now recall from column tables the parametrization of nilpotent orbits in g when g is either or we need a piece of notation here henceforth for a lie algebra a over c and an automorphism autc a the lie subalgebra consisting of the fixed points of in a is denoted by let now hc be a cartan subalgebra of gc we point out the difference of our notation with that in g and h of are denoted here by gc and hc respectively let g let be the involution of gc as defined in which keeps hc invariant then the subalgebra is of type and is a cartan subalgebra of let g c be the connected lie subgroup of g c with lie algebra let be the simple roots of as defined in let x be a nonzero nilpotent element let e h f be a pc triple in gc associated to x then h and e f c we may further assume that h then the finite sequence of integers h h h h determine the orbit ad g c e uniquely see let g let be the involution of gc as defined in which keeps hc invariant then the subalgebra is of type and is a cartan subalgebra of let g c be the connected lie subgroup of g c with lie algebra let be the simple roots of as defined in let x be a nonzero nilpotent element let e h f we may further be a pc triple in gc associated to x then h and e f gc chatterjee and maity assume that h it then follows that the finite sequence of integers h h h h determine the orbit ad g c e uniquely see nilpotent orbits of three types for the sake of convenience of writing the proofs that appear in the later part it will be useful to divide the nilpotent orbits in the following three types let x g be a nonzero nilpotent element and x h y be a r in let g be as in the beginning of let k be a maximal compact subgroup in zg r x h y and m be a maximal compact subgroup in g r containing a nonzero nilpotent orbit ox in g is said to be of type i if z k id and m m m type ii if either z k id m m m or z k m m m type iii if z k in what follows we will use the next result repeatedly corollary let g be a real simple exceptional lie algebra let x g be a nonzero nilpotent element if the orbit ox is of type i then dimr h ox r dimr z k if the orbit ox is of type ii then dimr h ox r dimr z k if the orbit ox is of type iii then dimr h ox r proof the proof of the corollary follows immediately from theorem let g be as above in the proofs of our results in the following subsections we use the description of a levi factor of zg x for each nilpotent element x in g as given in the last columns of tables and tables this enables us compute the dimensions dimr z k easily we also use k column tables for the component groups for each 
nilpotent orbits in nilpotent orbits in the real form of recall that up to conjugation there is only one real form of we denote it by there are only five nonzero nilpotent orbits in see table vi note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in if the parametrization of the orbit ox is given by either or then dimr h ox r if the parametrization of the orbit ox is given by any of then dimr h ox r proof from column table vi we have dimr z k and from k column table we have id for the nilpotent orbits as in thus these are of type i we refer to column table vi for the orbits as given in these orbits are of type iii as dimr z k in view of the corollary the conclusions follow nilpotent orbits in real forms of recall that up to conjugation there are two real forms of they are denoted by and on the nilpotent orbits in lie algebras nilpotent orbits in there are nonzero nilpotent orbits in see table vii note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is either or then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table vii and from k column table pp for the orbits ox as in we have dimr z k and id hence these are of type i for the orbits ox as in we have dimr z k and id hence they are of type ii for the orbits ox as in we have dimr z k and id hence these are also of type ii the rest of the orbits which are not given by the parametrizations in are of type iii as z k now the theorem follows from corollary nilpotent orbits in there are two nonzero nilpotent orbits in see table viii theorem for all the nilpotent elements x in we have dimr h ox r proof as the theorem follows trivially when x we assume that x we follow the parametrization of nilpotent orbits as in from the last column of table viii we conclude that z k hence the nonzero nilpotent orbits are of type iii using corollary we have dimr h ox r nilpotent orbits in real forms of recall that up to conjugation there are four real forms of they are denoted by and nilpotent orbits in there are nonzero nilpotent orbits in see table viii note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in if the parametrization of the orbit ox is given by either or or then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table viii and from k column table as pointed out in the paragraph of k there is an error in row of table viii thus when ox is given by the parametrization it follows from k that z k chatterjee and maity we have dimr z k and id for the orbits given in thus these orbits are of type i for the orbits as in we have dimr z k and hence the orbits in are of type ii for rest of the nonzero nilpotent orbits which are not given by the parametrizations of are of type iii as dimr z k now the results 
follow from corollary nilpotent orbits in there are nonzero nilpotent orbits in see table ix note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is given by either or or then dimr h ox r if the parametrization of the orbit ox is given by then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table ix and from k column table pp we have z k for the orbits as given in and these orbits are of type iii for the orbits as given in we have dimr z k and id thus the orbits in are of type i for the orbits as given in we have dimr z k and id hence are of type ii for the orbits as given in we have dimr z k and thus this orbit is of type ii for the rest of orbits which are not given in any of we have dimr z k and id thus these orbits are of type i now the conclusions follow from corollary nilpotent orbits in there are nonzero nilpotent orbits in see table x note that in this case m r and hence m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in if the parametrization of the orbit ox is given by then dimr h ox r if ox is not given by the above parametrization of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table x the orbit in is of type iii as z k and hence dimr h ox r the other orbits are of type ii as dimr z k and m m m hence dimr h ox r nilpotent orbits in there are two nonzero nilpotent orbits in see table vii theorem for all the nilpotent element x in we have dimr h ox r on the nilpotent orbits in lie algebras proof as the theorem follows trivially when x we assume that x we follow the parametrization of the nilpotent orbits as given in the two nonzero nilpotent orbits in are of type iii as z k see last column of table vii hence by corollary we conclude that dimr h ox r nilpotent orbits in real forms of recall that up to conjugation there are three real forms of they are denoted by and nilpotent orbits in there are nonzero nilpotent orbits in see table xi pp note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in if the parametrization of the orbit ox is given by then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is given by either or then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table xi pp and from k column table pp the orbit ox as given in is of type i as dimr z k and id for the orbits as given in we have dimr z k and id hence these are also of type i for the orbits as given in we have dimr z k and id hence they 
are of type i for the orbits as given in we have dimr z k and thus these are of type ii for the orbits as given in we have dimr z k and hence these are also of type ii for the orbits as given in we have dimr z k and id hence they are of type ii rest of the orbits which are not given by the parametrizations in are of type iii as z k now the results follow from corollary nilpotent orbits in there are nonzero nilpotent orbits in see table xii note that in this case m m m chatterjee and maity theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in if the parametrization of the orbit ox is given by either or then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is given by either or then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table xii pp and from k column table pp for the orbit ox as in we have dimr z k and id hence these orbits are of type i for the orbit ox as in we have dimr z k and id hence these orbits are also of type i for the orbit ox as in we have dimr z k and hence are of type ii for the orbit ox as in we have dimr z k and hence these are also of type ii rest of the orbits which are not given by the parametrizations in are of type iii as z k now the conclusions follow from corollary nilpotent orbits in there are nonzero nilpotent orbits in see table xiii in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if ox is not given by any of the above parametrization of such orbits are then we have dimr h ox r proof note that the parametrization of nilpotent orbits in as in k table is different from table x iii as the component group for all orbits in is id see k column table pp it does not depend on the parametrization we refer to the last column of table x iii for the orbits as given in these are type iii as z k for rest of the orbits we have dimr z k see last column of table x iii as m m m these are of type ii now the results follow from corollary nilpotent orbits in real forms of recall that up to conjugation there are two real forms of they are denoted by and nilpotent orbits in there are nonzero nilpotent orbits in see table xiv pp note that in this case we have m m m on the nilpotent orbits in lie algebras theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is given then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table xiv pp and from k column table pp for the orbits ox as given in we have dimr z k and id hence these orbits are of 
type i for the orbits ox as given in we have dimr z k and id hence these orbits are also of type i for the orbit ox as given in we have dimr z k and id hence they are of type ii for the orbits ox as given in we have dimr z k and id thus these orbits are of type ii for the orbits ox as given in we have dimr z k and id hence these are of type ii rest of the orbits which are not given by the parametrizations of are of type iii as z k now the conclusions follow from corollary nilpotent orbits in there are nonzero nilpotent orbits in see table xv note that in this case we have m m m theorem let the parametrization of the nilpotent orbits be as in let x be a nonzero nilpotent element in assume the parametrization of the orbit ox is given by any of the sequences then dimr h ox r if the parametrization of the orbit ox is given by either or then dimr h ox r if ox is not given by the parametrizations as in above of such orbits are then we have dimr h ox r proof for the lie algebra we can easily compute dimr z k from the last column of table xv and from k column table pp chatterjee and maity for the orbits ox as given in we have dimr z k and id hence these are of type i for the orbits ox as given in we have dimr z k and id hence these orbits are of type ii rest of the orbits which are not given by the parametrizations of are of type iii as z k now the conclusions follow from corollary remark here we make some observations about the first cohomology groups of the nilpotent orbits in real exceptional lie algebras to do this we begin by giving a convenient description of the first cohomology groups of the nilpotent orbits following the of theorem it can be shown that if k m m m dimr h ox r if k m m the proof of the above result will appear elsewhere as a consequences of for all the nilpotent orbit ox in a simple lie algebra g we have dimr h ox r recall that if g is a real exceptional lie algebra such that g and g then any maximal compact subgroup of int g is semisimple and hence using it follows that dimr h ox r for all nilpotent orbit ox in we next assume g or g note that in both the cases m m we follow the parametrizations of the nilpotent orbits of g as given in tables x xiii see also when g we are able to conclude that dimr h ox r only for one orbit namely the orbit ox parametrized by in this case from the last column and row of table x one has k k k thus k m m m and applies for g we obtain that dimr h ox r when ox is parametrized by any of the following sequences for the above orbits from the last column of table xiii we have k k k and hence using analogous arguments apply references bc cm k m biswas and chatterjee on the exactness of form and the second cohomology of nilpotent orbits internat j math no pp collingwood and mcgovern nilpotent orbits in semisimple lie algebras van nostrand reinhold mathematics series van nostrand reinhold new york djokovic classification of nilpotent elements in simple exceptional real lie algebras of inner type and description of their centralizers alg djokovic classification of nilpotent elements in simple exceptional real lie algebras and and description of their centralizers alg donald king the component groups of nilpotents in exceptional simple real lie algebras communications in algebra mcgovern the adjoint representation and the adjoint action in algebraic quotients torus actions and cohomology the adjoint representation and the adjoint action encyclopaedia math springer berlin the institute of mathematical sciences hbni campus tharamani chennai india address 
pralay. The Institute of Mathematical Sciences, HBNI Campus, Tharamani, Chennai, India. E-mail address: cmaity
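Two ingredients are used repeatedly in the computations above and may be easier to follow in display form. The sl2-triple relations are the standard ones; the dimension trichotomy is reconstructed from how the three types and the Corollary are invoked in the proofs, so it is offered only as a reading aid and should be checked against the original statements.

```latex
% sl_2(R)-triple {x, h, y} in a real Lie algebra g:
\[
  [h,x] = 2x, \qquad [h,y] = -2y, \qquad [x,y] = h .
\]

% Dimension trichotomy as it is applied in the proofs
% (K a maximal compact subgroup of the centralizer Z_{G(R)}(x,h,y),
%  z(k) the centre of Lie(K)):
\[
  \dim_{\mathbb{R}} H^{2}(\mathcal{O}_x,\mathbb{R}) = \dim_{\mathbb{R}} \mathfrak{z}(\mathfrak{k})
  \ \ \text{(type I)}, \qquad
  \dim_{\mathbb{R}} H^{2}(\mathcal{O}_x,\mathbb{R}) \le \dim_{\mathbb{R}} \mathfrak{z}(\mathfrak{k})
  \ \ \text{(type II)}, \qquad
  H^{2}(\mathcal{O}_x,\mathbb{R}) = 0
  \ \ \text{(type III, where } \mathfrak{z}(\mathfrak{k}) = 0\text{)}.
\]
```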
| 4 |
sep a sheaf on the second spectrum of a and mustafa alkan cekensecil alkan abstract let r be a commutative ring with identity and specs m denote the set all second submodules of an m in this paper we construct and study a sheaf of modules denoted by o n m on specs m equipped with the dual zariski topology of m where n is an we give a characterization of the sections of the sheaf o n m in terms of the ideal transform module we present some interrelations between algebraic properties of n and the sections of o n m we obtain some morphisms of sheaves induced by ring and module homomorphisms mathematics subject classification keywords and phrases second submodule dual zariski topology sheaf of modules introduction throughout this article all rings will be commutative rings with identity elements and all modules will be unital left modules unless otherwise stated r will denote a ring given an m the annihilator of m in r is denoted by annr m and for an ideal i of r the annihilator of i in m is defined as the set m i m m im clearly m i is a submodule of recall that a sheaf of rings modules f on a topological space x is an assignment of a ring module f u to each open subset u of x together with for each inclusion of open subsets v u a morphism of rings modules f u f v subject to the following conditions i f ii idf u iii if w v u then w iv if u is an open subset of x and is an open cover of u and if f f u is an element such that f for all then f this paper is submitted to communications in algebra on june for the referee process v if u is an open subset of x and is an open cover of u and if we have a collection elements f with the property that for all then there is an element f f u such that for all f if f is a sheaf on a topological space x we refer to f u as the sections of f over the open subset u we call the maps as the restriction maps cf the prime spectrum of a ring r denoted by spec r consists of all prime ideals of r and it is for each ideal i of r the sets v i p spec r i p where i is an ideal of r satisfy the axioms for closed sets of a topology on spec r called the zariski topology of it is that for any commutative ring r there is a sheaf of rings on spec r denoted by ospec r defined as follows for an open subset f u spec r we define ospec r u to be the set of all functions s u rp such that s p rp for each p u and such that for each p u there is a neighborhood v of p contained in u and elements a f r such that for each q v we have f q and s q fa in rq see let m be an a proper submodule n of m is said to be prime if for any r r and m m with rm n we have m n or r annr n m if n is a prime submodule of m then p n m is a prime ideal of in this case n is called a submodule of m the set of all prime submodules of a module m is called the prime spectrum of m and denoted by spec m for any submodule n of an m we have a set v n p spec m n m p m then the sets v n where n is a submodule of m satisfy the axioms for closed sets of a topology on spec m called the zariski topology of m several authors have investigated the prime spectrum and the zariski topology of a module over the last twenty years see for example recently some authors have investigated a sheaf structure on the prime spectrum of a module which generalizes the sheaf of rings ospec r on the topological space spec r in the author obtained an ospec m u for each open subset u of spec m equipped with the zariski topology of m such that ospec m is a sheaf of modules on spec m in the authors defined and studied a sheaf of modules which is 
denoted by a n m on the topological space spec m equipped with the zariski topology of m where m and n are two in fact both ospec m and a n m are generalizations of the sheaf of rings ospec r to modules in the authors proved that if n r then a r m is a scheme on spec m this scheme structure were investigated in recently a dual theory of prime submodules has been developed and extensively studied by many authors the dual notion of prime submodules was first introduced by yassemi in a submodule n of an m is said to be a second submodule provided n and for all r r rn or rn n if n is a second submodule of m then p annr n is a prime ideal of in this case n is called a submodule of m cf in recent years second submodules have attracted attention of various authors and they have been studied in a number of papers see for example the set of all second submodules of a module m is called the second spectrum of m and denoted by specs m as in for any submodule n of an rmodule m we define v n to be the set of all second submodules of m contained in n clearly v is the empty set and v m is specs m note that for any family of submodules ni i i of m v ni v ni thus if z m denotes the collection of all subsets v n of specs m where n m then z m contains the empty set and specs m and z m is closed under arbitrary intersections but in general z m is not closed under finite unions a module m is called a cotop module if z m is closed under finite unions in this case z m is called the topology on specs m see note that in a cotop module was called a tops more information about the class of cotop modules can be found in and let m be an and n be a submodule of m we define the set v s n s specs m annr n annr s in lemma it was shown that v s n v s m annr n v m annr n in particular v s m i v m i for every ideal i of r and that the set z s m v s n n m satisfies the axioms for the closed sets thus there exists a topology say s on specs m having z s m as the family of closed subsets this topology is called the dual zariski topology of m see lemma dual zariski topology the second spectrum of modules and related notions have been investigated by some authors in recent years see and in this paper we define and study a sheaf structure on the second spectrum of a module let m be an in section we construct a sheaf denoted by o n m on specs m equipped with the dual zariski topology of m where n is an firstly we find the stalk of the sheaf o n m see theorem in theorem we give a characterization for the sections of the sheaf o n m in terms of the ideal transform module let r be a noetherian ring and m be a faithful secondful we prove that if n is a free projective or flat then so is o n m specs m see theorem in section we deal with a scheme structure on the second spectrum of a module in theorem we prove that o r m is a scheme when m is a faithful secondful and specs m is a then we define two morphisms of locally ringed spaces by using ring and module homomorphisms see theorem and corollary a sheaf structure on the second spectrum of a module throughout the rest of the paper m will be an x s will denote specs m and we consider x s with the dual zariski topology unless otherwise stated for every open subset u of x s we set supps u annr s s u in this section we construct a sheaf on x s and investigate some properties of this sheaf s definition let n be an for every open subset u of q x we define o n m u to be the set of all elements p u np u in which for each q u there is an open neighborhood w of q with q w u and there exist elements 
t r m n such that for every s w we have t p annr s and p m t np let u and v be open subsets of x s with v u and p u o n m u then it is clear that the restriction p v belongs to o n m v therefore we have the restriction map o n m u o n m v p v for all p u o n m u we define o n m u o n m to be the zero map it is clear from the local nature of the definition o n m is a sheaf with the restriction maps defined above we can define a map u n n o n m u n s by u for all n n we note that n n n is an u u v homomorphism clearly o n n recall that for any r r the set dr spec r rr is open in spec r and the family dr r r forms a base for the zariski topology on spec r let m be an for each r r we define yr x s s m r in theorem it was shown that the set b yr r r forms a base for the dual zariski topology on x s remark let r r and q x s then r annr q if and only if q yr let q yr suppose that r annr q then rq and so q m r this implies that q v m r v s m r a contradiction conversely let r annr q then q m r and so q yr in the proof our results we will use this fact without any further comment let f be a sheaf of modules rings on a topological space x and p x recall that the stalk fp of f at p is defined to be the direct limit f u p of the modules rings f u for all open subsets u of x containing p via the restriction maps see in the following theorem we determine the stalk of the sheaf o n m at a point s in x s theorem let n be an and s x s then the stalk o n m s of the sheaf o n m at s is isomorphic to np where p annr s proof let s be a submodule of m and m o n m s o n m u p then there exists an open neighborhood u of s and p u o n m u such that represents we define o n m s np by m p let v be another neighborhood of s and v o n m v such that also represents then there exists an open set w u v such that s w and since s w we have p this shows that is a map we claim that is an isomorphism let x np then x at for some a n t since t p annr s we have s yt now we define q at in nq for all q yt where t q annr q then q yt o n m yt if m is the equivalence class of in o n m s then m x hence is surjective now let m o n m s and m let u be an open neeighborhood of s and p u o n m u is a representative of there is an open neighborhood v u of s and there are elements a n t r such that for all q v we have t q annr q and q at nq then m p at in np so there is h such that ha for all a q yth we have q ha ht t in nq where q annr q thus therefore in o n m yth consequently m this shows that is injective thus is an isomorphism a ringed space is a pair x ox consisting of a topological space x and a sheaf of rings ox on x the ringed space x ox is called a locally ringed space if for each point p x the stalk ox p is a local ring cf corollary x s o r m is a locally ringed space example consider the m q zp and n z where p is a prime number then specs m q zp by theorem o z q zp z q and o z q zp zpz z p a q a b z b p b b let m be an the map s specs m spec m defined by s s annr s m is called the natural map of specs m m is said to be secondful if the natural map s is surjective cf let m be an the zariski socle of a submodule n of m denoted by n is defined to be the sum of all members of v s n and if v s n then n is defined to be cf lemma let r be a noetherian ring and n be an let m be a secondful and u x s s k where k m then for each p u o n m u there exist r sr annr k i and mr n such that u ysi and p m si for all s ysi i r where p annr s proof since r is noetherian u is by corollary d thus there exist n open subsets wn of u tn r an a n such 
that u wj and for each j n and s wj we have p tjj where tj p annr s fix j n there is a submodule hj of m such that wj x s s hj also we have v s k v s hj since r is noetherian annr hj rbjnj for some bjnj this implies that wj x s s hj x s m annr hj n j m rbjf x s m rbjnj x s n n j j s s x v m rbjf x m rbjf nj nj nj ybjf ybjf wj x s s m rbjf on the other hand we have k hj as v s k v s hj this implies p that annr h p j annr k by theorem e we get annr hj annr k since r is noetherian there exists p d such that annr k d annr k and we have q p annr hj d annr hj d annr k d annr k it follows that bdjf annr k for each f nj also for each f nj and for each s ybjf we have tj bdjf annr s we conclude that ybjf x s s m bjf x s s m tj bdjf ytj bdjf and we can write annr s aj tj aj bd jf tj bd jf nannr s this completes the proof let k be an for an ideal i of r the submodule of k is defined to be k k i n and k is said to be if k k cf lemma let n be an and u x s s k where k m then k o n m u proof let p u k o n m u there exists n such that annr k n consider p supps u there exists a second submodule q u such that p annr q if annr k n p then annr k p and so q v s k a contradiction thus annr k n so there exists t tp annr k n since tp we have tp p and p ptp p np hence theorem let r be a noetherian ring m be a faithful secondful and n be an let u x s s k where k m then k n u ker u n and so ker n is annr k proof by lemma we have u n k n k o n m u so k n ker u n m s suppose that m ker u n then np for all p supp u s so p for each p supp u there is tp such that tp m put j rtp then jm u let q v j we claim that annr k q suppose on the contrary that annr k q since m is faithful secondful there is a second submodule s of m such that q annr s therefore s u and q supps u this implies that tq j q this contradicts the fact that tq thus p t t j annr k annr k annr k j since r is noetherian annr k n j for some n hence annr k n m jm this shows that m k n and the result follows r p lemma let r n sr then v s m r p rsi rsni v s m proof let s v s m r p rsni then annr m r p rsni annr m rsi n rsni annr m rsni annr s for each i r we have n r n rsi annr m rsi annr s since annr s is a prime ideal r p rsi annr s it follows that s m we have rsi annr s and so annr s m v s m r p r p rsi this shows that s v m for the other containment m r p r p rsi and hence v s m r p rsni and hence v s m rsni r p rsni v s m r p r p r p rsi rsi implies that m rsi v s m r p rsi r p rsni rsi theorem let r be a noetherian ring m be a faithful secondful n be an and u x s s k where k m let w be an open subset of x s such that u w then ker u k o n m w proof by lemma u k o n m w k o n m u so k o n m w ker u there exists l m such that w x s s l let p w ker u by lemma there exist r sr annr l and mr n such that w ysi and for each i r and each i s ysi we have annr s m si since ker u we have p for all p supps u fix i r set u u ysi hence u x s s k x s s m si x s v s k v s m si x s s k m si s i then m si np for all p supp u this implies that n mi by theorem there exists hi such that annr k m si hi mi let h max hr now let p supps w there exists i r such that p supps ysi let d annr k h since shi annr m si h we have dshi annr k h annr m si h annr k annr m si h annr k m si h therefore we conclude that p h h dmi si dsh i mi i np for all d annr k this implies that annr k and so k o n m w theorem let r be a noetherian ring m be a faithful secondful n be an and u x s s k where k m then the map u n n o n m u has an annr k cokernel proof let p u o n m u by lemma there exist r s ysi and 
for r sr annr k and mr n such that u i each i r and each s ysi we have annr s m si fix i r si mi then for each s ysi si annr s si nannr s this means that ys si si n i mi u n mi thus si u n mi ker m si o n m u by theorem hence there exists ni such that annr m si ni si u n mi m define n max n n then for all then sni i si u i r n ni u i ni m it follows that i r we have sni s s s i i n i i i u u sni u m s m n by lemma we have i i n n i n r r p p s n s v m rsi v m rsi r p m rsi r v m rsi v m rsi v s m rsi it follows that r p x s s m rsni x s v s m rsi x s s m rsi ysi u x s s k r p rsni v s k since m is faithful secondful this means that v s m r p p annr k v annr k hence annr k we get that v s r p rsni since r is noetherian there exists h such that annr k h r r p p h n n rsi it follows that annr k rsi u n n this completes rsni the proof let k be an and i be an ideal of recall that the ideal transform of k with respect to i is defined as di k lim homr i n k cf theorem let r be a noetherian ring m be a faithful secondful n be an and u x s s k where k m then there is a unique gk n o n m u dannr k n lim homr annr k n n such that the diagram n n o n m u gk n dannr k n commutes proof by theorems and both the kernel and cokernel of u n are annr k therefore there is a unique gk n o n m u dannr k n such that the given diagram commutes by corollary ii by lemma k o n m u so gk n is an isomorphism by corollary iii corollary let r be a noetherian ring m be a faithful secondful rmodule n be an and u x s s k where k m then the following hold o k n m u o n m u o k n m u o n m u o o n m u m u if n is an annr k then o n m u proof parts and follow from theorem and corollary part is an immediate consequence of part example consider the n and m zp where p runs over all distinct prime numbers then m is a faithful secondful let k and u specs m s k then annr k and n is a by corollary o zp u corollary let r be a principal ideal domain m be a faithful secondful n be an and u x s s k then there exists a r such that o n m u na where na is the localization of n with respect to the multiplicative set an n n proof since r is a a principal ideal domain there is an element a r such that annr k ra by theorem and theorem we have o n m u dannr k n na theorem let m be a faithful secondful and n be any rmodule for any element f r the module o n m yf is isomorphic to the localized module nf in particular o n m x s n proof we define the map nf o n m yf by fam fam yf we claim that is an isomorphism first we show that is injective let fan fbm then for every s yf fan fbm in np where p annr s thus there exists h such that h f m a f n b in n let i r f m a f n b then h i and h p so i this holds for any s yf so we deduce that supps yf spec r i since m is faithful secondful df supps yf spec r i and we get that v i v rf this implies that rf rf i therefore f l i for some l now we have f l f m a f n b which shows that fan fbm in nf thus is injective let p yf o n m yf then we can cover yf with the open subsets vi on which annr s is represented by agii with gi annr s for all s vi in other words vi ygi since the open sets of the form yr r r form a base for the dual zariski topology on x s we may assume that vi yhi for some hi since yhi ygi dhi s yhi s ygi i by proposition this implies that v rg v rh and so rh rhi i i i rgi thus hsi rgi for some s so hsi cgi for some c r and bi cai cai ai s gi cgi hsi we see that annr s is represented by ki bi cai ki hi on yki and since yhi yhsi the yki cover yf the open cover yf has a finite subcover by theorem 
suppose that yf ykn b i j n kbii and kjj both represent annr s on yki ykj by corollary b yki ykj yki kj and by the injectivity of we get that kbii kjj in nki kj hence ki kj nij kj bi ki bj for some nij let m max nij i j n then kim bi kjm bj by replacing each ki by and bi by kim bi we still see that annr s is represented on yki by kbii and furthermore we have kj bi ki bj for all i j since yf ykn by proposition we have df s yf s yki dki pspec r rf spec r n n v rki this implies ppn that v rki v rki v rf are cn r and t z such rf rki so pthere n n that f t ci ki let a ci bi then for each j we have kj a pn pn bj a t ci ki bj bj f this implies that f t kj on ykj ci kj bi fore fat p yf proposition let k l be and k l be an then induces a morphism of sheaves o k m o l m if is an isomorphism of then is an isomorphism of sheaves a proof let u be an open subset of x s and fpp u o k m u ap o l m u for each q u there is we show that fp s u an open neighborhood w of q with q w u and there exist elements t r m k such that for every s w we have t p annr s and ap m fp t kp so there exists sp such that sp tap fp m it follows that sp ap fp m this means that ap fp and t p annr s for every s w this shows m where m ap that fp u o l m u thus the map u o k m u o l m u defined by ap ap u u fp fp u l is clearly u is an since ap ap u u fp fp np fp ap v u fp the following diagram is commutative o k m u o k m v u o l m u v o l m v this shows that o k m o l m is a morphism of sheaves now suppose that is we show that u is injective a ap ap s then fpp for every let u fp u fp u p supps u there exists tp such that tp ap tp ap since a t a is injective tp ap for every p supps u it follows that fpp tpp fpp a for every p supps u this shows that fpp u and so u is injective for every open subset u of x s b o l m u now we show that u is surjective let tpp s u s there exists ap k such that ap bp for each p supp u we show that ap o k m u for each q u there is an open neighborhood tp u w of q with q w u and there exist elements t r b l such that for b a every s w we have t annr s p and tpp tpp bt there exists a tpp where t p annr s for every a k such that b a so a t s w there exists vp such that vp tp a ap it follows that vp tp a vp tap since is injective vp tp a tap for vp this a means that tpp at where t p annr s for every s w this shows that bp ap ap o k m u and tp tp tp s s s u u u thus u is surjective for every open subset u of x s consequently is an isomorphism of sheaves theorem let r be a noetherian ring m be a faithful secondful and n be an then the following hold if n is a free then o n m x s is a free if n is a projective then o n m x s is a projective rmodule if n is a flat then o n m x s is a flat proof we can write x s x s s since n is a free n is isomorphic to a direct sum of some copies of r say n r for an index set by proposition o n m x s o r m x s by theorem o n m x s dr n and o r m x s dr r dr commutes with direct sums by corollary by using this fact theorem and theorem we get that o n m x s dr r dr r o r m x s this shows that o n m x s is a free since n is a projective there is a free f and a submodule l of f such that f n by using proposition and corollary we get that o f m x s o n l m x s o n m x s o l m x s by part o f m x s is a free o n m x s is a projective as it is a direct summand of the free o f m x s since every flat is a direct limit of projective n lim p for some projective pi and a directed set by proposition i p dr p m x s dr lim and theorem o n m x s o lim i i commutes with direct limits by 
corollary by using this fact theorem and theorem we get that o n m x s dr lim p i lim d pi lim o pi m x s by part o pi m x s is a projective and r hence a flat for each i since a direct limit of flat modules is flat o n m x s is a flat a scheme structure on the second spectrum of a module recall that an affine scheme is a locally ringed space which is isomorphic to the spectrum of some ring a scheme is a locally ringed space x ox in which every point has an open neighborhood u such that the topological space u together with the restricted sheaf is an affine scheme a scheme x ox is called locally noetherian if it can be covered by open affine subsets of spec ai where each ai is a noetherian ring the scheme x ox is called noetherian if it is locally noetherian and cf a topological space x is said to be a or a kolmogorov space if for every pair of distinct points x y x there exists open neighbourhoods u of x and v of y such that either x v or y u the following proposition from gives some conditions for the dual zariski topology of an to be a proposition theorem the following statements are equivalent for an m the natural map s specs m spec m is injective for any m if v s v s then specsp m for every p spec r where specsp m is the set of all submodules of m specs m s is a theorem let m be a faithful secondful such that x s is a space then x s o r m is a scheme moreover if r is noetherian then x s o r m is a noetherian scheme proof let g since the natural map sm specs m spec r is continuous by proposition the restriction map yg sm yg is also continuous since yg is also a is a bijection let e be closed subset of yg then e yg v s n for some n m hence sm e sm yg v s n sm yg sm v s n sm yg v annr n is a closed subset of sm yg therefore is a homeomorphism since the sets of the form yg g r form a base for the dual zariski topology x s can be written as x s ygi for some gi since m is faithful secondful and x s is a we have ygi sm ygi dgi spec rgi for each i i by theorem ygi is an affine scheme for each i i this implies that x s o r m is a scheme for the last statement we note that since r is noetherian so is rgi for each i i hence x s o r m is a locally noetherian scheme by theorem x s is therefore x s o r m is a noetherian scheme theorem let m and n be and m n be a monomorphism then induces a morphism of locally ringed spaces f f specs n o r n specs m o r m proof by proposition the map f specs m specs n which is defined by f s s for every s specs m is continuous let u be an open subset of specs n and o r n u suppose s f u then f s s u there exists an open neighborhood w of s with s w u such that for each q w g q annr q and q ag in rq since s f s w s f w f u as f is continuous f w is an open neighborhood of we claim that for each f w g annr suppose on the contrary that g annr for some f w then f w since is a monomorphism annr annr so g annr for w a contradiction therefore for every open subset u of specs n we can define the map f u o r n u o r m f u as follows for o r n u f u o r m f u is defined by f u p u annr f s u as we mentioned above f u is a map an clearly it is a ring homomorphism now we show that f is a locally ringed morphism assume that u and v are open subsets of specs n with v u and p u o r n u consider the diagram o r n u f u o r m f u u f v o r n v f v o r m f v then u f v f u p u u f v annr f s u annr f s v f v p v f v p u therefore we get that u f v f u f v thus the above diagram is commutative this shows that f o r n o r m is a morphism of sheaves by theorem the map on the stalks fs o 
r n f s o r m s is clearly the map of local rings rannr f s rannr s which maps rs rannr f s to rs again this implies that f f specs n o r n specs m o r m is a morphism of locally ringed spaces theorem let m a be r s be a ring homomorphism let n b be and m be a secondful such that specs m is a and annr m annr n if a b is an then induces a morphism of sheaves h o a m o b n proof since annr m annr n induces the homomorphism m n r annr m r anns n it is that the maps f spec s spec r defined by f p p and d spec n spec m defined by d p p and sn specs n spec n defined by sn q anns q n for each q specs n are continuous also sm specs m spec m is a homeomorphism by theorem therefore the map h specs n specs m defined by h q sm sn q is continuous also for each q specs n we get an q af anns q a s banns q r s let u be an open subset of specs m and t tannr p p o a m u suppose that t u then h t u and there exists an open neighborhood w of h t with h t w u and elements r g r such that for each q w we have tannr q ag aannr q where g annr q hence g annr h t by definition of h annr h k anns k for every k w so g anns k for g annr h k thus a define a section o b n u we can define t ag g h u o a m u o b n u o b n u by h u tannr p p t tannr h t for each tannr p p t u o r m u assume that v u consider the diagram o a m u o a m v we see that h u h v o b n u u v o b n v t tannr h t u v h u tannr p p u v t u t tannr h t t v h v tannr p p h v tannr p p and hence u v h u h v for every open subset u of specs m so the above diagram is commutative it follows that h o a m o b n is a morphism of sheaves corollary let r s be a ring homomorphism let n be an smodule and m be a secondful such that specs m is a and annr m annr n then induces a morphism of locally ringed spaces h h specs n o s n specs m o r m proof taking a r b s and in theorem we get the morphism of sheaves h o r m o s n which is defined as in the proof of theorem by theorem the map on the stalks h t o r m h t o s n t is clearly the local homomorphism t rf anns t r s sanns t r s where f is the map defined in the proof of theorem this implies that h h specs n o s n specs m o r m is a locally ringed spaces acknowledgement the authors would like to thank the scientific technological research council of turkey tubitak for funding this work through the project the second author was supported by the scientific research project administration of akdeniz university references abbasi and a scheme over prime spectrum of modules turkish j math abuhlail a dual zariski topology for modules topology appl zariski topologies for coprime and second submodules algebra colloquium farshadifar on the dual notion of prime submodules algebra farshadifar on the dual notion of prime submodules ii mediterr j and farshadifar the zariski topology on the second spectrum of a module algebra colloquium doi farshadifar on the dual notion of prime radicals of submodules j math doi keyvani and farshadifar on the second spectrum of a module ii bull malays math sci soc pourmortazavi keyvani strongly cotop modules journal of algebra and related topics brodmann and y sharp local cohomology an algebraic introduction with geometric applications cambridge univercity press and alkan dual of zariski topology for modules book series aip conference proceedings alkan and smith second modules over noncommutative rings communications in algebra alkan and smith the dual notion of the prime radical of a module journal of algebra alkan on graded second and coprimary modules and graded secondary representations 
bull malays math sci soc alkan on second submodules contemporary mathematics alkan on the second spectrum and the second classical zariski topology of a module journal of algebra and its applications doi http second spectrum of modules and spectral spaces bulletin of the malaysian mathematical sciences society doi farshadifar modules with noetherian second spectrum journal of algebra and related topics vol no pp hartshorne algebraic geometry no new york inc prime submodules and a sheaf on the prime spectra of modules communications in algebra lu spectra of modules comm algebra lu the zariski topology on the prime spectrum of a module houston j math lu a module whose prime spectrum has the surjective natural map houston j math lu modules with noetherian spectrum comm algebra mccasland moore and smith on the spectrum of a module over a commutative ring comm algebra tekir on the sheaf of modules comm algebra no yassemi the dual notion of prime submodules arch math brno trakya university faculty of sciences department of mathematics edirne turkey mustafa alkan akdeniz university faculty of sciences department of mathematics antalya turkey
| 0 |
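The row above ends with a glueing argument: the second spectrum is covered by basic open sets of the dual Zariski topology, each of which is identified with an affine scheme. Below is a minimal LaTeX restatement of that chain, reconstructed from the garbled text; the natural map psi_M from Spec^s(M) to Spec(R), the index set I, and the identification of psi_M(Y_{g_i}) with the basic open set D_{g_i} are taken from the source as far as it is legible, so treat the display as a sketch rather than a verbatim quotation of the theorem.

```latex
% Sketch of the covering argument (notation reconstructed; hypotheses as in the source:
% M faithful and secondful, X^s satisfying the stated separation condition).
\[
  X^{s} \;=\; \operatorname{Spec}^{s}(M) \;=\; \bigcup_{i\in I} Y_{g_i},
  \qquad
  Y_{g_i} \;\xrightarrow{\ \psi_M\ }\; \psi_M\!\left(Y_{g_i}\right) \;=\; D_{g_i}
  \;\cong\; \operatorname{Spec}\!\bigl(R_{g_i}\bigr).
\]
% Each Y_{g_i} is therefore an affine scheme, so (X^s, O) is a scheme.  When R is
% Noetherian, every localization R_{g_i} is Noetherian, so the scheme is locally
% Noetherian, and (with the quasi-compactness invoked in the source) Noetherian.
```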
towards automatic abdominal segmentation in dual energy ct using cascaded fully convolutional network shuqing michael holger sabrina matthias alexander marc hirohisa kensaku andreas oct university erlangen germany nagoya university nagoya japan german cancer research center dkfz heidelberg germany department of radiology university hospital erlangen erlangen germany university hospital paracelsus medical university germany this work has been submitted to the ieee for possible publication copyright may be transferred without notice after which this version may no longer be accessible abstract automatic segmentation of the dual energy computed tomography dect data can be beneficial for biomedical research and clinical applications however it is a challenging task recent advances in deep learning showed the feasibility to use fully convolutional networks fcn for dense predictions in single energy computed tomography sect in this paper we proposed a fcn based method for automatic segmentation in dect the work was based on a cascaded fcn and a general model for the major organs trained on a large set of sect data we preprocessed the dect data by using linear weighting and the model for the dect data the method was evaluated using torso dect data acquired with a clinical ct system four abdominal organs liver spleen left and right kidneys were evaluated was tested effect of the weight on the accuracy was researched in all the tests we achieved an average dice coefficient of for the liver for the spleen for the right kidney and for the left kidney respectively the results show our method is feasible and promising index dect deep learning segmentation introduction the hounsfield unit hu scale value depends on the inherent tissue properties the spectrum for scanning and the administered contrast media in a sect image materials having different elemental compositions can be represented by identical hu values therefore sect has challenges such as limited information and beam hardening as well as tissue characterization dect has been investigated to solve the challenges of sect in dect two image data sets are acquired at two different spectra which are produced by different energies simultaneously the segmentation in dect can be beneficial for biomedical research and clinical applications such as material decomposition enhanced reconstruction and display and computation of bone mineral density we are aiming at exploiting the prior anatomical information that is gained through the segmentation to provide an improved dect imaging the novel technique offers the possibility to present evermore complex information to the radiologists simultaneously and bears the potential to improve the clinical routine in ct diagnosis automatic segmentation on dect images is a challenging task due to the variance of human abdomen the complex variance among organs soft anatomy deformation as well as different hu values for the same organ by different spectra recent researches show the power of deep learning in medical image processing to solve the dect segmentation problem we use the successful experience from segmentation in volumetric sect images using deep learning the proposed method is based on a cascaded fcn a approach the first stage is used to predict the region of the interest roi of the target organs while the second stage is learned to predict the final segmentation no or prior knowledge is required in the proposed method the results showed that the proposed method is promising to solve segmentation problem for 
dect to the best of our knowledge this is the first study about segmentation in dect images based on fcns materials and methods network architecture for dect prediction dect as described by krauss et al a mixed image display is employed in clinical practice for the diagnose using dect the mixed image is calculated by linear weighting of the images values of the two spectra imix ilow ihigh where is the weight of the dual energy composition imix denotes the mixed image ilow and ihigh are the images at low and high kv respectively we preprocessed the dect images following eq straightforwardly figure illustrates the network architecture of the proposed method for the dect segmentation first of all mixed image is calculated by combining the images at the low energy level and the high energy level using eq then a binary mask is generated by thresholding the skin contour of the mixed image subsequently the mixed image the binary mask and the labeled image are given into the network as inputs the network consists of two stages the first stage is applied to generate the region of the interest roi in order to reduce the search space for the second stage the prediction result of the first stage is taken as the mask for the second stage each stage is based on a standard which is a fully convolutional network including an analysis and a synthesis path we used the implementation of two stages cascaded network developed by roth et al based on the unet and the caffe deep learning library a general model was trained by roth et al on a large set of sect images including some of the major organ labels our model was trained by the general model with the mixed dect images the difference between the network output and the ground truth labels are compared using softmax with weight loss sect avg sd min max avg sd min max liver spleen table dice coefficients of with and sd is abbreviated for standard deviation notice that the methods used different data set the numbers are not directly comparable an way training data validation data and test data were selected randomly with the ratio in each test we used images for validation images for test and images for training results performance estimation with nvidia geforce gtx ti with gb memory was used for all of the experiments the similarity between the segmentation result and the ground truth was measured with dice metric by using the tool provided by visceral first the performance of the proposed method was estimated by using as as well as fig shows one segmentation results in summarizes the dice coefficients of the segmentation results and compares dect results with the sect results the proposed method under the above weight condition yielded an average dice coefficient of for the liver for the spleen for the right kidney and for the left kidney respectively fig plots the distributions of the dice coefficients for different test scenarios and showed the high robustness of the proposed method experimental setup the proposed method was evaluated with clinical torso dect images scanned by the department of radiology university hospital erlangen all of the images were taken from male and female adult patients who had different clinically oriented indication justified by the radiologist ultravist was given as contrast agent with body weight adapted volumes the images were acquired at different tube voltage setting of kv mas and sn kv mas with sn filter using a siemens somatom force ct system with stellar detector an energy integrating detector each volume consists of 
slices of pixels the voxel dimensions are mm four abdominal organs were tested including liver spleen right and left kidneys ground truth was generated by experts in study on the weight we are aiming at exploiting the spectral information in the dect data since the mixing results basically in pseudo monochromatic images comparable to single energy scans the influence of the weight on the accuracy was further researched and were chosen as and in this study fig illustrates the distributions of the dice coefficients with different weight combination for the testing fold table lists the average dice coefficient for all of the cases the liver had the highest accuracy the standard deviation of the dice coefficients around was fairly robust the segmentation of the right kidney was usually more accurate than the left kidney the best dice values per organ per training set are highlighted in table the test fig cascaded network architecture for dect segmentation fig rendering of one dect segmentation with yellow for liver blue for spleen green for right kidney and red for left kidney fig dice coefficients of target organs with alpha blending for testing fold with and obtained the highest accuracy for liver and right kidney the test with weight combination showed the best segmentation for spleen the combination with had the finest result for left kidney the generated better segmentation for liver the worked better for spleen discussion and conclusion fig dice coefficients of the target organs with and for different testing folds we proposed a deep learning based method for automatic abdominal segmentation in dect the evaluation results show the feasibility of the proposed method compared to the results of the sect images reported by roth et al our method is promising and robust see table the segmentation of liver and spleen was less accurate than the sect the third testing fold had a large deviation the reason could be that our image data were taken from patients with different disease liver tumor spleen tumor the disease type is not considered by the data selection training liver spleen table dice coefficients of different alpha for testing fold bold denotes the best organ results per training set and test with inconsistent symptoms could have an impact on the accuracy the study on the weight can be divided into three groups with different is close to the low energy images which have on average the best contrast worked thus better in general is close to which is the optimal fusion of both images with respect to ratio snr had therefore usually the smallest deviation and showed the strongest adaptability in the comparison the comparison showed that the cases with identical training and test conditions had a higher probability to get the best segmentation result this is expected because the mixed images generated by the matched training and test conditions may have the highest similarity furthermore the comparison of the case model for image with the case model for image showed that using a model trained on images for segmenting test images works better in addition liver is well segmented in middle to high ranges spleen is segmented best at kidneys work best in matched training and test conditions this suggests that there is an optimal for each organ for image segmentation the weight for the mixed image calculation is currently a parameter in the preprocessing in our approach it can be used to augment the data for the training in future also the net could be modified with two image inputs furthermore 
more organs and more scans from different patients could be used acknowledgments this work was supported by the german research foundation dfg through research grant no ka le and ma references dushyant sahani ct the technological approaches society of computed body tomography and magnetic resonance cynthia mccollough shuai leng lifeng yu and joel fletcher and ct principles technical approaches and clinical applications radiology vol no pp stefan kuchenbecker sebastian faby david simons michael knaup schlemmer michael lell and marc kachelriess material decomposition in dual energy computed tomography dect in radiological society of north america rsna sabrina dorn shuqing chen stefan sawall andreas maier michael lell and marc organspecific single and dual energy ct dect image reconstruction display and analysis in radiological society of north america sabrina dorn shuqing chen stefan sawall david simons matthias may schlemmer andreas maier michael lell and marc organspecific ct image reconstruction and display in spie accepted s wesarg m kirschner m becker m erdt k kafchitsas mf khan et assessment of the trabecular bone in vertebrae methods of information in medicine vol no pp marc aubreville miguel goncalves christian knipfer nicolai oetter tobias helmut neumann florian stelzle christopher bohr and andreas maier carcinoma detection on confocal laser endomicroscopy images a robustness assessment corr vol holger roth hirohisa oda yuichiro hayashi masahiro oda natsuki shimizu michitaka fujiwara kazunari misawa and kensaku mori hierarchical fully convolutional networks for segmentation in arxiv preprint holger roth ying yang masahiro oda hirohisa oda yuichiro hayashi natsuki shimizu takayuki kitasaka michitaka fujiwara kazunari misawa and kensaku mori torso organ segmentation in ct using fully convolutional networks in jamit bernhard krauss bernhard schmidt and thomas flohr dual energy ct in clinical practice chapter dual source ct springer berlin heidelberg ahmed abdulkadir soeren lienkamp thomas brox and olaf ronneberger learning dense volumetric segmentation from sparse annotation in medical image computing and computer assisted intervention miccai yangqing jia evan shelhamer jeff donahue sergey karayev jonathan long ross girshick sergio guadarrama and trevor darrell caffe convolutional architecture for fast feature embedding arxiv preprint abdel aziz taha and allan hanbury metrics for evaluating medical image segmentation analysis selection and tool bmc medical imaging vol pp august
| 1 |
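The DECT row above contains two small computations that are easy to make concrete: the linear weighting from Krauss et al. that produces the mixed image, I_mix = alpha * I_low + (1 - alpha) * I_high, and the Dice coefficient used to score the segmentations. Below is a minimal NumPy sketch of both, assuming the two spectral volumes are already co-registered arrays in HU; the array shapes, alpha = 0.6 and the -500 HU body-mask threshold are illustrative values of mine, not numbers from the paper.

```python
import numpy as np

def mixed_image(i_low: np.ndarray, i_high: np.ndarray, alpha: float) -> np.ndarray:
    """Linear weighting of the two spectral images: I_mix = alpha*I_low + (1-alpha)*I_high."""
    return alpha * i_low + (1.0 - alpha) * i_high

def body_mask(i_mix: np.ndarray, threshold_hu: float = -500.0) -> np.ndarray:
    """Rough skin-contour mask obtained by thresholding the mixed image (threshold illustrative)."""
    return (i_mix > threshold_hu).astype(np.uint8)

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

# Toy volumes standing in for the low-kV and high-kV reconstructions (HU values).
rng = np.random.default_rng(0)
i_low = rng.integers(-1000, 1000, size=(16, 64, 64)).astype(np.float32)
i_high = rng.integers(-1000, 1000, size=(16, 64, 64)).astype(np.float32)

i_mix = mixed_image(i_low, i_high, alpha=0.6)
mask = body_mask(i_mix)
print(mask.mean(), dice(mask, mask))  # Dice of a mask with itself is 1.0
```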
on the freeness of rational cuspidal plane curves feb alexandru dimca and gabriel sticlaru abstract we bring additional support to the conjecture saying that a rational cuspidal plane curve is either free or nearly free this conjecture was for curves of even degree and in this note we prove it for many odd degrees in particular we show that this conjecture holds for the curves of degree at most introduction a plane rational cuspidal curve is a rational curve c f in the complex projective plane having only unibranch singularities the study of these curves has a long and fascinating history some long standing conjectures as the coolidgenagata conjecture being proved only recently see other conjectures as the one on the number of singularities of such a curve being bounded by see are still open the classification of such curves is not easy there are a wealth of examples even when additional strong restrictions are imposed see free divisors defined by a homological property of their jacobian ideals have been introduced in a local analytic setting by saito in and then extended to projective hypersurfaces see and the references there we have remarked in that many plane rational cuspidal curves are free the remaining examples of plane rational cuspidal curves in the available classification lists turned out to satisfy a weaker homological property which was chosen as the definition of a nearly free curve see subsequently a number of authors have establish interesting properties of this class of curves see in view of the above remark we have conjectured in conjecture that any plane rational cuspidal curve c is either free or nearly free this conjecture was proved in theorem for curves c whose degree d is even as well as for some cases when d is odd when d pk for a prime number p in this note we take a closer look at the case d odd let s c x y z be the polynomial ring in three variables x y z with complex coefficients f s a reduced homogeneous polynomial of degree d and let fx fy and fz be the partial derivatives of f with respect to x y and z respectively consider the graded ar f s of all relations involving the derivatives of f namely a b c ar f q mathematics subject classification primary secondary key words and phrases rational cuspidal curves jacobian syzygy tjurina number free curves nearly free curves alexandru dimca and gabriel sticlaru if and only if afx bfy cfz and a b c are in sq the space of homogeneous polynomials of degree q the minimal degree of a jacobian relation for the polynomial f sd is the integer mdr f defined to be the smallest integer m such that ar f m when mdr f then c f is a union of lines passing through one point and hence c is cuspidal only for d we assume from now on in this note that mdr f it turns out that a rational cuspidal curve c f with mdr f is nearly free indeed this follows from proposition to see this note that the implication there holds for any d assume from now on that d is odd and let d pkmm be the prime decomposition of we assume also that m the case m of our conjecture being settled in corollary by changing the order of the k pj s if necessary we can and do assume that pj j for any j set with these assumptions and notations the main results of this note are the following theorem let c f be a rational cuspidal curve of degree d an odd number then mdr f and if equality holds then c is either free or nearly free theorem let c f be a rational cuspidal curve of degree d an odd number as in then if d then c is either free or nearly free in particular the 
following hold i if d with p a prime number then c is either free or nearly free ii d with p a prime number pk then c is either free or nearly free unless mdr f mdr f remark note that for d we have and hence d d therefore the only cases not covered by our results correspond to curves of odd degree d such that r mdr f satisfies r d corollary a rational cuspidal curve c f of degree d is either free or nearly free if one of the following holds mdr f or d unless we are in one of the following situations i d and mdr f on the freeness of rational cuspidal plane curves ii iii iv v vi vii d d d d d d and and and and and and mdr f mdr f mdr f mdr f mdr f mdr f in the excluded situations our results do not allow us to conclude the proof of our main results are based on a deep result by walther see theorem bringing into the picture the monodromy of the milnor fiber f f associated to the curve c f a second ingredient is our results on the relations between the hodge filtration and pole order filtration on the cohomology group h f c see theorem and proposition the first author thanks aromath team at inria for excellent working conditions and in particular laurent for stimulating discussions some facts about free and nearly free curves here we recall some basic notions on free and nearly free curves we denote by jf the jacobian ideal of f the homogeneous ideal of s spanned by the partial derivatives fx fy fz and let m f be the corresponding graded ring called the jacobian or milnor algebra of f let if denote the saturation of the ideal jf with respect to the maximal ideal m x y z in s and recall the relation with the local cohomology m f n f if hm it was shown in corollary that the graded n f satisfies a lefschetz type property with respect to multiplication by generic linear forms this implies in particular the inequalities n f n f n f t n f t n f t where t and n f k dim n f k for any integer if we set f dim n f t then c f is a free curve if f we say that c f is a nearly free curve if f see for more details equivalent definitions and many examples note that the curve c f is free if and only if the graded ar f is free of rank there is an isomorphism of graded ar f s s for some positive integers when c is free the integers are called the exponents of they satisfy the relations d and c d pp where c is the total tjurina number of c that is c c xi the xi s being the singular points of c and c xi denotes the tjurina number of the alexandru dimca and gabriel sticlaru isolated plane curve singularity c xi see for instance in the case of a nearly free curve there are also the exponents and this time they verify d and c d both for a free and a nearly free curve c f one has mdr f and hence mdr f d for a free curve c and mdr f for a nearly free curve it follows that theorem gives a similar inequality for any rational cuspidal curve our examples of rational cuspidal curves given in which are also free or nearly free show that all the possible values of mdr f do actually occur for any fixed degree it follows that if we set r mdr f then the curve c f is free resp nearly free if and only if c d r d r d r resp c d r see remark if the equation f of the curve c is given explicitly then one can use a computer algebra software for instance singular in order to compute the integer mdr f such a computer algebra software can of course decide whether the curve c is free or nearly free see for instance the corresponding code on our website http however for large degrees d it is much quicker to determine the integer mdr f the 
proofs first we recall the setting used in the proof of theorem the key results of walther in theorem yield the inequality dim n f dim h f c for j d where f f x y z is the milnor fiber in associated to the plane curve c and the subscript indicates the eigenspace of the monodromy action corresponding to the eigenvalue exp d j exp j assume that c is a rational cuspidal curve of degree denote by u the complement c and note that its topological euler characteristic is given by e u e c since f is a cyclic covering of the complement u it follows that h m f c h m u c for m we have also dim h f c dim h f c dim h f c e u see for instance prop chapter or cor and remark for any since clearly h f c we get dim h f c dim h f c on the freeness of rational cuspidal plane curves proof of theorem suppose now that d is odd say d in order to prove f in view of the inequality it is enough to show that dim h f c for exp which corresponds to j the equation tells us that this is equivalent to dim h f c using proposition see also remark it follows that dim h f c dim f k dim f where k j here f k and f denote some terms of the second page of spectral sequences used to compute the monodromy action on the milnor fiber f see for details note also that the weaker result in theorem is enough for this proof by the construction of these spectral sequences it follows that for q d one has an identification f q a b c ar f ax by cz where ax is the partial derivative of a with respect to x and so on it follows that dim f k dim f dim ar f dim ar f if mdr f it follows that ar f ar f and hence the curve c is either free or nearly free but this implies that mdr f as explained in the previous section proof of theorem for the reader s convenience we divide this proof into two steps proposition with the above notation we have dim n f j for any integer j d and for any integer j proof let and note that d t we apply the inequality with j d d it follows that d d d dim n f dim h f c with exp exp since this eigenvalue has order a prime power it follows from zariski s theorem see proposition that h f c using we get dim n f j for j d the claim for j follows from the fact that the graded module n f enjoys a duality property dim n f j dim n f t for any integer j see the lefschetz type property of the graded module n f see completes the proof of this proposition proposition a rational cuspidal curve c f of degree d as in and such that r mdr f is either free or nearly free alexandru dimca and gabriel sticlaru proof we use the formulas and from and get the following equality dim ar f dim n f dim ar f r c for any curve c f it is known that if ar f r then the first relation ar f m which is not a multiple of occurs in a degree m d r see lemma it follows that d d d dim ar f dim for any r such that using the obvious fact that ar f a direct computation shows that c d r dim n f since r it follows that r and hence dim n f by proposition the claim follows now using the characterization of free resp nearly free curves given above in it remains to prove the last claim in theorem if d we can assume pk and then d on the other hand d and hence since mdr f by theorem we get either mdr f and then we conclude by theorem or mdr f and then we conclude using theorem proof of corollary to prove the first claim we have to consider the minimal possible value of d pk when d is odd but neither a prime power nor of the form with p prime first if then and hence otherwise but both can not be equalites it follows that the minimal values are obtained for and or and in the 
first case we get in the second we get to prove the second claim just use remark references artal bartolo dimca on fundamental groups of plane curve complements ann univ ferrara artal bartolo gorrochategui luengo on some conjectures about free and nearly free divisors in singularities and computer algebra festschrift for greuel on the occasion of his birthday pp springer buchweitz conca new free divisors from old commut algebra no on the freeness of rational cuspidal plane curves decker greuel and singular a computer algebra system for polynomial computations available at http dimca singularities and topology of hypersurfaces universitext new york dimca hyperplane arrangements an introduction universitext springer dimca freeness versus maximal global tjurina number for plane curves math proc cambridge phil soc dimca on rational cuspidal plane curves and the local cohomology of jacobian rings dimca popescu hilbert series and lefschetz properties of dimension one almost complete intersections comm algebra dimca sernesi syzygies and logarithmic vector along plane curves journal de l dimca sticlaru on the exponents of free and nearly free projective plane curves rev mat complut dimca sticlaru a computational approach to milnor cohomology forum math dimca sticlaru free divisors and rational cuspidal plane curves math res lett dimca sticlaru free and nearly free curves rational cuspidal plane curves publ rims kyoto dimca sticlaru computing the monodromy and pole order on milnor cohomology of plane curves arxiv sticlaru computing milnor monodromy for projective hypersurfaces fernandez de bobadilla luengo nemethi of rational unicuspidal projective curves whose singularities have one puiseux pair in real and complex singularities sao carlos trends in mathematics birkhauser pp fenske rational and plane curves algebra flenner zaidenberg on a class of rational plane curves manuscripta math koras palka the conjecture duke math j no marchesi nearly free curves and arrangements a vector bundle point of view moe rational cuspidal curves pages master thesis university of oslo palka pelka of planar rational cuspidal curves i c proc london math soc piontkowski on the number of cusps of rational cuspidal plane curves experiment saito theory of logarithmic forms and logarithmic vector fac sci univ tokyo sect ia math no saito polynomials for projective hypersurfaces with weighted homogeneous isolated singularities sakai tono rational cuspidal curves of type d with one or two cusps osaka j math sernesi the local cohomology of the jacobian ring documenta mathematica alexandru dimca and gabriel sticlaru simis homology of homogeneous divisors israel j math van straten warmt gorenstein duality for almost complete an application to real singularities math phil walther the jacobian module the milnor and the generated by f s invent math d azur cnrs ljad and inria france address dimca faculty of mathematics and informatics ovidius university bd mamaia constanta romania address gabrielsticlaru
| 0 |
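The row above defines mdr(f) as the smallest integer q such that AR(f)_q is nonzero, where AR(f)_q consists of the triples (a, b, c) of degree-q forms with a*f_x + b*f_y + c*f_z = 0, and remarks that a computer algebra system (the authors point to Singular code on their website) can compute it. The sketch below is not that code; it is a brute-force SymPy version of mine that solves the linear system on the coefficients of (a, b, c) degree by degree, so it is only usable for small degrees d.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def monomials(q):
    """All monomials of total degree q in x, y, z."""
    return [x**i * y**j * z**(q - i - j) for i in range(q + 1) for j in range(q + 1 - i)]

def ar_dim(f, q):
    """dim AR(f)_q: dimension of the space of degree-q relations a*fx + b*fy + c*fz = 0."""
    fx, fy, fz = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)
    d = sp.Poly(f, x, y, z).total_degree()
    basis = monomials(q)
    coeffs = sp.symbols('c0:%d' % (3 * len(basis)))
    a = sum(c * m for c, m in zip(coeffs[0:len(basis)], basis))
    b = sum(c * m for c, m in zip(coeffs[len(basis):2 * len(basis)], basis))
    cc = sum(c * m for c, m in zip(coeffs[2 * len(basis):], basis))
    rel = sp.Poly(sp.expand(a * fx + b * fy + cc * fz), x, y, z)
    eqs = [rel.coeff_monomial(m) for m in monomials(q + d - 1)]
    A, _ = sp.linear_eq_to_matrix(eqs, list(coeffs))
    return len(coeffs) - A.rank()

def mdr(f):
    """Smallest q with AR(f)_q != 0; always at most d - 1 because of the Koszul relations."""
    d = sp.Poly(f, x, y, z).total_degree()
    for q in range(d):
        if ar_dim(f, q) > 0:
            return q

# Example: the cuspidal cubic y^2*z - x^3, a rational cuspidal (nearly free) curve.
f = y**2 * z - x**3
print(mdr(f))  # prints 1, realized by the relation (a, b, c) = (0, y, -2z)
```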
proper affine actions a sufficient criterion dec ilia smilga december for a semisimple real lie group g with an irreducible representation on a real vector space v we give a sufficient criterion on for existence of a group of affine transformations of v whose linear part is in g and that is free nonabelian and acts properly discontinuously on v this new criterion is more general than the one given in the author s previous paper proper affine actions in representations submitted available at insofar as it also deals with swinging representations we conjecture that it is actually a necessary and sufficient criterion applicable to all representations introduction background and motivation the present paper is part of a larger effort to understand discrete groups of affine transformations subgroups of the affine group gln r rn acting properly discontinuously on the affine space rn the case where consists of isometries in other words on r rn is a classical theorem by bieberbach says that such a group always has an abelian subgroup of finite index we say that a group g acts properly discontinuously on a topological space x if for every compact k x the set g g gk k is finite we define a crystallographic group to be a discrete group gln r rn acting properly discontinuously and such that the quotient space rn is compact in auslander conjectured that any crystallographic group is virtually solvable that is contains a solvable subgroup of finite index later milnor asked whether this statement is actually true for any affine group acting properly discontinuously the answer turned out to be negative margulis gave a nonabelian free group of affine transformations with linear part in so acting properly discontinuously on on the other hand fried and goldman proved the auslander conjecture in dimension the cases n and are easy recently abels margulis and soifer ams proved it in dimension n see for a survey of already known results margulis s breakthrough was soon followed by the construction of other counterexamples to milnor s conjecture most of these counterexamples have been free groups only recently danciger and kassel dgk found examples of affine groups acting properly discontinuously that are neither virtually solvable nor virtually free the author focuses on the case of free groups asking the following question consider a semisimple real lie group g for every representation of g on a real vector space v we may consider the affine group g v does that affine group contain a nonabelian free subgroup with linear part in g and acting properly discontinuously on v more precisely for which values of g and v is the answer positive here is a summary of previous work on this question margulis s original work gave a positive answer for g acting on v abels margulis and soifer generalized this giving a positive answer for g acting on v for every integer n they showed later that for all other values of p and q the answer for g p q acting on v is negative the author of this paper gave a positive answer for any noncompact semisimple real lie group g acting by the adjoint representation on its lie algebra recently he gave smi a simple algebraic criterion on g and v guaranteeing that the answer is positive however this criterion included an additional assumption about v namely that the representation is see definition which is not in fact necessary this paper gives a better sufficient condition on g and v for the answer to be positive this condition works for all representations and swinging the author conjectures 
that the new condition is in fact necessary thus giving a complete classification of such counterexamples in order to state this condition we need to introduce a few classical notations basic notations for the remainder of the paper we fix a semisimple real lie group g let g be its lie algebra let us introduce a few classical objects related to g and g defined for instance in knapp s book though our terminology and notation differ slightly from his we choose in g a cartan involution then we have the corresponding cartan decomposition g k q where we call k the space of fixed points of and q the space of fixed points of we call k the maximal compact subgroup with lie algebra a cartan subspace a compatible with that is a maximal abelian subalgebra of g among those contained in q we set a exp a a system of positive restricted roots in recall that a restricted root is a nonzero element such that the restricted root space y g a x y x y is nontrivial they form a root system a system of positive roots is a subset of contained in a and such that note that in contrast to the situation with ordinary roots the root system need not be reduced so in addition to the usual types it can also be of type bcn we call be the set of simple restricted roots in we call x a x the open dominant weyl chamber of a corresponding to and x a x the closed dominant weyl chamber then we call m the centralizer of a in k m its lie algebra l the centralizer of a in g l its lie algebra it is clear that l a m and well known see proposition that l m a resp the sum of the restricted root spaces of resp of and n exp and n exp the corresponding lie groups l and l the corresponding minimal parabolic subalgebras p ln and p ln the corresponding minimal parabolic subgroups w ng a a the restricted weyl group the longest element of the weyl group that is the unique element such that it is clearly an involution see examples and in the author s previous paper for working through those definitions in the cases g psln r and g n finally if is a representation of g on a real vector space v we call the restricted weight space in v corresponding to a form the space v v v a x v x v a restricted weight of the representation any form such that the corresponding weight space is nonzero remark the reader who is unfamiliar with the theory of noncompact semisimple real lie groups may focus on the case where g is split its cartan subspace a is actually a cartan subalgebra just a maximal abelian subalgebra without any additional hypotheses in that case the restricted roots are just roots the restricted weights are just weights and the restricted weyl group is just the usual weyl group also the algebra m vanishes and m is a discrete group the case where g is split does not actually require the full strength of this paper in particular because see section then reduce to ordinary translations statement of main result let be any representation of g on a real vector space v without loss of generality we may assume that g is connected and acts faithfully we may then identify the abstract group g with the linear group g gl v let vaff be the affine space corresponding to v the group of affine transformations of vaff whose linear part lies in g may then be written g v where v stands for the group of translations here is the main result of this paper main theorem suppose that satisfies the following conditions i there exists a vector v v such that a l l v v and b v v where is any representative in g of ng a a then there exists a subgroup in the affine group g v 
whose linear part is zariskidense in g and that is free nonabelian and acts properly discontinuously on the affine space corresponding to v note that the choice of the representative in i b does not matter precisely because by i a the vector v is fixed by l zg a remark it is sufficient to prove the theorem in the case where is irreducible indeed we may decompose into a direct sum of irreducible representations and then observe that if some representation has a vector vk that satisfies conditions a and b then at least one of the vectors vi must satisfy conditions a and b if v and a subgroup g acts properly on then its image i by the canonical inclusion i g g v still acts properly on v we shall start working with an arbitrary representation and gradually make stronger and stronger hypotheses on it introducing each one when we need it to make the construction work so that it is at least partially motivated here is the complete list of places where new assumptions on are introduced assumption which is a necessary condition for i a assumption which is i a assumption which is i here are a few examples items and show that all the examples do fall under the scope of this theorem item is the simplest example that our paper brings to light example for g the standard representation acting on v satisfies these conditions see remark and examples and in smi for details so theorem a from is a particular case of this theorem if the real semisimple lie group g is noncompact the adjoint representation satisfies these conditions see remark and examples and in smi for details so the main theorem of is a particular case of this theorem take g r acting on v s see example in smi for details the group is then split so that l a and the set of vectors is precisely the zero restricted weight space v the representation v has dimension the zero weight space has dimension it is spanned by the vector where is the canonical basis of a representative r of is given by clearly this element acts nontrivially on the vector remark when g is compact no representation can satisfy these conditions indeed in that case l is the whole group g and condition i a fails so for us only noncompact groups are interesting this is not surprising indeed any compact group acting on a vector space preserves a quadratic form and so falls under the scope of bieberbach s theorem strategy of the proof a central pillar of this paper consists in the following proposition template schema let g and h be two elements of the appropriate group such that both g and h are regular the pair g h has a geometry both g and h have sufficient contraction strength in that case i the product gh is still regular ii its attracting geometry is close to that of g iii its repelling geometry is close to that of h iv its contraction strength is close to the product of those of g and h v its asymptotic dynamics is close to the sum of those of g and we prove three different versions of this statement with some slight variations especially concerning asymptotic dynamics all of which involve a different set of definitions for the concepts in scare quotes see table the proximal version is proposition the linear version is proposition for the main part points i to iv in schema and proposition for the asymptotic dynamics the affine version is proposition ii and iii for the main part and proposition for the asymptotic dynamics for the last two versions the definitions in question also depend on a parameter fixed once and for all given in the second line of table to give a first 
intuition very roughly the geometry of an element has to do with its eigenvectors its contraction strength has to do with its singular values and its asymptotic dynamics has to do with the moduli of its the word asymptotic is explained by gelfand s formula here is the relationship between these three results the proximal version is used as a stepping stone to prove the other two versions this result is not new very similar lemmas were proved in and and this exact statement was proved in the author s previous papers smi we briefly recall it in section the linear version is also used as a stepping stone to prove the affine version but not in a completely straightforward way in fact the main part of the linear version after being reformulated as proposition i is only involved in the proof of the additivity of the margulis invariant the affine version for the asymptotic dynamics on the other hand the linear version for the asymptotic dynamics is already necessary to prove the main part of the affine version it is a slight generalization of a result by benoist we cover it in section in the affine case this is not exactly true the translation part is also involved group where g lives parameter regularity attracting geometry proximal case linear case affine case gl e for some real vector space e g none some subset of simple restr roots sec some extreme generically symmetric sec proximality def def def egs p e ygx ag essentiallya g v def def def egu p e ygx ag essentiallya g v def def def contr strength g def sx g def g def repelling possiblyb asymp dynamics a b c the log of the spectral radius r g def roughlyc the jordan projection jd g def roughlyc the jordan projection jd g and the margulis invariant m g def see the last point of remark in the proximal case there are at least two reasonable definitions of asymptotic dynamics another one would be the logarithm of the spectral gap defined in the same place technically the statement of schema only holds for a further projection of jd g given by the subset of coordinates jd g for i for the remaining coordinates we only get an inequality however proposition allows us to circumvent this problem also if which is often the case see example and remark ii this issue does not arise at all table possible meanings of notions appearing in schema with references to the corresponding definitions the affine version is the key result of this paper just defining the affine concepts keeps us busy for a long time sections and proving the results also takes a fair amount of work sections and once we have it it takes only five pages to prove our main theorem the proof has a lot in common with the author s previous papers smi and ultimately builds upon an idea that was introduced in margulis s seminal paper let us now present a few highlights of what distinguishes this paper from the previous works the main difference lies in the treatment of the dynamical spaces as long as we only worked in representations see definition we could associate to every element of g v that is regular in the appropriate sense a decomposition of v into three dynamical subspaces v vg vg all stable by g and on which g acts with eigenvalues respectively of modulus of modulus and of modulus in the general case this is no longer possible we need to enlarge the neutral subspace to an approximately neutral subspace the eigenvalues of g on that subspace are in some rather weak sense not too far from but will still grow exponentially in the group we will eventually construct the decomposition now 
becomes v this forces us to completely change our point of view we now define these approximate dynamical spaces in a purely algebraic way by focusing on the dynamics that g would have on them if it were conjugate to the exponential of some reference element of the weyl chamber for this reason we actually call them the ideal dynamical spaces see definition only by imposing an additional condition on g asymptotic contraction see definition can we ensure that the ideal dynamical spaces become indeed the approximate dynamical spaces the linear version of schema never explicitly appeared in the author s previous papers so far we have been able to simply present the linear theory as a particular case of the affine theory even proposition in smi which seems almost identical to proposition in the current paper relies in fact on the affine versions of the properties however this becomes untenable in the case of swinging representations as the relationship between affine contraction strength and linear contraction strength becomes less straightforward see section this led us to develop the linear theory on its own we think that propositions and might have some interest independently of the remainder of the paper we must however point out that these results are not completely new the particular case where is due to benoist the case where is arbitrary is an easy generalization which also relies on the tools developed by benoist and is to experts in the field it might seem at first sight that the general case is actually mentioned explicitly in but this is not so see remark in smi the central argument of the paper namely the proof of proposition has been completely overhauled though still highly technical it is now much cleaner it is more symmetric and the organization of the proof better reflects the separation of the ideas it involves even if we forget the fact that it works in a more general setting it is definitely an improvement compared to the proof given in smi plan of the paper in section we give a few algebraic results and introduce some notations related to metric properties and estimates most of these results are in section we recall the definitions of the proximal versions of the key properties and the proximal version of schema this is also in section we define the linear versions of the key properties then prove the linear version of schema this section clarifies and expands the results mentioned in section of smi ultimately most of the definitions and ideas here are due to benoist in section we choose an element that will be used to define the affine versions of the key properties this generalizes section in smi in section we present some preliminary constructions involving and introduce some elementary formalism that expresses affine spaces in terms of vector spaces part of the material is borrowed from sections and in smi in section we define the affine versions of the key properties this generalizes sections in smi but introduces several new ideas in section we prove the main part of the affine version of schema this generalizes section in smi section contains the key part of the proof we prove the asymptotic dynamics part of the affine version of schema in other terms approximate additivity of the margulis invariants this generalizes section in smi and section in the proof is based on the same idea but has been considerably rewritten the very short section uses induction to extend the results of the two previous sections to products of an arbitrary number of elements it is a 
straightforward generalization of section in smi and section in section contains the proof of the main theorem it is an almost straightforward generalization of section in smi and section in acknowledgments i am very grateful to my advisor yves benoist who introduced me to this exciting and fruitful subject and gave invaluable help and guidance in my initial work on this project i would also like to thank bruno le floch for some interesting discussions which in particular helped me gain more insight about weights of representations preliminaries in subsection for any element g g we give a formula for the eigenvalues and singular values of the linear maps g where is an arbitrary representation of this is nothing more than a reminder of subsection in smi in subsection we present some properties of restricted weights of a real finitedimensional representation of a real semisimple lie group this is mostly a reminder of subsection in smi in subsection we give some basic results from the theory of parabolic subgroups and subalgebras not necessarily minimal this is a development on subsection in smi in subsection we give some notation conventions related to metric properties and estimates mostly borrowed from the author s earlier papers eigenvalues in different representations in this subsection we express the eigenvalues and singular values of a given element g g acting in a given representation exclusively in terms of the structure of g in the abstract group g respectively its jordan decomposition and its cartan decomposition this is just a reminder of subsection in smi the result itself was certainly even before that proposition jordan decomposition let g there exists a unique decomposition of g as a product g gh ge gu where gh is conjugate in g to an element of a hyperbolic ge is conjugate in g to an element of k elliptic gu is conjugate in g to an element of n unipotent these three maps commute with each other proof this is and given for example in theorem note however that the latter theorem uses definitions of a hyperbolic elliptic or unipotent element applied to the case of the adjoint representation to state the theorem with the definitions that we used we need to apply proposition and theorem from the same book proposition cartan decomposition let g then there exists a decomposition of g as a product g with k and a exp x with x moreover the element x is uniquely determined by proof this is a classical result see theorem in definition for every element g g we define the jordan projection of g sometimes also known as the lyapunov projection written jd g to be the unique element of the closed dominant weyl chamber such that the hyperbolic part gh from the jordan decomposition g gh ge gu given above is conjugate to exp jd g the cartan projection of g written ct g to be the element x from the cartan decomposition given above to talk about singular values we need to introduce a euclidean structure we are going to use a special one lemma let be some real representation of g on some space there exists a quadratic form on such that all the restricted weight spaces are pairwise we want to reserve the plain notation for the default representation to be fixed once and for all at the beginning of section we use the notation so as to encompass both this representation and the representations defined in proposition proof see lemma in bq for a more succinct proof or lemma in smi for a more detailed proof example if ad is the adjoint representation then is the form given by y g x y x see in where b is 
the killing form and is the cartan involution recall that the singular values of a map g in a euclidean space are defined as the square roots of the eigenvalues of g where is the adjoint map the largest and smallest singular values of g then give respectively the operator norm of g and the reciprocal of the operator norm of proposition let g gl be any representation of g on some vector space let be the list of all the restricted weights of repeated according to their multiplicities let g g then i the list of the moduli of the eigenvalues of g is given by i jd g ii the list of the singular values of g with respect to a euclidean norm on that makes the restricted weight spaces of pairwise orthogonal such a norm exists by lemma is given by i ct g proof this is also completely straightforward see proposition in smi properties of restricted weights in this subsection we introduce a few properties of restricted weights of real finitedimensional representations proposition is actually a general result about coxeter groups the corresponding theory for ordinary weights is see for example chapter v in this is mostly a reminder of subsection from smi the only addition is lemma let be an enumeration of the set of simple restricted roots generating for every i we set if is a restricted root otherwise for every index i such that i r we define the fundamental restricted weight by the relationship i for every j such that j by abuse of notation we will often allow ourselves to write things such as for all i in some subset satisfies tacitly identifying the set with the set of indices of the simple restricted roots that are inside in the following proposition for any subset we denote by the weyl subgroup of type by the fundamental domain for the action of on a x a x which is a kind of prism whose base is the dominant weyl chamber of proposition take any and let us fix x let y a then the following two conditions are equivalent i the vector y is in and satisfies the system of linear inequalities y x y x ii the vector y is in and also in the convex hull of the orbit of x by proof for the particular case this is well known see proposition the general case can easily be reduced to this particular case see smi proposition proposition every restricted weight of every representation of g is a linear combination of fundamental restricted weights with integer coefficients proof this is a particular case of proposition in for a correction concerning the proof see also remark in proposition if is an irreducible representation of g there is a unique restricted weight of called its highest restricted weight such that no element of the form with i r is a restricted weight of remark in contrast to the situation with weights the highest restricted weight is not always of multiplicity nor is a representation uniquely determined by its highest restricted weight proof the corresponding result for ordinary weights is see theorem d for result for restricted weights can easily be deduced from the former see proposition in smi proposition let be an irreducible representation of g let be its highest restricted weight let be the restricted root lattice shifted by cr cr z then the set of restricted weights of is exactly the intersection of the lattice with the convex hull of the orbit w w w of by the restricted weyl group proof once again this follows from the corresponding result for non restricted weights see theorem by passing to the restriction in the case of restricted weights one of the inclusions is stated in proposition 
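The subsection above on eigenvalues in different representations expresses the moduli of the eigenvalues of rho(g) through the Jordan projection jd(g), and the singular values (in a norm making the restricted weight spaces orthogonal) through the Cartan projection ct(g). For the concrete case G = SL(n, R) with its standard representation and the usual Cartan involution, so that a is the diagonal traceless matrices, this reduces to reading jd(g) off the sorted logarithms of the eigenvalue moduli and ct(g) off the sorted logarithms of the singular values. The NumPy sketch below illustrates only that special case (my example, not the paper's; a general semisimple G would need an actual KAK decomposition), together with a numerical check of the Gelfand-type limit ct(g^k)/k -> jd(g) alluded to earlier in this row.

```python
import numpy as np

def jordan_projection(g: np.ndarray) -> np.ndarray:
    """jd(g) for g in SL(n, R): logarithms of the eigenvalue moduli, sorted decreasingly."""
    return np.sort(np.log(np.abs(np.linalg.eigvals(g))))[::-1]

def cartan_projection(g: np.ndarray) -> np.ndarray:
    """ct(g) for g in SL(n, R): logarithms of the singular values (already in decreasing order)."""
    return np.log(np.linalg.svd(g, compute_uv=False))

# Illustrative element of SL(3, R): rescale a random matrix to determinant 1.
rng = np.random.default_rng(1)
m = rng.normal(size=(3, 3))
g = m / np.cbrt(np.linalg.det(m))

print("jd(g) ~", jordan_projection(g))
print("ct(g) ~", cartan_projection(g))

# Gelfand-type check: ct(g^k)/k approaches jd(g) as k grows.
k = 40
print("ct(g^k)/k ~", cartan_projection(np.linalg.matrix_power(g, k)) / k)
```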
proposition for every index i such that i r there exists an irreducible representation of g on a space vi whose highest restricted weight is equal to ni for some positive integer ni and has multiplicity proof this follows from the general theorem in this is also stated as lemma in example if g sln r we may take vi rn so that is the exterior power of the standard representation of g on rn more generally if g is split then all restricted weight spaces correspond to ordinary weight spaces hence have dimension so we may simply take every ni to be so that the s are precisely the fundamental representations lemma fix an index i such that i then all restricted weights of other than ni have the form r x cj ni with cj for every j proof this is the last remark in section of for a proof see lemma in smi lemma assume that the lie algebra g is simple and that its restricted root system has a diagram let be an irreducible representation of g let be the set of its restricted weights assume that we have the representation has as a restricted weight and at least one nonzero restricted weight then we have note that the only case where is the only restricted weight is when the image of g by is a compact group for simple g this means that either g itself is compact or is trivial proof first let us show that contains at least one restricted root by proposition every restricted weight of may be written as a sum l where l are some restricted roots define the level of to be the smallest integer l for which such a decomposition is possible let be an element of whose level l is the smallest possible it exists since is not reduced to and consider its decomposition then for every i l we have i l i indeed otherwise i l would be a restricted root so we could combine them together to produce a decomposition of length l hence we have l x i l i l l i l i l l i l l i l l i which implies that l conv l conv w from proposition it follows that l is still a restricted weight of but its level is now l since l is by assumption the smallest nonzero level necessarily we have l and is indeed a restricted root now it is see problem in that for a simple lie algebra w acts transitively on the set of restricted roots having the same length as since the restricted root system of g has a diagram all of the restricted roots have the same length hence the orbit of by w is the whole set we conclude that parabolic subgroups and subalgebras in this subsection we recall the theory of parabolic subalgebras and subgroups we begin by defining them as well as the levi subalgebra and subgroup of a given type and the corresponding subset of and subgroup of w so far we follow subsection in smi but now we go further giving a few propositions relating these different objects in particular the generalized bruhat decomposition lemma a parabolic subgroup or subalgebra is usually defined in terms of a subset of the set of simple restricted roots we find it more convenient however to use a slightly different language to every such subset corresponds a facet of the weyl chamber given by intersecting the walls corresponding to elements of we may exemplify this facet by picking some element x in it that does not belong to any subfacet conversely for every x we define the corresponding subset x the parabolic subalgebras and subgroups of type can then be very conveniently rewritten in terms of x as follows definition for every x we define x and px the parabolic subalgebras of type x and lx the levi subalgebra of type x m x l x x l m x lx l m x and the 
corresponding parabolic subgroups and lx the levi subgroup of type x ng x ng x lx the following statement is proposition we have lx zg x proof first note that by combining propositions b e and a in we get that lx zg ax where ax y a y is the intersection of all walls of the weyl chamber containing x it remains to show that zg ax zg x clearly the lie algebra of both groups is equal to lx hence their identity components zg ax e and zg x e are also equal but by combining propositions and in it follows that zg x m zg x e and similarly for zg ax the conclusion follows an object closely related to these parabolic subgroups see corollary the bruhat decomposition for parabolic subgroups is the stabilizer of x in the weyl group definition for any x we set wx w w wx x remark the group wx is also closely related to the set indeed it follows immediately that a simple restricted root belongs to if and only if the corresponding reflection belongs to wx conversely it is chevalley s lemma see proposition that these reflections actually generate the group wx thus wx is actually the same thing as as defined before proposition where we substitute example to help understand the conventions we are taking here are the extreme cases if x lies in the open weyl chamber then p is the minimal parabolic subgroup p lx l wx id if x then lx g wx w the following result shows why wx is important definition let be a set of elements of we say that is a of if a of if example in the set of restricted roots the positive restricted roots form a and the negative restricted roots form a lemma generalized bruhat decomposition let be some real representation of g on some space let be the set of restricted weights of for every subset we set m then we have stabg p stabw p stabg p stabw p if is a of if is a of proof assume that is a the case of a is analogous the first step is to show that stabg contains p indeed the group l stabilizes every restricted weight space indeed take some v l l x a then we have x l v l ad x v l x v x l v for every and every clearly we have by definition of a hence stabilizes the statement follows as p l exp now take any element g let us apply the bruhat decomposition see theorem there exists an element w of the restricted weyl group w such that we may write g where are some elements of the minimal parabolic subgroup p and ng a is some representative of w w ng a a from the statement that we just proved it follows that g stabilizes iff does so on the other hand it is clear that for every we have v the choice of the representative does not matter since as seen above the kernel zg a l stabilizes v the conclusion follows the following particular case is see for example theorem corollary bruhat decomposition for parabolic groups we have the identities p wx p and p wx p proof take to be the adjoint representation then g and take x this is a it is easy to show that we have stabw wx and by tion x applying the lemma the first identity follows applying the lemma to the subset defined analogously the second identity follows metric properties and estimates in this subsection we mostly introduce some notational conventions they were already introduced in the beginning of subsection in smi and the beginning of subsection in for any linear map g acting on a euclidean space e we write kgk kg x k kxk its operator norm consider a euclidean space we introduce on the projective space p e a metric by setting for every x y p e x y arccos kxkkyk where x and y are any vectors representing respectively x and y obviously the value does 
not depend on the choice of x and y this measures the angle between the lines x and y for shortness sake we will usually simply write x y with x and y some actual vectors in e for any vector subspace f e and any radius we shall denote the of f in p e by bp f x p e x p f you may think of it as a kind of conical neighborhood consider a metric space m let x and y be two subsets of we shall denote the ordinary minimum distance between x and y by x y inf inf x y as opposed to the hausdorff distance which we shall denote by haus x y max sup x y sup y x finally we introduce the following notation let x and y be two positive quantities and pk some parameters whenever we write x pk y we mean that there is a constant k depending on nothing but pk such that x ky if we do not write any subscripts this means of course that k is an absolute constant or at least that it does not depend on any local parameters we consider the global parameters such as the choice of g and of the euclidean norms to be fixed once and for all whenever we write x pk y we mean that x pk y and y pk x at the same time the following result will often be useful lemma let c then any map gl e such that k c induces a c continuous map on p e proof see lemma proximal maps in this section we give the definitions of the proximal versions of the concepts from table and state the proximal version of schema namely proposition it contains no new results let e be a euclidean space definition proximal version of regularity let gl e let be its eigenvalues repeated according to multiplicity and ordered by nonincreasing modulus we define the spectral radius of as r we do not use as it could be confused with the representation we say that is proximal if we have r if is the only eigenvalue with modulus r and has multiplicity equivalently is greater than is proximal if and only if its spectral gap definition proximal version of geometry for every proximal map we may then decompose e into a direct sum of a line called the attracting space of and a hyperplane called the repelling space of both stable by and such that id for every eigenvalue of definition proximal version of consider a line e s and a hyperplane e u of e transverse to each other an optimal canonizing map for the pair e s e u is a map gl e satisfying e s e u and minimizing the quantity max k we define an optimal canonizing map for a proximal map gl e to be an optimal canonizing map for the pair let c we say that the pair formed by a line and a hyperplane e s e u resp that a proximal map is if it has an optimal canonizing map such that this is equivalent to the angle between e s and e u being bounded below by a constant that depends only on now take two proximal maps in gl e we say that the pair is if every one of the four possible pairs is definition proximal version of contraction strength let gl e be a proximal map we define the proximal contraction strength of by r where r is the spectral radius of equal to in the notations of the previous definition we say that is if proposition for every c there is a positive constant c with the following property take a pair of proximal maps in gl e and suppose that both and are c then is proximal and we have i ii iii r the constant c is indexed by the number of the proposition a scheme that we will stick to throughout the paper similar results have appeared in the literature for a long time see lemma in proposition in or lemma in proof see proposition in for the proof of i and ii and proposition in smi for the proof of iii remark if we wanted to 
literally follow schema taking asymptotic dynamics to mean the logarithm of the spectral radius we would need to add a point i to replace iii by iii r r r however the estimate i will not be used in the sequel it is nevertheless true it follows by considering the action of gl e on the dual space e and applying i the estimate iii is on the contrary not strong enough for the applications we need it is also true it follows by plugging the identity r valid for proximal c obtained from iii by setting into iii itself linear maps in this section we define the linear versions of the properties from table and state the linear version of schema most of the basic ideas and several definitions come from however we use a slightly different point of view benoist relies most of the time only on the proximal versions of the properties from table using the representations as a proxy we on the other hand clearly separate the linear versions from the proximal versions and establish the correspondences between linear and proximal versions as theorems in subsection we give the definitions of these linear properties all of them are parametrized by some vector x or equivalently by some subset see the discussion in the beginning of subsection when we will apply the linear case to the affine case this vector will be set to the vector chosen in section in subsection we examine what happens to these properties when we replace g by its inverse in subsection we relate the linear properties with the proximal properties and then prove propositions and which together comprise the linear version of schema definitions let us fix some x definition linear version of regularity we say that an element g g is if every root which does not vanish on x does not vanish on jd g either jd g benoist calls such elements elements of type where his is our set see definition in example if x then g g is if and only if jd g if x then the condition is vacuous all elements of g are elements g g such that jd g are often called or loxodromic so informally being should be understood as being partially in fact technically we should probably say instead of definition linear version of geometry let g g be an element let g gh ge gu be its jordan decomposition and let be any element of g realizing the conjugacy exp jd g called a canonizing map for g then we define the attracting of g denoted by ygx to be the class of in the flag variety ygx the repelling of g denoted by ygx to be the class of in the flag variety ygx the of g to be the data of its attracting and repelling the x x pair yg yg benoist defines the attracting and repelling flags in the last sentence of in remark depending on context sometimes it is the map itself that is more relevant to consider and sometimes it is its inverse indeed while is the map that brings g to the canonical position its inverse is the map that defines the geometry of g starting from the canonical position this is why the formulas above involve we need to check that those definitions do not depend on the choice of indeed is unique up to multiplication on the right by an element of the centralizer of jd g by proposition the latter is equal to ljd g since g is it is contained in lx which in turn is contained both in and in definition we say that a pair y y is transverse if the intersection of y and y seen as cosets in g is nonempty if there exists an element g such that y y in particular the pair of flags giving the geometry of any element is transverse compare this with the definition given in in l proposition the map 
x x x x gives a canonical diffeomorphism between and the subset of formed by transverse pairs from now on we shall tacitly identify with the set of transverse pairs which is also known as the open in proof the group g acts smoothly on the manifold the orbit of the point is precisely the set of transverse pairs and its stabilizer is lx by l we mean the third line from the end our map corresponds to benoist s injection introduced near the end of in definition on every flag variety and on every flag variety we now fix once and for all a distance coming from some riemannian metric all these distances shall be denoted by remark note that every flag variety and which is isomorphic is compact indeed by the iwasawa decomposition see theorem the maximal compact subgroup k acts transitively on it this means that any two riemannian metrics on a given flag variety are always it turns out that we will only be interested in properties that are true up to a multiplicative constant so the choice of a riemannian metric does not influence anything in the sequel we now introduce the notion of which is basically a quantitative measure of transversality every transverse pair of flags is for some constant c but the smaller the constant gets the more strongly the flags are transverse in this notion appears bundled together with contraction strength in the concept of for a single element or for a pair or more generally a family of elements definition we fix once and for all a continuous proper map g a typical example of such a map is given by g max g k g k where can be any faithful representation of g and k k can be any euclidean norm on the representation space the specific choice of is not really important see the remark below in practice we will indeed find it convenient to use a very specific map of this form see the important property is that then the family of the preimages c indexed by c is a nested family of compact sets whose union exhausts the set definition linear version of note that the last statement also holds for the projections of these preimages onto we may call these projections c where we set min to justify that this is notice that for every g the intersection of the coset with is compact and nonempty so the continuous map reaches a minimum on it also the map is still proper we say that an element g is an optimal representative of the coset if reaches its minimum at if we have we say that a transverse pair y y is if y y c or in other terms c y y where we identify y y with a coset of by the map defined above we say that an element g g is if its is if ygx ygx c we say that a pair of elements of g is if we have x c ygx y gj i for all four possible pairs i j remark let us now explain why the choice of the function is not really important indeed suppose we replace the function by another function having the same property then we will simply need to replace every constant c by some c that depends only on example take g n this group has real rank so the closed weyl chamber is a with only two facets and its interior taking x makes everything trivial so assume that x let us identify g with the group of isometries of the hyperbolic space hn in this case an element g g is then if and only if it is loxodromic fixes exactly two points of the ideal boundary hn the flag variety canonically identifies with this ideal boundary hn and so does the opposite flag variety the attracting flag ygx resp repelling flag ygx of an element g corresponds to the attracting resp repelling fixed point at infinity of the 
loxodromic isometry two flags y y are transverse if and only if the corresponding points of hn are distinct one possible choice of the function is as follows choose any reference point hn and let and be starting at and reaching the ideal boundary at points corresponding to y and y respectively then we may let y y be the reciprocal of the angle between and in that case a pair y y is if and only if the corresponding points of the ideal boundary are separated by an angle of at least when looking from we finish this subsection by introducing the following notion definition linear version of contraction strength we define the linear xcontraction strength of an element g g to be the quantity g exp min ct g it measures how far the cartan projection of g is from the walls of the weyl chamber except those containing x impact of the group inverse in this section we examine what happens to the properties we just introduced when we pass from an element g g to its inverse g though slightly technical the proof is straightforward we start by observing that for every g g we have jd jd g ct g ct g and the map is benoist s involution opposition compare these formulas with section l the first identity immediately follows from the definitions of the jordan projection and of the second identity also follows from the definitions using the fact that every element of the restricted weyl group w in particular has a representative in the maximal compact subgroup k see formulas and proposition i an element g g is if and only if its inverse g is x ii for every element g g we have x x x yg where x are the diffeomorphisms given by x p x x iii if a pair y y is then the pair x y y x x is c for some constant c that depends only on iv for every element g g we have g x g remark starting from section we will only consider situations where x will be symmetric x x which simplifies the above formulas proof i this is an immediate consequence of ii to show that the map x is note that we have px x this easily follows from the definitions it is obviously smooth hence the map x is also smooth and is clearly equal to the inverse of to show the desired identity note that by if is a canonizing map for g then is a canonizing map for g pay attention to the convention of versus iii the map descends to a diffeomorphism x that makes the diagram x x x x x x commutative here is the embedding from proposition and denotes the map a b b a the double arrow is meant to suggest this graphically so clearly the map x preserves transversality l c to some of pairs now for every c the map maps the preimage compact subset of x which is in particular contained in the preimage c for some c x iv this is an immediate consequence of remark in point i since the choice of the metrics on the flag varieties was arbitrary we lose no generality in assuming that the diffeomorphisms x defined above are actually isometries in point iii if is chosen in a sufficiently natural way for example as defined by we may actually let c products of maps in this subsection we start by proving a few results that link linear properties to proximal properties via the representations proposition i for regularity lemma for geometry proposition for proposition ii for contraction strength their proofs are adapted from section in smi we then prove the linear version of schema it consists of two parts proposition is the main part this result did not appear in the previous paper proposition is the asymptotic dynamics part it gives the same conclusion as proposition in smi but uses the 
linear versions of the properties in its hypotheses which are the most natural here rather than the affine versions proposition i an element g g is if and only if for every i the map g is proximal ii for every c there is a constant c with the following property let g g be a map such that g c then for every i we have g g these two statements essentially correspond to the respective left halves of and in smi the proof is also essentially the same part i is given in definition in remark note that since all euclidean norms on a vector space are equivalent this estimate makes sense even though we did not specify any norm on vi in the course of the proof we shall choose one that is convenient for us recall that i is a notation shortcut for i such that should be thought of as a kind of exceptional set in practice it will often be empty see remark below proof of proposition i by proposition i the list of the moduli of the eigenvalues of g is precisely j jd g where di is the dimension of vi and is the list of restricted weights of listed with multiplicity up to reordering that list we may suppose that ni is the highest restricted weight of we may also suppose that ni indeed we have ni ni ni i i i ni ni recall that is equal to if is a restricted root and to otherwise but by proposition ni is a restricted weight of because it is the image of a restricted weight of by an element of the weyl group and then ni is also a restricted weight of as a convex combination of two restricted weights of that belongs to the restricted root lattice shifted by ni take any j since by hypothesis the restricted weight ni has multiplicity we have by lemma it follows that this restricted weight has the form ni r x with for every index finally since by definition jd g for every index we have jd g it follows that for every j we have jd g jd g jd g in other words among the moduli of the eigenvalues of g the largest is exp jd g exp ni jd g and the second largest is exp jd g exp ni jd g jd g it follows that the spectral gap of g is equal to exp jd g exp jd g exp jd g the conclusion follows immediately ii let c be a constant small enough to satisfy all the constraints that will appear in the course of the proof let us fix i and let g g be a map satisfying the hypotheses clearly it is enough to show that we have g exp ct g indeed by definition the side is smaller or equal than g we start with the following observation for every c the continuous map max is bounded above on the compact set c by some constant that depends only on c and on the choice of a norm on vi to be made soon let be an optimal representative of the coset in giving the geometry of g and let then we get g now let us choose on the space vi where the representation acts a euclidean form bi such that all the restricted weight spaces for are pairwise bi this is possible by lemma applied to then is simply the quotient of the two largest singular values of g by proposition ii giving the singular values of an element of g in a given representation and by a calculation analogous to the previous point we have exp ct g the desired estimate follows by combining with proposition let be a pair of elements of then for every i the pair is a c pair of proximal maps in gl vi where c is some constant that depends only on this is a straightforward generalization of proposition in smi its proof relies on the following two lemmas which are analogous to lemmas and in smi however the two lemmas that follow are now formulated more generally the stronger statements will be useful in 
order to prove proposition lemma we have i stabg vini ii stabg m proof for every i let be the set of restricted weights of the representation i we begin by noting that for every i we have stabg vini p stabw ni p p p this follows from lemma indeed the singleton ni is clearly a in corollary then gives us that stabg vini i now it is an easy exercise to show that for any x y z we have x py pz wx wy wz now obviously and if we remove from all elements that are not in we are left with precisely thus i and the conclusion follows ii note that for every i the complement ni is a of and has the same stabilizer in w as ni we may then follow the same line of reasoning in the following lemma we identify the projective space p vi with the set of vector lines in vi and the projective space p with the set of vector hyperplanes of vi lemma there exist two smooth embeddings y p vi and y p with the following properties i for every map g g we have ygx g x yg g ii if y y is a transverse pair then for every i we have y i y i our map is the map defined in the beginning of in statement ii appears a little further in the same text together with its converse which is also true proof we define the maps and in the following way for every g we set vini vi these maps are and injective by lemma and obviously continuous they are by construction hence to prove that they are smooth embeddings it is sufficient to prove that both of them have injective differential at the identity coset this also follows from lemma by differentiating it to show property i we essentially use the identities exp jd g vini l exp jd g which follow from the inequality ranking the values of different restricted weights of evaluated at jd g and the simple observation that any eigenspace of is the image by of the eigenspace of g with the same eigenvalue property ii is obvious from the definitions proof of proposition let c then the set of transverse pairs in is compact on the other hand the function y y max y i y i is continuous and by lemma ii takes positive values on that set hence it is bounded below so there is a constant c depending only on c such that whenever a transverse pair y y is all pairs y i y i are c the conclusion then follows by lemma i proposition for every c there is a positive constant c with the following property take any pair g h of maps suppose that we have g c and h c then gh is still and we have x ygh ygx g y x y x h c x gh h remark we did not include the conclusion about contraction strength namely that gh g h this statement is true as we shall see in a moment but will never be directly useful to us proof let us fix some constant c small enough to satisfy all the constraints that will appear in the course of the proof let g h be a pair of maps satisfying the hypotheses by proposition i for every i the maps g and h are proximal by proposition for every i the pair g h is c where c depends only on c if we choose c c it follows by proposition ii that for every i we have g g and h h if we choose c sufficiently small then all the maps g and h are c contracting sufficiently contracting to apply proposition thus we may apply proposition we obtain that for every i the map gh is proximal hence by proposition i the element gh is that moreover for every i we have gh g g g q now let us endow the product p vi with the product distance given by y y max yi then all the inequalities may be combined together to yield x ygx g ygh where is the map introduced in lemma since is a smooth embedding of the compact manifold it is in particular a 
bilipschitz map onto its image hence we also have x x g ygh yg this yields the first line of the conclusion to get the second line simply note that by proposition if we replace x by x the pair satisfies the same hypotheses as the pair g h applying what we did above to this pair we get x x y h y x h x gh by remark the conclusion follows proposition for every c there are positive constants c and c with the following property take any pair g h of elements of g such that g c and h c then we have i jd gh ct g ct h ii jd gh ct g ct h c proof the proof is completely analogous to the proof of proposition in smi see figure for a picture explaining both this proposition and the corollary below let us also give a more palatable though slightly weaker reformulation corollary for every c there exists a positive constant c with the following property for any pair g h satisfying the hypotheses of the proposition we have jd gh conv wx g h where conv denotes the convex hull and g h is some vector in a satisfying k g h ct g ct h k c proof the proof is the same as the proof of corollary in smi note that we do not require that the vector g h lie in the closed dominant weyl chamber even though in practice it is very close to the vector ct g ct h which does remark at first sight one might think that by putting together lemma and in we recover this result and even something stronger however this is not the case in fact we recover only the particular case when see remark in smi for an explanation remark though we shall not use it an interesting particular case is g we the obviously have jd gh jd g and ct g h ct g so the statement simply reduces to a relationship between the jordan and cartan projections on the proposition and its corollary also hold if we replace jd gh by ct gh this immediately implies compare this to remark which involved a similar construction in the proximal case wx id c x g h jd gh ct g ct h g h figure this picture represents the situation for g acting on and x chosen such that or with the usual abuse of notations this choice of x is not random it satisfies the conditions that will be required starting from section this also corresponds to example in smi the group wx is then generated by the single reflection proposition states that jd gh lies in the shaded trapezoid corollary states that it lies on the thick line segment in any case it lies by definition in the dominant open weyl chamber the shaded sector choice of a reference jordan projection for the remainder of the paper we fix an irreducible representation of g on a finitedimensional real vector space v for the moment may be any representation but in the course of the paper we shall gradually introduce several assumptions on namely assumptions and that will ensure that satisfies the hypotheses of the main theorem we call the set of restricted weights of for any x a we call x resp x the set of all restricted weights of that take a positive resp negative zero nonnegative nonpositive value on x x x x x x x x x x x the goal of this section is to study these sets and to choose a vector for which the corresponding sets have some nice properties this generalizes section in smi in fact these sets are the only property of that matters for us in other terms what we really care about is the class of with respect to the following equivalence relation definition we say that x and y have the same type if y and obviously this implies that the spaces and coincide as well for x and y this is an equivalence relation which partitions a into finitely many 
equivalence classes remark every such equivalence class is obviously a convex cone taken together the equivalence classes decompose a into a cell complex example if is the adjoint representation two dominant vectors x y have the same type if and only if there is only one generic type corresponding to see example in smi for more details and two other examples with pictures the motivation for the study of these five sets x is that they allow us to introduce some reference dynamical spaces see subsection in subsections and we define two properties that we want to satisfy subsection basically consists of examples and may be safely skipped generically symmetric vectors we start by defining the property of being generically symmetric which generalizes generic and symmetric vectors as defined in subsections and of smi one of the goals is to ensure that the set x is as small as possible here is the first attempt we say that an element x a is generic if x remark this is indeed the generic case it happens as soon as x avoids a finite collection of hyperplanes namely the kernels of all nonzero restricted weights of in fact a vector is generic if and only if its equivalence class is open if x is generic its equivalence class is just the connected component containing x in the set of generic vectors otherwise its equivalence class is always contained in some proper vector subspace of a in fact for generic x we actually have x provided the following condition is met assumption from now on we assume that is a restricted weight of or equivalently dim v remark by proposition this is the case if and only if the highest restricted weight of is a combination of restricted roots we lose no generality in assuming this because this assumption is necessary for condition i a of the main theorem which is also assumption see below to hold indeed any nonzero vector fixed by l is in particular fixed by a l which means that it belongs to the zero restricted weight space since we want to construct a group with certain properties it must in particular be stable by inverse the identity encourages us to examine the action of on a definition we say that an element x a is symmetric if it is invariant by x x ideally we would like our reference vector to be both symmetric and generic unfortunately this is not always possible indeed every restricted weight that happens to be invariant by necessarily vanishes on every symmetric vector let be the set of those restricted weights definition we say that an element x a is generically symmetric if it is symmetric and we have x in other terms an element is generically symmetric if it is as generic as possible while still being symmetric extreme vectors besides wx we are also interested in the group x w w wx has the same type as x which is the stabilizer of x up to type it obviously contains wx the goal of this subsection is to show that in every equivalence class we can actually choose x in such a way that both groups coincide this generalizes subsection in smi example in example in smi g acting on v the group x corresponding to any generic x is a group if we take x to be generic not only with respect to but also with respect to the adjoint representation in other terms if x is in an open weyl chamber then the group wx is trivial if however we take as x any element of the diagonal wall of the weyl chamber we have indeed wx x definition we call an element x extreme if wx x if it satisfies the following property w wx has the same type as x wx x remark here is an equivalent definition 
which is possibly more enlightening it is possible to show that a vector x is extreme if and only if it lies in every wall of the weyl chamber that contains at least one vector of the same type as x in other words a vector is extreme if it is in the furthest possible corner of its equivalence class in the weyl chamber as this last statement will never be used in this paper we have left out its proof proposition for every generically symmetric x there exists a generically symmetric x that has the same type as x and that is extreme this is a straightforward generalization of proposition in smi the proof is similar proof to construct an element that has the same type as x but has the whole group x as stabilizer we simply average over the action of this group we set x wx x as multiplication by positive scalars does not change anything we have written it as a sum rather than an average for ease of manipulation let us check that it has the required properties let us show that x is still symmetric since belongs to the weyl group it induces a permutation on hence we have x x so that swaps the sets x and now by definition we have x stabw x stabw hence normalizes x obviously the map x commutes with everything so also normalizes x we conclude that x x w x x x w x x by definition every wx for w x has the same type as x since the equivalence class of x is a convex cone their sum x also has the same type as x in particular we have x so x is still generically symmetric by construction whenever wx has the same type as x we have wx x conversely if w fixes x then wx has the same type as wx x which has the same type as x so x is extreme it remains to show that x that for every we have x if x x then obviously x otherwise since x is extreme it follows that x does not even have the same type as x this means that there exists a restricted weight of such that x and x and at least one of the two inequalities is strict in particular we have x since is by definition a multiple of it follows that x now the same reasoning applies to any vector of the same type as x hence never vanishes on the equivalence class of x since by hypothesis x and since the equivalence class is connected we conclude that x note that in practice the set for an extreme generically symmetric x can take only a very limited number of values see remark in smi simplifying assumptions in this subsection we discuss how the constructions of this paper may simplify in some particular cases these results will never be reused in the paper which is why we do not provide proofs however they can be helpful for a reader who is only interested in one particular representation which is likely to fit at least one of the cases outlined below definition we say that the representation is limited if every restricted weight is a multiple of a restricted root z n z abundant if every restricted root is a restricted weight awkward if it is neither limited nor abundant if example the adjoint representation is both limited and abundant and always the standard representation of so p p on is limited and if g is simple it seems that all but finitely many representations are abundant if additionally its restricted root system has a diagram then lemma says that all representations except the trivial one are abundant among simple groups it seems that awkward representations occur only when the restricted root system is of type cn or bcn and only for n at least equal to for groups this phenomenon is more common see example for specific examples swinging representations occur 
only when is a nontrivial automorphism of the dynkin diagram among simple groups this happens only if the restricted root system is of type an n or the bad news is that for these groups most representations all but finitely many are swinging the simplest example is r acting on s see example in smi thus no representation of a simple group is swinging and awkward at the same time however this may happen for a group simply take the tensor product of a swinging representation of and an awkward representation of all the subsequent constructions rely on the choice of a generically symmetric vector that is extreme actually only the type of matters here is what we can say about the choice of up to type in these particular cases remark i in a limited representation there is only one type of generically symmetric vector so we can ignore the dependence on ii in an abundant representation every generically symmetric vector lies in particular in in other terms we have and we do not need the theory of nonminimal parabolic subgroups as developed in section iii in both cases we get that the type of with respect to the adjoint representation the subset does not depend on the choice of in fact in every nonawkward representation for every generically symmetric and extreme we have the identity iv note that one of the inclusions namely is obvious and holds in every representation the other inclusion however may fail in awkward representations and the value of may then depend on the choice of see example v in a representation clearly the vector is generically symmetric if and only if it is generic and also symmetric constructions related to definition for the remainder of the paper we fix some vector in the closed dominant weyl chamber that is generically symmetric and extreme in this section we introduce some preliminary constructions associated to this vector in subsection we introduce the reference dynamical spaces associated to and find their stabilizers in this generalizes subsection in smi subsection consists entirely of definitions we introduce some elementary formalism that expresses affine spaces in terms of vector spaces and use it to define the affine reference dynamical spaces we basically repeat subsection from smi in subsection we try to understand what regularity means in the affine context we introduce two different notions of regularity and establish the relationships between them previously we used only one of the two notions see definition in smi subsection consists entirely of examples it contains counterexamples to help understand why some statements of the previous subsection can not be made stronger reference dynamical spaces definition we define the following subspaces of v l v the reference expanding space l l v the reference contracting space v the reference neutral space the reference noncontracting space the reference nonexpanding space l v l v in other terms is the direct sum of all restricted weight spaces corresponding to weights in and similarly for the other spaces these are precisely the dynamical spaces associated to the map exp acting on v by as defined in section of smi see example in smi for the case of the adjoint representation and of g p q acting on the standard representation remark note that by assumption zero is a restricted weight so the space is always nontrivial let us now determine the stabilizers in g of these subspaces proposition we have i stabg stabg ii stabg stabg this generalizes proposition in smi but now the proof is somewhat more involved proof by lemma 
and corollary it is enough to show that stabw stabw stabw stabw indeed since clearly and are and and are of now obviously any subset of and its complement always have the same stabilizer by w since is stable by w on the other hand we have since is extreme stabw stabw by definition so it is sufficient to show that and or equivalently and have the same stabilizer in w this is a consequence of lemma below lemma every element w of the restricted weyl group w such that stabilizes every set of restricted weights such that in particular all such sets have the same stabilizer in w indeed every w w that stabilizes and every w w that stabilizes satisfies in particular proof let us decompose g as a sum of three pieces g with respective cartan subspaces and such that every simple summand of has a restricted root system with a nonempty simplylaced diagram and acts nontrivially on v the restriction of to is id acts trivially on v then we also have the decompositions where is the restricted root system of gi where every is a system of positive restricted roots for gi w where wi is the restricted weyl group of gi we now prove the two contrasting statements and on the one hand we claim that for any w w such that we have w indeed let us decompose w with wi wi applying lemma to every simple summand of we find that since does not intersect it does not is generically symmetric the set even intersect as by definition does not leave any restricted root invariant since is dominant we deduce that it follows that w w w since w stabilizes by assumption thus w fixes which precisely means that id as required on the other hand we claim that the equality holds since is generically symmetric indeed take any and let us decompose it as with the component vanishes by definition of as for we have by definition of that so the component also vanishes combining with we conclude that whenever an element w w satisfies it actually fixes every element of now take such a w and take any element since we may distinguish two cases either then it follows from the previous statement that w or then it follows from the previous statement that w on the other hand we know that w w thus w extended affine space let vaff be an affine space whose underlying vector space is v definition extended affine space we choose once and for all a point of vaff which we take as an origin we call the vector space formally generated by this point and we set a v the extended affine space corresponding to v we hope that a the extended affine space and a the group corresponding to the cartan space occur in sufficiently different contexts that the reader will not confuse them then vaff is the affine hyperplane at height of this space and v is the corresponding vector hyperplane v v v vaff v v definition linear and affine group any affine map g with linear part g and translation vector v defined on vaff by g x g x v can be extended in a unique way to a linear map defined on a given by the matrix g v from now on we identify the abstract group g with the group g gl v and the corresponding affine group g v with a subgroup of gl a definition affine subspaces we define an extended affine subspace of a to be a vector subspace of a not contained in v there is a correspondence between extended affine subspaces of a and affine subspaces of vaff of dimension one less for any extended affine subspace of a denoted by or and so on we denote by or v and so on the space a v which is the linear part of the corresponding affine space a vaff definition translations by abuse of 
terminology elements of the normal subgroup v g v will still be called translations even though we shall see them mostly as endomorphisms of a so that they are formally transvections for any vector v v we denote by the corresponding translation definition reference affine dynamical spaces we now give a name for the vector extensions of the affine subspaces of vaff parallel respectively to and and passing through the origin we set the reference affine noncontracting space the reference affine nonexpanding space the reference affine neutral space these are obviously the affine dynamical spaces in the sense of smi corresponding to the map exp seen as an element of g v by identifying g with the stabilizer of in g v we then have the decomposition z a v z this gives a hint for why we do not introduce the spaces a or see remark in smi for a detailed explanation definition affine jordan projection finally we extend the notion of jordan projection to the whole group g v by setting g v jd g jd g conditions on the jordan projection in this subsection we introduce two new notions of regularity of an element g g v given as conditions on its jordan projection we also determine the relationships between them definition affine version of regularity we say that an element g g v is if it is and we have jd g or in other terms jd g jd g jd g jd g asymptotically contracting along if of course both of these are really properties of jd g by abuse of terminology we say that a vector y a is respectively or asymptotically contracting along if y itself satisfies respectively or remark rigorously we should talk about as the definition depends on the choice of however the author feels that this dependence is not significant enough to be constantly mentioned in this way see in particular point in the following example example if the representation is limited all three properties of being and asymptotically contracting along become equivalent this includes the standard representation of g so p p on see example in smi if the representation is limited and abundant at the same time the adjoint representation all three properties actually reduce to ordinary since generic then means if the representation is either limited or abundant the notion of does not depend on the choice of indeed is then uniquely determined by and since is by assumption generically symmetric in general see example the notion of does technically depend on the choice of if the representation is it is possible to show that its set of restricted weights is centrally symmetric or equivalently invariant by the author must however admit that he knows no better proof of this fact than by complete enumeration of representations it then follows that whenever g is asymptotically contracting along so is and asymptotic contraction along is then equivalent to jd g having the same type as in the sense of definition which was the condition considered in smi in general asymptotic contraction along is a stronger condition than being of type for a simple counterexample take g r acting on s see smi example for a fancier counterexample see example both of these new properties are affine analogs of however they are useful in slightly different contexts the purpose of assuming that an element g g v is is just to ensure that the affine ideal dynamical subspaces introduced in section are welldefined this is a relatively weak property jd g is merely required to avoid a finite collection of hyperplanes this is the property we will use the most often and in particular the one 
that makes the affine version of schema work the purpose of assuming that an element g g v is asymptotically contracting along is roughly to ensure that g acts with the correct dynamics on its ideal dynamical subspaces for more details see the discussion following the definition of the latter for a motivation of the asymptotically contracting terminology see proposition ii this property is not explicitly part of the hypotheses in schema since it is implied by contraction strength see proposition i however we will often need it as an extra assumption in intermediate results actually we will often need to assume that both g and g are asymptotically contracting the latter property is verified as soon as the jordan projection of g points in a direction that is in a sense close enough to that of more precisely remark i the set y a y and y are asymptotically contracting along is a convex cone stable by positive scaling and sum and is open indeed it is an intersection of finitely many open vector ii the intersection contains and is in particular nonempty by the identity which obviously still holds in the affine case this intersection is precisely equal to jd g g g v with both g and g asymptotically contracting along iii since is open and meets the closed set it also meets its interior thus the intersection is a nonempty open convex cone the latter set might not seem relevant at this point but will be useful in the final proof of this paper note the distinction between the set introduced here and the set defined in in smi as the equivalence class of it is however true that in the case the two sets coincide as seen in example here is the relationship between these two notions proposition let g g v then i if g is asymptotically contracting along then g is ii if both g and are asymptotically contracting along then g is remark as mentioned before example if is then we may remove the assumption about in general this is not true see example for a counterexample the proof relies on the following lemma for the moment we only need the weak version the strong version will be useful later lemma for every i there exists a pair of restricted weights i such that i with i weak version i and ii strong version i and remark as long as is provides a version of this lemma namely we can actually take i in general this is not true see example proof i let us reintroduce the decomposition g from the proof of lemma together with the other notations that went along with it we then have where let i we distinguish three cases the case i never occurs indeed for i the symmetry fixes pointwise so we have i if i applying lemma to the simple summand of containing we find that then we may simply take i and finally suppose that i then by definition we have by proposition this means that does not stabilize in other terms there exists a restricted weight of such that and compare this with which was slightly weaker since is a restricted weight by proposition the number i i is an integer we have on the one hand on the other hand because hence is positive by proposition every element of the sequence is a restricted weight of let be the last element of this sequence that still lies in then by construction taking and we get the weak version ii the same proof works for the strong version except in the last case i if we happen to have but then since is generically symmetric we have in particular by we have last since is a restricted weight of by proposition the form is also a restricted weight of the latter is the average of the of the 
former by we actually have thus we may take and i proof of proposition i let i and let i and be the restricted weights constructed in lemma the weak version suffices since g is asymptotically contracting along we then have i jd g jd g hence jd g ii suppose that both g and are asymptotically contracting along by the previous point we already know that g is it remains to check that for any we have jd g we distinguish two cases if since g is asymptotically contracting along we have jd g min jd g x is asymptotically contracting along x we have if since g jd g max jd g x hence g is counterexamples here we give three examples of pathological behavior to explain why some constructions of this paper can not be simplified in general all three of them are fairly for the reader who wishes to focus on behavior it is probably safe to skip this subsection example here are two examples of awkward neither limited nor abundant representations with an explanation of how they provide a counterexample to and to the version of lemma the first one is a development on example in smi take g r which is a split group hence its restricted root system coincides with its ordinary root system and is of type in the notations of its simple restricted roots are and take to be the representation with highest weight it has the restricted weights of the form with multiplicity the restricted weights of the form ej with multiplicity the zero restricted weight with multiplicity for a total dimension of for i we have ei and in particular for any generic so obviously we may take for on the other hand we have so this is no longer the case now note that there are three different types of generic elements extreme representatives of each type are given by a with b with c with in cases a and c the removal of excludes from consideration in case b however we have to deal with it to wit we then have ei ej for all i j and the only possible choice is take g its root system both ordinary and restricted is then of type let us call the restricted roots of the first factor the restricted roots of the second factor let us order the restricted roots by the lexicographical order on each factor this gives a unique ordering of the combined root system the simple restricted roots of g are then and take to be the representation of g with highest weight which corresponds to the standard action of g on the set of its restricted weights is then where by a b we mean a b a a b b the set is of cardinal the set contains the negative simple restricted roots and but not nor there are six different types of generic elements using on a the coordinate system extreme representatives of three of the types are given by a then this is a nice case b then and we need to deal with the two only possibilities are to take fi fi for i or c then and we need to deal with both we have to take and the three other types are obtained by exchanging e and f example here is a counterexample to the statement of remark a representation necessarily swinging and a choice of such that not all elements that asymptotic contraction along does not on its own imply we let the reader check the details take g r its root system both ordinary and restricted is of type and the involution maps to in the notations of appendix c let be a representation of g with highest weight this is a representation of dimension with distinct restricted weights take this vector is generically symmetric and extreme with respect to in fact we have then the vector y is asymptotically contracting along y y however the 
restricted weight vanishes on y but not on so y is not properties of affine maps the goal of this section is to define the affine versions of the remaining properties it generalizes subsections in smi but several constructions now become considerably more complex in subsection we define the ideal dynamical spaces associated to a map g g v the data of two of them is the affine version of the geometry of g and the remaining ones can be deduced from these two this generalizes at the same time subsections and from smi but using a different approach in subsection we study the action of a map on its affine ideally neutral space ag which turns out to be a this generalizes subsection from smi and subsection from with one small difference see remark in subsection we introduce a groupoid of canonical identifications between all the possible affine ideally neutral spaces we then use them to define the translation part of the asymptotic dynamics of g the margulis invariant this is an almost straightforward generalization of subsection from smi and subsection from in subsection we define and study the affine versions of and contraction strength we mostly follow the second half of subsection from smi or of subsection from in subsection we study the relationships between affine and linear properties this generalizes subsection from smi or subsection from but with a weaker and more complicated statement ideal dynamical spaces the goal of this subsection is to define the ideal dynamical spaces associated to a map g g v definition we start with the following particular case definition take any element g g v we may then write it as g gh ge gu where is a translation by some vector v v and gh ge gu is the jordan decomposition see proposition of the linear part of we say that g is in canonical form if we have i gh exp gh exp jd g ii v if g is in canonical form then we define its ideal dynamical spaces to be the reference dynamical spaces introduced above this is especially useful when g is as shown by the following property proposition if a map g g v is in canonical form and is then it stabilizes all eight reference dynamical spaces namely and proof first of all note that g commutes by definition with its hyperbolic part which is because g is in canonical form equal to exp jd g hence g belongs to the centralizer of jd g which is ljd g now since g is we have ljd g finally from proposition it follows that the group hence in particular g stabilizes the spaces and as a subgroup of the spaces and as a subgroup of the space as a subgroup of both now the action of the affine map g on the subspace v v coincides with the action of its linear part g so g also stabilizes these five subspaces finally we know since g is in canonical form that v is contained in hence in and in hence g also stabilizes and for a general g g v we define the ideal dynamical spaces to be the inverse images of the reference dynamical spaces by a canonizing map such that the conjugate is in canonical form however to ensure that the translation part of is we need g to be regular with respect to more precisely proposition let g g v be a map then i there exists a map g v called a canonizing map for g such that is in canonical form ii any two such maps differ by by an element of the key point of the proof is the following lemma lemma if g g v is in canonical form and is then the linear map g id induces an invertible linear map on the quotient space v proof the fact that the quotient map is follows from proposition indeed g hence g id stabilizes the 
subspace let us now show that the quotient map is invertible all eigenvalues of the restriction of the hyperbolic part exp jd g to the subspace are real and positive and since g is different from since the elliptic and unipotent parts of g commute with the hyperbolic part of g and have all eigenvalues of modulus it follows that all eigenvalues of the restriction of g to this subspace is stable by g by proposition are different from in particular the restriction of g id to is invertible the conclusion follows proof of proposition i let g be a canonizing map for g which exists by proposition we then have g where g g is in canonical form and v v we now claim that for a suitable choice of w v the map is a canonizing map for indeed we then have w we already know that g is in canonical form on the other hand by lemma here we need surjectivity of the quotient map we may choose w v in such a way that v w g w which finishes the proof ii assume that g g v is already in canonical form so that g g with v and g it is enough to show that any g v such that is still in canonical form is an element of indeed let be such a map let us write where w v is its translation part and g is its linear part by proposition the fact that commutes with jd g implies that ljd g as for the translation part if we have w g w by lemma here we need injectivity of the quotient map we have w definition affine version of geometry for any map g g v we introduce the following eight spaces called ideal dynamical spaces of g the ideally expanding space associated to g the ideally contracting space associated to g the ideally neutral space associated to g vg the ideally noncontracting space associated to g vg the ideally nonexpanding space associated to g ag the affine ideally noncontracting space associated to g the affine ideally nonexpanding space associated to g g the affine ideally neutral space associated to g where is any canonizing map of here is the idea behind this definition suppose first that the representation is nonswinging so that is actually generic and that the jordan projection of g is sufficiently close to then it actually has the same type as we have jd g and similarly for whenever that happens the ideal dynamical spaces of g coincide with its actual dynamical spaces as defined in section of smi if is not generic this is no longer true as such we still want to assume that jd g is sufficiently close to but now this can at best ensure that both g and are asymptotically contracting along in that case we only get that the moduli of the eigenvalues of are much larger than the moduli of the eigenvalues of are much smaller than might now differ from but somehow remain the moduli of the eigenvalues of g not too far from let us finally check that this definition makes sense and prove a few extra properties along the way proposition i the definitions above do not depend on the choice of ii the datum of ag uniquely determines the spaces vg and iii the datum of ag uniquely determines the spaces vg and iv the data of both ag and uniquely determine all eight ideal dynamical spaces the spaces ag and ag will play a crucial role as they are the affine analogs of the attracting and repelling flags and defined in section see remark below for an explanation proof an immediate corollary of proposition is that we have v v v v l x v with the following relationships v v v v the first two lines immediately imply ii and iii for points i and iv note that all eight groups contain point i follows by proposition point iv follows using the 
identity let us now investigate the action of a map g g v on its affine ideally neutral space g the goal of this subsection is to prove that it is almost a translation proposition we fix on v a euclidean form b satisfying the conditions of lemma for the representation definition we call any affine automorphism of induced by an element of l let us explain and justify this terminology proposition let be the set of fixed points of l in v l lv v let be the complement of in then any is an element of gl in other words are affine automorphisms of that preserve the directions of and and act only by translation on the component you may think of a as a kind of screw displacement the superscripts t and a respectively stand for translation and affine proof we need to show that every element of l fixes pointwise and leaves invariant globally the former is true by definition of for the latter let us prove it separately for elements of m and elements of a by hypothesis elements of m preserve the form b since they leave invariant the space they also leave invariant its complement let us introduce the notation m x the symbol is intended to represent the idea of avoiding zero so that the space decomposes into the orthogonal sum v then since v t v obviously similarly we have an orthogonal sum v now clearly being a sum of restricted weight spaces is invariant by a moreover every element of a actually fixes every element of v in particular leaves invariant the subspace v the conclusion follows remark note that in contrast to the case proposition in smi no longer have to act by isometries on as this space now comprises the possibly nontrivial space where a acts nontrivially this phenomenon is the reason why the reasoning we used in smi to prove additivity of margulis invariants could not be reused here without major restructuring we now claim that any map acts on its affine ideally neutral space by quasitranslations proposition let g be a map and let be any canonizing map for then the restriction of the conjugate to is a let us actually formulate an even more general result which will have another application in the next subsection lemma any map f g v stabilizing both and acts on by quasitranslation proof we begin by showing that any element of acts on in the same way as some element of recall that by definition m l and m v thus we want to show that for every restricted root and restricted weight such that we have v it is sufficient to show that in such a case the sum is no longer a restricted weight but otherwise both and would be elements of hence they would both be fixed by since is generically symmetric this would mean that is also fixed by which is impossible we now conclude in the same fashion as in the proof of lemma in smi passing from the two lie algebras first to the identity components e of then to the whole groups then using proposition to the stabilizers of and of proof of proposition the proposition follows immediately by taking f indeed by proposition the canonized map stabilizes and see example in smi for specific examples of in the nonswinging case we would like to treat a bit like translations for this we need to have at least a nontrivial space so from now on we impose the following condition on assumption the representation is such that dim this is precisely condition i a from the main theorem canonical identifications and the margulis invariant the main goal of this subsection is to associate to every map g g v a vector in called its margulis invariant see definition the two propositions 
and the lemma that lead up to this definition are important as well and will be often used subsequently proposition iv has shown us that the geometry of a map g namely the position of its ideal dynamical spaces is entirely determined by the pair of spaces a g in fact such pairs of spaces play a crucial role let us begin with a definition its connection with the observation we just made will become clear after proposition definition we define a parabolic space to be any subspace of v that is the image of either or no matter which one since is symmetric by some element of we define an affine parabolic space to be any subspace of a that is the image of by some element of g v equivalently a subspace a a is an affine parabolic space iff it is not contained in v and its linear part v a v is a parabolic space we say that two parabolic spaces or two affine parabolic spaces are transverse if their intersection has the lowest possible dimension or equivalently if their sum is the whole space v or a see example in smi proposition a pair of parabolic spaces resp of affine parabolic spaces is verse if and only if it may be sent to resp to by some element of g resp of g v in particular for any map g g v the pair ag is by definition a transverse pair of affine parabolic spaces this proposition as well as its proof is very similar to claim in and to proposition in smi proof let us prove the linear version the affine version follows immediately let be any pair of parabolic spaces by definition for i we may write vi for some let us apply the bruhat decomposition to the map we may write where belong to the minimal parabolic subgroup p and w is an element of the restricted weyl group w or technically some representative thereof let stabilizes v we have w since p and thus we have and are transverse is transverse to by lemma this implies that stabilizes hence it stabilizes its ment which means that wv v thus we have and as required conversely if those equalities hold then and are obviously transverse remark it follows from proposition that the set of all parabolic spaces can be identified with the flag variety by identifying every parabolic space with the coset for every element g g this identification then matches the ideally expanding space with the attracting flag composing with the bijection defined in proposition we may also identify this set with the opposite flag variety for g g this matches the ideally contracting space with the repelling flag using the bruhat decomposition of see corollary we may then show that two parabolic spaces and are transverse if and only if the corresponding pair of cosets is transverse in the sense of definition similarly it follows from that we can in principle identify the set of all affine parabolic spaces with the affine flag variety v this would however require very cumbersome notations in the linear case it was natural to do it in order to make things representationindependent in the affine case however there is a privileged representation anyway namely we decided that translating everything into that abstract language was not worth the trouble but the reader may go through that exercise if they wish so consider a transverse pair of affine parabolic spaces their intersection may be seen as a sort of abstract affine neutral space we now introduce a family of canonical identifications between those spaces these identifications have however an inherent ambiguity they are only defined up to proposition let be a pair of transverse affine parabolic spaces then any map g v such 
that gives by restriction an identification of the intersection with which is unique up to here by we mean the pair this generalizes corollary in and proposition in smi remark note that if is obtained in another way as an intersection of two affine parabolic spaces the identification with will in general no longer be the same not even up to there could also be an element of the weyl group involved proof the existence of such a map follows from proposition now let and be two such maps and let f be the map such that f f then by construction f stabilizes both and it follows from lemma that the restriction of f to is a let us now explain why we call these identifications canonical the following lemma while seemingly technical is actually crucial it tells us that the identifications defined in proposition commute up to with the projections that naturally arise if we change one of the parabolic subspaces in the pair while fixing the other lemma take any affine parabolic space let and be any two affine parabolic spaces both transverse to let resp be an element of g v that sends the pair resp to these two maps exist by proposition let be inverse image of by any map such that this image is unique by proposition let be the projection parallel to then the map defined by the commutative diagram is a the space is in a sense the abstract linear expanding space corresponding to the abstract affine noncontracting space more precisely for any map g such that ag we have the projection is because and so this statement generalizes lemma in and lemma in smi proof the proof is exactly the same as the proof of lemma in smi now let g be a map we already know that it acts on its affine ideally neutral space by now the canonical identifications we have just introduced allow us to compare the actions of different elements on their respective affine ideally neutral spaces as if they were both acting on the same space however there is a catch since the identifications are only canonical up to we lose information about what happens in only the translation part along remains formally we make the following definition let denote the projection from onto parallel to definition let g g v be a map take any point x in the affine space g vaff and any map g that canonizes g then we define the margulis invariant of g to be the vector m g g x x we call it the margulis invariant of this vector does not depend on the choice of x or indeed composing with a does not change the of the image see proposition in for a detailed proof of this claim for v g informally the margulis invariant gives the translation part of the asymptotic dynamics of an element g g v the linear part being given by jd g just as in the linear case as such it plays a central role in this paper quantitative properties in this subsection we define and study the affine versions of and contraction strength we more or less follow the second half of subsection in smi or of subsection in we endow the extended affine space a with a euclidean norm written simply k k given by x t v a k x t b x x where b is the norm defined in lemma then the subspaces v and are pairwise orthogonal definition affine version of take a pair of affine parabolic spaces an optimal canonizing map for this pair is a map g v satisfying and minimizing the quantity max k by proposition and a compactness argument such a map exists iff and are transverse we define an optimal canonizing map for a map g g v to be an optimal canonizing map for the pair a g let c we say that a pair of affine parabolic spaces 
resp a map g is if it has a canonizing map such that c and now take two maps in g v we say that the pair is degenerate if every one of the four possible pairs agi agj is the point of this definition is that there are a lot of calculations in which when we treat a pair of spaces as if they were perpendicular we err by no more than a multiplicative constant depending on remark the set of transverse pairs of extended affine spaces is characterized by two open conditions there is of course transversality of the spaces but also the requirement that each space not be contained in v what we mean here by degeneracy is failure of one of these two conditions thus the property of a pair being actually encompasses two properties first it implies that the spaces and are transverse in a quantitative way more precisely this means that some continuous function that would vanish if the spaces were not transversely is bounded below an example of such a function is the smallest non identically vanishing of the principal angles defined in the proof of lemma iv second it implies that both and are not too close to the space v in the same sense in purely affine terms this means that the affine spaces vaff and vaff contain points that are not too far from the origin both conditions are necessary and appeared in the previous literature such as and however they were initially treated separately the idea of encompassing both in the same concept of seems to have been first introduced in the author s previous paper definition affine version of contraction strength let s for a map g g v we say that g is along if we have x y a g kg y k kg x k kxk kyk note that by definition the spaces and ag always have the same dimensions as and respectively hence they are nonzero we define the affine contraction strength along of g to be the smallest number g such that g is g along in other words we have g ag this notion is closely related to the notion of asymptotic contraction proposition i if a map g g v is and along such that g then it is also asymptotically contracting along ii if a map g g v is and asymptotically contracting along then we have lim gn n proof let g g v be we claim that it is asymptotically contracting along if and only if it satisfies the inequality r r g a g where r f denotes the spectral radius of f indeed as a porism of proposition i we obtain that the spectrum of is n o jd g and the spectrum of is g n o jd g the accounts for the affine extension by assumption the eigenvalue is already contained in the spectrum of the linear part so we may actually ignore the part the claim follows the conclusion then follows from the facts that for every linear map f we have r f kf k this gives i and log kf n k n log r f o log n n also known as gelfand s formula this gives ii comparison of affine and linear properties the goal of this subsection is to prove proposition that for any element g g v relates the quantitative properties we just introduced to the corresponding properties of its linear part g this is given by lemma in in the case of the adjoint representation and by lemma in smi which is a straightforward generalization in the case in the general case however only points i and ii generalize in the obvious way the statement iii holds only in a weaker form we now need to consider the linear contraction strength of g but the affine contraction strength of this is basically what forced us to develop the purely linear theory section in a systematic way rather than presenting it as a particular case of the affine theory as 
we did in the previous papers in order to be able to compare the affine contraction strength with the linear one we begin by expressing the former in terms of the cartan projection the following is a generalization of lemma in smi formulated in a slightly more general way the original statement was essentially the same inequality without the absolute value proposition for every c there is a constant c with the following property let g g be a map then we have ct g log g min ct g max x c recall that is the set of restricted weights that take nonnegative values on and is its complement in to make sense of this estimate keep in mind that the quantity log g is typically positive also note that the minimum term is certainly nonpositive as proof first of all let be an optimal canonizing map for g and let then it is easy to see that we have g and the difference ct ct g is bounded by a constant that depends only on hence we lose no generality in replacing g by g clearly it is enough to show that for g which is in canonical form we have the equality min ct max ct log x this is the straightforward generalization of in smi and is proved in exactly the same fashion mutatis mutandis as a stepping stone we first need the extension of point ii of lemma in or lemma in smi giving a bound on the affine contraction strength of the linear part of g seen as an element of g v by the usual embedding lemma for any map g g v we have g g the proof is very similar to the one given in proof we have g max g max g g g g g g vg ag to justify the last equality note that if is a canonizing map for g then the space ag contains the subspace v which is nonzero by assumption is clearly and all eigenvalues of g restricted to that subspace have modulus hence a v g we may now state and prove the appropriate generalization of lemma in and lemma in smi linking in all three points the purely linear properties with the affine properties proposition for every c there is a positive constant c with the following property let g h be a pair of elements of g v in this case i the pair g h is c in the sense of definition for some constant c that depends only on c ii we have g g iii moreover if we assume that c then we actually have g g g proof i by remark we lose no generality in specifying some particular form for the map introduced in definition let us define in a way which is consistent with the definition of in the affine case namely using a particular case of the formula max k k where is our working representation and is the euclidean norm we introduced in the beginning of section or more precisely its restriction to v which is just the euclidean norm introduced in lemma clearly if is a canonizing map for g then is a canonizing map for g since obviously for any g we have k the conclusion follows ii first note that by lemma we have g g now apply proposition passing to the exponential we have g exp max ct g exp min ct g x x now the key point is lemma i for every i we may find i and i whose difference is it immediately follows that exp max ct g exp min ct g g x x iii we proceed in two steps we establish first which is straightforward then which relies on the strong version ii of lemma in fact this is the only place where the strong version is needed we have by definition g ag let be an optimal canonizing map for since g is and the images g and vg being respectively equal to and are orthogonal it follows that g max v g g clearly we have we have g v on the other hand since it follows that g g by methods similar to the proof of proposition 
essentially by proposition we may rewrite this as g exp ct g max g note that combining proposition with lemma we have exp max ct g exp min ct g g x x if we take c equal to the inverse of the implicit constant in that inequality we may assume that ct g ct g in particular since we then have ct g obviously this remains true for now take some i then we have ct g i ct g ct g i ct g where i and are the restricted weights introduced in lemma ii this implies that g exp exp max ct g max ct g x combining the two estimates the conclusion follows products of maps the goal of this section is to prove proposition which is the main part of the affine version of schema the general strategy is the same as in section in smi or in section in we reduce the problem to proposition by considering the action of g v on a suitable exterior power a rather than on the spaces vi as in section we start by proving the following result whose role in smi was played by proposition even though the new version involves slightly different inequalities the proof is quite similar proposition for every c there is a positive constant c with the following property take any pair g h of maps in g v such that g c and h c then gh is asymptotically contracting along proof let c and let g h be a pair of maps in g v such that g c and h c for some constant c to be specified later the first thing to note is that since the property of being depends only on the linear part lemma and proposition i reduce the problem to the case where g h now proposition gives us max ct g min ct g log g c x x taking c small enough we may assume that ct g max c of course a similar estimate holds for h ct h max c let g h be the vector defined in corollary then we deduce from that for every pair of restricted weights and we have g h ct g ct h kk g h ct g ct h k max c adding together the three estimates and we find that for every such pair we have g h on the other hand we have which says that jd gh conv g h now take any w from proposition it then follows that stabilizes both and hence we still have w g h g h thus the difference takes positive values on every point of the orbit g h hence it also takes positive values on every point of its convex hull in particular we have jd gh we conclude that gh is indeed asymptotically contracting along we now establish the correspondence between affine and proximal properties we introduce the integers p dim dim q dim d dim a dim v q for every g g v we may define its exterior power g a a the euclidean structure of a induces in a canonical way a euclidean structure on lemma i let g g v be a map asymptotically contracting along then g is prox imal and the attracting resp repelling space of g depends on nothing but ag resp vg p g ag g x a x ii for every c whenever is a pair of maps that are also asymptotically contracting along is a c p pair of proximal maps iii for every c there is a constant c with the following property for every map g g v that is also asymptotically contracting along we have g g if in addition g c we have g g recall that and stand respectively for the proximal and affine contraction strengths see definitions and iv for any two subspaces and of a we have this is similar to lemma in and to lemma in smi but now there is the additional complication of needing to distinguish between and asymptotic contraction the proofs of points ii and iii however still remain very similar to the corresponding proofs in we chose to reproduce them here in particular in order to correct a small mistake in we erroneously claimed that 
we could take c which stemmed from a confusion between g and its canonized version g proof i let g be a map asymptotically contracting along from proposition as already noted in the proof of proposition it follows that r r a g every eigenvalue of is smaller in modulus than every eigenvalue of g let be the eigenvalues of g acting on a counted with multiplicity and ordered by nondecreasing modulus we then have r r ag on the other hand we know that the eigenvalues of g counted with multiplicity are exactly the products of the form where ip as the two largest of them by modulus are and it follows that g is proximal as for the expression of e s and e u it follows immediately by considering a basis that trigonalizes ii take any pair i j let be an optimal canonizing map for the pair agi then we have agi and by proposition vgj in the euclidean structure we have chosen is orthogonal to hence is orthogonal to the hyperplane x a x by the previous point it follows that is a canonizing map for the pair p gi gj as and similarly for the conclusion follows iii let c and let g g v be a map that is also asymptotically contracting along let be an optimal canonizing map for g and let then it is clear that g and g so it is sufficient to prove the statement for g let sp resp be the singular values of restricted to resp to vg so that and g since the spaces and are stable by g and orthogonal we get that the singular values of on the whole space a are sp note however that unless this list may fail to be sorted in nondecreasing order on the other hand we know that the singular values of are products of p distinct singular values of since p is orthogonal to we may once again analyze the singular values separately for each subspace we know that the singular value corresponding to e s is equal to sp we deduce that k u k is equal to the maximum of the remaining singular values in particular it is larger than or equal to sp on the other hand if is the largest eigenvalue of then we have det sp where are the eigenvalues of or equivalently of g sorted by nondecreasing modulus the second equality holds because g hence is asymptotically contracting along so that its eigenvalues are sorted in the correct order it follows that p g up sp g sp which is the first estimate we were looking for now if we take c small enough we may suppose that then we have which means that the singular values of are indeed sorted in the correct order hence sp is actually the largest singular value of u and the inequality becomes an equality the second estimate follows iv see lemma iv in we also need the following technical lemma which generalizes lemma in and lemma in smi lemma there is a constant with the following property let be any two affine parabolic spaces such that haus then they form a pair of course the constant is arbitrary we could replace it by any number larger than proof the proof is exactly the same as the proof of lemma in mutatis mutandis proposition for every c there is a positive constant c with the following property take any pair g h of maps in g v suppose that we have c and c then gh is still and we have vgh vg g i v v h c gh h a gh a g g ii s c gh h iii gh g h points ii and iii are a generalization of proposition in and proposition in smi the proof is very similar together they give the main part of the affine version of schema as for point i it generalizes corollary in and corollary in smi but its statement is now stronger as it involves the linear contraction strength as such it can no longer be obtained as a corollary of 
the affine version instead it must be proved independently using proposition remark note that point ii involves but in point i we have simply written h instead in fact in the linear case the distinction between h and becomes irrelevant as they both have the same linear contraction strength by proposition iv however their affine contraction strengths can be different proof of proposition let us fix some constant c small enough to satisfy all the constraints that will appear in the course of the proof let g h be a pair of maps satisfying the hypotheses first of all note that if we assume that c then proposition i ensures that g h g and are all asymptotically contracting along let us prove i by proposition i it follows that g and h hence g and h are proposition i and ii then implies that the pair g h satisfies the hypotheses of proposition hence we have gh g g y y h c x gh h now remember remark that the attracting resp repelling flag of a map g g carries the same information as its linear ideally expanding resp contracting space even more precisely we may deduce from proposition that the orbital map from g to the orbit of in the grassmanian of v descends to a smooth embedding of the flag variety since the flag variety is compact the embedding is in particular the desired inequalities follow if we take c c then proposition tells us that gh is asymptotically contracting along we may also apply proposition to the pair hence gh is also asymptotically contracting along and by proposition ii we deduce that gh is the remainder of the proof works exactly like the proof of proposition in or of proposition in smi namely by applying proposition to the maps g and let us check that and satisfy the required hypotheses by lemma i and are proximal by lemma ii the pair is c p if we choose c c it follows by lemma iii that g and h if we choose c sufficiently small then and are c p sufficiently contracting to apply proposition thus we may apply proposition it remains to deduce from its conclusions the conclusions of proposition from proposition i using lemma i iii and iv we get agh a g g which shows the first line of proposition ii by applying proposition to instead of we get in the same way the second line of proposition ii let be an optimal canonizing map for the pair ag by hypothesis we have but if we take c sufficiently small the two inequalities that we have just shown together with lemma allow us to find a map with k k and a gh it follows that the composition map gh is the last inequality namely proposition iii now is deduced from proposition ii by using lemma iii additivity of margulis invariant the goal of this section is to prove propositions and which explain how the margulis invariant behaves under group operations respectively inverse and composition these two propositions are the key ingredients in the proof of the main theorem proposition is a generalization of proposition i in the proof is similar and fairly straightforward compare it also with the results of section proposition is a generalization of proposition ii in it gives the asymptotic dynamics part in the affine version of schema the proof takes up the majority of this section to estimate m gh the idea is to introduce two vectors mgh g and mgh h such that we have by definition m gh mgh g mgh h we then first find an intermediate vector called mg gh g that we prove to be close to m g lemma then we prove that mgh g is close to this intermediate vector lemma proposition for every map g g v we have m m g remark note that m g is by definition an 
element of the space which again by definition is the set of fixed points of l zg a from this it is straightforward to deduce that is invariant by ng a hence induces a linear involution on which does not depend on the choice of a representative of in proof first of all note that if is a canonizing map for g then is a canonizing map for indeed assume that g exp jd g ge gu is in canonical form then we have jd g since by definition exchanges the dominant weyl chamber and its negative and since is symmetric the action of preserves it remains to show that or more precisely any representative ng a of commutes with we use the fact that the group w that we defined as the quotient ng a a is also equal to the quotient nk a a see formulas and hence ng a w zg a w zk a a nk a a ka now recall that we have an orthogonal decomposition z v z let us show that all three components are invariant by for this is obvious since is invariant by for this follows from remark this is obviously the case for v now by definition the group a acts trivially on v and by construction k acts on v by orthogonal transformations indeed the euclidean structure was chosen in accordance with lemma hence which is the orthogonal complement of in v is also invariant by the desired formula now immediately follows from the definition of the margulis invariant proposition for every c there are positive constants c and c with the following property let g h g v be a pair of maps with and all c along then gh is and we have km gh m g m h k c the basic idea of the proof is the same as in smi or however in the proof of lemma in smi a key point is that since the factors that are introduced to construct diagram namely ggh and gg gh are linear parts of they automatically have bounded norm but in the general case this last deduction fails see remark this issue forced us to completely reorganize the proof the new proof though still technical is more elegant it is more symmetric for instance we got rid of the lopsided diagram and of the confusing series and it is also more structured as it cleanly separates into an algebraic part comprising lemma involving a combination of canonical projections which are all bounded by lemma and an analytic part comprising lemma its corollary and lemma involving projections between spaces which are close to each other with the angle controlled by the contraction strengths of g and h hence which introduce only a small error proof let c we choose some constant c small enough to satisfy all the constraints that will appear in the course of the proof for the remainder of this section we fix g h g v a pair of maps such that g and are c along the following remark will be used throughout this proof remark we may suppose that the pairs agh ahg ag and a hg are all indeed recall that by proposition we have a gh a g g s c gh h and similar inequalities with g and h interchanged on the other hand by hypothe sis ag is if we choose c sufficiently small these four statements then follow from lemma proof of proposition continued if we take c c then proposition ensures that gh is to estimate m gh we decompose the induced map gh gh agh into a product of several maps we begin by decomposing the product gh into its factors we have the commutative diagram gh gh hg g gh h indeed since hg is the conjugate of gh by h and we have h gh ahg and g ahg agh next we factor the map g hg agh through the map g ag ag which is better known to us we have the commutative diagram gh hg g g g g where is the projection onto g parallel to vg vg it commutes 
with g because ag vg and vg are all invariant by now we decompose again every diagonal arrow from the last diagram into two factors for any two maps u and v we introduce the notation u v au av we call resp the projection onto g gh resp ahg g still parallel to vg vg to justify this definition we must check that ag gh and similarly ahg g is supplementary to indeed by remark is transverse to ag hence by proposition and proposition ii supplementary to thus a g g gh and a vg ag vg vg ag gh then we have the commutative diagrams gh g gh g g and hg hg g finally and this is new in comparison to smi we would like to replace and by some projections we define gh to be the projection onto g gh parallel to vg obviously it induces a bijection between agh and ag gh we define g to be the projection onto hg g parallel to vhg obviously it induces a bijection between ahg and ahg g the reason they are is that by lemma they actually commute with canonical identifications see remark below for more details we then make the decompositions gh gh g g and the last three steps can be repeated with h instead of the way to adapt the second step is straightforward for the third step we factor hg ah through ah hg and agh ah through agh h for the fourth step we project respectively along and along vgh vh combining these four decompositions we get the lower half of diagram we left out the expansion of h we leave drawing the full diagram for especially brave readers let us now interpret all these maps as endomorphisms of to do this we choose some optimal canonizing maps gh g respectively of g of gh of hg of the pair ag and of the pair ahg this allows us to define ggh hgh gg gh to be the maps that make the whole diagram commutative now let us define mgh g ggh x x mgh h hgh x x ggh gg gh hgh gh g gh hg g hg gh gh g hg g g gh g g diagram g h gh where for any x vaff is the affine space parallel to and passing through the origin since gh is the conjugate of hg by g and the elements of g v defined in an obvious way whose restrictions to are ggh and hgh stabilize the spaces g and h are thus quasiand a by lemma gh gh translations it follows that these values mgh g and mgh h do not depend on the choice of x compare this to the definition of a margulis invariant definition we it immediately follows that have m gh ggh hgh x x for any x m gh mgh g mgh h thus it is enough to show that kmgh g m g k and kmgh h m h k note that while the vectors mgh g and mgh h are elements of the maps ggh and hgh are extended affine isometries acting on the whole subspace we shall prove the estimate for g the proof of the estimate for h is analogous we proceed in two steps first we introduce the vector mg gh g gg gh x x and we show lemma that it differs from mgh g only by a for any x bounded constant second we show lemma that it is very close to m g these two lemmas together imply the conclusion remark in contrast to actual margulis invariants the values mgh g and mgh h do depend on our choice of canonizing maps choosing other canonizing maps would force us to subtract some constant from the former and add it to the latter remark the fourth decomposition step above namely is what makes the whole proof much cleaner in smi we had a map called which was almost a quasitranslation lemma in smi in the current proof this map decomposes into two pieces that are much easier to deal with is now a bounded just like and thus falls in with the algebraic part while is now almost the identity lemma and thus stays in the analytic part lemma we have kmg gh g m g k proof by 
lemma the maps and are let us show that their norms are bounded by a constant that depends only on c obviously this implies the conclusion let us start with by definition we have g g hg g g hg g hg hg g g hg where is the projection onto parallel to now is actually an orthogonal projection hence it has norm and g hg is bounded by remark similarly we have hg g which is bounded to deal with note that g gh ag and gh ag ag gh are inverse to each other we deduce that gh gh g gh gh g gh gh gh g and we conclude as previously similarly we have g gh and we conclude in the same way lemma the estimates i ii id k agh ag id k ahg ag hold as soon as the respective sides are smaller than some constant depending only on corollary we have i ii id k g id k iii id k g iv id k g in light of remark this corollary can be seen as a simpler version of lemma in smi indeed the old corresponds by to the new while the old corresponds to alone proof points i and ii immediately follow from the lemma combined with proposition ii provided that we take c small enough for points iii and iv simply apply the lemma to the pair g h it is easy to check that this pair still satisfies the hypotheses of proposition we then get id k v vg g gh id k v vg g hg in each case the second inequality follows from proposition i the first inequality is an application of the lemma which is licit provided c is small enough since proposition ii propagates the required upper bound to g the proof of lemma might seem slightly technical but there is an easy intuition behind it essentially the idea is that if you jump back and forth between two spaces that almost coincide going both times in directions whose angle with the two spaces is not too shallow then you can t end up very far from your starting point proof of lemma i by remark we know that gh k conjugating everything by this map it is thus sufficient to show that for every x gh we have gh x x g kxk where by we mean the inverse of the bijection gh ag gh let us first estimate the quantity gh x xk to begin with let us push everything forward by the map gh writing y gh x we have gh gh x x k y yk gh x k kyk sin y y by remark we know that g gh k hence we may pull everything back by gh again we conclude that gh x xk x g gh kxk gh ag agh a g gh x xk agh a g kxk let us now estimate the quantity gh x gh x we introduce the notation z gh x and we define to be the unique linear automorphism of a satisfying gh x if x a g x x if x so that and but ag gh from the inequalities g k c and gh k it is easy to deduce that both norms and k are bounded by a constant that depends only on now we obviously have hence z z k tan z z k now we have z z ag gh agh a g since and are bounded since z gh if the side is small enough we may assume that z is smaller than some fixed constant say for every we obviously have tan it follows that z z k agh a g z k now since and are bounded we deduce that z zk a gh a g kzk it remains to estimate kzk in terms of kxk we have kzk kxk gh x xk kxk agh a g kxk by taking agh ag small enough we may assume that kzk we conclude that gh x gh x agh a g kxk adding together and the desired inequality follows ii the proof is completely analogous lemma we have kmgh g mg gh g k g the proof is somewhat similar to the proof of the second half of lemma in smi proof recall that mgh g ggh x x where x can be any element of the affine space let o be the origin of vaff the intersection of the line with the affine space vaff in other terms o v by definition o is an element of we then have for every extended affine 
map f vaff f x f x o f o let us take x gg gh o gg gh we may then write mgh g x x gg gh x x gg gh x gg gh x gg gh o gg gh o o gg gh o o o z z z ii i iii since ggh the middle term ii is then by definition equal to mg gh g so that mgh g mg gh g i iii now we have k iii k o ok id kkok id k by corollary i g it remains to estimate i we set y gg gh o let us calculate the norm of this vector kyk gg gh g g g to justify the third line remember that we have seen in the proof of lemma that the four maps and are all bounded now we have k i k y y y y id y o id o id kyk g g id by as ky by corollary ii and iv and g by proposition iii g joining together and the conclusion follows margulis invariants of words we have already studied how contraction strengths proposition and margulis invariants proposition behave when we take the product of two mutually sufficiently contracting maps the goal of this section is to generalize these results to words of arbitrary length on a given set of generators this is a straightforward generalization of section in smi and of section in definition take k generators gk consider a word g with length l on these generators and their inverses for every m we have im k and we say that g is reduced if for every m such that m l we have im we say that g is cyclically reduced if it is reduced and also satisfies il proposition for every c there is a positive constant c with the following property take any family of maps gk g v satisfying the following hypotheses every gi is any pair taken among the maps gk is except of course if it has the form gi for some i for every i we have gi c and c take any nonempty cyclically reduced word g with im k for every m then g is and we have m g l x m where is the constant introduced in proposition the proof proceeds by induction with proposition and proposition providing the induction step proof the proof is exactly the same as the proof of proposition in mutatis mutandis let us just present one small improvement the proof relies on the following lemma lemma every cyclically reduced word g can be decomposed as a product of two cyclically reduced subwords and both nonempty that is m l in this was proved by contradiction in a somewhat obscure way let us reformulate the proof so that it is constructive and hopefully more comprehensible proof we may take m to be the smallest positive index such that il such an index always exists and is at most equal to l since the word is reduced then the first subword is actually of the form and any word of this form is automatically cyclically reduced as soon as it is reduced as for it is cyclically reduced by construction construction of the group here we prove the main theorem the reasoning is similar to that of section in and almost identical to that of section in smi the main difference is the substitution of instead of which in particular requires us to invoke proposition which had no equivalent in smi in the final proof also since we have now developed the purely linear theory in a systematic way section the relationship between linear properties and affine properties becomes clearer in particular in the second bullet point of the final proof let us recall the outline of the proof we begin by showing lemma that if we take a group generated by a family of sufficiently contracting maps that have suitable margulis invariants it satisfies all of the conclusions of the main theorem except we then exhibit such a group that is also and thus prove the main theorem the idea is to ensure that the margulis invariants of all 
elements of the group lie almost on the same obviously if maps every element of to its opposite proposition makes this impossible so we now exclude this case assumption the representation is such that the action of on is not trivial this is precisely condition i from the main theorem more precisely is the set of all vectors that satisfy i a and what we say here is that some of them also satisfy i b see example in smi for examples of representations that do or do not satisfy this condition thanks to assumption we choose once and for all some nonzero vector v that is a fixed point of which is possible since is an involution we also choose a vector collinear to v and such that k lemma take any family gk g v satisfying the hypotheses and from proposition and also the additional condition for every i m gi then these maps generate a free group acting properly discontinuously on the affine space vaff proof the proof is exactly the same as the proof of lemma in mutatis mutandis the constant denoted as c or in the earlier paper corresponds to what we now call c or respectively the orthogonal projection z parallel to d now becomes the orthogonal projection a parallel to we may now finally prove the main theorem we follow the same strategy as in the proof of the main theorems of and of smi with a few additional tweaks proof of main theorem first note that assumption guarantees that satisfies the hypotheses of the main theorem the two other assumptions were for free assumption is just the weaker condition i a and assumption is an even weaker condition that follows from i a we find a positive constant c and a family of maps gk g v with k that satisfy the conditions through and whose linear parts generate a subgroup of g then we apply lemma we proceed in several stages we begin by using a result of benoist we apply lemma in to g t k as defined in remark point iii of that remark assures that it is indeed a nonempty open convex cone this gives us for any k a family of maps g which we shall see as elements of g v by identifying g with the stabilizer of such that i every and every is asymptotically contracting along in particular by proposition ii it is this is ii for any two indices i and signs such that i the pair y i is transverse iii any single generates a group iv all of the generate together a subgroup of since in our case t is finite benoist s item v is not relevant to us a comment about item i benoist s theorem only works with elements so it forces every to be not only but actually of which we make no use a comment about item ii since we have taken benoist s to be the whole group g we have so that is the full flag variety so benoist s theorem actually gives us the stronger property that the pair x y i where x is some element of the open weyl chamber is transverse in the full flag variety once again we only need the weaker version using remark condition ii above may be restated in the following way for any two indices i and signs such that i the pair of parabolic spaces v i is transverse clearly every pair of transverse spaces is for some finite c and here we have a finite number of such pairs hence if we choose some suitable value of c which we fix for the rest of this proof the hypothesis follows from condition iii it follows that any algebraic group containing some power of some generator must actually contain the generator itself this allows us to replace every by some power without sacrificing condition iv clearly conditions i ii and iii are then preserved as well if we choose n large enough we 
may suppose thanks to proposition ii that the numbers are as small as we wish this gives us in fact we shall suppose that for every i we have smain c for an even smaller constant smain c to be specified soon to satisfy we replace the maps by the maps gi i for i k where is a canonizing map for we need to check that this does not break the first three conditions indeed for every i we have gi even better since the affine map gi has i by construction canonical form gi has the same geometry as meaning that agi and hence the gi still satisfy the hypotheses and but now we have m gi this is as for contraction strength along we have gi gi i i agi i ag i k smain c and similarly for recall that k hence the quantity k k k depends only on c in fact it is equal to the norm of the matrix it follows that if we choose smain c c k then the hypothesis is satisfied we conclude that the group generated by the elements gk acts properly discontinuously by lemma is free by the same result nonabelian since k and has linear part in references abels properly discontinuous groups of affine transformations a survey geom dedicata ams abels margulis and soifer the auslander conjecture for dimension less than preprint abels margulis and soifer on the zariski closure of the linear part of a properly discontinuous group of affine transformations j differential geometry abels margulis and soifer the linear part of an affine group acting properly discontinuously and leaving a quadratic form invariant geom dedicata auslander the structure of compact locally affine manifolds topology benoist actions propres sur les espaces annals of mathematics benoist asymptotiques des groupes geom and funct bq benoist and quint random walks on ductive groups ergebnisse to appear available http borel and tits groupes publications de l borel and tits l article groupes publications de l dgk danciger and kassel proper affine action of coxeter groups in preparation eberlein geometry of nonpositively curved manifolds university of chicago press fried and goldman affine crystallographic groups adv in hall lie groups lie algebras and representations an elementary introduction springer international publishing second edition helgason geometric analysis on symmetric spaces amer math second edition reat humphreys linear algebraic groups knapp lie groups beyond an introduction margulis free properly discontinuous groups of affine transformations dokl akad nauk sssr margulis complete affine locally flat manifolds with a free fundamental group soviet milnor on fundamental groups of complete affinely flat manifolds adv in smi smilga proper affine actions in representations submitted available at smilga proper affine actions on semisimple lie algebras annales de l institut fourier tits d un groupe sur un corps quelconque journ reine angw
artificial intelligence fabio massimo zanzotto university of rome tor vergata oct abstract little by little newspapers are revealing the bright future that intelligence ai is building intelligent machines will help everywhere however this bright future has a dark side a dramatic job market contraction before its unpredictable transformation hence in a near future large numbers of job seekers will need support while catching up with these novel unpredictable jobs this possible job market crisis has an antidote inside in fact the rise of ai is sustained by the biggest knowledge theft of the recent years learning ai machines are extracting knowledge from unaware skilled or unskilled workers by analyzing their interactions by passionately doing their jobs these workers are digging their own graves in this paper we propose intelligence as a fairer paradigm for intelligence systems will reward aware and unaware knowledge producers with a scheme decisions of ai systems generating revenues will repay the legitimate owners of the knowledge used for taking those decisions as modern robin hoods researchers should for a fairer intelligence that gives back what it steals introduction we are on the edge of a wonderful revolution intelligence ai is breathing life into helpful machines which will relieve us of our need to perform repetitive activities cars are taking their steps in our urban environment and their younger brothers that is assisted driving cars are already a commercial reality robots are vacuum cleaning and mopping the of our houses have conquered our new our smartphones and from there they help with everyday tasks such as managing our agenda answering our factoid questions or being our learning companions in medicine computers can already help in formulating diagnoses by looking at data doctors generally neglect intelligence is preparing a wonderful future where people are released from the burden of repetitive jobs the bright intelligence revolution has a dark side a dramatic mass unemployment that will precede an unpredictable job market transformation people and see http hence governments are frightened nearly every week newspapers all over the world are reporting on possible futures where around one of actual jobs will disappear alarming reports foresee that more than one billion people will be unemployed worldwide by releasing people from repetitive jobs intelligent machines will replace many workers chatbots are slowly replacing call center agents trains are already reducing the number of drivers in our trains cars are to replace cab drivers in our cities drones are expanding automation in managing delivery of goods by drastically reducing the number of delivery people and these are only examples as even more cognitive and artistic jobs are challenged intelligent machines may produce music jingles for commercials write novels produce news articles and so on intelligent risk predictors may replace doctors chatbots along with massive open online courses may replace teachers and professors coders risk being replaced by machines too nobody s job is safe as we face this overwhelming progress of intelligence surprisingly the rise of intelligence is supported by the unaware mass of people who risk seeing their jobs replaced by machines these people are giving away their knowledge which is used to train these wonderful machines this is an enormous and legal knowledge theft taking place in our modern era along with those aware programmers and intelligence researchers who set up the learning 
modules of these intelligent machines an unaware mass of people is providing precious training data by passionately doing their job or simply performing their activity on the net answering an email an interaction on a messaging service leaving an opinion on a hotel and so on are all simple everyday activities people are doing this data is a goldmine for intelligence machines learning systems transform these interactions in knowledge for the intelligence machines and the knowledge theft is completed by doing their normal everyday activity people are digging the grave for their own jobs as researchers in intelligence we have a tremendous responsibility building intelligent machines we can work with rather than intelligent machines that steal our knowledge to do our jobs we need to ways to support job seekers as they train to catch up with these novel unpredictable jobs we need to prepare an antidote as we spread this poison in the job market this paper propose artificial intelligence as a novel paradigm for a responsible intelligence this is a possible antidote to the poisoning of the job market the idea is simple giving the right value to the knowledge producers ai is an umbrella for researchers in intelligence working with this underlying idea hence promotes interpretable learning machines and therefore intelligence systems with a clear knowledge lifecycle for systems it will be clear whose the knowledge has been used in a deployment or in situations this is a way to give the rightful credit and revenue to the original knowledge producers we need a fairer intelligence the rest of the paper is organized as follows section describes the enabling paradigms of ai section sketches some simple proposals for a better future then section draws some conclusions ai enabling paradigms transferring knowledge to machines with programming with learning from repeated experience since the beginning of the digital era programming is the preferred way to teach to machines artificial programming languages have been developed to have a clear tool to tell machines what to do according to this paradigm whoever wants to teach machines how to solve a new task or how to be useful has to master one of these programming languages these people called programmers have been teaching machines for decades and have made these machines extremely useful nowadays it is to think staying a single day without using the big network of machines programmers have contributed to building as not all the tasks can be solved by programming autonomous learning has been reinforced as an alternative way of controlling the behavior of machines in autonomous learning machines are asked to learn from experience with the paradigm of programming we have asked machines to go to school before these machines have learned to walk through trial and error this is why machines have always been good in solving very complex cognitive tasks but very poor in working with everyday simple problems the paradigm of autonomous learning has been introduced to solve this problem in these two paradigms who should be paid for transferring knowledge to machines and how should they be paid in the programming paradigm roles are clear programmers are the teachers and machines are the learners hence programmers could be payed for their work in the autonomous learning paradigm the activity of programmers is to the selection of the most appropriate learning model and of the examples to show to these learning machines from the point of view of programming is a fair 
paradigm as it keeps humans in the loop although machines which have been taught exactly what to do can hardly be called artificial intelligence on the contrary autonomous learning is an unfair model of transferring knowledge as the real knowledge is extracted from data produced by unaware people hence little seems to be done by humans and machines seem to do the whole job yet knowledge is stolen without paying explainable artificial intelligence and explainable machine learning explaining the decisions of learning machines is a very hot topic nowadays dedicated workshops or sessions in major conferences are in areas of application for example medicine thrust in intelligent machines can not be blind as decisions can have a deep impact on humans hence understanding why a decision is taken become extremely important however what is exactly an explainable machine learning model is still an open debate in explainable machine learning can play a crucial role in fact seen from another perspective explaining machine learning decisions can keep humans in the loop in two ways giving the last word to humans and explaining what data sources are responsible for the decision in the case the decision power is left in the hand of very specialized professionals that use machines as advisers this is a clear case of ai yet this is to highly specialized knowledge workers in some area the second case instead is fairly more important in fact machines that take decisions or work on a task are constantly using knowledge extracted from data spotting which data have been used for a decision or for a action of the machine is very important in order to give credits to who has produced these data in general data are produced by anyone and everyone not only by knowledge workers hence understanding why a machine takes a decision may become a way to keep everybody in the loop of intelligence convergence between symbolic and distributed knowledge representation explaining machine learning decisions is simpler in image analysis o better in all those cases where the system representation is similar what is represented in fact for example neural networks interpreting images are generally interpreted by visualizing how subparts represent salient subparts of target images both input images and subparts are tensors of real numbers hence these networks can be examined and understood however large part of the knowledge is expressed with symbols both in natural and languages combination of symbols are used to convey knowledge in fact for natural languages sounds are transformed in letters or ideograms and these symbols are composed to produce words words then form sentences and sentences form texts discourses dialogs which ultimately convey knowledge emotions and so on this composition of symbols into words and of words in sentences follow rules that both the hearer and the speaker know hence symbolic representations give a clear tool to understand whose knowledge is used in machines in current intelligence systems symbols are fading away erased by tensors distributed representations distributed representations are pushing deep learning models towards amazing results in many tasks such as image recognition image generation and image captioning machine translation syntactic parsing and even game playing at human level there is a strict link between distributed representations and symbols the being an approximation of the second the representation of the input and the output of these networks is not that from their internal 
For the paradigm we propose, this strict link is a tremendous opportunity to track how symbolic knowledge flows in the knowledge lifecycle. In this way, symbolic knowledge producers can be rewarded for their unaware work.

A simple proposal for a better future

A peasant of past centuries would have never imagined that, today, yoga trainer, pet caretaker, and ayurveda massage therapist (just to cite technology-unrelated jobs) are common jobs. It is also extremely likely that any wise politician of that period had the same lack of imagination, even though they had more time to spend imagining the future and less pressure from job loss. Today we are in a similar situation, but we have a complication: the speed of the AI revolution. As it was for those peasants and politicians, we can hardly imagine what is next on the job market. We can see some trends, but it is hard to imagine exactly what skills are needed for being part of the labor force of the future. Yet the AI revolution is overwhelming and risks eliminating many jobs in the near future. This may happen before our society envisages a clear path for relocating workers. We urge a strategy for the immediate future.

The artificial intelligence revolution is based on an enormous knowledge theft: skilled and unskilled workers do their own everyday jobs and leave important traces. These traces are the training examples that machines can use to learn. Hence, artificial intelligence using machine learning is stealing these workers' knowledge by learning from their interactions. These unaware workers are basically digging the graves for their own jobs. The knowledge produced by workers and used by machines is going to produce revenues for machine owners for years. This is a major problem, since only a very small fraction of the population can benefit from this revenue source, and the real owners of the knowledge are not participating in this redistribution of wealth.

The model we propose seeks to give back part of the revenues to the unaware knowledge producers. The key idea is that any interaction a machine does has to constantly repay whoever has produced the original knowledge used for that interaction. To obtain repayment we need to work on a major issue: determining a clear knowledge lifecycle which performs a complete tracking of the knowledge, from its initial production to the decision processes of the machine. Hence, we need to promote intelligence models that are explainable and that track back to the initial training examples that originated a decision. In this way it is clear why the decision is made and who has to be rewarded with a fraction of the revenue that the decision is producing (a minimal illustrative sketch of such a tracking-and-repayment loop is given at the end of this subsection).

Managing ownership of knowledge poses big technological and moral issues, and it is certainly more complex than simply using knowledge while forgetting what the source is. Each interaction has to be tracked and assigned to an individual. Hence the issues are two: first, a clear identification of people on the web is mandatory; second, privacy can become an overwhelming legal issue. Finally, to pursue this paradigm as an ecosystem for fair intelligence solutions, we need to invest in the following enabling technologies and legal aspects: explainable artificial intelligence, which is a must because, in order to reward knowledge producers, systems need to know exactly who is responsible for a decision; symbiotic symbolic and distributed knowledge representation models, which are needed as a large part of knowledge is expressed with symbols; trusted technologies, as the knowledge lifecycle should be clear and correctly tracked; virtual identity protocols and mechanisms, because systems need to know exactly who has to be rewarded; privacy-preserving protocols and mechanisms, as, although systems need to know who should be rewarded and why, privacy should be preserved; and the study of extensions of copyright to unaware knowledge production, which can be the legal solution to safeguard the unaware knowledge producers.
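The tracking-and-repayment loop referred to above can be illustrated with a minimal sketch. This is not a specification from the paper: the ledger class, the attribution input, the revenue split, and the user identifiers are hypothetical placeholders for whatever explainable model and identity infrastructure a real system would use.

```python
# Illustrative sketch of a knowledge-lifecycle ledger: every machine decision
# is attributed to the training interactions it used, and a fraction of the
# revenue it generates is credited back to the people who produced them.
from collections import defaultdict

class KnowledgeLedger:
    def __init__(self, producer_share: float = 0.3):
        self.producer_share = producer_share   # fraction of revenue returned
        self.contributions = {}                # example_id -> producer_id
        self.balances = defaultdict(float)     # producer_id -> credit earned

    def register_contribution(self, example_id: str, producer_id: str) -> None:
        """Record who produced a training interaction (assumes a verified identity)."""
        self.contributions[example_id] = producer_id

    def settle_decision(self, used_example_ids, revenue: float) -> None:
        """Split the producers' share of this decision's revenue among the
        producers of the training examples an explainable model reports as used."""
        if not used_example_ids:
            return
        share = self.producer_share * revenue / len(used_example_ids)
        for example_id in used_example_ids:
            producer = self.contributions.get(example_id, "unknown")
            self.balances[producer] += share

ledger = KnowledgeLedger()
ledger.register_contribution("review_001", "user_17")
ledger.register_contribution("review_002", "user_42")
# An explainable model reports which examples supported a paid recommendation:
ledger.settle_decision(["review_001", "review_002"], revenue=1.0)
print(dict(ledger.balances))  # {'user_17': 0.15, 'user_42': 0.15}
```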
Conclusions

Job market contraction is the dark side of the shining future promised by artificial intelligence systems. Unaware skilled and unskilled knowledge workers are digging graves for their own jobs by passionately doing their normal everyday work: learning AI systems are extracting knowledge from their interactions. This is a gigantic knowledge theft of the modern era. In this paper we proposed a fairer approach to artificial intelligence. As modern Robin Hoods, researchers should work for a fairer intelligence that gives back what it steals. As skilled and unskilled workers are producing the knowledge on which artificial intelligence is making its revenues, we need to give back a large part of it to its legitimate owners.
Test ideals in rings with finitely generated anti-canonical algebras

Alberto Chiecchio, Florian Enescu, Lance Edward Miller, and Karl Schwede

Abstract. Many results are known about test ideals and F-singularities for $\mathbb{Q}$-Gorenstein rings. In this paper we generalize many of these results to the case when the symbolic Rees algebra $\mathcal{O}_X \oplus \mathcal{O}_X(-K_X) \oplus \mathcal{O}_X(-2K_X) \oplus \cdots$ is finitely generated (or, more generally, in the log setting, for $-K_X - \Delta$). In particular, we show that the F-jumping numbers of $\tau(X, \mathfrak{a}^t)$ are discrete and rational. We show that the test ideals $\tau(X)$ can be described by alterations, and hence that splinters are strongly F-regular in this setting, recovering a result of Singh. We demonstrate that multiplier ideals reduce to test ideals under reduction modulo $p \gg 0$ when the symbolic Rees algebra is finitely generated. We prove that the expected stabilization of images of Frobenius trace maps still holds. We also show that test ideals satisfy global generation properties in this setting.

Key words and phrases: anticanonical, test ideal, multiplier ideal. The fourth named author was supported in part by an NSF FRG grant, an NSF CAREER grant, and a Sloan Fellowship.

Introduction

Test ideals were introduced by Hochster and Huneke in their theory of tight closure within positive characteristic commutative algebra. After it was discovered that test ideals are closely related to multiplier ideals, a theory of test ideals of pairs was developed, analogous to the theory of multiplier ideals. However, unlike multiplier ideals, test ideals were initially defined even without the hypothesis that $K_X$ is $\mathbb{Q}$-Cartier (there is also a similar theory of multiplier ideals without this hypothesis). But the $\mathbb{Q}$-Cartier hypothesis on $K_X$ is useful for test ideals, and indeed a number of central open questions are still unknown without it. The goal of this paper is to generalize results from the hypothesis that $K_X$ is $\mathbb{Q}$-Cartier to the setting where the local section ring $\mathcal{O}_X \oplus \mathcal{O}_X(-K_X) \oplus \mathcal{O}_X(-2K_X) \oplus \cdots$, also known as the symbolic Rees algebra, is finitely generated.

Most notably, perhaps the most important open problem within tight closure theory is the question whether weak and strong F-regularity are equivalent, or more generally whether splinters and strongly F-regular rings coincide. From the characteristic zero perspective, splinters, weak F-regularity, and strong F-regularity are competing notions of singularities analogous to KLT singularities; these are known to be equivalent under the hypothesis that $K_X$ is $\mathbb{Q}$-Cartier, and under some other conditions. Previously, Singh announced a proof that splinters whose anti-canonical algebra $\mathcal{R}$ is finitely generated are strongly F-regular. We recover a new proof of this result and in fact show something stronger: we prove that the big test ideal is equal to the image of a construction involving alterations.

Theorem A. Suppose that $X$ is a normal F-finite integral scheme and that $\Delta \geq 0$ on $X$ is an effective $\mathbb{Q}$-divisor such that $\mathcal{S} = \mathcal{R}(X, -K_X - \Delta)$ is finitely generated. Then there exists an alteration $\pi \colon Y \to X$ from a normal $Y$, factoring through
ideals by at least after observing remark inspired by the analog with multiplier ideals there has been a lot of interest in showing that the jumping numbers of test ideals are rational and without limit points at this point we know that the f numbers are discrete and rational for any f scheme x with kx qcartier we also know discreteness if r is finitely generated and x spec r is the spectrum of a graded ring on the other hand we know that the jumping numbers of j x at are discrete and rational if r is finitely generated see remark we prove the following theorem b theorem proposition suppose that r is a pair such that r is finitely generated then for any ideal a r the f numbers of r at are rational and without limit points we prove the discreteness result in two ways first pass to the local section ring symbolic rees algebra where the pullback of is we then show that the test ideal of the symbolic rees algebra restricts to the test ideal of the original scheme alternately in section we prove the discreteness result for projective varieties by utilizing the theory developed by the first author and urbinati in particular we show a global generation result for test ideals theorem which immediately implies the test ideal result another setting where the hypothesis is used is in the study of ideals recall that if r is f normal and with index not divisible by p then it follows from that the images of the map homr r r stabilize for sufficiently large this stable image gives a canonical scheme structure to the locus of a variety we generalize this to the case that r is finitely generated which includes the case where the index of is divisible by p theorem c corollary theorem theorem suppose that r is an f normal domain and that b is a with not divisible by if the algebra r is finitely generated then the image of the evaluation at map homr r pe b r r stabilizes for e sufficiently divisible test ideals and algebras again we give several different proofs of this fact utilizing different strategies as above finally we also show theorem d theorem suppose that x is a normal variety over an algebraically closed field of characteristic zero further suppose that is a such that r is finitely generated and also suppose that a ox is an ideal and t is a rational number then j x at p xp atp for p this should be compared with where the analogous result is shown under the hypothesis that kx is numerically this numerically condition is somewhat orthogonal to the finite generation of r in particular if r is finitely generated and is numerically then it is not difficult to see that is see also remark a previous version of this paper included an incorrect statement in lemma this version which also corrects the published version fixes the statement by making it weaker fortunately we only needed the weaker statement in all our applications acknowledgements the authors would like to thank tommaso de fernex christopher hacon nobuo hara mircea and anurag singh for several useful discussions we would also like to thank juan felipe for several useful comments on a previous draft of this paper finally we would like to thank the referee for numerous valuable comments and for pointing out a mistake in a lemma previously lemma now removed preliminaries in this section we recall the basic properties that we will need of test ideals local section as well as the theory of positivity for divisors as developed by urbinatichiecchio we conclude by stating a finite generation result for local section rings of threefolds in positive 
characteristic as a consequence of recent breakthroughs in the mmp setting throughout this paper all rings will be assumed to be noetherian of equal characteristic p and f which implies that they are excellent and have dualizing complexes all schemes will be assumed to be noetherian f separated and have dualizing complexes for us a variety is a separated integral scheme of finite type over an f field for any scheme x we use f x x to denote the absolute frobenius morphism we also make the following universal assumption q q f x x this holds for all schemes of essentially finite type over an f field or even of essentially of finite type over an f local ring frequently we will also consider divisors on schemes x whenever we talk about divisors on x we make the universal assumption that x is normal and integral in particular whenever we consider a pair r or x then r or x is implicitly assumed to be normal we make one remark on some nonstandard notation that we use if r is a normal domain and d is a weil divisor on x spec r then we use r d to denote the fractional ideal h x ox d k r called divisorial symbolic rees algebras in the commutative algebra literature chiecchio enescu miller and schwede test ideals and f we now recall the definitions and basic properties of test ideals while test ideals were introduced in we are technically talking about the test ideal from the particular definition of the test ideal presented here can be found in definition among other places definition test ideals suppose that r is an f normal domain is a a r is a ideal sheaf and t is a real number the test ideal x at is the unique smallest nonzero ideal j r such that for every e and every e homr r pe r homr r r we have that j j if then we leave it out writing x at if a r or t then we write r it is not obvious that the test ideal exists however pit can p be shown that there exists c r such that for each d r we have that c e dr where varies e over p homr r pe r see lemma this element c is then called a big r at element we then immediately obtain the following construction of the test ideal lemma with notation as in definition if c is a big r at element then xx r at cr e ranges over elements of p e also range over elements of homr r pe r where again one may homr r pe r alternately one may replace e with e finally we also have that for any sufficiently large cartier divisor d that x e tre ox kx pe kx d r at proof for the first statement it is easy to see that c is contained in any ideal satisfying e the condition j j for all p homr r pe r hence so is the sum thus the sum is the smallest such ideal p e e for the second statement replacing p with obviously we have the containment notice that if c is a test element then so is dc for any d hence pe for all the one can form the p original sum with cd for some d so that da inclusion follows for the e statement notice that if j is the sum for e then we still have j j the final characterization of the test ideal follows immediately from the fact that e e ox kx pe kx divx c ox h omox ox ox we notice that any difference coming from the fact that we round down instead of round up can be absorbed into the difference between d and divx c we also recall some properties of the test ideal for later use lemma suppose that r at is as in definition then a the formation of r at commutes with localization and so one can define x at for schemes as well test ideals and algebras b if s t then r as r at c for any t there exists an so that if s t t then x at x as d if f r and h v f is the 
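A small worked instance of the divisorial notation just introduced may be useful; it is an illustration added here, not taken from the paper, and it treats only the easy Cartier case. Take
\[
R = k[x,y], \qquad X = \operatorname{Spec} R, \qquad D = \operatorname{div}(x) = V(x).
\]
Then, with $R(D) = H^0(X, \mathcal{O}_X(D)) \subseteq K(R)$ as above,
\[
R(D) = \{\, f \in K(R) : \operatorname{div}(f) + D \geq 0 \,\} \cup \{0\} = x^{-1}R,
\qquad R(nD) = x^{-n}R \ \text{for } n \geq 0,
\]
so the associated divisorial (symbolic) Rees algebra is
\[
\bigoplus_{n \geq 0} R(nD)\,t^n \;\cong\; R[x^{-1}t] \subseteq K(R)[t],
\]
which is visibly finitely generated. Here $D$ is Cartier (even principal), so finite generation is automatic; the situation of interest in this paper is instead a Weil divisor such as $-K_X$ that need not be $\mathbb{Q}$-Cartier, where finite generation of the symbolic Rees algebra is a genuine hypothesis.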
corresponding cartier divisor then f x at x at ox x h at proof part a follows immediately from lemma part d follows similarly use the projection formula part b is obvious also from lemma for part c this is exercise let us quickly sketch the proof since we do not know of a reference where this is addressed in full generality choose c a test element p it is p easy to choose c that works for all all s t t we then write r at cdr for d some element in a this sum is a finite sum say for e to e let then r at m x x cdr e now runs over homr r pe r then see that pe t pe for e r at m x x m x x e where we m x x m and so e p r the other containment was handled in b finally we make one more definition related to test ideals definition a triple x at as in definition is called strongly f if r at we briefly also recall some formalities of maps and connections with divisors lemma suppose that x spec r is an f normal scheme a there is a bijection between effective divisors such that pe kr and elements of homr r r modulo by units we use the following notation for this correspondence b if homr r r corresponds to a then the map d c corresponds to the divisor div d if homr r r and e r r then d using the bijection of a if is any effective then the elements of homr r pe homr r r modulo multiplication by units are in bijection with divisors with and of course with pe kr proof a is just theorem b is a straightforward exercise see exercise c is theorem e d is also not difficult to check see for instance definition chiecchio enescu miller and schwede local section rings of divisors symbolic rees algebras suppose that x is an f normal integral noetherian scheme and is a on x then one can form m s r x ox additionally for any integer n we use s n r x to denote the nth veronese subalgebra note that there is a canonical map ox s and dually a map spec s x of schemes note s may not be noetherian if s or equivalently y spec s is noetherian then we also have proj s x these maps are very well behaved outside of codimension we recall that the map proj s x is called the of it is a small f projective morphism y x such that the strict transform is and f moreover such a map exists if and only if s is noetherian lem when s is finitely generated both spec s and proj s are normal see for instance and also see lemma suppose that s is finitely generated and w x is a closed subset of codimension then w and w are also codimension in spec s and proj s respectively additionally is an isomorphism outside a closed codimension subset of x and if is integral then is an outside a set of codimension as a consequence if d is any on x then we have canonical pullbacks d and proof since s is a symbolic rees algebra of a module of rank the map proj s x is small lem the case for can be verified locally on x we begin under the assumption that is integral let u spec r x and s s u suppose that w has a codimension component whose support is defined by a prime height ideal q in by lemma q r is height zero or but this is impossible since it defines a subset of w a set of codimension for the case when is not integral we observe that the result holds for the veronese algebra s n r x for sufficiently divisible but s is a finite s n algebra see the proof of lemma and the result follows the fact that is a bundle at least outside a set of codimension follows immediately from the fact that in that case is an integral cartier divisor and so the section ring s looks locally like ox t outside a set of codimension remark when x is separated the pullback coincides with 
the pullback of de fernex and hacon see remark we will be very interested in proving that various section rings are finitely generated and so recall lemma with notation as defined at the start of section a s is finitely generated if and only if s n is finitely generated for some equivalently any n b suppose that g y x is a finite dominant map from another normal integral noetherian scheme y let t r y then t is finitely generated if s is finitely generated proof part a is exactly lemma although it can also be found in numerous other sources for b we do not know a good reference but we sketch a proof here by a we may assume that is integral it is also harmless to assume that x spec a is test ideals and algebras affine and hence so is y spec b then we can pass to the category of commutative rings so that s and t are actually rings and not sheaves of rings in particular we suppress all notation that we might otherwise need we have the diagram a s first choose a single element c a so that s a t and t b t here we identify the ts it follows that s c t c is finite and hence integral and k s k t is a finite extension as well let t to be the integral closure of s inside t we want to show that t t which will complete the proof recall we already assumed that is integral let w x be a closed set of codimension outside of which is cartier consider the functor h x w applied to all of the rings or sheaves of rings involved as t is a direct sum of reflexive oy h x w t h y w t is just the global sections of t by hartog s lemma for reflexive sheaves thus h x w t is identified with t since x and y are affine on the other hand w is a codimension subset of spec s outside of which t and t obviously agree hence h x w t h x w t h spec s w t but t is normal and so we also have that h spec s w t t we have just shown that t t as desired we also will need to understand the canonical divisors of spec s and proj lemma continuing with notation from the start of section assuming that s is finitely generated then kproj s kx if additionally is a weil divisor then locally on the base we have that kspec s kx in particular if b then kspec s so that kspec s b thus if then s is proof recall that is an outside a set of codimension and hence makes sense the computation of kspec s can be found in theorem the initial statement that kproj s kx is obvious since is small positivity for divisors in this section we will recall some definitions and results of let us recall that if f y x is a morphism of schemes a coherent sheaf f on y is relatively globally generated or f generated if the natural map f f f is surjective if y is a normal scheme and d is a weil divisor on y it might be that for example oy d is f generated but oy is not to account for such pathologies we have to work asymptotically we will say that a d is relatively asymptotically globally generated or f if oy md is f generated for all positive m sufficiently divisible let f y x be a projective morphism of normal noetherian schemes a divisor d on x is f if for every f a on y d a is f if x spec k we will simply say that d is nef def the d is f if for every ample a on y there exists b such that bd a is f and the algebra of local sections r x d is finitely generated if x spec k we will say that d is ample def notice that when d is these notions coincide with the usual ones of nefness and amplitude we remark that amplitude for weil divisors called chiecchio enescu miller and schwede is given by two conditions a positivity one which is based on the fact that the regular ample cone 
is the interior of the nef cone and a technical one on the finite generation of the algebra of local sections these two conditions are independent in particular there are examples of weil divisors a satisfying the positivity condition but with algebra of local sections r x a not finitely generated example these notions of positivity behave very much like in the world for example if a is an ample weil and d is a globally generated divisor d a is ample lem i lemma let e be a on a normal noetherian projective scheme x over a field if the algebra of local section r x e is finitely generated then there exists a cartier divisor l such that l e is an ample weil divisor proof notice that n l e is ample for n if and only if l e is ample lem b so without loss of generality we can assume that r x e is generated in degree and that e is integral let h be an ample cartier divisor by definition there exists m such that ox mh e is globally generated there is a surjection ox e ox e ox ne where the last equality is a consequence of the assumption on the finite generation of the algebra of local sections thus for each n ox n is globally generated that is mh e is asymptotically globally generated moreover since h is cartier r x mh e is also finitely generated by lem i mh e h m h e is ample the main characterization of the above positivity is in terms of their let x is a normal projective noetherian scheme over an algebraically closed field k and let d be a with s r x d finitely generated let proj s x notice that d d is then d is if and only if so is d theorems and using this characterization urbinati and the first author proved fujita vanishing for locally free sheaves cor let x be a normal projective noetherian scheme over an algebraically closed field k let a be an ample on x and let f be a locally free coherent sheaf on x there exists an integer m a f such that h i x f ox ma d for all positive m divisible by m a f all nef cartier divisors d and all i pullback of weil divisors let f y x be a proper birational morphism of normal noetherian separated schemes in de fernex and hacon introduced a way of pulling back a weil divisor on x via f for any weil divisor d on x the of d along f denoted by f d is the weil divisor on y such that oy d ox oy def the negative sign appearing is so that when d is effective we are pulling back the ideal defining it as a subscheme the pullback of d along f is f d lim inf m f m d f md lim m m m test ideals and algebras the above is the infimum limit over m is a limit over m and an lem and def moreover the above definition of f coincides with the usual one whenever d is prop remark if f y x is a small projective birational morphism then f d d this notion of pullback is not quite functorial unfortunately let f y x and g v y be two birational morphisms of normal noetherian separated schemes and d be a weil divisor on x the divisor f g d f d is effective and moreover if ox oy is an invertible sheaf f g d f d lem lemma let x be a normal noetherian scheme let d be a weil divisor on x such that s r x d is finitely generated let x proj s x and let d then ox md ox ox md for all positive m sufficiently divisible in particular ox md ox is reflexive for m sufficiently divisible proof since d is see lemma oy md is generated for all positive m sufficiently divisible that is the natural map ox md ox md is surjective for positive m sufficiently divisible since is small ox md ox md for all integers m this is for a proof see lemma thus for all positive m sufficiently divisible we have a surjection 
ox md ox md notice that ox ox md is isomorphic to the quotient of ox md by its torsion caution since ox md is torsion free the above surjection induces a surjection ox ox md ox md on the other hand since is small for all integers m ox md ox ox md since ox ox md is we have a natural inclusion ox ox md ox md lemma let x be a normal noetherian separated scheme let d be a weil divisor on x such that r x is finitely generated and let proj r x x let f g y x be any birational morphism factoring as y proj r x x with y a normal noetherian separated scheme then for any positive m sufficiently divisible f md md md therefore f d d proof this is an application of lemma which we now explain consider the following chain of equalities the first and last equalities are by definition and the third is by lemma since m is sufficiently divisible oy md ox oy ox ox oy ox md oy oy md this proves the first statement the final statement is a consequence of the fact that d for m sufficiently divisible by our finite generation hypothesis chiecchio enescu miller and schwede g f lemma suppose we have a composition of birational morphisms f y between normal varieties and d is a weil on x then f d f proof to check the identity it suffices to show orde f d orde f d for each prime divisor e on y the generic point of each prime divisor e on y gives rise to a dvr ae me for any sufficiently divisible positive integer m set md the power of me agreeing with ox ae so by definition orde f d lim md a similar calculation computes f d on y finally as g is birational f d keeps the same coefficients on divisors that are not contracted by g thus orde f d orde f d for each prime divisor e on y as desired we define multiplier ideals in a way which is a slight generalization of the one of definition let x be a normal variety an algebraically closed q aover k field of characteristic zero an divisor and i jk a formal product of fractional ideal sheaves the collection of the data of x and i will be called a triple q and it will be denoted by x i we say that the triple is effective if i jkak where all the jk s are ideals and ak for all remark notice that we do not assume that kx is remark a triple is effective if and only if x i is an effective pair in the sense of definition definition let x i be an effective triple and let m be a positive integer let f y x be a log resolution q of the pair x ox kx i definition and theorem let i jkak be a formal product and oy jk ox we define the sheaf x ak gk jm x i oy f m kx m remark the reason for this new notation is that our notation is slightly more general than the one of in particular de fernex and hacon did not include a boundary divisor term this might cause some confusion since the reader might think one could absorb the divisor into the ideal i indeed what is a divisor but a formal combination of height ideals unfortunately this does not yield the same object and in particular does not yield the usual multiplier ideal even when kx is the difference is that asymptotics are already built into kx whereas no asymptotics are built into i in in particular let i denote the formal product of ideals corresponding to in the obvious way then in general jm x i jm x i i we have x ak gk jm x i i oy f mkx f m x oy f mkx f ak gk m m x ak gk oy f mkx m jm x i here the first containment is lemma and the second is a consequence of remark test ideals and algebras lemma let x i be an effective triple the sheaf jm x i is a coherent sheaf of ideals on x and its definition is independent of the choice of f proof the 
proof proceeds as in the proof of lemma let x i be an effective triple the set of ideal sheaves jm x i has a unique maximal element proof for any positive integers m q jm x i jmq x i by remark by the previous lemma the two ideals can be computed on a common resolution the unique maximal ideal exists by noetherianity definition let x i be an effective triple we will call the unique maximal element of jm x i the multiplier ideal of the triple x i and we will denote it by j x i remark in the case when we write j x i j x i and then our definition agrees with the one in corollary working in characteristic zero suppose r is finitely generated and x proj r just as before let a be any ideal sheaf on x then we have j x at j x a ox t proof let projx r x d x x and it is enough to show that for every m satisfying the result of lemma j x ti ox jm x ti q let f y x be a log resolution of x i factoring through x let i jkak and let jk oy o since oy i oy ox i f is a log resolution of x i let g y x since x is a log pair the multiplier ideal j x ti ox is x tak gk j x ti ox oy g kx on the other hand for each m the multiplier ideal jm x ti is x jm x ti oy f m kx tak gk m for each m satisfying ox m ox ox m by lemma f m kx g m kx g m kx mg kx therefore for each m satisfying ox m ox ox m x tak gk jm x ti oy f m kx m x tak gk oy kx x tak gk oy kx j x ti ox remark with the assumptions of corollary it follows immediately that the jumping numbers of j x at are rational and without limit points recall that the chiecchio enescu miller and schwede jumping numbers are real numbers such that j x j x for any it also follows that try j x at image oy kx ox y where runs over all alterations factoring through x x such that aoy oy is invertible finite generation of local section rings for threefolds in characteristic p of course one might ask how often it even happens that a section ring r x d is finitely generated for rational surface singularities of any characteristic it is known that d is always locally torsion in the divisor class group see theorem and so obviously r d is finitely generated however for threefolds rational singularities are not enough by an example of cutkosky even if they are additionally log canonical of course in characteristic zero the finite generation of these section rings holds for klt x of any dimension by the minimal model program theorem and of course is closely linked with the existence of flips using the recent breakthroughs on the minimal model program for threefolds in characteristic p one can prove finite generation of r x d in some important cases again in dimension characteristic p the proof is essentially the same as it is in characteristic zero see exercises and but we reproduce it here for the reader s convenience theorem let x be a klt pair of dimension with kx over an algebraically closed field k of char p then for any d the algebra r x d is finitely generated proof let x x be a small of x which exists by theorem b b to be the strict transforms of and d on b then we notice that set and d b d b is b b d b is klt for m by theorem since k b x x big over x we see that m b d b oxb n kxb is finitely generated since is small this implies that m ox n kx d is finitely generated as well since the algebras are the same however by taking a high veronese and recalling that kx l is and so locally contributes nothing to finite generation we conclude that ox nd is finitely generated as desired of course this also implies that strongly f pairs have finitely generated local section algebras since they 
are always klt for an appropriate boundary by stabilization discreteness and rationality via rees algebras in this section we aim to prove discreteness and rationality of jumping numbers of test ideals as well as stabilization results under the hypothesis that the algebra s r is finitely generated we first notice that we can extend maps on r to maps on note that this argument is substantially simpler than what the fourth author and tucker did to obtain similar results for finite maps in test ideals and algebras lemma suppose that r is an f normal domain d is a weil divisor on spec r with associated algebra s r d then for any map r r we have an induced map s s and a commutative diagram r s r where is the projection map onto degree zero s proof first note that we give s a z structure so that our induced map will be homogeneous the idea is then simple given an integer i s i r ipe d we want to show that r ipe d r id but this is obvious since it holds in codimension and all the sheaves are reflexive finally we simply have send s to zero if i is not divisible by pe this completes the proof in fact it is not difficult to see that every homogeneous map on s comes from r in this way lemma suppose r is an f normal domain d a weil divisor on spec r and s r d suppose we have a homogeneous map s s again we give s the z then is induced from r r as in lemma proof choose z s i r pe id invert an element u r to make d cartier e and principal and then z f y p where y generates r id and the element f r s is in degree zero in s we see that e ump z f um p e e y p r pe id e e e e hence um z ump z f um p y p f um p the point is that we can choose the same y regardless of the choice of z hence is completely determined by lemma is key in the following proposition which lets us relate maps in general on r and proposition suppose that r is an f normal domain d is a weil divisor and the algebra s r d is finitely generated and in particular an f noetherian ring further suppose that g is an effective weil divisor on spec r with pullback g gs on spec then we have a commutative diagram homs s gs s homs f e s s es homr r g r homr f e r r er chiecchio enescu miller and schwede here the map is projection onto the coordinate s r and is the map which restricts homs s s to r s and then projects onto s furthermore the maps and are surjective proof we first handle the commutativity given homs s s we see that es on the other hand er as well hence we have commutativity of the right square the commutativity of the left square is obvious since gs is pulled back from spec r to see that is surjective for any homr r r construct as in lemma obviously similarly lemma implies the surjectivity of the map as an immediate corollary we obtain a stabilization result similar to corollary suppose that r is an f normal domain and that b is a weil divisor if the algebra r b is finitely generated then the image of the map homr r pe b r r stabilizes for e proof set s r b and consider the diagram of proposition since ks b is cartier we see that the images of eval ese homs s pe b s s stabilize see for instance but then since as in proposition surjects we see that the image of eval homs s pe b r homr r pe b r r ee r but the image of is the coincides with that of homr r pe b r same as the image of ee s homs s pe b s however the ese have stable image as we have already observed and the result follows later in theorem we will obtain the same result for whose is not divisible by for now though we move on to discreteness and rationality of f numbers 
generalizing theorem from the case of a graded ring theorem suppose that r is a normal domain and is an effective such that r is finitely generated then for any ideal a r the f numbers of r at are rational and without limit points proof first let r be a separable extension of normal f domains corresponding x spec r such that is an integral divisor this to a map of schemes spec x is easy the idea is to simply take roots of generators of dvrs if one has to take a pth root use type equations see lemma let tr k x k x t be the trace map and then recall that tr x ar x at by the main result of it immediately follows that if the f numbers of test ideal x at are discrete and rational so are the f numbers of x at additionally by adding a cartier divisor h to we can assume that is effective since x h at x at ox by lemma d finally note that test ideals and algebras and so r is finitely generated by lemma the upshot of this entire paragraph is of course that we may now without loss of generality assume that is an integral effective divisor next choose c r that is a test element for both r and s r the choice of such a c is easy simply choose a test element so that additionally is cartier on x v c away from v c s looks locally like r t which will certainly be strongly f over wherever r is strongly f let h be the cartier divisor corresponding to c and consider the commutative diagram as p e homs s pe h s e eval s p homs s s homr r pe h r homr r r eval the sum over e of the images of the bottom rows is equal to r at and the sum over e of the images of the top row is equal to s as t since surjects by proposition we immediately see that s as t r at but now observe that ks by lemma but then the f numbers of s as t are discrete and rational by the result follows we immediately obtain the following using the aforementioned breakthroughs in the mmp corollary suppose that r is strongly f of dimension and of finite type over an algebraically closed field of characteristic p then the f numbers r at are rational and without limit points for any choice of and ideal a proof since r is strongly f there exists a divisor so that kr is qcartier and so that r is klt by the result then follows from theorem and theorem of course we also obtain discreteness and rationality of f numbers r at for any r a ring of finite type over an algebraically closed field k of characteristic p such that there exists a so that r is klt a more general type result in corollary we used a compatibility of the formation of rees algebras to prove that the images of homr r r r stabilize for large e if s r is finitely generated in this short section we generalize this result to the case of at least whose is not divisible by as an alternate strategy one could try to prove compatibilities analogous to proposition for rees algebras of unfortunately this gets quite messy instead we take a different approach utilizing proj we first prove the result for varieties and then we handle the finitely generated case via the small map x x we do restrict ourselves to the case where the of kx is not divisible by we realize that the methods we discuss here can apply to more general situations but there are then several potential competing definitions for what the stable image should be proposition suppose that r is a pair such that kr is suppose that the weil index of kr is not divisible by p and that pe kr is an integral weil divisor then e ne r image homr r pne r r stabilizes for large chiecchio enescu miller and schwede proof fix m so that m kr is a cartier 
divisor the main idea is that module homr r pne r only takes on the values of finitely many sheaves at least up to twisting by line bundles in particular multiples of r m kr we also take advantage of the fact that it is sufficient to show that the images stabilize partially up the chain claim fix and consider n then e ne homr r pne r r factors through e e homr e r e r hence it is sufficient to show that the images of homr r pne r in homr e r e r stabilize proof of claim one simply notices that p e e pne ne and hence r pne contains r e ization occurs simply by restriction of scalars e thus the claimed we continue on with the main proof note that pt mod m is eventually periodic then choose a linear function a for c r c r z such that a e mod m is constant set m r pre mod m kr r e mod m kr and note that for any a a e r a e r homr a e r a e kr a e a e r a e mod m kr r kr a e a e m r kr by inverting an element of r if necessary we may assume that m kr thus by utilizing this we have maps t a e a e m t e t e if these maps are frobenius pushforwards of each other ta or at least up to a unit then we can apply the standard theorem e m but this may be checked in to conclude that the images stabilize in codimension since all sheaves are reflexive and so maps between them are determined in codimension however after localizing to reduce to codimension all the complicated twisting we have done is irrelevant furthermore in codimension r is gorenstein with kr and is with index not divisible by p since its weil index was not test ideals and algebras divisible by p our chain of maps then just turns into a e homr homr e r e r o r a e r o a e a e kr a e o tr e e kr e o e p e tr c e p c e the bottom horizontal map is then obtained via e p e e p c e tr c e p c e note the inclusion can be identified with multiplication by a defining equation for p c e p e pce this is independent of a and so the maps in our chain are really the same up to pushforward as claimed note that this completes the proof even though we only proved stabilization of images for a subset of ne these images are descending and our subset is infinite now we are in a position to prove corollary in the more general situation theorem suppose that r is an f normal domain and that b is a with not divisible by if the algebra r b is finitely generated then the image of the evaluation at map homr r pe b r r stabilizes for e sufficiently divisible proof for this proof we will phrase our maps in terms of the trace r pe kr b we thus fix an e so that pe kr b is an integral weil divisor let x proj r b x spec r be as before we observe that b is and also still has not divisible by p by lemma hence the images e ox p e kx b ne ox pne kx b ox pe kx b ox stabilize by proposition in fact the same argument even shows that the images even stabilize in any finite stage such as in ox pne kx b however the terms and maps in this chain take on finitely many values up to twisting by large cartier multiples of kx b as argued in proposition our goal is to thus show that these images stabilize after pushing forward by chiecchio enescu miller and schwede claim if one applies to obtaining ox kx b ox kx b ox then the chain of images in ox ox still stabilizes proof of claim choose d so that e ox kx b ox e kx b image is equal to the stable image which we denote by for all n note by proposition there are finitely many conditions observe that there are only finitely many up to twisting by large multiples of kx b the fact that kx b is ample implies that there exists an so that for any 
n e ox pne kx b ox p e kx b image e ox p e kx b image e ox pne kx b ox p e kx b image e but the map factors through e ox kx b ox e kx b e ox p e kx b ox p e kx b which has image e by our assumption that is the stable image applied e to the choice of n n it follows that e n and so by composition e surjects for all e surjects for every n and e every c d note n does not depend on c thus since e ox p e kx b e is the image of we see that e ox pne kx b ox p e kx b image e ox pne kx b ox p e kx b image for all n and all c this clearly proves the desired stabilization we return to the proof of theorem but this is trivial once we observe that ox pne kx b ox pne kx b since is small hence the proof is complete stabilization and discreteness via positivity in the previous section we showed the discreteness and rationality of f numbers via passing to the local section algebra a symbolic rees algebra where we already knew discreteness and rationality in this section we recover the same discreteness result in the projective setting by using the methods of which allow us to apply asymptotic vanishing theorems to weil divisors indeed we first prove global generation results for test ideals by employing similar methods to setting let x be a normal projective variety of characteristic p an effective weil a an ideal sheaf on x and t q we make no assumptions about kx being test ideals and algebras assume g is a line bundle such that there are global sections xm h x a g which globally generate a g and then let symc xm denote the cth symmetric power of the vector space xm i observe that symc xl h x ac g c globally generates ac g c thus we have a surjection of sheaves symc xt g ac lemma if the of is not divisible by p and t with p not dividing b then there is a cartier divisor h and a finite set of integers es such that pei is integral pei t z and x at equals s x image e i symt p ei xm g p at p ei ei ox pei kx h trei ox ox pei kx h at some level this result is obvious the only technicalities involve showing that the various rounding choices we make all give the same result in the end since we can absorb any differences into the test element a local generator of h we include a complete proof but we invite the reader to skip over it if they are already familiar with this type of argument proof the statement in the end is local and so trivializing g it suffices to show that x at s x trei at p ei ox pei kx h pick an effective cartier divisor corresponding to the vanishing locus of a test element so that for any integer x e tre ox pe kx x at for all cartier h prop this equality also holds for any h as one can always pick a cartier h so that h h and one obtains inclusions x e tre ox pe kx x at x tre ox pe kx x tre ox pe kx h x at e e next consider the claim which will allow us restrict to those e which are multiples of claim for any weil divisor h there exists a cartier divisor g such that for any integer b and for any integer m we have e o k h x x x e m m m ox m kx chiecchio enescu miller and schwede proof to prove the claim first note that by lemma among many other places b e if d alp al p then m pb m b p where again the l is an upper bound for the number of generators of a note that d works for any b set then g div d h and notice that e o k h trb x x x b b tr da ox kx ox h e m b trb p ox kx ox h e m ox m kx ox e m ox m kx now applying m m proves the claim now we return to the proof of the lemma the claim and our previous work implies that for a sufficiently large cartier divisor h and g depending on h we have that p 
e x at tre f e o pe kx h e x ox pe kx tr a e e e ox pe kx tr a t x a p e and therefore that x at tre ox pe kx pick a cartier divisor h so that h d where d a we have that x e x at tre ox pe kx h x tre p e ox pe kx h x tre p e ox pe kx h x tre p e ox pe kx d x tre p e x tre ox pe kx ox pe kx e x at in particular x at x tre p e ox pe kx h since the of is not divisible by p pe kx td is an integral divisor for e sufficiently divisible hence by choosing our sufficiently divisible and noting that our scheme is noetherian and so the above sum is finite we obtain our desired result test ideals and algebras remark while it is certainly possible to generalize this to handle t r or to handle such that pe kx is not integral those generalizations are not the ones we need in particular we will need a power of kx times a locally free sheaf theorem suppose x is normal and projective r x is finitely generated and has not divisible by p and fix t there exists a cartier l such that x aw ox l is globally generated when w t with p not dividing b proof choose a line bundle g ox g such that a g is globally generated by sections xm h x a g by lemma there is a cartier divisor h and integers es such that the test ideal x aw is equal to s x e t ei image symc xm g p i ox pei kx h ox which is globally generated if each summand is fix now a a globally generated ample cartier divisor we claim it suffices to find a cartier divisor such that ox pei kx wg h ox d a ox pei kx wg h pei d a is globally generated the equality in the displayed equation follows from the projection formula indeed assuming this global generation choose l a with d dim x and note that the image of a globally generated sheaf is still globally generated we will find a single that works for all w since r x kx is finitely generated we can use lemma to find a cartier divisor m so that m kx is an ample weil divisor moreover we can find an ample cartier divisor n such that n tg is ample notice that for all w t w n n tg is ample n wg t t this observation is what lets us replace tg with wg set m n by lem i kx wg m kx n wg is an ample weil divisor fix w we now show that the regularity with respect to a of ox pe kx wg h pe d a is zero for each e ei which guarantees by mumford s theorem thm the desired global generation it suffices now to show that h i x ox ox pe a which by the projection formula and the fact that doesn t change the underlying sheaves of abelian groups is the same as showing h i x ox pe kx wg h pe d i a for i d and d dim x since we may assume that e h pe d i a is nef therefore because kx wg is ample weil and l h pe d i a is nef and cartier we may apply the version of fujita vanishing thm to obtain the vanishing desired in this completes the proof remark for a discussion of how to choose l effectively chiecchio enescu miller and schwede remark indeed it is not hard to choose l effectively summarizing the proof above fix an ample cartier g so that a ox g is globally generated fix a to be a globally generated ample cartier divisor and fix m cartier so that m kx is ample and choose an ample cartier n so that n tg is ample then we can take l d a a m we now turn to the promised results on discreteness and rationality proposition suppose now that x is normal and projective and r x is finitely generated then for any ideal sheaf a on x the jumping numbers of x at are without limit points proof first assume that has not divisible by it follows from an appropriately generalized version of the argument of lemma that x at x for all hence for every real number 
t there is a rational number w with p not dividing p with x at x aw now fix it follows from theorem that there exists a cartier divisor l such that x aw ox l is globally generated for every w with w and where p does not divide b but then by our previous discussion we also see that x at ox l is globally generated for every t the discreteness follows since now for t h x x at ox l h x ox l form a decreasing chain of subspaces of a finite dimensional vector space h x ox l and of course by the global generation hypothesis if h x x l h x x l then x x this proves the result when has not divisible by ox kx next assume that pd has divisible by fix a map ox kx inducing a map on the fraction fields t k x k x as in theorem this map induces a possibly weil divisor rt pd kx with d t x pd rt a p t x at choose a cartier divisor g so that pd rt pd g is effective and notice that it also has not divisible by next observe that pd rt pd g pd pd kx pd g kx pd g f g and hence that r x pd rt pd g is finitely generated note that the is cartier and thus harmless so we are really taking the pd th veronese of r x d hence by what we have already shown the f numbers of x pd rt a p t have no limit points therefore by applying t via we see that the f numbers of x g at also have no limit points but then by the f numbers of x at have no limit points proving the theorem global generation and stabilization of we now give another proof of corollary in the projective setting theorem suppose that x is a projective pair such that r x is finitely generated and kx has not divisible by then the images image ox pe kx ox stabilize for e sufficiently large and divisible we use x to denote this stable image test ideals and algebras proof choose a globally generated ample cartier divisor a and a cartier divisor l such that l kx is an ample weil divisor by for each e such that pe kx is integral set x image ox pe kx ox then fixing d dim x x ox da l image ox pe da l pe l kx ox we immediately notice that ox pe da l pe l kx is with respect to a and hence its image x ox da l is globally generated as the global generating sections all lie in h x ox da l which is finite dimensional and as the form a descending chain of ideals as e increases we see that stabilizes for e sufficiently large and divisible as claimed as an immediate corollary of the proof we obtain corollary suppose again that x is a projective pair of dimension d such that r x is finitely generated and that kx has not divisible by if l is a cartier divisor such that l kx is an ample weil divisor and if a is a globally generated ample cartier divisor then x ox da l is globally generated alterations in this section we give a description of the test ideal x at under the assumption that r is finitely generated this generalizes from the case that is as a consequence we obtain a generalization of a result of singh s also compare with before starting in on this let us fix notation and recall the following from section setting suppose that is a on an f normal scheme x r is finitely generated with x proj r x suppose that a is an ideal sheaf on x and t is a real number we have already seen that we can pullback to x by where it becomes a divisor see lemma suppose further that y x is any alteration that factors through x as y then we define kx kx or equivalently we define is as in even though is not birational see section recall of course that if y x is a small alteration meaning that the locus of has codimension in y then this coincides with the obvious pullback operation more generally if y 
x and is y x factors through both and x is any alteration and y birational then we define kx to be in the next lemma and later in the section we use the notion and notation of parameter test modules kx at x at for a concise introduction and more about their relation to test ideals please see section result was announced years ago but has not been distributed chiecchio enescu miller and schwede lemma working in setting if m z is such that tm z that is integral and such that the veronese of the symbolic rees algebra r x is generated in degree then x at ox kx atm m x ox kx atm m proof we know by lemma that for any sufficiently large cartier d and any that x e trex ox pe kx x at choose d divisors such that d is cartier and kx d is cartier since x at p e e o pe k d d x x x trx a p e e o d pe k x x x trx a p e e o k d o k tr f x x x x x p e e e a o k o k f tr x x x x x p e e e f a o p k tr x x x x x at p e we see that x at trex ox kx ox kx this is already very close claim we can choose a cartier so that e ox ox kx atm ox kx m for all e e e proof of claim checking this assertion is easy we can certainly knock into atm m p by multiplication by a cartier divisor handling the other multiplication is a little tricke ier likewise certainly we can multiply ox kx into ox pm kx but then notice that ox kx ox kx a by our finite generation hypothesis this proves the claim returning to the proof we see that x at p e e o k o k d d f tr x x x x x p e e e p p e o k atm m f tr o k o x x x x x x p e e p p e e o k pe k k atm m o k o f tr x x x x x x x x x ox kx atm m tm ox k x a m p e p e e k e atm m o p f tr x x x x x at which proves the lemma test ideals and algebras remark it is tempting try to use lemma to give another proof of discreteness and rationality of f numbers by appealing to however this doesn t seem to work in particular in the authors did not prove discreteness and rationality of f jumping numbers for x bs at as mixed test ideals were not handled one could probably easily recover discreteness of f numbers via the usual arguments of gauge boundedness for cartier algebras at least in the case when x is finite type over a field for additional reading on mixed test ideals and their pathologies we invite the reader to look at the really convenient thing about lemma for our purposes is the following lemma using the notation of lemma suppose that y x is an alteration from a normal y where if we write b ox kx then b oy oy is an invertible sheaf then ty kx where kx is defined as in the text below setting proof this is easy indeed we already know that factors through the normalized blowup of b by the universal property of blowups on the other hand oy kx b oy as a result we immediately obtain the following theorem suppose that x is a normal f integral scheme and that on x is an effective such that s r is finitely generated suppose also that a is an ideal sheaf and t is a rational number then there exists an alteration y x from a normal y factoring through x proj s and with g divy a so that x at image oy kx ox this may be taken independently of t if desired if a is locally principal for instance if a ox then one may take to be a small alteration if desired alternately if x is essentially of finite type over a perfect field then one may take y regular by as a consequence we obtain that x at image oy kx tg ox y where runs over all alterations with a oy oy is invertible or all such regular alterations if x is of finite type over a perfect field proof most of the result follows immediately from theorem a combined 
with lemma and lemma indeed simply choose m such that tm is an integer and the condition of lemma is satisfied then apply theorem a to find an alteration tm m such that the image of the above map is x tx ox kx a consider the alterations that occur in the intersection y following theorem a might seem to require that we only consider that factor through the normalized blowup of a ox kx for divisible however it is easy to see that other y s can be dominated by those that factor through this blowup and the further blowups certainly have smaller images one also must handle the case of varying t which is not quite done in theorem a in our generality there the authors treated x at while here we need x bs at however the argument there essentially goes through verbatim alternately this is the same argument as in the only remaining part of the statement that doesn t follow immediately is the assertion in the case when a is locally principal however in the proof of theorem a the chiecchio enescu miller and schwede alteration needed can always be taken to be a finite cover of the normalized blowup of the ideal in this case the normalized blowup of ox kx atm which coincides with the normalized blowup of ox kx this normalized blowup is of course x in our setting in the above proof our constructed y was definitely not finite over x this is different from where the simplest constructed y definitely was finite over fortunately we can reduce to the case of a finite y at least when a ox corollary suppose that x is a normal f integral scheme and that on x is an effective such that s r is finitely generated then there exists a finite map y x from a normal y factoring through x proj s such that x image oy kx ox proof let y x be a small alteration satisfying from theorem ally assume that kx is integral for simplicity of notation next let y be the stein factorization of since y y is small we see that oy ky kx oy ky kx and the result follows question can one limit oneself to separable alterations in theorem in particular is there always a separable alteration y x with x image oy kx ox the analogous result is known if kx is by however in our proof is definitely not separable because we rely on theorem a which uses frobenius to induce certain vanishing results it is possible that this could be replaced by cohomology killing arguments as in for instance as a special case we recover a result of anurag singh that was announced years ago corollary singh suppose that x is an f splinter and r is finitely generated then x is strongly f proof indeed if x is a splinter then for any finite morphism y x the map h omox oy ox ox surjects however h omox oy ox oy ky kx and the trace map to ox is identified with the map hence using corollary we see that x x ox since for us x always denotes the big test ideal this proves that x is strongly f it would be natural to try to use the above to show that splinters are strongly f for varieties of characteristic p using the fact that such klt varieties satisfy finite generation of their anticanonical rings theorem the gap is the following question suppose that r is a normal f domain that is also a splinter does there exist a on spec r such that kx is and that spec r is klt the analogous result on the existence of for strongly f varieties was shown in of course the fact that splinters are in fact derived splinters in characteristic p would likely be useful in particular we do obtain the following test ideals and algebras corollary suppose that r is an f three dimensional splinter which is 
also klt for an appropriate and that r is finite type over an algebraically closed field of characteristic p then r is strongly f reduction from characteristic zero the goal of this section is to show that multiplier ideals j x at reduce to test ideals xp atp after reduction to characteristic p at least if r is finitely generated we begin with some preliminaries on the reduction process let x be a scheme of finite type over an algebraically closed field k of characteristic zero a and a ox an ideal sheaf one may choose a subring a k which is finitely generated over z over which x and a are all defined denote by xa and aa oxa the models of x and a over a for any closed point s spec a we denote the corresponding reductions xs and as oxs defined over the residue field k s which is necessarily finite in the simple case where a z if xa spec z xn for i fm and p z is prime the scheme xp spec fp xn mod p fm mod p warning in what follows we abuse terminology in the following way by p we actually mean the set of closed points of an open dense set u spec a furthermore if we start with x as above by xp for p we actually mean some xs for some closed point s in the aforementioned u this is a common abuse of notation and we do not expect it will cause any confusion it does substantially shorten statements of theorems lemma suppose x is a normal variety over an algebraically closed field k of characteristic zero for any so that r x is finitely generated we have r x p r xp for p in particular if is a and r x is finitely generated setting x proj r x and x x we have r x p r xp p r xp and so this ring is also finitely generated this means x p xp we denote both by proof note that r x p makes sense for p as r x is finitely generated and r x p is naturally finitely generated by the reduction of the generators of r x the problem is that potentially l the algebra r x p may not be the symbolic rees algebra local section algebra oxp throughout this proof we will constantly need to choose p or technically restrict to a smaller open subset u of spec a first we record a claim that is certainly well known to experts claim for any weil divisor d and prime p potentially depending on d we claim that ox d p oxp dp as sheaves of oxp proof of claim to see this we prove that ox d p is reflexive and agrees outside a codimension subset with oxp dp of course since ox d is reflexive ox d h omx h omx ox d ox ox is an isomorphism but this isomorphism is certainly preserved via reduction to characteristic p so ox d p is reflexive at least for p choose a closed set z x of codimension defined with no additional coefficients other than the ones already needed to define d and x so that is cartier note that dp is cartier and so oxp dp is locally free and agrees with d p the claim follows chiecchio enescu miller and schwede we return to the proof of the lemma next define x to be the blowup of ox for some m sufficiently divisible then since x x is small so is xp and note that xp is still the blowup of ox p oxp by the claim since was cartier in characteristic zero is cartier after reduction to characteristic p as well now xp is still small and notice that is relatively ample since was obtained by blowing up oxp hence m m oxp is finitely generated and has proj equal to the lemma follows immediately armed with this lemma the proof of the main theorem for this section is easy theorem suppose that x is a normal variety over an algebraically closed field of characteristic zero further suppose that is a such that r is finitely generated and also 
suppose that a ox is an ideal and t is a rational number then j x at p xp atp for p m kx for some sufficiently proof we know that j x at oxe m divisible m and sufficiently large log resolution of singularities by definition here we need that ox kx oxe oxe and a oxe oxe are invertible we rewrite this multiplier ideal as oxe m kx oxe kx kx a m m and observe it is equal to j x ox kx m note that since r e is independent of the choice of m at least for m is finitely generated the choice of x sufficiently divisible since kx kx is cartier we know that j x ox kx m at p xp oxp kxp m for p by but lemma shows that xp oxp kxp m xp atp combining these equalities proves the result remark theorem also implies that if xp is strongly f for all p and r is finitely generated then x is klt nobuo hara gave a talk about this result at the conference in honor of mel hochster s birthday in but the result was never published corollary suppose x is a variety over an algebraically closed field of characteristic zero that is klt in the sense of then for any and any ideal sheaf a and rational t we have that j x at p xp atp for p proof it follows from the minimal model program and in particular theorem that r is finitely generated the result follows immediately from theorem test ideals and algebras references aberbach and maccrimmon some results on test elements proc edinburgh math soc no bhatt derived splinters in positive characteristic compos math no birkar existence of flips and minimal models for in char p to appear in annales scientifiques de l ens birkar cascini hacon and mckernan existence of minimal models for varieties of log general type amer math soc no birkar and waldron existence of mori fibre spaces for in char p blickle test ideals via algebras of maps algebraic geom no blickle and smith discreteness and rationality of f michigan math j special volume in honor of melvin hochster blickle and smith f of hypersurfaces trans amer math soc no blickle and schwede maps in algebra and geometry commutative algebra springer new york pp blickle schwede takagi and zhang discreteness and rationality of f jumping numbers on singular varieties math ann no blickle schwede and tucker f via alterations amer j math no boucksom de fernex favre and urbinati valuation spaces and multiplier ideals on singular varieties cascini tanaka and xu on base point freeness in positive characteristic chiecchio about a minimal model program without flips chiecchio and urbinati ample weil divisors algebra cutkosky weil divisors and symbolic algebras duke math j no de fernex docampo takagi and tucker comparing multiplier ideals to test ideals on numerically varieties bull lond math soc no de fernex and hacon singularities on normal varieties compos math no de jong smoothness and alterations inst hautes sci publ math no demazure anneaux normaux introduction la des ii travaux en cours vol hermann paris pp ein lazarsfeld smith and varolin jumping coefficients of multiplier ideals duke math j no fujino schwede and takagi supplements to ideal sheaves higher dimensional algebraic geometry rims bessatsu res inst math sci rims kyoto pp gabber notes on some geometric aspects of dwork theory vol i ii walter de gruyter gmbh kg berlin pp goto herrmann nishida and villamayor on the structure of noetherian symbolic rees algebras manuscripta math no hacon and xu on the three dimensional minimal model program in positive characteristic amer math soc no hara geometric interpretation of tight closure and test ideals trans amer math soc no electronic hara 
and yoshida a generalization of tight closure and multiplier ideals trans amer math soc no electronic chiecchio enescu miller and schwede hartshorne algebraic geometry new york graduate texts in mathematics no hartshorne generalized divisors on gorenstein schemes proceedings of conference on algebraic geometry and ring theory in honor of michael artin part iii antwerp vol pp hartshorne and speiser local cohomological dimension in characteristic p ann of math no hochster foundations of tight closure theory lecture notes from a course taught on the university of michigan fall hochster and huneke tight closure invariant theory and the theorem amer math soc no hochster and huneke infinite integral extensions and big algebras ann of math no hochster and huneke f test elements and smooth base change trans amer math soc no huneke and lyubeznik absolute integral closure in positive characteristic adv math no katzman lyubeznik and zhang on the discreteness and rationality of f jumping coefficients algebra no katzman schwede singh and zhang rings of frobenius operators math proc cambridge philos soc no exercises in the birational geometry of algebraic varieties and mori birational geometry of algebraic varieties cambridge tracts in mathematics vol cambridge university press cambridge with the collaboration of clemens and corti translated from the japanese original kunz on noetherian rings of characteristic p amer j math no lazarsfeld positivity in algebraic geometry i ergebnisse der mathematik und ihrer grenzgebiete folge a series of modern surveys in mathematics results in mathematics and related areas series a series of modern surveys in mathematics vol springerverlag berlin classical setting line bundles and linear series lipman rational singularities with applications to algebraic surfaces and unique factorization inst hautes sci publ math no lyubeznik f applications to local cohomology and in characteristic p reine angew math lyubeznik and smith strong and weak f are equivalent for graded rings amer j math no lyubeznik and smith on the commutation of the test ideal with localization and completion trans amer math soc no electronic the locus in positive characteristic a celebration of algebraic geometry clay math vol amer math providence ri pp on the constancy regions for mixed test ideals algebra schwede f algebra number theory no schwede test ideals in rings trans amer math soc no schwede and smith globally f and log fano varieties adv math no schwede and tucker on the behavior of test ideals under finite morphisms algebraic geom no schwede and tucker test ideals of ideals computations jumping numbers alterations and division theorems j math pures appl no schwede tucker and zhang test ideals via a single alteration and discreteness and rationality of f numbers math res lett no test ideals and algebras singh splinter rings of characteristic p are math proc cambridge philos soc no singh private communication smith the multiplier ideal is a universal test ideal comm algebra no special issue in honor of robin hartshorne takagi an interpretation of multiplier ideals via tight closure algebraic geom no watanabe some remarks concerning demazure s construction of normal graded rings nagoya math j xu on the theorem of in positive characteristic inst math jussieu no tasis in dorado address department of mathematics and statistics georgia state university atlanta ga usa address fenescu department of mathematical sciences university of arkansas fayetteville ar address department of mathematics university 
of utah s e room salt lake city ut address schwede
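a tentative latex rendering of the alteration description of the test ideal stated in the section on alterations above may be useful, since the displayed formulas do not survive in this copy; it is reconstructed only from the surviving wording ("image oy kx tg ox ... where runs over all alterations with a oy oy invertible"), so the rounding convention, the pullback of kx and the exact hypotheses are assumptions to be checked against the original paper:

\[
\tau(X,\Delta,\mathfrak{a}^{t}) \;=\; \operatorname{Image}\Bigl(\pi_{*}\mathcal{O}_{Y}\bigl(\lceil K_{Y}-\pi^{*}(K_{X}+\Delta)-tG\rceil\bigr)\longrightarrow \mathcal{O}_{X}\Bigr),
\qquad G=\operatorname{div}_{Y}(\mathfrak{a}\cdot\mathcal{O}_{Y}),
\]

for a single sufficiently large alteration \(\pi\colon Y\to X\) factoring through \(X'=\operatorname{Proj}\,R(X,\Delta)\), and, under the same finite generation hypothesis, also

\[
\tau(X,\Delta,\mathfrak{a}^{t}) \;=\; \bigcap_{\pi\colon Y\to X}\operatorname{Image}\Bigl(\pi_{*}\mathcal{O}_{Y}\bigl(\lceil K_{Y}-\pi^{*}(K_{X}+\Delta)-tG\rceil\bigr)\longrightarrow \mathcal{O}_{X}\Bigr),
\]

the intersection running over all alterations (or over all regular alterations when X is of finite type over a perfect field) such that \(\mathfrak{a}\cdot\mathcal{O}_{Y}\) is invertible.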
| 0 |
feb character degrees of some avinoam mann it was shown by that any set of powers of a prime p which includes can occur as the set of character degrees of some it becomes then of interest to see to what extent that remains true if we consider a particular class of isaacs construction yields groups of nilpotency class here we consider the other extreme recall that a group of order pn is said to be of maximal class if its nilpotency class cl g is n see lgm for the well developed theory of these groups such a group has a factor group of order and therefore it has irreducible characters of degree it was suggested in the last section of that there are further restrictions on the possible character degrees set of a group of maximal class the present note verifies that conjecture indeed under weaker assumptions than maximal class for some results it suffices to assume that the derived subgroup has index this last assumption has quite a few consequences for the structure of the given group and a secondary aim of this note is to derive some of them see theorem and propositions and these results are going to be applied in about character degrees our results are theorem let g be a in which if g has an irreducible character of degree p then it has such a character of degree at most p we remark that satisfying are of maximal class and their character degrees are and for all odd primes however there exist groups satisfying which are not of maximal class and which have irreducible characters of degrees has constructed for all primes p groups of maximal class whose character degrees are p p showing that the bound of theorem is best possible it is also easy to see that there exist of that type theorem let g be a of maximal class if g has irreducible characters of degree higher than then it has such characters of degree at most p theorem let g be a satisfying if and g g p equivalently if g then g has irreducible characters of degree for the proofs we first quote some of the theory of groups of maximal class the proofs of which can be found in lgm and in hu let x be a of maximal class of class c say we write xi x for the terms of the lower central series for i c and cx these notations will be applied to each group of maximal class that we will encounter below if that group will be denoted by h we will let hi denote the corresponding subgroups typeset by avinoam mann of h etc returning to x we have cx xi for i c we call the major centralizer of x it is a regular therefore if a b then a b p a b p see hu for the theory of regular if then xip this holds also for order except for one group the wreath product of two groups of order p in which xip for all i if pp then xip for i if cx then x is termed exceptional if x is metabelian it is not exceptional in a group all maximal subgroups different from are of maximal class finally x is of maximal class iff it contains an element x x such that x next about groups with a derived subgroup of index since all normal subgroups of index contain the commutator subgroup it follows that is the only normal subgroup of that index the factor group is elementary abelian and if x and y say are elements of g that are independent modulo then they generate g and the factor group g is generated by the image of x y therefore g has index in g and it is the only normal subgroup of that index similarly g g is generated by the images of x y x and x y y therefore g has index either or and in the first case it is the only normal subgroup of that index recall that d x denotes the minimal number 
of generators of the group theorem let g be a such that and and let h be a maximal subgroup of then a is of maximal class and d h b if d h p then h h p hence h equals either pd h or pd h c if d h p then cp wrcp and is elementary abelian of order p p d if g g p then g g p and all maximal subgroups h of g save one satisfy h g e if g g then h g and h g if k is another maximal subgroup then h k and in particular h k f if g contains a maximal subgroup h then either h is metacyclic or g contains at most one maximal subgroup k h such that k g if g then gp g note the first claim in a was already pointed out in exercise of be proof for a we may as well assume that h then g hh xi for some element x and commutation with x induces on h an endomorphism with image g h and kernel z g since p we have g moreover the same argument shows that all quotients of g have centres of order p hence g is of maximal class if h then k is a of maximal class and order at least since l is abelian it is the major centralizer of it is known that in such groups k p k a subgroup of index pp and that usually lp k p the only exception to this equality occurring when k cp wrcp when l is elementary abelian of order pp in the other cases we have d h d l logp l logp lp p if h then d h p and by the above this is the only case in which strict inequality is possible moreover in that case pp and it is known that groups of maximal class of these orders satisfy p p which proves b character degrees of some when g g p write c cg g this is a maximal subgroup and c c g let h be another maximal subgroup by lemma of c h p c h but c h g and g therefore c h g since c g this is possible only if h g since d g we have g hu b and since g an argument of implies that is of maximal class see theorem c of which shows that g g if g g and h is a maximal subgroup then h g because is of maximal class let m and n be two subgroups lying properly between g and g and let c cg and d cg then m and n are normal and c and d are maximal if c d then c c g contradiction thus the p subgroups like m and n determine p distinct maximal subgroups each maximal subgroup is obtained in this way moreover c m thus c g but c g therefore c to prove f we suppose that g contains the maximal subgroup h and two other maximal subgroups k and l whose commutator factor groups have orders at least if h p then h is either cyclic or metacyclic thus we may assume that h p we write n h p h if p then n h then n on the other hand since is a group of maximal class of order at least its maximal subgroup is also of maximal class and of order at least and this implies that k n therefore n k n k n similarly n which leads to the contradiction k n g finally if g let h g then z h g g and h is one of the two groups of order but of these two the one of exponent can not be the central factor group of any group is incapable therefore h has exponent p which is our claim for p we have p hence proposition let g be a such that and then g and that subgroup has index all maximal subgroups of g have at most three generators and either one of them is metacyclic or g contains a maximal subgroup h such that h if g then g is of maximal class proof most of this is either stated in theorem or was derived during the proof of part f there the last claim is the case p of the fact that if in a g the factor group g is of maximal class then so is g proposition let g be a of order at least in which then all maximal subgroups of g with at most one exception have irreducible characters of degree if an exceptional maximal 
subgroup exists then all other maximal subgroups m satisfy m proof since g has irreducible characters of degree by proposition if some maximal subgroup h does not have such characters and m is another maximal subgroup then m that means that m and this shows that m has irreducible characters of degree qed an exceptional maximal subgroup may or may not exist if g is a of maximal class which is metabelian but in which the subgroup is not abelian avinoam mann then all irreducible characters of all maximal subgroups of g have degrees or on the other hand in the groups of maximal class and order p p constructed by pp or hu or the ones of order constructed by slattery in p it is not difficult to show that the maximal subgroups h in or e in have only irreducibles of degrees and or and p in the proofs of theorems to we separate between groups of maximal class and others proof of theorem for groups of maximal class and of theorem we may assume that by a famous result of a g has characters of degrees and p only iff either z g or g contains an abelian maximal subgroup for groups of maximal class the first possibility means that and then anyway g has an abelian maximal subgroup moreover if g has an abelian maximal subgroup that subgroup must be thus the assumption that g has characters of degree deg p means that now gp p is properly contained in in h gp we have therefore h has irreducibles of degree exceeding but hp z and hp therefore the irreducible characters of have degrees at most p and those of h have degrees at most p next let g have characters of degree deg if then z g p and all irreducibles of g have degree at most thus to prove our claim we may assume that according to pa a x has only characters of degree at most for p odd iff one of the following four possibilities occur i g contains an abelian subgroup of index ii g contains a maximal subgroup h such that z h iii z g iv z g and if h z g is a maximal subgroup of g then z h z g in a group of maximal class iv is impossible because cg g is a maximal subgroup and iii means that ii means that either g is an exceptional group of order at most or that and i means that is abelian thus if g has an irreducible character of degree at least then note that by our assumption and then is which implies that there are two indices i j such that gi and gj first assume that i j and let h then h the last inequality shows that therefore thus h violates i iv and it has an irreducible character of degree at least but h p h h p thus z h and again the characters of h have degrees at most p and the characters of h have degrees at most p if i j we take h and obtain p with and proceed as before proof of the rest of theorem and of theorem since g is not of maximal class neither is g bl let i be the first index such that g g thus i then g g theorem a shows that if h is a maximal subgroup of g then h g the fact that g has characters of degree p implies that z g therefore g z g g assume first that i and let k g then in character degrees of some k g no maximal subgroup is abelian and the centre has index therefore k has an irreducible character such that deg on the other hand therefore deg p if i then g g let n g have index p in g let k and proceed as before if g then we saw in theorem that if c and d are two maximal subgroups then c this implies that c and thus l does not have abelian maximal subgroups and z l because g g g and g is the only normal subgroup of index it follows that has irreducible characters of degree bigger that p and that degree must be because is an 
abelian subgroup of index qed references be groups of prime power order vol de gruyter berlin bl on a special class of acta math hu endliche gruppen i springer berlin character theory of finite groups academic press san diego sets of as irreducible character degrees proc amer math soc lgm and the structure of groups of prime power order oxford university press oxford minimal characters of gp th more on normally monomial in preparation groups whose irreducible representations have degrees dividing pac j math character degrees of normally monomial maximal class in character theory of finite groups the isaacs conference contemporary mathematics american mathematical society providence maximal class with large character degree gaps preprint
| 4 |
preprint version published on proceedings of the xv workshop dagli oggetti agli agenti an agent driven semantical identifier using radial basis neural networks and reinforcement learning christian napoli giuseppe pappalardo and emiliano tramontana department of mathematics and informatics university of catania viale doria catania italy napoli pappalardo tramontana due to the huge availability of documents in digital form and the deception possibilities bound to the essence of digital documents and the way they are spread the authorship attribution problem has constantly increased its relevance nowadays authorship attribution for both information retrieval and analysis has gained great importance in the context of security trust and copyright preservation this work proposes an innovative agent driven machine learning technique that has been developed for authorship attribution by means of a preprocessing for and timeperiod related analysis of the common lexicon we determine a bias reference level for the recurrence frequency of the words within analysed texts and then train a radial basis probabilistic neural network rbpnn classifier to identify the correct author the main advantage of the proposed approach lies in the generality of the semantic analysis which can be applied to different contexts and lexical domains without requiring any modification moreover the proposed system is able to incorporate an external input meant to tune the classifier and then by means of continuous learning reinforcement i introduction nowadays the automatic attribution of a text to an author assisting both information retrieval and analysis has become an important issue in the context of security trust and copyright preservation this results from the availability of documents in digital form and the raising deception possibilities bound to the essence of the digital reproducible contents as well as the need for new mechanical methods that can organise the constantly increasing amount of digital texts during the last decade only the field of text classification and attribution has undergone new development due to the novel availability of computational intelligence techniques such as natural language processing advanced data mining and information retrieval systems machine learning and artificial intelligence techniques agent oriented programming etc among such techniques computational intelligence ci and evolutionary computation ec methods have been largely used for optimisation and positioning problems in agent driven clustering has been used as an advanced solution for some optimal management problems whereas in such problems are solved for mechatronical module controls agent driven artificial intelligence is often used in combination with advanced data analysis techniques in order to create intelligent control systems by means of multi resolution analysis ci and parallel analysis systems have been proposed in order to support developers as in where such a classification and analysis was applied to assist refactoring in large software systems moreover ci and techniques like neural networks nns have been used in order to model
electrical networks and the related controls starting by classification strategies as well as for other complex physical systems by using several kinds of hybrid approaches all the said works use different forms of modeling and clustering for recognition purposes and these methods efficiently perform very challenging tasks where other common computational methods failed or had low efficiency or simply resulted as inapplicable due to complicated model underlying the case study in general machine learning has been proven as a promising field of research for the purpose of text classification since it allows building classification rules by means of automatic learning taking as a basis a set of known texts and trying to generalise for unknown ones while machine learning and nns are a very promising field the effectiveness of such approaches often lies on the correct and precise preprocessing of data the definition of semantic categories affinities and rules used to generate a set of numbers characterising a text sample to be successively given as input to a classifier typical text classification by using nns takes advantage of topics recognition however results are seldom appropriate when it comes to classify people belonging to the same social group or who are involved in a similar business the classification of texts from different scientists in the same field of research the politicians belonging to the same party texts authored by different people using the same technical jargon in our approach we devise a solution for extracting from the analysed texts some characteristics that can express the style of a specific author obtaining this kind of information abstraction is crucial in order to create a precise and correct classification system on the other hand while data abound in the context of text analysis a robust classifier should rely on input sets that are compact enough to be apt to the training process therefore some data have to reflect averaged evaluations that concern some anthropological aspects such as the historical period or the ethnicity etc this work satisfies the above conditions of extracting compact data from texts since we use a preprocessing tool for and related analysis of the common lexicon such a tool computes a bias reference system for the recurrence frequency of the word used in the analysed texts the main advantage of this choice lies in the generality of the implemented semantical reference database text database training set known preprocessing biasing new data unknown rbpnn with reinforcement learning local external text database wordnet lexicon fig a general schema of the data flow through the agents of the developed system identifier which can be then applied to different contexts and lexical domains without requiring any modification moreover in order to have continuous updates or complete renewals of the reference data a statically trained nn would not suffice to the purpose of the work for these reasons the developed system is able to by means of continuous learning reinforcement the proposed architecture also diminishes the human intervention over time thanks to its properties our solution comprises three main collaborating agents the first for preprocessing to extract meaningful data from texts the second for classification by means of a proper radial basis nn rbnn and finally one for adapting by means of a feedforward nn the rest of this paper is as follows section ii gives the details of the implemented preprocessing agent based on lexicon analysis 
section iii describes the proposed classifier agent based on rbnns our introduced modifications and the structure of the reinforcement learning agent section iv reports on the performed experiments and the related results finally section v gives a background of the existing related works while section vi draws our conclusions algorithm find the group a word belongs to and count occurrences start import a speech into t ext load dictionary into w ords load group database into groups thisw ord while thisw ord do thisgroup thisw ord if thisgroup then load a different lexicon if thisw ord then w end else break end end while thisw ord do thisgroup end thisw ord end export w ords and groups stop the fundamental steps of the said analysis see also algorithm are the followings ii e xtracting semantics from lexicon figure shows the agents for our developed system a preprocessing agent extracts characteristics from given text parts see text database in the figure according to a known set of words organised into groups see reference database a rbpnn agent takes as input the extracted characteristics properly organised and performs the identification on new data after appropriate training an additional agent dubbed adaptive critic shown in figure dynamically adapts the behaviour of the rbpnn agent when new data are available firstly preprocessing agent analyses a text given as input by counting the words that belong to a priori known groups of mutually related words such groups contain words that pertain to a given concern and have been built and according to the semantic relations between words hence assisted by the wordnet http import a single text file containing the speech import word groups from a predefined database the set containing all words from each group is called dictionary compare each word on the text with words on the dictionary if the word exists on the dictionary then the relevant group is returned if the word has not been found then search the available lexicon if the word exists on the lexicon then the related group is identified if the word is unkown then a new lexicon is loaded and if the word is found then dictionary and groups are updated search all the occurrences of the word in the text when an occurrence has been found then remove it from the text and increase the group counter figure shows the uml class diagram for the software system performing the above analysis class text holds a text to be analysed class words represents the known dictionary all the known words which are organised into groups given by class groups class lexicon holds several dictionaries iii t he rbpnn classifier agent for the work proposed here we use a variation on radial basis neural networks rbnn rbnns have a topology similar to common feedforward neural networks ffnn with backpropagation training algorithms bpta the primary lexicon text get exist get words search update groups filter service search count update ffnn rbnn pnn our rbpnn fig uml class diagram for handling groups and counting words belonging to a group difference only lies in the activation function that instead of being a sigmoid function or a similar activation function is a statistical distribution or a statistically significant mathematical function the selection of transfer functions is indeed decisive for the speed of convergence in approximation and classification problems the kinds of activation functions used for probabilistic neural networks pnns have to meet some important properties to preserve the generalisation abilities 
of the anns in addition these functions have to preserve the decision boundaries of the probabilistic neural networks the selected rbpnn architecture is shown in figure and takes advantage from both the pnn topology and the radial basis neural networks rbnn used in each neuron performs a weighted sum of its inputs and passes it through a transfer function f to produce an output this occurs for each neural layer in a ffnn the network can be perceived as a model connecting inputs and outputs with the weights and thresholds being free parameters of the model which are modified by the training algorithm such networks can model functions of almost arbitrary complexity with the number of layers and the number of units in each layer determining the function complexity a ffnn is capable to generalise the model and to separate the input space in various classes in a variable space it is equivalent to the separation of the different in any case such a ffnn can only create a general model of the entire variable space while can not insert single set of inputs into categories on the other hand a rbnn is capable of clustering the inputs by fitting each class by means of a radial basis function while the model is not general for the entire variable space it is capable to act on the single variables in a variable space it locates closed subspaces without any inference on the remaining space outside such subspaces another interesting topology is provided by pnns which are mainly ffnns also functioning as bayesian networks with fisher kernels by replacing the sigmoid activation function often used in neural networks with an exponential function a pnn can compute nonlinear decision boundaries approaching the bayes optimal classification moreover a pnn generates accurate predicted target probability scores with a probabilistic meaning in the space it is equivalent to attribute a probabilistic score to some chosen points which in figure are represented as the size of the points finally in the presented approach we decided to combine the advantages of both rbnn and pnn using the so called fig a comparison between results of several types of nns our rbpnn includes the maximum probability selector module rbpnn the rbpnn architecture while preserving the capabilities of a pnn due to its topology then being capable of statistical inference is also capable of clustering since the standard activation functions of a pnn are substituted by radial basis functions still verifying the fisher kernel conditions required for a pnn such an architecture in the variable space can both locate subspace of points and give to them a probabilistic score figure shows a representation of the behaviour for each network topology presented above a the rbpnn structure and topology in a rbpnn both the input and the first hidden layer exactly match the pnn architecture the input neurones are used as distribution units that supply the same input values to all the neurones in the first hidden layer that for historical reasons are called pattern units in a pnn each pattern unit performs the dot product of the input pattern vector v by a weight vector w and then performs a nonlinear operation on the result this nonlinear operation gives output x that is then provided to the following summation layer while a common sigmoid function is used for a standard ffnn with bpta in a pnn the activation function is an exponential such that for the neurone the output is xj exp where represents the statistical distribution spread the given activation 
function can be modified or substituted while the condition of parzen window function is still satisfied for the estimator in order to satisfy such a condition some rules must be verified for the chosen window function in order to obtain the expected estimate which can be expressed as a parzen window estimate p x by means of the kernel k of f in the space s d n p pn x k d hn h r sd k x dx n p pn fig a representation of a radial basis probabilistic neural network with maximum probability selector module where hn n is called window width or bandwidth parameter and corresponds to the width of the kernel in general hn n depends on the number of available sample data n for the estimator pn x since the estimator pn x converges in mean square to the expected value p x if lim hpn x i p x lim var pn x where hpn x i represents the mean estimator values and var pn x the variance of the estimated output with respect to the expected values the parzen condition states that such convergence holds within the following conditions sup k x x lim xk x lim hnd fig setup values for the proposed rbpnn nf is the number of considered lexical groups ns the number of analysed texts and ng is the number of people that can possibly be recognised as authors units work as in the neurones of a linear perceptron network the training for the output layer is performed as in a rbnn however since the number of summation units is very small and in general remarkably less than in a rbnn the training is simplified and the speed greatly increased the output of the rbpnn as shown in figure is given to the maximum probability selector module which effectively acts as a output layer this selector receives as input the probability score generated by the rbpnn and attributes to one author only the analysed text by selecting the most probable author the one having the maximum input probability score note that the links to this selector are weighted with weights adjusted during the training hence the actual input is the product between the weight and the output of the summation layer of the rbpnn lim nhnd in this case while preserving the pnn topology to obtain the rbpnn capabilities the activation function is substituted with a radial basis function rbf an rbf still verifies all the conditions stated before it then follows the equivalence between the w vector of weights and the centroids vector of a radial basis neural network which in this case are computed as the statistical centroids of all the input sets given to the network we name f the chosen radial basis function then the new output of the first hidden layer for the neurone is w xj f where is a parameter that is intended to control the distribution shape quite similar to the used in the second hidden layer in a rbpnn is identical to a pnn it just computes weighted sums of the received values from the preceding neurones this second hidden layer is called indeed summation layer the output of the summation unit is x xk wjk xj j where wjk represents the weight matrix such weight matrix consists of a weight value for each connection from the pattern units to the summation unit these summation layer size for a rbpnn the devised topology enables us to distribute to different layers of the network different parts of the classification task while the pattern layer is just a nonlinear processing layer the summation layer selectively sums the output of the first hidden layer the output layer fullfills the nonlinear mapping such as classification approximation and prediction in fact the 
first hidden layer of the rbpnn has the responsibility to perform the fundamental task expected from a neural network in order to have a proper classification of the input dataset of analysed texts to be attributed to authors the size of the input layer should match the exact number nf of different lexical groups given to the rbpnn whereas the size of the pattern units should match the number of samples analysed texts ns the number of the summation units in the second hidden layer is equal to the number of output units these should match the number of people ng we are interested in for the correct recognition of the speakers figure reinforcement learning in order to continuously update the reference database for our system a statically trained nn would not suffice for the purpose of the work since the aim of the presented system is having an expanding database of text samples for classification and recognition purpose the agent driven identification should dynamically follow the changes in such a database when a new entry is made then the related feature set and biases change it implies that also the rbpnn should be properly managed in order to ensure a continuous adaptive control for reinforcement learning moreover for the considered domain it is desirable that a human supervisor supply suggestions expecially when the system starts working the human activities are related to the supply of new entries into the text sample database and to the removal of misclassifications made by the rbpnn we used a supervised control configuration see figure where the external control is provided by the actions and choices of a human operator while the rbpnn is trained with a classical backpropagation learning algorithm it is also embedded into an reinforcement learning architecture which back propagates learning by evaluating the correctness of the choices with respect to the real word let be the error function for the results supported by human verification or the vectorial deviance for the results not supported by a positive human response this assessment is made by an agent named critic we consider the filtering step for the rbpnn output to be both critic a human supervisor acknowledging or rejecting rbpnn classifications or adaptive critic an agent embedding a nn that in the long run simulates the control activity made by the human critic hence decreasing human control over time adaptive critic needs to learn and this learning is obtained by a modified backpropagation algorithm using just as error function hence adaptive critic has been implemented by a simple feedforward nn trained by means of a traditional gradient descent algorithm so that the weight modification is the is the activation of neuron is the input to the neurone weighted as x wij j the result of the adaptive control determines whether to continue the training of the rbpnn with new data and whether the last training results should be saved or discarded at runtime this process results in a continuous adaptive learning hence avoiding the classical problem of nn polarisation and overfitting figure shows the developed learning system reinforcement according to the literature straight lines represent the data flow training data fed to the rbpnn then new data inserted by a supervisor and the output of the rbpnn sent to the critic modules also by means of a delay operator z functional modifications operated within the system are represented as slanting arrows the choices made by a human supervisor critic modify the adaptive critic which adjust 
the weight of its nn the combined output of critic and adaptive critic determines whether the rbpnn should undergo more training epochs and so modify its weights z critic features data rbpnn fig the adopted supervised learning model reinforcement slanting arrows represent internal commands supplied in order to control or change the status of the modules straight arrows represent the data flow along the model z represents a time delay module which provides delayed outputs characteristics see section ii then such results have been given to the classification agent the total number of text samples was and we used of them for training the classification agent and for validation the text samples both for training and validation were from different persons that have given a speech from einstein to lewis as shown in figure given the flexible structure of the implemented learning model the word groups are not fixed and can be modified added or removed over time by an external tuning activity by using the count of words in a group instead of a counts the system realises a statistically driven classifier that identifies the main semantic concerns regarding the text samples and then attributes such concerns to the most probable person the relevant information useful in order to recognise the author of the speech is usually largely spread over a certain number of word groups that could be indication of the cultural extraction heritage field of study professional category etc this implies that we can not exclude any word group a priori while the rbpnn could learn to automatically enhance the relevant information in order to classify the speeches figure shows an example of the classifier performances for results generated by the rbpnn before the filter implemened by the probabilistic selector since the rbpnn results have a probability between and then the shown performance is when a text was correctly attributed or not attributed to a specific person figure shows the performances of the system when including the probabilistic selector for this case a boolean selection is involved then correct identifications are represented as false positive identifications as black marks and missed identifications as white marks for validation purposes figure left and right shows results according to e e y iv e xperimental setup the proposed rbpnn architecture has been tested using several text samples collected from public speeches of different people both from the present and the past era each text sample has been given to the preprocessing agent that extract some adaptive critic where e identifies the performance the classification result and y the expected result lower e negative values identify an excess of confidence in the attribution of a text to a person while greater e positive values identify a lack of confidence in that sense fig the obtained performance for our classification system before left and after right the maximum probability selector choice the mean grey color represents the correct classifications while white color represents missed classification and black color false classifications the system was able to correctly attribute the text to the proper author with only a of missing assignments r elated w orks several generative models can be used to characterise datasets that determine properties and allow grouping data into classes generative models are based on stochastic block structures or on infinite hidden relational models and mixed membership stochastic blockmodel the main issue of models is 
the type of relational structure that such solutions are capable to describe since the definition of a class is generally the reported models risk to replicate the existing classes for each new attribute added such models would be unable to efficiently organise similarities between the classes cats and dogs as child classes of the more general class mammals such classes would have to be replicated as the classification generates two different classes of mammals the class mammals as cats and the class mammals as dogs consequently in order to distinguish between the different races of cats and dogs it would be necessary to further multiply the mammals class for each one of the identified race therefore such models quickly lead to an explosion of classes in addition we would either have to add another class to handle each specific use or a mixed membership model as for crossbred species another paradigm concerns the latent feature relational model a bayesian nonparametric model in which each entity has boolean valued latent features that influence the model s relations such relations depend on covariant sets which are neither explicit or known in our case study at the moment of the initial analysis in the authors propose a sequential forward feature selection method to find the subset of features that are relevant to a classification task this approach uses novel estimation of the conditional mutual information between candidate feature and classes given a subset of already selected features used as a classifier independent criterion for evaluating feature subsets in data from the simulation of battery energy storage are used for classification purposes with recurrent nns and pnns by means of a theoretical framework based on signal theory while showing the effectiveness of the neural network based approaches in our case study classification results are given by means of a probability hence the use of a rbpnn and an training achieved by reinforcement learning vi c onclusion this work has presented a system in which an agent analyses fragments of texts and another agent consisting of a rbpnn classifier performs probabilistic clustering the system has successfully managed to identify the most probable author among a given list for the examined text samples the provided identification can be used in order to complement and integrate a comprehensive verification system or other kinds of software systems trying to automatically identify the author of a written text the rbpnn classifier agent is continuously trained by means of reinforcement learning techniques in order to follow a potential correction provided by an human supervisor or an agent that learns about supervision the developed system was also able to cope with new data that are continuously fed into the database for the adaptation abilities of its collaborating agents and their reasoning based on nns acknowledgment this work has been supported by project prime funded within por fesr sicilia framework and project prisma funded by the italian ministry of university and research within pon framework r eferences napoli pappalardo tramontana and simplified firefly algorithm for image search in ieee symposium series on computational intelligence ieee gabryel and nowicki creating learning sets for control systems using an evolutionary method in proceedings of artificial intelligence and soft computing icaisc ser lncs vol springer pp bonanno capizzi gagliano and napoli optimal management of various renewable energy sources by a new forecasting 
method in proceedings of international symposium on power electronics electrical drives automation and motion speedam ieee pp nowak and analysis of the active module mechatronical systems in proceedings of mechanika icm kaunas lietuva kaunas university of technology press pp napoli pappalardo and tramontana a hybrid predictor for qos control and stability in proceedings of ai ia advances in artificial intelligence springer pp bonanno capizzi sciuto napoli pappalardo and tramontana a novel toolbox for optimal energy dispatch management from renewables in igss by using wrnn predictors and gpu parallel solutions in power electronics electrical drives automation and motion speedam international symposium on ieee pp nowak and multiresolution derives analysis of module mechatronical systems mechanika vol no pp napoli pappalardo and tramontana using modularity metrics to assist move method refactoring of large systems in proceedings of international conference on complex intelligent and software intensive systems cisis ieee pp pappalardo and tramontana suggesting extract class refactoring opportunities by measuring strength of method interactions in proceedings of asia pacific software engineering conference apsec ieee december tramontana automatically characterising components with concerns and reducing tangling in proceedings of computer software and applications conference compsac workshop quors ieee july doi pp napoli papplardo and tramontana improving files availability for bittorrent using a diffusion model in ieee international workshop on enabling technologies infrastructure for collaborative enterprises wetice june pp giunta pappalardo and tramontana aspects and annotations for controlling the roles application classes play for design patterns in proceedings of asia pacific software engineering conference apsec ieee december pp calvagna and tramontana delivering dependable reusable components by expressing and enforcing design decisions in proceedings of computer software and applications conference compsac workshop quors ieee july doi pp giunta pappalardo and tramontana aodp refactoring code to provide advanced modularization of design patterns in proceedings of symposium on applied computing sac acm tramontana detecting extra relationships for design patterns roles in proceedings of asianplop march capizzi napoli and an innovative hybrid neurowavelet method for reconstruction of missing data in astronomical photometric surveys in proceedings of artificial intelligence and soft computing icaisc springer pp bonanno capizzi sciuto napoli pappalardo and tramontana a cascade neural network architecture investigating surface plasmon polaritons propagation for thin metals in openmp in proceedings of artificial intelligence and soft computing icaisc ser lncs vol springer pp napoli bonanno and capizzi exploiting solar wind time series correlation with magnetospheric response by using an hybrid approach proceedings of the international astronomical union vol no pp capizzi bonanno and napoli hybrid neural networks architectures for soc and voltage prediction of new generation batteries storage in proceedings of international conference on clean electrical power iccep ieee pp napoli bonanno and capizzi an hybrid approach for prediction of solar wind iau symposium no pp capizzi bonanno and napoli a new approach for batteries modeling by local cosine in power electronics electrical drives automation and motion speedam international symposium on june pp duch towards comprehensive foundations of 
computational intelligence in challenges for computational intelligence springer pp capizzi bonanno and napoli recurrent neural networkbased control strategy for battery energy storage in generation systems with intermittent renewable energy sources in proceedings of international conference on clean electrical power iccep ieee pp haykin neural networks a comprehensive foundation prentice hall mika ratsch jason scholkopft and muller fisher discriminant analysis with kernels in proceedings of the signal processing society workshop neural networks for signal processing ix ieee specht probabilistic neural networks neural networks vol no pp deshuang and songde a new radial basis probabilistic neural network model in proceedings of conference on signal processing vol ieee zhao huang and guo optimizing radial basis probabilistic neural networks using recursive orthogonal least squares algorithms combined with algorithms in proceedings of neural networks vol ieee prokhorov santiago and ii adaptive critic designs a case study for neurocontrol neural networks vol no pp online available http javaherian liu and kovalenko automotive engine torque and ratio control using dual heuristic dynamic programming in proceedings of international joint conference on neural networks ijcnn pp widrow and lehr years of adaptive neural networks perceptron madaline and backpropagation proceedings of the ieee vol no pp sep park harley and venayagamoorthy optimal neurocontrol for synchronous generators in a power system using neural networks ieee transactions on industry applications vol no pp sept nowicki and a snijders estimation and prediction for stochastic blockstructures journal of the american statistical association vol no pp xu tresp yu and peter kriegel infinite hidden relational models in in proceedings of international conference on uncertainity in artificial intelligence uai airoldi blei xing and fienberg mixed membership stochastic block models in advances in neural information processing systems nips curran associates miller griffiths and jordan nonparametric latent feature models for link prediction in advances in neural information processing systems nips curran associates vol pp somol haindl and pudil conditional mutual information based feature selection for classification task in progress in pattern recognition image analysis and applications springer pp bonanno capizzi and napoli some remarks on the application of rnn and prnn for the simulation of advanced battery energy storage in proceedings of international symposium on power electronics electrical drives automation and motion speedam ieee pp
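The word-group counting and maximum-probability attribution described in the classifier sections above can be illustrated with a minimal sketch. Everything concrete below (the word groups, the two author prototypes, the Gaussian radial-basis scoring and the sigma value) is an assumption chosen for illustration; the paper's trained RBPNN, critic and adaptive critic are not reproduced here.

```python
# Minimal sketch of word-group feature extraction plus probabilistic
# attribution with a maximum-probability selector. All word groups,
# authors and prototype values are illustrative stand-ins.
from collections import Counter
import math

# Hypothetical word groups; in the described system these are tunable
# and can be added or removed by an external tuning activity.
WORD_GROUPS = {
    "science":  {"theory", "experiment", "energy", "relativity"},
    "politics": {"nation", "freedom", "war", "peace"},
    "religion": {"faith", "spirit", "soul", "belief"},
}

def features(text):
    """Per word group, the fraction of tokens in the sample that fall in it."""
    tokens = Counter(text.lower().split())
    total = sum(tokens.values()) or 1
    return [sum(tokens[w] for w in group) / total
            for group in WORD_GROUPS.values()]

# Hypothetical per-author prototype vectors (centres of radial basis units);
# a real system would learn these from the training samples.
PROTOTYPES = {
    "einstein": [0.08, 0.01, 0.01],
    "lewis":    [0.01, 0.02, 0.06],
}

def attribute(text, sigma=0.05):
    """Gaussian radial-basis score per author, normalised to probabilities,
    followed by the maximum-probability selection."""
    x = features(text)
    scores = {a: math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                          / (2 * sigma ** 2))
              for a, c in PROTOTYPES.items()}
    z = sum(scores.values()) or 1.0
    probs = {a: s / z for a, s in scores.items()}
    return max(probs, key=probs.get), probs

if __name__ == "__main__":
    print(attribute("the theory of relativity changed how we see energy"))
```

The final selection step corresponds to the probabilistic (boolean) selector whose performance before and after the maximum-probability choice is reported above.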
| 9 |
from traces to proofs proving concurrent programs safe apr chinmay subodh shibashis and department of computer science and engineering indian institute of technology delhi email chinmay svs shibashis sak flagi f alse flagj f alse t in scheduling is the cardinal reason for difficulty in proving correctness of concurrent programs a powerful proof strategy was recently proposed to show the correctness of such programs the approach captured dataflow dependencies among the instructions of an interleaved and execution of threads these dependencies were represented by an inductive graph idfg which in a nutshell denotes a set of executions of the concurrent program that gave rise to the discovered dependencies the idfgs were further transformed in to alternative finite automatons afas in order to utilize efficient tools to solve the problem in this paper we give a novel and efficient algorithm to directly construct afas that capture the dependencies in a concurrent program execution we implemented the algorithm in a tool called prooftrapar to prove the correctness of finite state cyclic programs under the sequentially consistent memory model our results are encouranging and compare favorably to existing tools pi w hile true flagi while flag j true t j section flagi pj w hile true flagj while flagi true t i section flagj fig peterson s algorithm for two processes pi and pj shown to hold true on unbounded number of traces trace is a sequence of events corresponding to an interleaved execution of processes in the program generated due to unbounded number of unfoldings of the loops notice that events at control locations and are on events from control locations and respectively in any finite prefix of a trace of pi interleaved execution of pi and pj up to the events corresponding to control location or the last instance of event at control location and the last instance of event at control location can be ordered in only one of the following two ways either appears before or appears after this has resulted in partitioning of an unbounded set of traces to a set with mere two traces when appears before then the final value of the variable t is i thus making the condition at control location to be true in the other case when appears after the final value of the variable t is j thereby making the condition at control location evaluate to true hence in no trace both the conditions are false simultaneously this informal reasoning indicates that both processes can never simultaneously enter in their critical sections thus proof of correctness for peterson s algorithm can be demonstrated by picking two traces as mentioned above from the set of infinite traces and proving them correct in general the intuition is that a proof for a single trace of a program can result in pruning of a large set of traces from consideration to convert this intuition to a feasible verification method there is a need to construct a formal structure from a proof of a trace such that the semantics of this structure includes a set of all those traces that have proof arguments equivalent to proof of inductive data flow graphs idfg was proposed in to capture among the events of a trace and to perform trace partitioning all traces that have the same idfg i ntroduction the problem of checking whether or not a correctness property specification is violated by the program implementation is already known to be challenging in a sequential let alone when programs are implemented exploiting concurrency the central reason for greater complexity 
in verification of concurrent implementations is due to the exponential increase in the number of executions a concurrent program with n threads and k instructions per thread can have nk k n executions under a sequentially consistent sc memory model a common approach to address the complexity due to the exponential number of executions is trace partitioning in a powerful proof strategy was presented which utilized the notion of trace partitioning let us take peterson s algorithm in figure to convey the central idea behind the trace partitioning approach in this algorithm two processes pi and pj coordinate to achieve an exclusive access to a critical section cs using shared variables a process pi will if pj has expressed interest to enter its cs and t is j in order to prove the mutual exclusion me property of peterson s algorithm we must consider the boolean conditions of the while loops at control locations and the me property is established only when at most one of these conditions is false under every execution of the program me must be d the lines of while allowed the use of any sequential verification method to construct a proof of a given trace the paper does not comment on the performance and the feasibility of their approach due to the lack of an implementation the second contribution of this paper is an implementation in the form of a tool prooftrapar we compare our implementation against other tools in this domain such as threader and lazycseq winners in the concurrency category of the software verification competitions held in and prooftrapar on average performed an order of magnitude better than threader and times better than the paper is organized as follows section ii covers the notations definitions and programming model used in this paper section iii presents our approach with the help of an example to convey the overall idea and describes in detail the algorithms for constructing the proposed alternating finite automaton along with their correctness proofs this section ends with the overall verification algorithm with the proof of its soundness and completeness for finite state concurrent programs section iv presents the experimental results and comparison with existing tools namely threader and section v presents the related work and section vi concludes with possible future directions w init true r w r w y w r w t x a w y w y t x y t x r w abc bac b abc bac acb cab bca cba d d c fig comparison with must have the same proof of correctness in every iteration of their approach a trace is picked from the set of all traces that is yet to be covered by the idfg an idfg is constructed from its proof the process is repeated until all the traces are either covered in the idfg or a is found an intervening step is involved where the idfg is converted to an alternating finite automaton afa while we explain afa in later sections it suffices to understand at this stage that the language accepted by this afa and the set of traces captured by the corresponding idfg is the same their reason for this conversion is to leverage the use of operations such as subtraction complement on the set of traces though the goal of paper is verification of concurrent programs which is the same as in this work our work has some crucial differences i an afa is constructed directly from the proof of a trace without requiring the idfg construction ii the verification procedure built on directly constructed afa is shown to be sound and complete are used to obtain the proof of correctness of a trace iii to the 
best of our knowledge we provide the first implementation of the proof strategy discussed in the example trace of figure a highlights the key difference between idfg to afa conversion of and the direct approach presented in this work note that all three events a b and c are data independent hence every resulting trace after permuting the events in abc also satisfies the same set of and for a hoare triple w abc y t x r w figure b shows the set of traces admitted by an afa obtained from idfg shown in figure d after the first iteration as computed by this set clearly does not represent every permutation of abc consequently more iterations are required to converge to an afa that represents all traces admissible under the same set of and in contrast the afa that is constructed directly by our approach from the hoare triple w abc y t x r w admits the set of traces shown in figure c hence on this example our strategy terminates in a single iteration to summarize the contributions of this work are as follows ii p reliminaries a program model we consider concurrent programs composed of a fixed number of deterministic sequential processes and a finite set of shared variables sv a concurrent program is a quadruple p p a i d where p is a finite set of processes a ap p p is a set of automata one for each process specifying their behaviour d is a finite set of constants appearing in the syntax of processes and i is a function from variables to their initial values each process p p has a disjoint set of local variables lvp let expp bexpp denote the set of expressions boolean expressions ranged over by exp and constructed using shared variables local variables d and standard mathematical operators each specification automaton ap is a quadruple qpinit assrnp where qp is a finite set of control states qpinit is the initial state and assrnp qp is a relation specifying the assertions that must hold at some control state each transition in is of the form q opp q where opp assume lock x here evaluates exp in the current state and assigns the value to x where x sv lvp assume is a blocking operation that suspends the execution if the boolean expression evaluates to false otherwise it acts as nop this instruction is used to encode control path conditions of a program lock x where x sv is a blocking operation that suspends the execution if the value of x is not equal to otherwise it assigns to x operation unlock is achieved by assigning to this shared variable each of these operations are deterministic in nature we present a novel algorithm to directly construct an afa from a proof of a sequential trace of a finite state possibly cyclic concurrent program this construction is used to give a sound and complete verification procedure along p a qq qb b qc e wp op wp assume x wp op wp op qp qa t assume assume q b qr a q p qd qs c r qe assume assume qu def def qf qu def if op if op assert def def if op lock x def if op if op assume def of the operation op terminates and the resulting program state satisfies given a formula variable x and expression e let denote the formula obtained after substituting all free occurrences of x by e in we assume an equality operator over formulae that represents syntactic equality every formula is assumed to be normalized in a conjunctive normal form cnf we use true false to syntactically represent a logically valid unsatisfiable formula weakest precondition axioms for different program statements are shown in figure here empty sequence of statements is denote by skip we have the 
following properties about weakest preconditions property if wp op and wp op then wp op and wp op note that this property holds only when s is a deterministic operation which is true in our programming model property let and be the formulas such that logically implies then for every operation op the formula wp op logically implies wp op we say that a formula is stable with respect to a statement s if wp s is logically equivalent to in this paper we use weakest preconditions to check the correctness of a trace with respect to some safety assertion a trace reaching up to a safety assertion is safe if the execution of starting from the initial state i either blocks does not terminate because of not satisfying some path conditions or terminates and the resulting state satisfies the following lemmas clearly define the conditions using weakest precondition axioms for declaring a trace either safe or unsafe detailed proofs of these are given in appendix a and in b here denote the trace obtained by replacing every instruction of the form assume by assert in lemma for a trace an initial program state i and a safety property if wp i is unsatisfiable then the execution of starting from i either does not terminate or terminates in a state satisfying lemma for a trace an initial program state i and a safety property if wp i is satisfiable then the execution of starting from i terminates in a state not satisfying s qf def fig weakest precondition axioms qt d def if op skip def true turn true turn fig specification of peterson s algorithm execution of any two same operations from the same states always give the same behaviour in all examples of this paper we use symbolic labels to succinctly represent program operations for example figure shows the specification of two processes in peterson s algorithm labels a b p denote operations in the program variable res is introduced to specify the mutual exclusion property as a safety property a process pi sets this variable to i inside its critical section assertions assert res i is checked in pi before leaving its critical section if these assertions hold in every execution of these two processes then the mutual exclusion property holds these assertions are shown as qf and qu in figure and they need to be checked at state qf and qu respectively a tuple say t of n elements can be represented as a function such that t k returns the k th element of this tuple given a function f un f un a b denotes another function same as of f un except at a where it returns a parallel composition in the sc memory model given a concurrent program p p a i d consisting of n processes p pn we define an automaton a p q q init assrn to represent the parallel composition of p in the sc memory model here q qpn is the set of states ranged over by q q init qpinit qpinit is the initial n state and transition relation models the interleaving semantics formally q opj q iff there exists a j such that q j qpj q q j j and qpj opj j for a state q let t q assrnpi q i i if t q is not empty then assrn q is the conjunction of assertions in the set t q relation assrn captures the assertions which need to be checked in the interleaved traces of p as our interest lies in analyzing those traces which reach those control points where assertions are specified we mark all those states where the relation assrn is defined as accepting states every word accepted by a p represents one sc execution leading to a control location where at least one assertion is to be checked alternating finite automata afa b 
weakest precondition alternating finite automata are a generalization of nondeterministic finite automata nfa an nfa is a five tuple sf with a set of states s ranged over by s an initial state a set of accepting states sf and a transition function p s for any state s of this nfa the given an operation op op p and a postcondition formula the weakest precondition of op with respect to denoted by wp op is the weakest formula such that starting from any program state s that satisfies the execution s op set of words accepted by s is inductively defined as acc s a s a acc where acc s for all s sf here the existential quantifier represents the fact that there should exist at least one outgoing transition from s along which gets accepted an afa is a six tuple sf with and sf s denoting the alphabet initial state and the set of accepting states respectively s is the set of all states ranged over by s and s p s is the transition function the set of words accepted by a state of an afa depends on whether that state is an existential state from the set or a universal state from the set for an existential state s the set of accepted words is inductively defined in the same way as in nfa for a universal state s the set of accepted words are acc s a s a acc with acc s for all s sf notice the change in the quantifier from to in the diagrams of afa used in this paper we annotate universal states with symbol and existential states with symbol for a state s let succ s a s s a s be the set of of for an automaton a let l a be the language accepted by the initial state of that automaton for any denote the length of and rev denote the reverse of wp op amap s is an existential state and s rmap where is the longest sequence wp amap s amap s l iteral ssn s if sk if s wp op amap s and is an existential state l iteral elf ssn s or amap s sk rmap s rmap sk c ompound ssn otherwise fig transition function used in the definition op op is the alphabet ranged over by op here op is the set of instructions used in program p symbol acts as an identity element of concatenation and wp s is the largest set of states ranged over by s a every state is annotated with a formula and a prefix of denoted by amap s and rmap s respectively state is the initial state such that amap rmap b s iff either of the following two conditions hold s such that amap is wp op amap s rmap s rmap and is the largest suffix of rmap s such that formula amap s is stable with respect to s such that amap s or amap s rmap s rmap amap and c a state s s is an existential state universal state iff amap s is a literal compound formula sf s is a set of accepting states such that s sf iff wp rmap s amap s is same as amap s amap s is stable with respect to rmap s and function s is defined in figure following point any state added to s is either annotated with a smaller rmap or a smaller formula compared to the states already present in further every formula and trace is of finite length hence the set of states s is finite by point of this construction a state s where amap s is a compound formula is always a universal state irrespective of whether amap s is a conjunction or a disjunction of clauses the reason behind this decision will be clear shortly when we will use this afa to inductively construct the weakest precondition wp note that we assume every formula is normalized in cnf figure shows an example trace abapqprcs of peterson s algorithm this trace is picked from the peterson s iii o ur a pproach the overall approach of this paper can be described in the 
following steps i given a concurrent program p construct all its interleaved traces represented by automaton a p as defined in subsection ii pick a trace and a safety property say to prove for this trace iii prove correct with respect to using lemma and lemma and generate a set of traces which are also provably correct let us call this set t iv remove set t from the set of traces represented by a p and repeat from step ii until either all the traces in p are proved correct or an erroneous trace is found step iii of this procedure correctness of can be achieved by checking the unsatisfiability of wp i however we are not only interested in checking the correctness of but also in constructing a set of traces which have a similar reasoning as of therefore instead of computing wp directly from the weakest precondition axioms of figure we construct an afa from and step iv is then achieved by applying automatatheoretic operations such as complementation and subtraction on this afa notion of universal and existential states of afa helps us in finding a set of sufficient dependencies used in the weakest precondition computation so that any other trace satisfying those dependencies gets captured by afa subsequent subsections covers the construction properties and use of this afa in detail constructing the afa from a trace and a formula definition an afa constructed from trace of a program p and a formula sf amap where if a is a e a p abapqprcs s s a p true a p abapqpr p false abapq false turn false false a e a p c r a p res c abapqprc a abapq false turn a abapq turn b q a p q ab false false turn ab ab ab p t a p turn ab b q a p b false a false abap fig afa of trace given in figure b and false assume turn false assume turn fig a trace from peterson s algorithm hmap s specification in figure to prove correct with respect to the def safety formula we first construct which will later help us to derive wp this afa is shown in figure for a state s amap s is written inside the rectangle representing that state and rmap s is written inside an ellipse next to that state we show here some of the steps illustrating this construction by definition we have amap and rmap abapqprcs for initial state in a transition s op created by rule l iteral ssn the state is annotated with the weakest precondition of an operation op taken from rmap s with respect to amap s operation op is picked in such a way that amap s is stable with respect to every other operation present after op in rmap s such transitions capture the inductive construction of the weakest precondition for a given and trace transition s in figure is created by this rule as wp s amap amap and rmap rmap in any transition created by rule c ompound ssn say from s to sk the states are annotated with the subformulae of amap s for example transitions and transition a follows from the rule l iteral ssn note that rmap is empty and hence by point of definition is an accepting state following the same reasoning states and are also set as accepting states rule l iteral elf ssn adds a self transition at a state s on a symbol op op such that amap s is stable with respect to op for example transitions op where op op s a p the following lemma relates rmap s at any state to the set of words accepted by s in this afa lemma given a l a p and let be the afa satisfying definition for every state s of this afa the condition rev rmap s acc s holds a detailed proof of this lemma is given in appendix this lemma uses the reverse of rmap s in its statement because if s sf amap s hmap 
sk k hmap sk k hmap s base case if s sk and amap s amap sk k c onj case if s sk and amap s amap sk k d isj case if s op l case fig rules for hmap construction the weakest precondition of a sequence is constructed by scanning it from the end this can be seen in the transition rule l iteral ssn as a corollary rev is also accepted by this afa because by definition rmap is constructing the weakest precondition from after constructing the rules given in figure are used to inductively construct and assign a formula hmap s to every state s of figure shows the afa of figure where states are annotated with formula hmap s this formula is shown in the ellipse beside every state for better readability we do not show rmap s in this figure following rule base case hmap of and are set to false whereas hmap is set to false by rule l case hmap of and are also set to false after applying rule d isj case for transition hmap is set to false similarly using rule c onj case we get hmap as false finally hmap is also set to false hmap constructed inductively in this manner satisfies the following property lemma let be an afa constructed from a trace and a post condition as in definition then for every state s of this afa and for every word accepted by state s hmap s is logically equivalent to wp rev amap s here we present the proof outline detailed proof is given in appendix first consider the accepting states of for example states and of figure following the definition of an accepting state and by the adding transition rule l iteral elf ssn a e a p false s res c false true a p false s a p false false false turn c r a p false turn false false a e a p b q a p false false false false false turn b q a p turn b false a false p false turn false false false a false p t a p q false false algorithm converting universal to existential states while preserving lemma data input afa op sf amap result modified afa let s be a state in afa such that s s sk hmap s is unsatisfiable and amap s amap sk let unsatcore s p sk such that unsatcore s iff hmap hmap is a minimal unsat core of hmap sk create an empty set u foreach sn unsatcore s do create a new universal state su and add it to the set u set amap su amap set hmap su hmap add a transition by setting su end remove transition s sk convert s to an existential state add a transition from s on by setting s u where u is the set of universal states created one for each element of unsatcore s fig hmap construction for the running example enlarging the set of words accepted by ery word accepted by such an accepting state s satisfies wp rev amap s amap s therefore setting hmap s as amap s for these accepting states as done in rule base case completes the proof for accepting states converting universal states to existential states figure shows an example trace abcde obtained from the parallel composition of some program p figure shows the afa constructed for and as s t z x from lemma we get wp as false note that the wp s t and wp z x are unsatisfiable we have two ways to derive the unsatisfiability of wp one is due to the operation d and the other is due to the operation a followed by operation in this example any word that enforces either of these two ways will derive false as the weakest precondition for example the sequence adcbe is not accepted by the afa of figure but the condition wp rev false follows from wp d false which is already captured in the afa of figure note that states and in figure are annotated with unsatisfiable hmap assertion it seems sufficient to take any one of 
these branches to argue the unsatisfiability of hmap because hmap by definition is a conjunction of hmap and hmap therefore if we convert a universal state to an existential state then the modified afa will accept adcbe let us look at algorithm to see the steps involved in this transformation this algorithm picks a universal state s such that amap s is a conjunction of clauses and only a subset of its successors are sufficient to make hmap s unsatisfiable state of figure is one such state for each such minimal subsets of its successors this algorithm creates a universal state as shown in line of this algorithm it is easy to see that hmap su is also unsatisfiable before adding su transition in afa this algorithm sets amap su as amap by construction every word accepted by su must be accepted by each of these states satisfy lemma hence lemma continues to hold for these newly created universal states as well now consider a newly created transition s u in line for any state u amap s logically implies amap because represents a subset of the original successors now consider a state s with transition s sk created using rule c ompound ssn and let be a word accepted by by construction s must be a universal state and hence must be accepted by each of sk as well using this lemma inductively on successor states sk induction on the formula size we get wp amap si hmap si for all i now we can apply property depending on whether amap s is a conjunction or a disjunction of amap sk by replacing amap s with amap sk amap sk and hmap s with hmap sk hmap sk completes the proof note that making s as a universal state when amap s is either a conjunction or a disjunction allowed us to use property in this proof otherwise if we make s an existential state when amap s is a disjunction of formulae then we can not prove this lemma for states where hmap s is constructed using rule d isj case this lemma serves two purposes first it checks the correctness of a trace a safety property for which this afa was constructed if hmap i is unsatisfiable as in our peterson s example trace then is declared as correct second it guarantees that every trace accepted by this afa that is present in the set of all traces of p is also safe and hence we can skip proving their correctness altogether removing such traces is equivalent to subtracting the language of this afa from the language representing the set of all traces then a natural question to ask is if we can increase the set of accepted words of this afa while preserving lemma a e a p p false false false false s t d false a b c e a b c d e false s t z x false z e a b d a a b c d e fig example trace false s y x false b c d e true a p a false false false res c p c r a p fig afa for given in figure s op s op iff false false false false a e a p s a p false turn a false false false su false turn a false q false false false false turn b q a p fig afa of figure after modification algorithm algorithm to check the safety assertions of a concurrent program p input a concurrent program p pn with safety property map assrn result yes if program is safe else a counterexample let a p bet the automaton that represents the set of all the sc executions of p as defined in section ii set tmp l a p while tmp is not empty do let tmp with as a safety assertion to be checked let be the afa constructed from and if i hmap is satisfiable then is a valid counterexample violating return else let be the afa modified by proposed transformations tmp tmp rev where rev rev l end end return yes hmap s and hmap are 
unsatisfiable s is a literal and op amap s amap rule nsat or hmap s and hmap are valid s is a literal and wp op amap s rule fig rules for adding more edges of s viz sk as s is now an existential state any word accepted by s say is accepted by at least one state in u say using lemma on hmap is logically equivalent to wp rev amap using unsatisfiability of hmap s and hmap and the monotonicity property of the weakest precondition property we get that hmap s is logically equivalent to wp rev amap s this transformation is formally proved correct in appendix adding more transitions to using the monotonicity property of the weakest precondition we further modify by adding more transitions for any two states s and such that amap s and amap are literals both hmap s and hmap are unsatisfiable and there exists a symbol a can be as well such that wp a amap s logically implies amap an edge labeled a is added from s to this transformation also preserves lemma following the same monotonicity property property used in the previous transformation similar argument holds when hmap s and hmap are valid and amap wp a amap s holds the rules of adding edges are shown in figure figure shows the afa of figure modified by above transformations rule rule nsat adds an edge from to on symbol because hmap and hmap are unsatisfiable and wp amap logically implies amap same rule also adds a self loop at on operation p and a self loop at on operation a transformation by algorithm removes the transition from to and all other states reachable from now consider a trace rev abpqparcs that is accepted by this modified afa in figure but was not accepted by the original afa of figure note that wp abpqparcs is unsatisfiable and this is a direct consequence of lemma because of the transformations presented in this we do not need to reason about this trace separately this transformation is formally proved correct in appendix putting all things together for safety verification in algorithm all the above steps are combined to check if all the sc executions of a concurrent program p satisfy the safety properties specified as assertions proof of the following theorem is given in appendix theorem let p pn be a finite state program with or without loops with associated assertion maps assrnpi all assertions of this program hold iff algorithm returns yes if the algorithm returns a word then at least one assertion fails in the execution of program prooftrapar threader handle larger number of interleavings these optimizations also selectively check a representative set of traces among the set of all interleavings por based methods were traditionally used in bug finding but recently they have been extended efficiently using abstraction and interpolants for proving programs correct the technique presented in this paper using afa can possibly be used to keep track of partial orders in por based methods in a formalism called concurrent trace program ctp is defined to capture a set of interleavings corresponding to a concurrent trace ctp captures the partial orders encoded in that trace corresponding to a ctp a formula is defined such that is satisfiable iff there is a feasible linearization of the partial orders encoded in ctp that violates the given property our afa is also constructed from a trace but unlike ctp it only captures those different interleavings which guarantee the same proof outline recently in a formalism called has been proposed to capture the set of relations in a set of executions this relation is then used for multiple tasks 
such as synchronization synthesis bug summarization and predicate refinement since the afa constructed by our algorithm can also be represented as a boolean formula universal states correspond to conjunction and existential states correspond to disjunction that encodes the ordering relations among the participating events it will be interesting to explore other usages of this afa along the lines of fig comparison with threader and time in seconds iv e xperimental e valuation we implemented our approach in a prototype tool prooftrapar this tool reads the input program written in a custom format in future we plan to use parsers such as cil or llvm to remove this dependency individual processes are represented using finite state automata we use an automata library libfaudes to carry out operations on automata as this library does not provide operations on afa mainly complementation and intersection we implemented them in our tool after constructing the afa from a trace we first remove transitions from this afa this is followed by adding additional edges in afa using proposed transformations instead of reversing this afa as in line of algorithm we subtract it with an nfa that represents the reversed language of the set of all traces this avoids the need of reversing an afa note that we do not convert our afa to nfa but rather carry out intersection and complementation operations needed for language subtraction operation directly on afa our tool uses the theorem prover to check the validity of formulae during afa construction prooftrapar can be accessed from the repository https figure tabulates the result of verifying pthreadatomic category of benchmarks using our tool threader and these tools were the winners in the concurrency category of the software verification competition of threader and dash denotes that the tool did not finish the analysis within minutes numbers in bold text denote the best time of that experiment versions of these programs are labeled with except on lock and on unsafe version of qrcu quick read copy update our tool performed better than the other two tools on unsafe versions our approach took more time to find out an erroneous trace as compared to exploration by and the presence of bugs at a shallow depth seem to be a possible reason behind this performance difference introducing priorities while picking traces in order to make our approach efficient in is left open for future work vi c onclusion and f uture w ork we presented a trace partitioning based approach for verifying safety properties of a concurrent program to this end we introduced a novel construction of an alternating finite automaton to capture the proof of correctness of a trace in a program we also presented an implementation of our algorithm which compared competitively with existing tools we plan to extend this approach for parameterized programs and programs under relaxed memory models we also plan to investigate the use of interpolants with weakest precondition axioms to incorporate abstraction for handling infinite state programs r eferences brzozowski and ernst on equations for regular languages finite automata and sequential networks tcs clarke henzinger radhakrishna ryzhyk samanta and tarrach from to preemptive scheduling using synchronization synthesis in cav chandra kozen and stockmeyer alternation acm january de moura and an efficient smt solver in tacas pages bernd opitz et al event system library farzan kincaid and podelski inductive data flow graphs in popl pages flanagan and godefroid 
dynamic reduction for model checking software in popl pages godefroid methods for the verification of concurrent systems an approach to the problem springer gupta henzinger radhakrishna samanta and tarrach succinct representation of concurrent trace sets in popl gupta popeea and rybalchenko threader a verifier for programs in cav pages r elated w ork verifying the safety properties of a concurrent program is a well studied area automated verification tools which use model checking based approaches employ optimizations such as partial order reductions por to a ppendix inverso tomasco fischer la torre and parlato bounded model checking of c programs via lazy sequentialization in cav volume of lncs pages springer lamport how to make a multiprocessor computer that correctly executes multiprocess programs ieee trans september peled all from one one for all on model checking using representatives in cav pages wachter kroening and ouaknine verifying software with impact in fmcad pages ieee wang kundu ganai and gupta symbolic predictive analysis for concurrent programs in ana cavalcanti and dennisr dams editors fm formal methods volume of lncs pages springer berlin heidelberg a proof of lemma we prove it by induction on base case if then wp if i is unsatisfiable then i satisfies hence proved induction step n let if wp i is unsatisfiable then following cases can happen based on a a x e if wp i is unsatisfiable then wp wp a i is also unsatisfiable by substituting wp a with we get that wp i is unsatisfiable using ih on it implies that after executing from i the resultant state either does not terminate or terminates in a state satisfying if does not terminate then so does the execuction of starting from i if terminates in a state satisfying then by the definition of the weakest precondition execution of a from this state will satisfy hence proved a assume wp i is unsatisfiable then wp wp a i is also unsatisfiable by substituting wp a with we get that wp i is unsatisfiable using ih on it implies that after executing from i the resultant state either does not terminate or terminates in a state satisfying if does not terminate then the execution of from i does not terminate as well if terminates in a state satisfying then the execution of a blocks and hence the execution of does not terminate if terminates in a state satisfying but does not hold then must hold execution of assume acts as nop instruction and the resultant state satisfies hence proved a lock x as weakest precondition of lock x is obtained from the weakest precondition of assignment and assume instruction hence the similar reasoning works for this case b proof of lemma proof let us prove it by induction on the length of base case when the length of is and i is satisfiable then i does not satisfy hence proved induction step n let following case can happen based on the type of a a x e if wp i is satisfiable then wp wp a i is also satisfiable by substituting wp a we get that wp i is satisfiable by ih on execution of from i terminates in a state not satisfying by definition of the weakest precondition the state reached after executing a from this state does not satisfy hence proved a assume wp i is satisfiable then wp wp assume i is also satisfiable by substituting wp assume we get that wp i is satisfiable by ih on execution of from i terminates in a state not satisfying in other words and holds in the state reached after executing from i therefore after executing assume the resultant state satisfies and hence proved a lock x to the 
combination of above two cases an accepting state it should have a successor state such that s op is a transition by transition rule l iteral ssn rmap s rmap such that wp amap s amap s by transition rule l iteral elf ssn s will have self loop transitions on all symbols in applying ih on gives that rev rmap acc because of the transition s op acc s this along with gives us rmap acc s rearranging this and using we get rev rmap acc s or equivalently rev rmap s acc s hence proved proof of lemma proof we use induction for this proof same as in the previous proof let us use the following ordering on the states of for any two states s and s if s or if lengths are same then amap s is a sub formula of amap any two states which are not related by this order put them in any order to make as a total order it is clear that the smallest state in this total order must be one of the accepting state now we are ready to proceed by induction using this total order base case by definition of the accepting state in afa construction point of definition and the self loop transition rule rule l iteral elf ssn we know that for every word acc s wp amap s amap s rule base case of figure sets hmap s same as amap s for such states hence the statement of this lemma follows for the accepting states induction step we pick a state s such that one of the following holds s is a universal state by construction there should be states sk such that s sk is a transition let be a word accepted by s then by the definition of accepting set of words of a universal states must be accepted by each of by our induction ordering sk are smaller than s and hence we apply ih on them to get that wp rev amap si hmap si for i two cases arise based on whether amap s is a conjunction of amap si for i following rule c onj case we set hmap s hmap si and wp rev amap s hmap s then follows from the property using conjunction of the weakest precondition amap s is a disjunction of amap si for i following rule c onj case we set hmap s hmap si and wp rev amap s hmap s then follows from the property using disjunction of the weakest precondition proof of lemma proof we use induction for this proof let us use the following ordering on the states of for any two states s and s if s or if lengths are same then amap s is a sub formula of amap any two states which are not related by this order put them in any order to make as a total order it is clear that the smallest state in this total order must be one of the accepting state now we are ready to proceed by induction using this total order base case for every accepting state s sf by point of definition the condition wp op amap s amap s holds for every op el rmap s further by transition rule l iteral elf ssn of this afa a self transition must be there for all such op el rmap s and hence the condition rev rmap s acc s holds because these transitions can be taken in any order to construct the required word induction step following possibilities exist for the state s s is a universal state by construction there should be states sk such that s sk is a transition by our induction ordering sk are smaller than s and hence we apply ih on them to get that rev rmap si acc si for i however by the transition rule c ompound ssn rmap s rmap rmap sk and hence rev rmap s acc si for i by the definition of acc s for a universal state acc s is intersection of the sets acc si for i and hence we get the required result viz rev rmap s acc s s is an existential state if s is an accepting state then base case holds here consider the 
case when s is not s is an existential state if s is an accepting state then the same argument as used in the base case holds if s is not an accepting state then the only outgoing transition from s is of the form s op by rule l iteral ssn now consider a word acc s must be of the form where wp amap s amap s because of the self transitions constructed from rule l iteral elf ssn and acc therefore wp rev amap s rev amap s rev amap s rev amap s using rev wp op amap s using weakest precondition definition rev amap using transition rule l iteral ssn as acc this is same as hmap by applying ih on as hmap s is same as hmap as done in rule l case we prove this case as well accepted by state s hmap s is logically equivalent to wp rev amap s proof as a result of adding edges in this transformation we can not use the ordering among states as done for earlier proofs this is because now a transition s op s does not guarantee that the states in the set s are smaller then s and hence it will not be possible to apply ih directly therefore in this proof we apply induction on the length of accepted by some state induction step let s and acc s such that m either s or s if s and acc s then there exists a state such that s op and acc where and wp amap s amap s based on this transition s op we have the following s op was added by the this transformation virtue of one of the following conditions hmap s and hmap are unsatisfiable and wp op amap s amap rule rule nsat by ih on we have wp rev amap is logically equivalent to hmap using property conjunction part and the assumption wp op amap s amap we get wp rev wp op amap s is unsatisfiable and same as hmap s using wp rev wp amap s is unsatisfiable and same as hmap s by replacing we get the required proof hmap s and hmap are valid and amap wp op amap s rule rule by ih on we have wp rev amap is logically equivalent to hmap using property disjunction part and the assumption amap wp op amap s we get wp rev wp op amap s is valid and same as hmap s using and or replacing we get the required result and hence proved if this transition was already in we can use the same reasoning as used in the proof of lemma to show that wp rev amap s is logically equivalent to hmap s if s then similar argument goes as in the proof of lemma because no new transition gets added from these states as a result of this transformation proof of correctness of lemma let be an automaton constructed from a trace and a post condition as defined in definition and further modified by algorithm then for every state s of this afa and for every word accepted by state s hmap s is logically equivalent to wp rev amap s proof proof of this lemma is very similar to the proof of lemma given in appendix here we only highlight the changes in the proof note that this transformation converts some universal states to existential states let s be one such state that was converted from universal to existential state let s sk was the original transition in the afa which got modified to s sun where sui are newly created universal states in line of algorithm by construction hmap sui is unsatisfiable for each of these sun let be a word accepted by s after converting it to existential state by acceptance conditions must be accepted by at least one state say sum in the set sun by ih on sum we get wp amap sum hmap sum further by construction amap s implies amap sum this fact along with the monotonicity property of the weakest precondition property we get that wp amap s is unsatisfiable and hence same as hmap s proof of 
correctness of lemma let be an trace and a post condition further modified by adding for every state s of this automaton constructed from a as defined in definition and edges as discussed above then afa and for every word proof of theorem proof let us first prove that this algorithm terminates for finite state programs for finite state programs the number of possible assertions used in the construction of afa are finite and hence only a finite number of different afa are possible it implies the termination of this algorithm following lemma and the fact that amap every word accepted by this afa equivalently written as acc satisfies wp rev hmap by lemma and the fact that rmap we get rev acc combining and we get wp rev rev hmap or equivalently wp hmap if i hmap is satisfiable line then i wp is satisfiable as well following lemma we got a valid error trace which is returned in line if i hmap is unsatisfiable then by lemma this trace is provably correct now we apply transformations of section on the afa to increase the set of words accepted by it the final afa is then reversed and subtracted from the set of executions seen so far lemma ensures that for all such words the condition i wp holds and therefore none of them violate starting from the initial state therefore in every iteration only correct set of executions are being removed from the set of all executions therefore when this loop terminates then all the executions have been proved as correct
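The weakest-precondition computation and the trace-safety test of Lemmas 1 and 2 above can be sketched directly with an SMT solver; the paper's tool discharges validity queries of exactly this kind with an SMT prover during AFA construction. The sketch below is a simplified assumption, not the PROOFTRAPAR implementation: the variable names, the toy trace and the use of Z3's Python bindings are illustrative choices, and lock/unlock are omitted (as noted in the appendix, their weakest precondition reduces to that of an assume followed by an assignment).

```python
# Backward weakest-precondition computation over a single trace, plus the
# safety check "init implies wp(trace, post)". Trace and variables are
# illustrative; only assignment and assume operations are handled.
from z3 import Ints, And, Not, Implies, Solver, substitute, IntVal, unsat

x, t, res = Ints("x t res")

def wp(op, post):
    """wp for the two operation kinds of the programming model."""
    kind = op[0]
    if kind == "assign":            # ("assign", var, expr): post[expr/var]
        _, var, expr = op
        return substitute(post, (var, expr))
    if kind == "assume":            # ("assume", cond): cond -> post
        _, cond = op
        return Implies(cond, post)
    raise ValueError(kind)

def wp_trace(trace, post):
    """Scan the trace from the end, as in the inductive construction."""
    for op in reversed(trace):
        post = wp(op, post)
    return post

def trace_is_safe(trace, init, post):
    """Safe iff init /\\ not(wp(trace, post)) is unsatisfiable."""
    s = Solver()
    s.add(init, Not(wp_trace(trace, post)))
    return s.check() == unsat

if __name__ == "__main__":
    # Hypothetical interleaving: both processes write t, one then reads it.
    trace = [("assign", t, IntVal(1)),
             ("assign", t, IntVal(2)),
             ("assume", t == 2),
             ("assign", res, IntVal(2))]
    init = And(x == 0, t == 0, res == 0)
    print(trace_is_safe(trace, init, res == 2))   # expected: True
```

Roughly as in the lemmas, unsatisfiability of init together with the negated weakest precondition certifies that every execution of the trace from the initial state either blocks, does not terminate, or terminates in a state satisfying the assertion.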
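The acceptance condition of the alternating finite automata used throughout Section III, where an existential state needs one accepting successor on the next symbol and a universal state needs all of them, can also be made concrete. The toy automaton below is an assumption for illustration only and is unrelated to the AFA built by Definition 1 from a concrete trace.

```python
# Toy acceptance check for an alternating finite automaton (AFA): an
# existential state needs one accepting successor, a universal state needs
# all of them. The example automaton is an illustrative assumption.
class AFA:
    def __init__(self, delta, universal, accepting):
        self.delta = delta            # (state, symbol) -> frozenset of successors
        self.universal = universal    # universal states; all others existential
        self.accepting = accepting    # accepting states

    def accepts(self, state, word):
        if not word:
            return state in self.accepting
        succs = self.delta.get((state, word[0]), frozenset())
        rest = word[1:]
        if state in self.universal:
            return bool(succs) and all(self.accepts(s, rest) for s in succs)
        return any(self.accepts(s, rest) for s in succs)

if __name__ == "__main__":
    # q0 is universal: on 'a' both branches must accept the remaining word.
    afa = AFA(
        delta={("q0", "a"): frozenset({"q1", "q2"}),
               ("q1", "b"): frozenset({"qf"}),
               ("q2", "b"): frozenset({"qf"})},
        universal={"q0"},
        accepting={"qf"},
    )
    print(afa.accepts("q0", "ab"))   # True: both universal branches reach qf
    print(afa.accepts("q0", "aa"))   # False
```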
| 6 |
Results for Cornish–Fisher expansions

Ulyanov, Aoshima, and Fujikoshi

Abstract. We obtain computable error bounds for generalized Cornish–Fisher expansions for quantiles of statistics, provided that computable error bounds for the corresponding type expansions for the distributions of these statistics are known. The results are illustrated by examples.

Introduction and main results. In statistical inference it is of fundamental importance to obtain the sampling distribution of statistics. However, we often encounter situations where the exact distribution cannot be obtained in closed form, or, even if it is obtained, it might be of little use because of its complexity. One practical way of getting around this problem is to provide reasonable approximations of the distribution function and its quantiles, along with extra information on their possible errors. This can be done with the help of ... and ... type expansions. Recently, interest in such expansions has grown because of the intensive study of VaR (value at risk) models in financial mathematics and financial risk management (see ...). Mainly, the asymptotic behavior of the expansions mentioned above is studied. This means that the accuracy of approximation for the distribution of a statistic or for its quantiles is given as o(...), that is, in the form of an order with respect to some parameter(s), usually n, the number of observations, and p, the dimension of the observations. In this paper we construct error bounds, in other words computable error bounds, for such expansions; that is, for the error of approximation we prove upper bounds that depend on n, p, and perhaps on some moment characteristics of the observations. We get these bounds under the condition that similar non-asymptotic results are already known for the accuracy of approximation of the distributions of the statistics by the corresponding type expansions.

Key words and phrases: computable bounds, ... results, Cornish–Fisher expansions. This work was supported by RSCF grant no. ...

Let X be a univariate random variable with a continuous distribution function F. For ... there exists x such that F(x) ...
the following form with k or xk u u x bj u u with u o results for expansions it is known see how to find the explicit expressions for u and u as soon as we have by taylor s expansions for g g and we obtain u g u u u u u u provided that g and are smooth enough functions in the following theorems we show how xk u from could be expressed in terms of u moreover we show what kind of bounds we can get for x as soon as we have some bounds for rk x from theorem suppose that for the distribution function of a statistic u we have f x pr u x g x x where for remainder term x there exists a constant such that x let and be the upper points of f and g respectively that is pr u g then for any such that we have i ii u where g is the density function of the limiting distribution g and g u min g u theorem in the notation of theorem we assume that f x pr u x g x x a x x where for remainder term x there exists a constant such that x let t t u be a monotone increasing transform such that pr t u x g x x with x let and be the upper points of pr t u x and g respectively then for any such that we have where u g u min g u ulyanov aoshima and fujikoshi theorem we use the notation of theorem let b x be a function inverse to t b t x x then b and for such that we have b g u where max u moreover b x x x o remark the main assumption of the theorems is that for distributions of statistics and for distributions of transformed statistics we have some approximations with computable error bounds there are not many papers with this kind of results because it requires technique which is different from the asymptotic results methods and in series of papers we got results for wide class of statistics including multivariate scale mixtures and manova tests we considered as well the case of high dimensions that is the case when the dimension of observations and sample size are comparable the results were included in the book see also remark the results of theorems could not be extended to the whole range of it follows from the fact that the expansion does not converge uniformly in see corresponding example in section of remark in theorem we required the existence of a monotone increasing transform t z such that distribution of transformed statistic t u is approximated by some limit distribution g x in better way than the distribution of original statistic u we call this transformation t z the bartlett type correction see corresponding examples in section remark according to and the function b in theorem could be considered as an asymptotic expansion for up to order o proofs of main results proof of theorem by the mean value theorem g min g from and the definition of and in we get g pr u results for expansions therefore g on the other hand it follows from that g g this implies that similarly we have therefore we proved theorem i it follows from theorem i that min g u min g thus using we get statement of theorem ii proof of theorem it is easy to see that it is sufficient to apply theorem ii to the transformed statistic t u proof of theorem we obtain using now and the mean value theorem b b where is a point on the interval min b max b by theorem i we have therefore for b we get min max since by properties of derivatives of inverse functions z z y for z b y the relations and imply representation for b x follows from and examples in we gave sufficient conditions for transformation t x to be the bartlett type correction see remark above for wide class of statistics u allowing the following represantion pr u x gq x k x aj x n where o and gq 
x is the distribution function of chisquared distribution with q degrees of freedom and coefficients aj s p satisfy the relation aj some examples of the statistic u are ulyanov aoshima and fujikoshi as follows for k the likelihood ratio test statistic for k the trace criterion and the trace criterion which are test statistics for multivariate linear hypothesis under normality for k the score test statistic and hotelling s t statistic under nonnormality the results of were extended in and in we were interested in the null distribution of hotelling s generalized statistic defined by n trsh where sh and se are independently distributed as wishart distributions wp q ip and wp n ip with identity operator ip in rp respectively in theorem ii in we proved for all n p with k and computable error bound r x gr x q p gr x x q p x cp q n where r pq and for constant cp q we gave expicit formula with dependence on p and q therefore according to we can take in this case the bartlett type correction t z as z t z b where p q p p q p q it is clear that t z is invertable and we can apply theorem other examples and numerical calculations and comparisons of approximation accuracy see in and one more example is connected with sample correlation coefficient xn t and yn t be two vectors from an let x normal distribution n in with zero mean identity covariance matrix in and the sample correlation coefficient pn xk y k r r x y ppn pn yk xk in it was proved for n and n n bn x supx pr n r x x n results for expansions with bn it is easy to see that we can take t z as the bartlett type correction in the form t z z z then the inverse function b z t z is defined by formula p b z n z p n z o n n now we can apply theorem references bol shev asymptotically pearson transformations theor probab christoph ulyanov and fujikoshi accurate approximation of correlation coefficients by short expansion and its statistical applications springer proceedings in mathematics and statistics cornish and fisher moments and cumulants in the specification of distributions rev inst internat enoki and aoshima transformations with improved approximations proc res inst math kyoto enoki and aoshima transformations with improved asymptotic approximations and their accuracy sut journal of mathematics fisher and cornish the percentile points of distributions having known cumulants amer statist fujikoshi and ulyanov error bounds for asymptotic expansions of wilks lambda distribution journal of multivariate analysis fujikoshi and ulyanov on accuracy of approximations for location and scale mixtures journal of mathematical sciences fujikoshi ulyanov and shimizu multivariate statistics highdimensional and approximations wiley series in probability and statistics john wiley sons hoboken fujikoshi ulyanov and shimizu error bounds for asymptotic expansions of multivariate scale mixtures and their applications to hotelling s generalized journal of multivariate analysis fujikoshi ulyanov and shimizu error bounds for asymptotic expansions of the distribution of multivariate scale mixture hiroshima mathematical journal hall the bootstrap and edgeworth expansion new york hill and davis generalized asymptotic expansions of cornishfisher type ann math jaschke the in the context of approximations j risk ulyanov aoshima and fujikoshi ulyanov expansions in international encyclopedia of statistical science ed ulyanov and fujikoshi on approximations of transformed distributions in statistical applications siberian mathematical journal ulyanov and fujikoshi on accuracy of 
improved. Georgian Mathematical Journal. Ulyanov, Fujikoshi and Shimizu. Non-uniform error bounds in asymptotic expansions for scale mixtures under mild moment conditions. Journal of Mathematical Sciences. Ulyanov, Wakaki and Fujikoshi. Bound for high-dimensional asymptotic approximation of Wilks' lambda distribution. Statistics and Probability Letters. Wakaki, Fujikoshi and Ulyanov. Asymptotic expansions of the distributions of MANOVA test statistics when the dimension is large. Hiroshima Mathematical Journal. Ulyanov: Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia, and National Research University Higher School of Economics (HSE), Moscow, Russia; address: vulyanov. Aoshima: Institute of Mathematics, University of Tsukuba, Tsukuba, Ibaraki, Japan; address: aoshima. Fujikoshi: Department of Mathematics, Hiroshima University, Japan; address: fujikoshi y
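The quantile machinery used above — the generalized inverse F^{-1}(u) = inf{x : F(x) >= u} and the Cornish–Fisher rearrangement of an Edgeworth-type expansion — can be illustrated numerically. The sketch below is not taken from the paper: it uses the classical third-order Cornish–Fisher adjustment for a standardized statistic, written in terms of its skewness and excess kurtosis, and compares it with a Monte Carlo estimate of the true quantile for the standardized mean of exponential variables. All function names are illustrative, and the computable error-bound constants of the theorems above are not reproduced here.

```python
import numpy as np
from scipy import stats


def cornish_fisher_quantile(alpha, skew, ex_kurt):
    """Classical third-order Cornish-Fisher approximation of the alpha-quantile
    of a standardized (mean 0, variance 1) statistic with the given skewness
    and excess kurtosis."""
    z = stats.norm.ppf(alpha)
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * ex_kurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)


def generalized_inverse(sample, u):
    """Empirical version of F^{-1}(u) = inf{x : F(x) >= u}."""
    xs = np.sort(np.asarray(sample))
    k = int(np.ceil(u * len(xs)))        # smallest index with empirical CDF >= u
    return xs[max(k - 1, 0)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, alpha = 20, 0.95
    # Standardized mean of n Exp(1) variables: skewness 2/sqrt(n), excess kurtosis 6/n.
    sims = (rng.exponential(1.0, size=(200_000, n)).mean(axis=1) - 1.0) * np.sqrt(n)
    print("Monte Carlo quantile :", generalized_inverse(sims, alpha))
    print("Normal approximation :", stats.norm.ppf(alpha))
    print("Cornish-Fisher       :", cornish_fisher_quantile(alpha, 2 / np.sqrt(n), 6 / n))
```

For n = 20 and alpha = 0.95 the Cornish–Fisher value lands much closer to the simulated quantile than the plain normal quantile does; the theorems above are about making that kind of gap explicit and non-asymptotic.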
| 10 |
Confidence Score for Neural Network Classifiers. Amit Mandelbaum, School of Computer Science and Engineering, Hebrew University of Jerusalem, Israel. Daphna Weinshall, School of Computer Science and Engineering, Hebrew University of Jerusalem, Israel.

Abstract. The reliable measurement of confidence in classifiers' predictions is very important for many applications and is therefore an important part of classifier design. Yet, although deep learning has received tremendous attention in recent years, not much progress has been made in quantifying the prediction confidence of neural network classifiers. Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with prohibitive computational costs. In this paper we propose a simple, scalable method to achieve a reliable confidence score, based on the data embedding derived from the penultimate layer of the network. We investigate two ways to achieve desirable embeddings, by using either a distance-based loss or adversarial training. We then test the benefits of our method when used for classification error prediction, weighting an ensemble of classifiers, and novelty detection. In all tasks we show significant improvement over traditional, commonly used confidence scores.

Introduction. Classification confidence scores are designed to measure the accuracy of the model when predicting class assignment, rather than the uncertainty inherent in the data. Most generative classification models are probabilistic in nature and therefore provide such confidence scores directly. Most discriminative models, on the other hand, do not have direct access to the probability of each prediction; instead, related scores are used as proxies, as for example the margin in SVM classifiers. When trying to evaluate the confidence of neural network (NN) classifiers, a number of scores are commonly used: one is the strength of the most activated output unit, followed by softmax normalization, or the closely related ratio between the activities of the strongest and second strongest units; another is the negative entropy of the output units, which is minimal when all units are equally probable. Often, however, these scores do not provide a reliable measure of confidence.

Why is it important to reliably measure prediction confidence? In various contexts, such as medical diagnosis and decision support systems, it is important to know the prediction confidence in order to decide how to act upon it; for example, if the confidence in a certain prediction is too low, the involvement of a human expert in the decision process may be called for. Another important aspect of real-world applications is the ability to recognize samples that do not belong to any of the known classes, which can also be improved with a reliable confidence score. But even irrespective of the application context, reliable prediction confidence can be used to boost the classifier performance via such methods as or ensemble classification; in this context a better confidence score can improve the final performance of the classifier. The derivation of a good confidence score should therefore be part of the classifier's design, as important as any other component of classifier design.

In order to derive a reliable confidence score for NN classifiers, we focus our attention on an empirical observation concerning neural networks trained for classification, which have been shown to demonstrate in parallel useful embedding properties. Specifically, a common practice these days is to treat one of the upstream layers of a network as a
representation or embedding layer this layer activation is then used for representing similar objects and train simpler classifiers such as svm or shallower nns to perform different tasks related but not identical to the original task the network had been trained on confidence score for neural network classifiers in computer vision such embeddings are commonly obtained by training a deep network on the recognition of a very large database typically imagenet deng et these embeddings have been shown to provide better semantic representations of images as compared to more traditional image features in a number of related tasks including the classification of small datasets sharif razavian et image annotation donahue et and structured predictions hu et given this semantic representation one can compute a natural probability distribution as described in section by estimating local density in the embedding space this estimated density can be used to assign a confidence score to each test point using its likelihood to belong to the assigned class we note however that the commonly used embedding discussed above is associated with a network trained for classification only which may impede its suitability to measure confidence reliably in fact when training neural networks metric learning is often used to achieve desirable embeddings weston et al schroff et al hoffer ailon tadmor et al since our goal is to improve the probabilistic interpretation of the embedding which is essentially based on local point density estimation or the distance between points we may wish to modify the loss function and add a term which penalizes for the violation of pairwise constraints as in hadsell et al our experiments show that the modified network indeed produces a better confidence score with comparable classification performance surprisingly while not directly designed for this purpose we show that networks which are trained with adversarial examples following the adversarial training paradigm szegedy et goodfellow et also provide a suitable embedding for the new confidence score our first contribution therefore is a new prediction confidence score which is based on local density estimation in the embedding space of the neural network this score can be computed for every network but in order for this score to achieve superior performance it is necessary to slightly change the training procedure in our second contribution we show that suitable embedding can be achieved by either augmenting the loss function of the trained network with a term which penalizes for similarity loss as in eq below or by using adversarial training the importance of the latter contribution is two fold firstly we are the first to show that the density of image embeddings is improved with indirect adversarial training perturbations in addition to the improved word embedding quality shown in miyato et al by direct adversarial training perturbations secondly we show in section that adversarial training improves the results while imposing a much lighter burden of hyperparameters to tune as compared to the loss the new confidence score is evaluated in comparison to other scores using the following tasks i performance in the binary classification task of identifying each class prediction as correct or incorrect see section ii training an ensemble of nn classifiers where each classifier s prediction is weighted by the new confidence score see section iii novelty detection where confidence is used to predict whether a test point belongs to one of 
the known classes from the train set see section the empirical evaluation of our method is described in section using a few datasets and different network architectures which have been used in previous work when using these specific datasets our method achieves significant improvement in all tasks when compared with a more recent method which had been shown to improve traditional measures of classification confidence mc dropout gal ghahramani our score achieves better results while also maintaining lower computational costs prior work the bayesian approach seeks to compute a posterior distribution over the parameters of the neural network which is used to estimate prediction uncertainty as in mackay and neal however bayesian neural networks are not always practical to implement and the computational cost involved it typically high in accordance in a method which is referred to below as gal ghahramani proposed to use dropout during test time as a bayesian approximation of the neural network providing a cheap proxy to bayesian neural networks lakshminarayanan et al proposed to use adversarial training to improve the uncertainty measure of the entropy score of the neural network still the most basic and one of the most common confidence scores for neural networks can be derived from the strength of the most activated output unit or rather its normalized version also called softmax output or max margin a confidence score that handles better a situation where there is no one class which is most probable is the negative entropy of the normalized network s output zaragoza d buc compared these scores as well as some more complex ones tibshirani demonstrating somewhat surprisingly the empirical superiority of the two most basic methods described in the previous paragraph amit mandelbaum daphna weinshall ensembles of models have been used to improve the overall performance of the final classifier see reviews in dietterich and li et al there are many ways to train an ensemble such as boosting or bagging there are also many ways to integrate the predictions of the classifiers in the ensemble including the average prediction or voting discussed by bauer kohavi some ensemble methods use the confidence score to either weight the predictions of the different classifiers average weighting or for confidence voting novelty detection where the task is to determine whether a test point belongs to a known class label or not is another problem which becomes more relevant with the ever increasing availability of very large datasets see reviews in markou singh pimentel et al and the recent work in vinokurov weinshall this task is also highly relevant in real world applications where the classifier is usually exposed to many samples which do not belong to a known class note that novelty detection is quite different from the learning of classes with no examples as in zero shot learning palatucci et new confidence score we propose next a new confidence score we then discuss how it can be used to boost classification performance with ensemble methods or when dealing with novelty detection new confidence score for neural network classifiers our confidence score is based on the estimation of local density as induced by the network when points are represented using the effective embedding created by the trained network in one of its upstream layers local density at a point is estimated based on the euclidean distance in the embedded space between the point and its k nearest neighbors in the training set specifically let 
f x denote the embedding of x as defined by the trained neural network classifier let a x xjtrain denote the set of neighbors of x in the training set based on the euclidean distance in the embedded space and let y j denote the corresponding class labels of the points in a x a probability space is constructed as is customary by assuming that the likelihood that two points belong to the same class is proportional to the exponential of the negative euclidean distance between them in accordance the local probability that a point x belongs to class c is proportional to the probability that it belongs to the same class as the subset of points in a x that belong to class based on this local probability the confidence score d x for the assignment of point x to class is defined as follows pk x xjtrain y j e d x pk x xjtrain e d x is a score between to which is monotonically related to the local density of similarly labeled train points in the neighborhood of x henceforth is referred to as distance we note here that while intuitively it might be beneficial to add a scaling factor to the distance in such as the mean distance we found it to have a deteriorating effect in line with related work such as salakhutdinov hinton two ways to achieve effective embedding as mentioned is section in order to achieve an effective embedding it helps to modify the training procedure of the neural network classifier the simplest modification augments the network s loss function during training with an additional term the resulting loss function is a linear combination of two terms one for classification denoted lclass x y and another pairwise loss for the embedding denoted ldist x y this is defined as follows l x y lclass x y x y ldist x y p p x ldist where ldist xi xj is defined as xi f xj if y i y j max m xi f xj if y i y j a desirable embedding can also be achieved by adversarial training using the fast gradient method suggested in goodfellow et al in this method given an input x with target y and a neural network with parameters adversarial examples are generated using x sign lclass x y in each step an adversarial example is generated for each point x in the batch and the current parameters of the network and classification loss is minimized for both the regular and adversarial examples although originally designed to improve robustness this method related measures of density such as a count of the correct neighbors or the inverse of the distance behave similarly and perform comparably confidence score for neural network classifiers seems to improve the network s embedding for the purpose of density estimation possibly because along the way it increases the distance between pairs of adjacent points with different labels implementation details in ldist is defined by all pairs of points denoted for each training minibatch this set is sampled with no replacement from the training points in the minibatch with half as many pairs as the size of the minibatch in our experiments lclass x y is the regular cross entropy loss we note here that we also tried loss functions which do not limit the distance between points of the same class to be exactly such as those in hoffer ailon and tadmor et al however those functions produced worse results especially when the dataset had many classes finally we note that we have tried using the loss and adversarial training together while training the network but this has also produced worse results alternative confidence scores given a trained network two measure are usually used to 
evaluate classification confidence max margin the maximal activation after normalization in the output layer of the network entropy the negative entropy of the activations in the output layer of the network as noted above the empirical study in zaragoza d buc showed that these two measures are typically as good as any other existing method for the evaluation of classification confidence two recent methods have been shown to improve the reliability of the confidence score based on entropy mcdropout gal ghahramani and adversarial training lakshminarayanan et goodfellow et in terms of computational cost adversarial training can increase and sometimes double the training time due to the computation of additional gradients and the addition of the adversarial examples to the training set on the other hand does not change the training time but increases the test time by orders of magnitude typically both methods are complementary to our approach in that they focus on modifications to the actual computation of the network during either train or test time after all is done they both evaluate confidence using the entropy score as we show in our experiments adversarial training combined with our proposed confidence score improves the final results significantly our method computational analysis unlike the two methods described above and adversarial training our confidence score takes an existing network and computes a new confidence score from the network s embedding and output activation it can use any network with or without adversarial training or mc dropout if the loss function of the network is suitably augmented see discussion above empirical results in section show that our score always improves results over the entropy score of the given network train and test computational complexity considering the loss tadmor et al showed that computing distances during the training of neural networks have negligible effect on training time alternatively when using adversarial training additional computational cost is incurred as mentioned above while on the other hand fewer hyper parameters are left for tuning during test time our method requires carrying over the embeddings of the training data and also the computation of the k nearest neighbors for each sample nearest neighbor classification has been studied extensively in the past years and consequently there are many methods to perform either precise or approximate with reduced time and space complexity see gunadi for a recent empirical comparison of the main methods in our experiments while using either condensed nearest neighbours hart or density preserving sampling budka gabrys we were able to reduce the memory requirements of the train set to of its original size without affecting performance at this point the additional storage required for the nearest neighbor step was much smaller than the size of the networks used for classification and the increase in space complexity became insignificant with regards to time complexity recent studies have shown how modern gpu s can be used to speed up nearest neighbor computation by orders of magnitude garcia et arefin et et al also showed that approximation with recall can be accomplished times faster as compared to precise combining such reductions in both space and time we note that even for a very large dataset including for example images embedded in a dimensional space the computation complexity of the k nearest neighbors for each test sample requires at most operations this is comparable and even much 
faster than a single forward run of this test sample through a modern relatively small resnets he et with parameters thus our method scales amit mandelbaum daphna weinshall well even for very large datasets ensembles of classifiers there are many ways to define ensembles of classifiers and different ways to put them together here we focus on ensembles which are obtained when using different training parameters with a single training method this specifically means that we train several neural networks using random initialization of the network parameters along with random shuffling of the train points henceforth regular networks will refer to networks that were trained only for classification with the regular loss distance networks will refer to networks that were trained with the loss function defined in and at networks will refer to networks that were trained with adversarial examples as defined in ensemble methods differ in how they weigh the predictions of different classifiers in the ensemble a number of options are in common use see li et al for a recent review and in accordance are used for comparison in the experimental evaluation section softmax average simple voting weighted softmax average where each softmax vector is multiplied by its related prediction confidence score confidence voting where the most confident network gets votes and dictator voting the decision of the most confident network prevails we evaluate methods with weights defined by either the entropy score or the distance score defined in novelty detection novelty detection seeks to identify points in the test set which belong to classes not present in the train set to evaluate performance in this task we train a network with a known benchmark dataset while augmenting the test set with test points from another dataset that includes different classes each confidence score is used to differentiate between known and unknown samples this is a binary classification task and therefore we can evaluate performance using roc curves experimental evaluation version and svhn netzer et in all cases as is commonly done the data was preprocessed using global contrast normalization and zca whitening no other method of data augmentation was used for and svhn while for svhn we also did not use the additional labeled for on the other hand cropping and flipping were used for to check the robustness of our method to heavy data augmentation in our experiments all networks used elu clevert et for activation for and we used the network suggested in clevert et al with the following architecture c p c c p c c p c c p c c p c f c c n k denotes a convolution layer with n kernels of size k k and stride p k denotes a layer with window size k k and stride and f c n denotes a fully connected layer with n output units for the last layer was replaced by fc during training only we applied dropout srivastava et before each max pooling layer excluding the first and after the last convolution with the corresponding drop probabilities of with the svhn dataset we used the following architecture c c c p c c c p c c c p f c f c for the networks trained with distance loss for each batch we randomly picked pairs of points so that at least of the batch included pairs of points from the same class the margin m in was set to in all cases and the parameter in was set to the rest of the training parameters can be found in the supplementary material for the distance score we observed that the number of k nearest neighbors could be set to the maximum value which is 
the number of samples in each class in the train data we also observed that smaller numbers even k often worked as in this section we empirically evaluate the benefits of our proposed approach comparing the performance of the new confidence score with alternative existing scores in the different tasks described above experimental settings for evaluation we used data sets krizhevsky hinton coates et note that reported results denoted as for these datasets often involve heavy augmentation in our study in order to be able to do the exhaustive comparisons described below we opted for the scenario as more flexible and yet informative enough for the purpose of comparison between different methods therefore our numerical results should be compared to empirical studies which used similar settings we specifically selected commonly used architectures that achieve good performance close to the results of modern resnets and yet flexible enough for extensive evaluations confidence score for neural network classifiers table auc results of correct classification conf score margin entropy distance classifier acccuracy reg dist at mcd classifier acccuracy reg dist at mcd svhn accuracy reg dist at table legend leftmost column margin and entropy denote the commonly used confidence scores described in section distance denotes our proposed method described in section second line reg denotes networks trained with the entropy loss dist denotes networks trained with the distance loss defined in at denotes networks trained with adversarial training as defined in and mcd denotes when applied to networks normally trained with the entropy loss since the network trained for svhn was trained without dropout mcd was not applicable table auc results of correct classification ensemble of networks confidence score max margin entropy distance distance reg dist at reg dist svhn at reg dist at table legend notations are similar to those described in the legend of table with one distinction distance now denotes the regular architecture where the distance score is computed independently for each network in the pair using its own embedding while distance denotes the hybrid architecture where one network in the pair is fixed to be a distance network and its embedding is used to compute the distance score for the prediction of the second network in the pair well in general the results reported below are not sensitive to the specific values of the as listed above we observed only minor changes when changing the values of k and the margin as proposed in gal ghahramani we used mc dropout in the following manner we trained each network as usual but computed the predictions while using dropout during test this was repeated times for each test example and the average activation was delivered as output adversarial training we used following goodfellow et al fixing in all the experiments error prediction of labels we first compare the performance of our confidence score in the binary task of evaluating whether the network s predicted classification label is correct or not while our results are independent of the actual accuracy we note that the accuracy is comparable to those achieved with resnets when not using augmentation for or when using only the regular training data for svhn see huang et al for example performance in this binary task is evaluated using roc curves computed separately for each confidence score results on all three datasets can be seen in table in all cases our proposed distance score when computed on a suitably 
trained network achieves significant improvement over the alternative scores even when those are enhanced by using either adversarial training or to further test our distance score we evaluate performance over an ensemble of two networks results are shown in table here too the distance score achieves significant improvement over all other methods we also note that the difference between the distance score computed over distance networks and the entropy score computed over adversarially trained networks is now much higher as compared to this difference when using only one network as we show in section adversarial training typically leads to a decreased performance when using an ensemble of networks and relying only on the entropy score probably due to a decrease in variance among the classifiers this observation further supports the added value of our proposed confidence score as a final note we also used a hybrid architecture using a matched pair of one classification network of any kind and a second distance network the embedding defined by the distance network is used to compute the distance score for the predictions of the first classification network surprisingly this method achieves amit mandelbaum daphna weinshall figure accuracy when using an ensemble of networks with top left top right and svnh bottom the denotes the number of networks in the ensemble absolute accuracy marked on the left y is shown for the most successful ensemble methods among all the methods we had evaluated blue and yellow solid lines see text and methods which did not use our distance score including the best performing method in this set red dotted line denoted baseline differences in accuracy between the two top performers and the top baseline method are shown using a bar plot marked on the right y with standard deviation of the difference over at least repetitions the best results in both and svhn while being comparable to the best result in this method is used later in section to improve accuracy when running an ensemble of networks further investigation of this phenomenon lies beyond of the scope of the current study ensemble methods in order to evaluate the improvement in performance when using our confidence score to direct the integration of classifiers in an ensemble we used a few common ways to define the integration procedure and a few ways to construct the ensemble itself in all comparisons the number of networks in the ensemble remained fixed at our experiments included the following ensemble compositions a n regular networks b n distance networks c n at adversarially trained networks and n networks such that networks belong to one kind of networks regular distance or at and the remaining networks belong to another kind spanning all combinations as described in section the predictions of classifiers in an ensemble can be integrated using different criteria in general we found that all the methods which did not use our distance score including methods which used any of the other confidence score for prediction weighting performed less well than a simple average of the softmax activation method in section otherwise the best performance was obtained when using a weighted average method in section with weights defined by our distance score with variants we also checked two options of obtaining the distance score i each network defined its own confidence score ii in light of the advantage demonstrated by hybrid networks as shown in section and for each pair of networks from different kinds the distance 
score for both was computed while using the embedding of only one of networks in the pair mcdropout was not used in this section due to its high computational cost while our experiments included all variants and all weighting options only cases are shown in the following description of the results in order to improve confidence score for neural network classifiers readability the combination achieving best performance the combination achieving best performance when not using adversarial training as at entails additional computational load at train time the ensemble variant achieving best performance without using the distance score baseline ensemble average when using adversarial training without distance score additional results for most of the other conditions we tested can be found in the supplementary material to gain a better statistical significance each experiment was repeated at least times with no overlap between the networks and fig shows the ensemble accuracy for the methods mentioned above when using these datasets it can be clearly seen that weighting predictions based on the distance score from improves results significantly the best results are achieved when combining distance networks and adversarial networks with significant improvement over an ensemble of only one kind of networks not shown in the graph still we note importantly that the distance score is used to weight both kind of networks since adversarial training is not always applicable due to its computational cost at train time we show that the combination of distance networks and regular networks can also lead to significant improvement in performance when using the distance score and the hybrid architecture described in section finally we note that adversarial networks alone achieve very poor results when using the original ensemble average further demonstrating the value of the distance score in improving the performance of an ensemble of adversarial networks alone svhn results for this dataset are also shown in while not as significant as those in the other datasets partly due to the high initial accuracy they are still consistent with them demonstrating again the power and robustness of the distance score novelty detection finally we compare the performance of the different confidence scores in the task of novelty detection in this task the confidence score is used to decide another binary classification problem does the test example belong to the set of classes the networks had been trained on or rather to some unknown class performance in this binary classification task is evaluated using the corresponding roc curve of each confidence score we used two contrived datasets to evaluate performance in this task following the experimental construction suggested in lakshminarayanan et al in the first experiment we trained the network on the dataset and then tested it on both and svhn test sets in the second experiment we switched tween the datasets and changed the trained network making svhn the known dataset and the novel one the task requires to discriminate between the known and the novel datasets for comparison we computed novelty as one often does with a svm classifier while using the same embeddings novelty thus computed showed much poorer performance possibly because this dataset involves many classes one class svm is typically used with a single class and therefore these results are not included here table auc results for novelty detection confide score max margin entropy distance reg dist at reg dist at 
table legend left known and svhn novel right svhn known and novel results are shown in table adversarial training which was designed to handle this sort of challenge is not surprisingly the best performer nevertheless we see that our proposed confidence score improves the results even further again demonstrating its added value conclusions we proposed a new confidence score for neural network classifiers the method we proposed to compute this score is scalable simple to implement and can fit any kind of neural network this method is different from other commonly used methods as it is based on measuring the point density in the effective embedding space of the network thus providing a more coherent statistical measure for the distribution of the network s predictions we also showed that suitable embeddings can be achieved by using either a loss or somewhat unexpectedly adversarial training we demonstrated the superiority of the new score in a number of tasks those tasks were evaluated using a number of different datasets and with network architectures in all tasks our proposed method achieved the best results when compared to traditional confidence scores references arefin ahmed shamsul riveros carlos berretta regina and moscato pablo nn a amit mandelbaum daphna weinshall ware tool for fast and scalable k nn computation using gpus plos one bauer eric and kohavi ron an empirical comparison of voting classification algorithms bagging boosting and variants machine learning budka marcin and gabrys bogdan densitypreserving sampling robust and efficient alternative to for error estimation ieee transactions on neural networks and learning systems clevert unterthiner thomas and hochreiter sepp fast and accurate deep network learning by exponential linear units elus arxiv preprint coates adam lee honglak and ng andrew y an analysis of networks in unsupervised feature learning ann arbor deng dong socher li li and imagenet a hierarchical image database in dietterich thomas ensemble methods in machine learning in international workshop on multiple classifier systems pp springer donahue jeffrey anne hendricks lisa guadarrama sergio rohrbach marcus venugopalan subhashini saenko kate and darrell trevor recurrent convolutional networks for visual recognition and description in proceedings of the ieee conference on computer vision and pattern recognition pp gal yarin and ghahramani zoubin dropout as a bayesian approximation representing model uncertainty in deep learning arxiv preprint garcia vincent debreuve eric and barlaud michel fast k nearest neighbor search using gpu in computer vision and pattern recognition workshops cvprw ieee computer society conference on pp ieee goodfellow ian j shlens jonathon and szegedy christian explaining and harnessing adversarial examples arxiv preprint gunadi hendra comparing nearest neighbor algorithms in space hadsell raia chopra sumit and lecun yann dimensionality reduction by learning an invariant mapping in computer vision and pattern recognition ieee computer society conference on volume pp ieee hart peter the condensed nearest neighbor rule ieee transactions on information theory he kaiming zhang xiangyu ren shaoqing and sun jian deep residual learning for image recognition in proceedings of the ieee conference on computer vision and pattern recognition pp hoffer elad and ailon nir deep metric learning using triplet network in international workshop on pattern recognition pp springer hu hexiang zhou deng zhiwei liao zicheng and mori greg learning structured 
inference neural networks with label relations in proceedings of the ieee conference on computer vision and pattern recognition pp huang gao sun yu liu zhuang sedra daniel and weinberger kilian q deep networks with stochastic depth in european conference on computer vision pp springer ville teemu tasoulis sotiris elias tuomainen risto wang liang corander jukka and roos teemu fast search arxiv preprint krizhevsky alex and hinton geoffrey learning multiple layers of features from tiny images lakshminarayanan balaji pritzel alexander and blundell charles simple and scalable predictive uncertainty estimation using deep ensembles arxiv preprint li hui wang xuesong and ding shifei research and development of neural network ensembles a survey artificial intelligence review pp mackay david jc bayesian methods for adaptive models phd thesis california institute of technology markou markos and singh sameer novelty detection a neural network based approaches miyato takeru dai andrew m and goodfellow ian adversarial training methods for text classification arxiv preprint neal radford bayesian learning for neural networks volume springer science business media netzer yuval wang tao coates adam bissacco alessandro wu bo and ng andrew y reading digits in natural images with unsupervised feature learning in nips workshop on deep learning and unsupervised feature learning volume pp confidence score for neural network classifiers palatucci mark pomerleau dean hinton geoffrey e and mitchell tom learning with semantic output codes in advances in neural information processing systems pp pimentel marco af clifton david a clifton lei and tarassenko lionel a review of novelty detection signal processing salakhutdinov ruslan and hinton geoffrey learning a nonlinear embedding by preserving class neighbourhood structure in aistats volume schroff florian kalenichenko dmitry and philbin james facenet a unified embedding for face recognition and clustering in proceedings of the ieee conference on computer vision and pattern recognition pp sharif razavian ali azizpour hossein sullivan josephine and carlsson stefan cnn features an astounding baseline for recognition in proceedings of the ieee conference on computer vision and pattern recognition workshops pp srivastava nitish hinton geoffrey e krizhevsky alex sutskever ilya and salakhutdinov ruslan dropout a simple way to prevent neural networks from overfitting journal of machine learning research szegedy christian zaremba wojciech sutskever ilya bruna joan erhan dumitru goodfellow ian and fergus rob intriguing properties of neural networks arxiv preprint tadmor oren rosenwein tal shai wexler yonatan and shashua amnon learning a metric embedding for face recognition using the multibatch method in advances in neural information processing systems pp tibshirani robert a comparison of some error estimates for neural network models neural computation vinokurov nomi and weinshall daphna novelty detection in multiclass scenarios with incomplete set of class labels arxiv preprint weston jason ratle mobahi hossein and collobert ronan deep learning via embedding in neural networks tricks of the trade pp springer zaragoza hugo and d buc florence confidence measures for neural network classifiers in proceedings of the seventh int conf information processing and management of uncertainty in knowlegde based systems
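For concreteness, the scores compared throughout the section above can be written down in a few lines. The sketch below assumes the embeddings f(x) of the training and test points have already been extracted from the penultimate layer; it computes the two baseline scores (max softmax probability and negative entropy) and the distance score defined above, i.e. the exp(-distance)-weighted fraction of the k nearest training neighbours that carry the predicted label. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def baseline_scores(softmax_probs):
    """Standard confidence proxies: max softmax probability and negative entropy
    (both are higher when the network is more confident)."""
    p = np.clip(softmax_probs, 1e-12, 1.0)
    return p.max(axis=1), np.sum(p * np.log(p), axis=1)


def distance_confidence(train_emb, train_labels, test_emb, predicted_labels, k=50):
    """Distance score: weight the k nearest training points (in the embedding
    space) by exp(-Euclidean distance) and return the weight mass carried by
    neighbours whose label matches the class predicted for the test point."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    dists, idx = nn.kneighbors(test_emb)                   # shapes (n_test, k)
    w = np.exp(-dists)
    same = np.asarray(train_labels)[idx] == np.asarray(predicted_labels)[:, None]
    return (w * same).sum(axis=1) / w.sum(axis=1)
```

The section also obtains suitable embeddings via adversarial training with the fast gradient method of Goodfellow et al.; under the standard formulation x_adv = x + eps * sign(grad_x L), a minimal PyTorch-style sketch (again illustrative, not the paper's implementation) is:

```python
import torch


def fgsm_examples(model, loss_fn, x, y, eps):
    """x_adv = x + eps * sign(grad_x loss): the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```

Training on a mix of clean and perturbed batches, or adding the pairwise distance term to the loss as described above, is what turns a "regular" network into an "AT" or "distance" network in the experiments.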
| 2 |
may on the discriminating power of tests in resource january abstract since its discovery differential linear logic dll inspired numerous domains in denotational semantics categorical models of dll are now commune and the simplest one is rel the category of sets and relations in proof theory this naturally gave birth to differential proof nets that are full and complete for dll in turn these tools can naturally be translated to their intuitionistic counterpart by taking the category associated to the comonad rel becomes mrel a model of the that contains a notion of differentiation proof nets can be used naturally to extend the into the lambda calculus with resources a calculus that contains notions of linearity and differentiations of course mrel is a model of the with resources and it has been proved adequate but is it fully abstract that was a strong conjecture of bucciarelli carraro ehrhard and manzonetto in however in this paper we exhibit a moreover to give more intuition on the essence of the and to look for more generality we will use an extension of the resource also introduced by bucciarelli et al in for which is fully abstract the tests introduction the first extension of the with resources by boudol in was introducing a special resource sensitive application that may involve multisets of affine arguments each one has to be used at most one time this was a natural way to export resource sensitiveness to the functional setting however gathering no known and interesting properties confluence linearity it was not fully explored later on ehrhard and regnier working on functional interpretation of differential proof nets discovered a calculus similar to boudol s one named differential by adding to the a derivative operation n which syntactically corresponds to a linear substitution of x by n in x m it recovers the this is done through the translation n nn n xm nn n where ni are the linear arguments and n is the non linear one this more semantical view even allow for the generalisation of the operation and recover excellent semantical properties confluence taylor expansion we will adopt the syntax of that improvements from differential into boudol s calculus and we will call it the category rel of set and relations is known to model the linear logic and despite its high degree of degeneration relop rel it is a very natural construction indeed what appeared to be a degeneration is in reality a natural choice that preserves all proofs the interpretation function from proof to mrel is injective up to isomorphism but our principal interest for this category is that it models the differential linear logic and of known such category it is the simplest and more natural as for every categorical model of linear logic the interpretation of the induced a comonad from that comonad we can construct the category in the case of rel this new category mrel corresponds to the category of sets with as morphisms from a to b the relations from mf a the finite multisets over a to b it is then a model of the and of the this construction being the most natural we can do mrel is a priori one of the most natural models of the even if non well pointed it is only natural then to question on the depth of the link among the reflexive elements of mrel and the and more precisely among mrel s canonical reflexive element and the until now we knew that was adequate for the that two terms carrying the same interpretations in mrel behave the same way in all contexts but we did not know anything about the counterpart named 
full abstraction this question has been thoroughly studied however since has been proved resp in and fully abstract not only for both of the principal of namely the usual and kfoury s linear calculus of but also for the extension with tests of denoted therefore bucciarelli et al emit in a strong conjecture of full abstraction for the however and it is our purpose here a counter example can be found in order to exhibit this counter example we will take an unusual shortcut using full abstraction result for indeed we will prove a slightly more general theorem the failure of full abstraction for of any model that is fully abstract for due to this generalization we will not have to introduce the full description of in the core of the article it is available in annexes additionally to be considerably easier and more intuitive than the direct and usual method this way of proceeding is part of a larger study of full abstraction indeed we are looking for a mechanical way to tackle full abstraction problems in two steps first we extend the calculus with well chosen semantical objects in order to reach the definability of compact elements then we study the full abstraction question indirectly via the link between the operational equivalence of the original calculus and of its artificial extension this reduces our mix of semantic and syntactic question to a purely syntactic one allowing us to use powerful syntactic constructions tests where introduced in to have a full abstraction theorem for boudol with resources later on the principle was improved in implementing semantic objects in the syntax in order to get full abstraction of following an idea of this extension can in our context be compared to a basic exception mechanism the term q is raising the exception or test q absorbing all its non applications and the exception m is catching any exception in m by annihilating all the the most important here being the scope of the m that act as an infinite application over m notations will be used for xn n is not specified when it can be any integer and i will denotes the identity background as explained this article is directly following for this reason we need to introduce the and then the tests in the the notion of linearity is capital any term in linear position will never suffer any duplication or erasing regardless the reduction strategy linear subterms are subterms that are either the first subterm of a lambda abstraction in linear position the left side of an application that is in linear position or in the linear part of its right side the last case is the real improvement and asks for arguments to be separated in linear and non linear arguments therefore the right side of the applications will be replaced by a new kind of expression different from terms the bags bags are multisets containing some linear non banged arguments and exactly one non linear banged argument terms m n m b m bags b c mn m b c this is the syntax of modulo the macro m m for convenience the finite sums will be denoted qi and the different s are just the neutral elements of the different sums this demonic sum had to be implemented since we want the calculus to be resource sensitive and confluent thus there is no other choice than to considere the sum of all the possible outcomes sums distribute with any linear context mi mi nj j mi nj minn m ij j kj j minn m in the application each linear argument will replace one and only one occurrence of the variable thus the need of two kinds of substitutions the usual one denoted and the 
linear one denoted this last will act like a derivation n m n m mn m mi mn m mn m m m p m p m p this enables us to describe the nn n m n in other words nn n nn n in differential proof nets the tensor and the par can be added freely in the sense that we still have a natural interpretation in mrel and these operations can be translated in our calculus as an exception mechanism with on one side a q that raises the exception or test q by burning its applicative context whenever these applications do not have any linear component otherwise it diverges and with on the other side a m that catch the exceptions in m by burning the abstraction context of m whenever this abstraction is dummy the main difference with a usual exception system is the divergence of the catch if no exception are raised we introduce a new operators and a new kind of expression that will play the role of exception the tests terms m n q test m q r new operator immediately imply new distribution rules for the sum and the linear substitution mi mi qi qi qi j qi j m m q here is the corresponding operational semantics q q m q m q m q the intuition of q is an operator that take a test a boolean value compute it and returns an infinite with no occurrence of the abstracted variables the test m is taking a term and returns a successful test if the term is converging in a context that consists of an infinite empty application observational order and full abstraction in order to ask for full abstraction one has to specify a reduction strategy a natural choice would be the head reduction but this would make a normal form while no applicative instantiation of x allow the convergence of this term therefore the reduction strategy we are considering will not be headreduction but the this reduction will reduce subterms in linear position after the subterms in head positions the corresponding normal forms are terms and tests of the form m l nn kn l n m q q ni where every n and q must be in normal forms and can t be a sum but the li are of any kind definition m is observationally below n if for all context we have cln m which is whenever clm m is they are observationally equivalent if moreover n is observationally below m in the particular case of the we can easily restrict contexts to which is contexts whose output is a tests this will be applied systematically for simplification we will denote and the observational order and equivalence of the and and those of the bucciarelli carraro ehrhard and manzonetto were then able to prove a strong theorem relating the model to the calculus theorem is fully abstract for the with resources and tests for all closed terms m n with resources ans tests jm k jn k m n the in order to exhibit our we will use the following property fact let b a calculus and a a let m a model that is fully abstract for m is fully abstract for b iff the operational equivalences of b and a are equal on their domain intersection in our context it means that in order to prove the non full abstraction for the it is sufficient to find two terms of the that can not be separated by any context of the but that are separated by a context of the this makes the research and the proof quite easier when the terms of the involved are complex but not the context of the we are firstly exhibiting a term a of the that is observationally above the identity in the but not in the s observational order a i v v w where is the turing fix point combinator g g u g g u this term seems quite complex but modulo a reduces exactly to any of the elements of 
the following sum and thus can be think as an equivalent bn i u u u n this is due to the following property lemma if ai x x then a and for all i ai proof simple reduction unfolding the once in absence of tests this term has a comportment similar to in the sense that it will converges in any applicative context provided that these applications do not carry linear components in particular it converges more often than the identity lemma for all context of the if clim converges then clam converges i a proof let a context that converge on i with the context lemma and since neither i nor a has free variables we can assume that pk thus by lemma we have a u bk and clam u i pk u clm m converges but in presence of real tests its comportment appeared to be different that in the sense that it diverges under a in particular it is not observationally above the identity in lemma in the a diverges while i converges i a proof for all i ai diverges since by ai i x x i x x x x the non convergence comes with the hypothesis for the first term and is trivial for the second hence we have broken the conjecture concerning the equality between the observational and denotational orders let s break the whole conjecture theorem is not fully abstract for the with resources further works first a diligent reader will remark that we have a critical use of the demonic sum which is very powerful in this calculus and an even more diligent one will remark that an arbitrary choice have been made concerning this sum we could differentiate terms and reduced of terms and remove sums from the original syntax they just have to appear in reductions of terms the choice we made here corresponds to the one of and carries an understandable but we claim that another equivalent arises for the case with limited sum this is however a little more complicated and make it necessary to rework the material of even if everything works exactly the same way our can be translated to some related cases in particular to prove non full abstraction of scott s for the with angelic and demonic sums conjectured in for this calculus the extension with tests exists and is fully abstract for this is a trivial modification of the tests of using general demonic and angelic sums in this framework the term y plays exactly the role of a in our example with the same output in the end from a unique object that is dll we exhibit two natural constructions one in the semantical world the other in the syntactical one but they appeared do not respect full abstraction one would say that they are not that natural and that more natural one may be found but this would be to easy from the state of art we don t known more natural construction the misunderstanding comes with the concept of naturality it seems that the syntactic idea of convergence does not really correspond to the equivalent in semantical word one being a lowest fix point and the second a largest one this difference appears when working with the demonic sum that allow to check the convergence in unbounded applicative context finally we presented tests as a general tool whose importance is above the role we gave them here this result is interesting and important as it presents tests as useful tools to verify that full abstraction fails but it remains a negative result that does not justify alone any real interest for them further works will then focus on presenting positive proofs of full abstractions that are using tests following this way we already submitted a revisited proof of full abstraction of the scott 
s for the usual references gerard boudol the with multiplicities inria research report boudol curien lavatelli a semantics for lambda calculi with resources mathematical structures in comput sci mscs vol pp flavien breuvart a new proof of the full abstraction theorem submited antonio bucciarelli alberto carraro thomas ehrhard giulio manzonetto full abstraction for resource calculus with tests in marc bezem editor computer science logic csl international annual conference of the eacsl leibniz international proceedings in informatics lipics schloss fuer informatik dagstuhl germany pp antonio bucciarelli alberto carraro thomas ehrhard giulio manzonetto full abstraction for resource lambda calculus with tests throught taylor expansion accepted antonio bucciarelli thomas ehrhard giulio manzonetto not enough points is enough in jacques duparc thomas henzinger editors csl proceedings of computer science logic lecture notes in computer science springer pp antonio bucciarelli thomas ehrhard giulio manzonetto a relational model of a parallel and in sergei anil nerode editors logical foundations of computer science international symposium lfcs lecture notes in computer science pp daniel de carvalho lorenzo tortora de falco the relational model is injective for multiplicative exponential linear logic without weakenings corr ehrhard laurent interpreting a finitary in differential interaction nets inf comput pp thomas ehrhard laurent regnier the differential lambdacalculus theoretical computer science elsevier kfoury a linearization of the and consequences log comput pp available at http giulio manzonetto a general class of models of in mathematical foundations of computer science mfcs lecture notes in computer science springer pp pagani tranquilli parallel reduction in resource lambdacalculus aplas lncs pp michele pagani simona ronchi della rocca linearity nondeterminism and solvability fundamenta informaticae pp available at http a the model categorical model the category rel of sets and relations is known to be a model of linear logic as it is a seely category we are giving the interpretation but we will let the comutative diagrams to the reader since their comutations are trivial or like it is monoidal with tensor functor given by a b a b f g u x v y u v f x y g and with the arbitrary unit it is symetric monoidal close with a b a b if we take the evaluation ev a b a b a b b rel a b a b so it is star autonomus with as dualising object for a trvial duality this give us the interpretation of multiplicatives a b a b the category is cartesian with catesian product ai i x i x si projections i a a ai and product of morphisms i fi b i a b a fi the terminal object is this give us the interpretation of additives ai ai i a i a ai we can add a comonade d p where the functor is define by a mf a f ak bk k ai bi f the deriliction by da a a a and the digging by pa mk mk mf a this give us the interpretation of exponentials p p mf p this is a seely category and a model of linear logic since the isomorphismes is trivial and b a b is defined by al br al br but it can even be seen as a categorical model of differential linear logic by defining the natural transforamtion a a a a a we are fixing the contraction ca l r r a the l r r a the weakening wa and the so that we can define the derivative id x dx cx x x and the id x x x this derivatives are taylor if two morphisms a b are such that then wx wx finally the exponential acept since it is and ja ida ida is an isomorphism for more detail about models of dll see as for 
every categorical model of linear logic the exponential is a comonade and induced a cokleisly m rel rel whose objects are the set and whose morphisms from p to q are the relations between p and q the identities are the relations digp x x p and the composition f x z y z f y x y g algebraic model in order to have an algebraic model of we only need a reflexive object a triplet m app abs where m is an object of mrel app m m m mf m m and aps mf m m m such that app abs id such an object can a priory found by taking the lower fix point of m m m m m mf m m but this will just leads to the trivial empty model we will then resolve the more complicated fix point m m n n mf m where the exponent represent an infinit tensor product the lower fix point will be called an other way to see the fixpoint is to say that m have to be equal to the set of quazi everywhere empty lists of finite substets of itself its element are the recursively defined as being either the list of empty elements or with a mf and the coresponding app and abs arise imediatly from the functoriality app a a and abs a in order to be understandable we are presenting the interpretation of terms via a type system with types living in the usual presentation of the interpretation can be recoverd from the type system jm m q the type system is the following x v m x x m m x l lj m b w m b ln l m q
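To complement the fixpoint description above, here is one concrete way of spelling out the least solution defining the reflexive object; the notation ($D$ for the object, $\mathcal M_f$ for finite multisets, $*$ for the everywhere-empty list) is our own rendering of the stripped symbols and only a sketch of the standard presentation, not a quotation of the original.
$$D \;\cong\; \mathcal M_f(D)^{(\omega)} \;=\; \big\{\, \alpha = (a_1, a_2, a_3, \dots) \;:\; a_i \in \mathcal M_f(D) \text{ and } a_i = [\,] \text{ for almost every } i \,\big\},$$
$$\mathrm{app}(a \cdot \alpha) = (a, \alpha), \qquad \mathrm{abs}(a, \alpha) = a \cdot \alpha, \qquad \mathrm{app} \circ \mathrm{abs} = \mathrm{id},$$
where $a \cdot \alpha$ denotes the list obtained by prepending the finite multiset $a$ to the list $\alpha$. Under this reading an element of $D$ is exactly a quasi-everywhere-empty list of finite multisets of elements of $D$, built recursively from the everywhere-empty list $*$, and the pair $(\mathrm{app}, \mathrm{abs})$ required of a reflexive object arises from the functoriality of the construction, as stated above.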
dec pliability or the whitney extension theorem for curves in carnot groups nicolas juillet and mario sigalotti abstract the whitney extension theorem is a classical result in analysis giving a necessary and sufficient condition for a function defined on a closed set to be extendable to the whole space with a given class of regularity it has been adapted to several settings among which the one of carnot groups however the target space has generally been assumed to be equal to rd for some d we focus here on the extendability problem for general ordered pairs with we analyze in particular the case r and characterize the groups for which the whitney extension property holds in terms of a newly introduced notion that we call pliability pliability happens to be related to rigidity as defined by bryant an hsu we exploit this relation in order to provide examples of carnot groups that is carnot groups so that the whitney extension property does not hold we use geometric control theory results on the accessibility of control affine systems in order to test the pliability of a carnot group in particular we recover some recent results by le donne speight and zimmermann about lusin approximation in carnot groups of step and whitney extension in heisenberg groups we extend such results to all pliable carnot groups and we show that the latter may be of arbitrarily large step introduction extending functions is a basic but fundamental tool in analysis fundamental is in particular the extension theorem established by whitney in which guarantees the existence of an extension of a function defined on a closed set of a vector space to a function of class c k provided that the minimal obstruction imposed by taylor series is satisfied the whitney extension theorem plays a significative part in the study of ideals of differentiable functions see and its variants are still an active research topic of classical analysis see for instance analysis on carnot groups with a homogeneous distance like the distance as presented in folland and stein s monograph is nowadays a classical topic too carnot groups provide a generalization of vector spaces that is both close to the original model and radically different this is why carnot groups provide a wonderful field of investigation in many branches of mathematics not only the setting is elegant and rich but it is at the natural crossroad between different fields of mathematics as for instance analysis of pdes or geometric control theory see for instance for a contemporary account it is mathematics subject classification key words and phrases whitney extension theorem carnot group rigid curve horizontal curve nicolas juillet and mario sigalotti therefore natural to recast the whitney extension theorem in the context of carnot groups as far as we know the first generalization of a whitney extension theorem to carnot groups can be found in where de giorgi s result on sets of finite perimeter is adapted first to the heisenberg group and then to any carnot group of step this generalization is used in where the authors stress the difference between intrinsic regular hypersurfaces and classical c hypersurfaces in the heisenberg group the recent paper gives a final statement for the whitney extension theorem for functions on carnot groups the most natural generalization that one can imagine holds in its full strength for more details see section the study of the whitney extension property for carnot groups is however not closed following a suggestion by serra cassano in one might 
consider maps between carnot groups instead of solely functions on carnot groups the new question presents richer geometrical features and echoes classical topics of metric geometry we think in particular of the classification of lipschitz embeddings for metric spaces and of the related question of the extension of lipschitz maps between metric spaces we refer to for the corresponding results for the most usual carnot groups abelian groups rm or heisenberg groups hn of topological dimension in view of theorem on lipschitz maps see theorem the most directly related whitney extension problem is the one for ch the horizontal maps of class c defined on carnot groups this is the framework of our paper simple pieces of argument show that the whitney extension theorem does not generalize to every ordered pair of carnot groups basic facts in contact geometry suggest that the extension does not hold for hn for maps from to hn it is actually known that local algebraic constraints of first order make n the maximal dimension for a legendrian submanifold in a contact manifold of dimension in fact if the derivative of a differentiable map has range in the kernel of the contact form the range of the map has dimension at most a map from to hn is ch if it is c with horizontal derivatives if its derivatives take value in the kernel of the canonical contact form in particular a ch defined on r is nowhere of maximal rank moreover it is a consequence of the theorem that a lipschitz map from to hn is derivable at almost every point with only horizontal derivatives again n is their maximal rank in order to contradict the extendability of lipschitz maps it is enough to define a function on a subset whose topological constraints force any possible extension to have maximal rank at some point let us sketch a concrete example that provides a constraint for the lipschitz extension problem it is known that rn can be isometrically embedded in hn with the exponential map for the euclidean and distances one can also consider two parallel copies of rn in mapped to parallel images in hn the second is obtained from the first by a vertical translation aiming for a contradiction suppose that there exists an extending lipschitz map f it provides on rn a lipschitz homotopy between f rn and f rn using the definition of a lipschitz map and some topology the topological dimension of the range is at least n and its n measure is positive this is not possible because of the dimensional constraints explained above see for a more rigorous proof using a different set as a domain for the function to be extended the proof in is formulated in terms of index theory and whitney theorem for curves in carnot groups purely n of hn the latter property means that the n measure of the range of a lipschitz map is zero probably this construction and some other ideas from the works on the lipschitz extension problem can be adapted to the whitney extension problem it is not really our concern in the present article to list the similarities between the two problems but rather to exhibit a class of ordered pairs of carnot groups for which the validity of the whitney extension problem depends on the geometry of the groups note that a different type of counterexample to the whitney extension theorem involving groups which are neither euclidean spaces nor heisenberg groups has been obtained by khozhevnikov in it is described in example our work is motivated by serra cassano s suggestion in his paris lecture notes at the institut henri in he proposes i to 
choose general carnot groups g as target space ii to look at ch curves only c maps from r to g with horizontal derivatives as we will see the problem is very different from the lipschitz extension problem for r g and from the whitney extension problem for g r indeed both such problems can be solved for every g while the answer to the extendibility question asked by serra cassano depends on the choice of more precisely we provide a geometric characterization of those g for which the ch extension problem for r g can always be solved we say in this case that the pair r g has the ch extension property examples of target carnot groups for which ch extendibility is possible have been identified by zimmerman in where it is proved that for every n n the pair r hn has the ch extension property the main component of the characterization of carnot groups g for which r g has the ch extension property is the notion of pliable horizontal vector a horizontal vector x identified with a vector field is pliable if for every p g and every neighborhood of x in the horizontal layer of g the support of all ch curves with derivative in starting from p in the direction x form a neighborhood of the integral curve of x starting from p for details see definition and proposition this notion is close but not equivalent to the property of the integral curves of x not to be rigid in the sense introduced by bryant and hsu in as we illustrate by an example example we say that a carnot group g is pliable if all its horizontal vectors are pliable since any rigid integral curve of a horizontal vector x is not pliable it is not hard to show that there exist carnot groups of any dimension larger than and of any step larger than see example on the other hand we give some criteria ensuring the pliability of a carnot group notably the fact that it has step theorem we also prove the existence of pliable groups of any positive step proposition our main theorem is the following theorem the pair r g has the ch extension property if and only if g is pliable the paper is organized as follows in section we recall some basic facts about carnot groups and we present the ch condition in the light of the theorem in section we introduce the notion of pliability we discuss its relation with rigidity and we show that pliability of g is necessary for the ch extension property to hold for r g theorem the proof of this result goes by assuming that a horizontal vector nicolas juillet and mario sigalotti exists and using it to provide an explicit construction of a ch map defined on a closed subset of r which can not be extended on section is devoted to proving that pliability is also a sufficient condition theorem in section we use our result to extend some theorem proved recently by speight for heisenberg groups see also for an alternative proof more precisely it is proved in that an absolutely continuous curve in a group of step coincides on a set of arbitrarily small complement with a ch curve we show that this is the case for pliable carnot groups proposition finally in section we give some criteria for testing the pliability of a carnot group we first show that the zero horizontal vector is always pliable proposition then by applying some results of control theory providing criteria under which the endpoint mapping is open we show that g is pliable if its step is equal to whitney condition in carnot groups a nilpotent lie group g is said to be a carnot group if it is stratified in the sense that its lie algebra g admits a direct sum 
decomposition $\mathfrak g = \mathfrak g_1 \oplus \dots \oplus \mathfrak g_s$, called a stratification, such that $[\mathfrak g_i, \mathfrak g_j] \subseteq \mathfrak g_{i+j}$ for every $i, j$ with $i + j \le s$ and $[\mathfrak g_i, \mathfrak g_j] = \{0\}$ if $i + j > s$. We recall that $[\mathfrak g_i, \mathfrak g_j]$ denotes the linear space spanned by $\{[X, Y] : X \in \mathfrak g_i,\ Y \in \mathfrak g_j\}$. The subspace $\mathfrak g_1$ is called the horizontal layer and it is also denoted by $\mathfrak g_h$. We say that $s$ is the step of $\mathbb G$ if $\mathfrak g_s \neq \{0\}$. The group product of two elements $p, q \in \mathbb G$ is denoted by $pq$. Given $X \in \mathfrak g$, we write $\operatorname{ad}_X : \mathfrak g \to \mathfrak g$ for the operator defined by $\operatorname{ad}_X(Y) = [X, Y]$. The Lie algebra $\mathfrak g$ can be identified with the family of left-invariant vector fields on $\mathbb G$. The exponential $\exp : \mathfrak g \to \mathbb G$ is the application that maps a vector $X$ of $\mathfrak g$ into the value at time $1$ of the integral curve of the vector field $X$ starting from the identity of $\mathbb G$; that is, if $\dot\gamma(t) = X(\gamma(t))$ and $\gamma(0) = e$, then $\exp(X) = \gamma(1)$. We also denote by $e^{tX} : \mathbb G \to \mathbb G$ the flow of the vector field $X$ at time $t$; notice that $e^{tX}(p) = p \exp(tX)$. Integral curves of left-invariant vector fields are said to be straight curves.

The Lie group $\mathbb G$ is diffeomorphic to $\mathbb R^n$ with $n = \sum_{k=1}^s \dim \mathfrak g_k$. A usual way to identify $\mathbb G$ and $\mathbb R^n$ through a global system of coordinates is to push forward by $\exp^{-1}$ the group structure from $\mathbb G$ to $\mathfrak g$, where it can be expressed by the Baker-Campbell-Hausdorff formula. In this way $\exp$ becomes a mapping of $\mathbb G = \mathfrak g$ onto itself that is simply the identity. For any $\lambda > 0$ we introduce the dilation $\delta_\lambda : \mathbb G \to \mathbb G$, uniquely characterized by $\delta_\lambda(pq) = \delta_\lambda(p)\,\delta_\lambda(q)$ for any $p, q \in \mathbb G$ and $\delta_\lambda(\exp X) = \exp(\lambda X)$ for any $X \in \mathfrak g_1$. Using the decomposition $X = X_1 + \dots + X_s$ with $X_k \in \mathfrak g_k$, it holds $\delta_\lambda(\exp X) = \exp\big(\sum_{k=1}^s \lambda^k X_k\big)$ for any $\lambda > 0$. We also define on $\mathfrak g$ the dilation $\delta_\lambda X = \sum_{k=1}^s \lambda^k X_k$.

Given an absolutely continuous curve $\gamma : [a, b] \to \mathbb G$, the velocity $\dot\gamma(t)$, which exists for almost every $t \in [a, b]$, is identified with the element of $\mathfrak g$ whose associated left-invariant vector field evaluated at $\gamma(t)$ is equal to $\frac{d}{dt}\gamma(t)$. An absolutely continuous curve $\gamma$ is said to be horizontal if $\dot\gamma(t) \in \mathfrak g_h$ for almost every $t$. For any interval $I$ of $\mathbb R$ we denote by $C^1_H(I, \mathbb G)$ the space of all curves $\gamma \in C^1(I, \mathbb G)$ such that $\dot\gamma(t) \in \mathfrak g_h$ for every $t \in I$. Assume that the horizontal layer $\mathfrak g_h$ of the algebra is endowed with a quadratic norm $\|\cdot\|_{\mathfrak g_h}$. The Carnot-Carathéodory distance $d_{\mathbb G}(p, q)$ between two points $p, q \in \mathbb G$ is then defined as the minimal length of a horizontal curve connecting $p$ and $q$:
$$d_{\mathbb G}(p, q) = \inf\Big\{ \int_a^b \|\dot\gamma(t)\|_{\mathfrak g_h}\, dt \;:\; \gamma : [a, b] \to \mathbb G \text{ horizontal},\ \gamma(a) = p,\ \gamma(b) = q \Big\}.$$
Note that $d_{\mathbb G}$ is left-invariant. It is known that $d_{\mathbb G}$ provides the same topology as the usual one on $\mathbb G$; moreover, it is homogeneous: $d_{\mathbb G}(\delta_\lambda p, \delta_\lambda q) = \lambda\, d_{\mathbb G}(p, q)$ for any $\lambda > 0$. Observe that the distance depends on the norm $\|\cdot\|_{\mathfrak g_h}$ considered on $\mathfrak g_h$; however, all Carnot-Carathéodory distances are in fact metrically equivalent. They are even equivalent with any homogeneous distance, in a very similar way as all norms on a vector space are equivalent. Notice that $d_{\mathbb G}(p, \cdot)$ can be seen as the value function of the optimal control problem
$$\dot\gamma = \sum_{i=1}^m u_i X_i(\gamma), \qquad u = (u_1, \dots, u_m) \in \mathbb R^m, \qquad \gamma(a) = p, \qquad \int_a^b \|u(t)\|\, dt \to \min,$$
where $X_1, \dots, X_m$ is a $\|\cdot\|_{\mathfrak g_h}$-orthonormal basis of $\mathfrak g_h$. Finally, the space $C^1_H([a, b], \mathbb G)$ of horizontal curves of class $C^1$ can be endowed with a natural $C^1$ metric associated with $(d_{\mathbb G}, \|\cdot\|_{\mathfrak g_h})$ as follows: the distance between two curves $\gamma_1$ and $\gamma_2$ in $C^1_H([a, b], \mathbb G)$ is
$$\max\Big( \sup_{t \in [a, b]} d_{\mathbb G}(\gamma_1(t), \gamma_2(t)),\ \sup_{t \in [a, b]} \|\dot\gamma_1(t) - \dot\gamma_2(t)\|_{\mathfrak g_h} \Big).$$
In the following we will write $\|\dot\gamma_1 - \dot\gamma_2\|_{\mathfrak g_h, \infty}$ to denote the quantity $\sup_{t \in [a, b]} \|\dot\gamma_1(t) - \dot\gamma_2(t)\|_{\mathfrak g_h}$.

Whitney condition. A homogeneous homomorphism between two Carnot groups $\mathbb G_1$ and $\mathbb G_2$ is a group morphism $L : \mathbb G_1 \to \mathbb G_2$ with $L(\delta_\lambda p) = \delta_\lambda L(p)$ for any $\lambda > 0$ and $p \in \mathbb G_1$. Moreover, $L$ is a homogeneous homomorphism if and only if $\exp^{-1} \circ L \circ \exp$ is a homogeneous Lie algebra morphism; it is in particular a linear map under which the first layer of $\mathbb G_1$ (identified with the first layer of its Lie algebra) is mapped into the first layer of $\mathbb G_2$, so that a homogeneous homomorphism from $\mathbb R$ to $\mathbb G_2$ has the form $L(t) = \exp(tX)$ where $X \in \mathfrak g_h$.

Proposition (Pansu's theorem). Let $f$ be a locally Lipschitz map from an open subset $U$ of $\mathbb G_1$ into $\mathbb G_2$. Then for almost every $p \in U$ there exists a homogeneous homomorphism $L_p$ such that $q \mapsto \delta_{1/r}\big(f(p)^{-1} f(p\, \delta_r q)\big)$ tends to $L_p$ uniformly on every compact set $K \subseteq \mathbb G_1$ as $r > 0$ goes to zero. Note that the map $L_p$ in the proposition is uniquely determined; it is called the Pansu derivative of $f$ at $p$ and denoted by $Df_p$.
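As a concrete illustration of the objects just introduced, consider the first Heisenberg group; this example is ours and not part of the original text, and the choice of coordinates, of the basis $X, Y$ and of the normalization of the bracket are assumptions made only for the sake of the computation.
$$\mathbb H^1 \simeq \mathbb R^3 \ni (x, y, z), \qquad X = \partial_x - \tfrac{y}{2}\,\partial_z, \qquad Y = \partial_y + \tfrac{x}{2}\,\partial_z, \qquad Z = [X, Y] = \partial_z,$$
$$\mathfrak g_1 = \operatorname{span}\{X, Y\}, \qquad \mathfrak g_2 = \operatorname{span}\{Z\}, \qquad \delta_\lambda(x, y, z) = (\lambda x, \lambda y, \lambda^2 z).$$
In particular the step is $2$, homogeneity of the Carnot-Carathéodory distance reads $d_{\mathbb H^1}\big(0, (\lambda x, \lambda y, \lambda^2 z)\big) = \lambda\, d_{\mathbb H^1}\big(0, (x, y, z)\big)$ for $\lambda > 0$, and a homogeneous homomorphism $L : \mathbb R \to \mathbb H^1$ has the form $L(t) = \exp\big(t(aX + bY)\big) = (ta, tb, 0)$, in accordance with the statement above that such homomorphisms are determined by a horizontal vector.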
We denote by $C^1_H(U, \mathbb G_2)$ the space of maps $f$ such that the convergence above holds at every point $p \in U$ and $p \mapsto Df_p$ is continuous for the usual topology. For $\mathbb G_1 = \mathbb R$ this coincides with the definition of $C^1_H(I, \mathbb G)$ given earlier. We have the following.

Proposition (Taylor expansion). Let $f \in C^1_H(U, \mathbb G_2)$, where $\mathbb G_1$ and $\mathbb G_2$ are Carnot groups, and let $K \subseteq U$ be compact. Then there exists a function $\omega$ from $[0, +\infty)$ to $[0, +\infty)$ with $\omega(t) = o(t)$ at $0^+$ such that for any $p, q \in K$,
$$d_{\mathbb G_2}\big(f(q),\, f(p)\, Df_p(p^{-1} q)\big) \le \omega\big(d_{\mathbb G_1}(p, q)\big),$$
where $Df_p$ is the Pansu derivative.

Proof. This is a direct consequence of the mean value inequality by Magnani contained in the theorem cited above.

The above proposition hints at the suitable formulation of the $C^1$ condition for Carnot groups. This generalization already appeared in the literature, in the paper by Vodop'yanov and Pupyshev.

Definition ($C^1_H$ condition). Let $K$ be a compact subset of $\mathbb G_1$ and consider $f : K \to \mathbb G_2$ and a map $L$ which associates with any $p \in K$ a homogeneous group homomorphism $L(p) : \mathbb G_1 \to \mathbb G_2$. We say that the $C^1_H$ condition holds for $(f, L)$ on $K$ if $L$ is continuous and there exists a function $\omega$ from $[0, +\infty)$ to $[0, +\infty)$ with $\omega(t) = o(t)$ at $0^+$ such that for any $p, q \in K$,
$$d_{\mathbb G_2}\big(f(q),\, f(p)\, L(p)(p^{-1} q)\big) \le \omega\big(d_{\mathbb G_1}(p, q)\big).$$
Let $F$ be a closed set of $\mathbb G_1$ and $f$ and $L$ be such that $p \mapsto L(p)$ is continuous. We say that the $C^1_H$ condition holds for $(f, L)$ on $F$ if for any compact set $K \subseteq F$ it holds for the restriction of $(f, L)$ to $K$.

Of course, according to the Taylor expansion above, if $f \in C^1_H(\mathbb G_1, \mathbb G_2)$ then the restriction of $(f, Df)$ to any closed $F \subseteq \mathbb G_1$ satisfies the $C^1_H$ condition on $F$.

In this paper we focus on the case $\mathbb G_1 = \mathbb R$. The condition on a compact set $K \subset \mathbb R$ reads $R_K(\delta) \to 0$ as $\delta \to 0^+$, where
$$R_K(\delta) = \sup_{\substack{t, t' \in K \\ 0 < |t' - t| \le \delta}} \frac{d_{\mathbb G}\big(f(t'),\, f(t) \exp\big((t' - t)\, x(t)\big)\big)}{|t' - t|},$$
because for every $t \in \mathbb R$ one has $L(t)(h) = \exp(h\, x(t))$ for some $x(t) \in \mathfrak g_h$ and every $h \in \mathbb R$. With a slight abuse of terminology, we say that the $C^1_H$ condition holds for $(f, x)$ on $K$.

In the classical setting the Whitney condition is equivalent to the existence of a $C^1$ map $\bar f$ such that $\bar f$ and $d\bar f$ have respectively restrictions $f$ and $L$ on $F$. This property is usually known as the $C^1$ extension theorem, or simply Whitney extension theorem, even though the original theorem by Whitney is more general; in particular it includes higher order extensions and considers the extension $f \mapsto \bar f$ as a linear operator. This theorem is of broad use in analysis and is still the subject of dedicated research; see the references cited in the original bibliography and the references therein.

Definition. We say that the pair $(\mathbb G_1, \mathbb G_2)$ has the $C^1_H$ extension property if for every $(f, L)$ satisfying the $C^1_H$ condition on some closed set $F \subseteq \mathbb G_1$ there exists $\bar f \in C^1_H(\mathbb G_1, \mathbb G_2)$ which extends $f$ on $F$ and such that $D\bar f_p = L(p)$ for every $p \in F$.

We now state the $C^1_H$ theorem that Franchi, Serapioni and Serra Cassano proved. It has been generalised by Vodop'yanov and Pupyshev in a form closer to the original Whitney result, including higher order extensions and the linearity of the operator $f \mapsto \bar f$.

Theorem (Franchi, Serapioni, Serra Cassano). For any Carnot group $\mathbb G$ and any $d \in \mathbb N$, the pair $(\mathbb G, \mathbb R^d)$ has the $C^1_H$ extension property.

The proof proposed by Franchi, Serapioni and Serra Cassano is established for Carnot groups of step two only, but is identical for general Carnot groups. It is inspired by the proof corresponding to the special case treated there. Let us mention an example from the literature of a pair for which the extension property fails; this remarkable fact was explained to us by Kozhevnikov.

Example. If $\mathbb G_1$ and $\mathbb G_2$ are the ultrarigid Carnot groups presented and analysed in the cited works, one can construct an example $(f, L)$ satisfying the $C^1_H$ condition on some compact $K$ without any possible extension $\bar f$ with $(\bar f, D\bar f)$ restricting to $(f, L)$ on $K$. For this one exploits the rarity of $C^1_H$ maps of maximal rank in ultrarigid Carnot groups. The definition of ultrarigid is that all quasimorphisms are Carnot similitudes, compositions of dilations with further elementary transformations of the group, and we do
not use here directly the definition of ultrarigid groups but just the result stated in lemma of for and concretely let us set k nicolas juillet and mario sigalotti let the map f be constantly equal to on k and l be the constant projection lemma in applied at the point implies that the only possible extension of f is the projection l but this map vanishes only on p which does not contain it remains us to prove that whitney s condition holds in fact for two points p x x and q y y in k we look at the distance from f x to f p l q l x x y y y x on the one side and from p to q on the other side the first one is up to a multiplicative constant and when goes to zero the second one is for some constant c this proves the ch condition for f l on in the present paper we provide examples of ordered pairs with r such that the ch extension property does or does not hold depending on the geometry of we do not address the problem of whitney extensions for orders larger than a preliminary step for considering extensions would be to provide a suitable taylor expansion for m ch from r to in the spirit of what recalled for m in proposition extension property holds for some let us conclude the section by assuming that the ch ordered pair of carnot groups and by showing how to deduce it for other pairs we describe here below three such possible implications let be a homogeneous subgroup of that admits a complementary group k in the sense of section both and k are homogeneous lie groups and the intersection is reduced to assume moreover that is a carnot group and k is normal so that one can define canonically a projection that is a homogeneous homomorphism moreover is lipschitz continuous see proposition for the rest of the section we say that is an appropriate carnot subgroup of it can be easily proved that has the ch extension property in particular according to example for every k dim gg h k k the vector space r is an appropriate carnot subgroup of therefore r has the ch extension property assume now that is an appropriate carnot subgroup of using the lipschitz continuity of the projection one easily deduces from the definition of ch condition that has the ch extension property finally assume that has the ch extension property where is a carnot group then one checks without difficulty that the same is true for as a consequence of theorem we can use these three implications to infer pliability statements namely a carnot group g is pliable i if g has the ch extension property for some carnot group of positive dimension ii if g is the appropriate carnot subgroup of a pliable carnot group iii if g is the product of two pliable carnot groups rigidity necessary condition for the ch extension property let us first adapt to the case of horizontal curves on carnot groups the notion of rigid curve introduced by bryant and hsu in we will show in the following that the existence of rigid whitney theorem for curves in carnot groups curves in a carnot group g can be used to identify obstructions to the validity of the ch extension property for r g definition bryant hsu let ch a b g we say that is rigid if there exists a neighborhood v of in the space ch a b g such that if v and a a b b then is a reparametrization of a vector x gh is said to be rigid if the curve t exp tx is rigid a celebrated existence result of rigid curves for general manifolds has been obtained by bryant and hsu in and further improved in and examples of carnot groups with rigid curves have been illustrated in and extended in where it is shown that for 
any n there exists a carnot group of topological dimension n having rigid curves nevertheless such curves need not be straight actually the construction proposed in produces curves which are necessarily not straight following see also and focusing on rigid straight curves in carnot groups we can formulate theorem below in order to state it let t g g be the canonical projection and recall that a curve p i t g is said to be an abnormal path if p i g is a horizontal curve p t and p t x for every t i and x gh and moreover for every y g and almost every t i d p t y p t z t y dt where z t d dt p t gh theorem let x gh and assume that p t g is an abnormal path with p t exp tx if t exp tx is rigid then p t v w for every v w gh and every t moreover denoting by qp t the quadratic form qp t v p t v x v defined on v gh v x we have that qp t for every t conversely if p t v w for every v w gh and every t and qp t for every t then t exp tx is rigid example an example of carnot structure having rigid straight curves is the standard engel structure in this case s dim dim dim and one can can pick two generators x y of the horizontal distribution whose only nontrivial bracket relations are x y and y where and span and respectively let us illustrate how the existence of rigid straight curves can be deduced from theorem one could also prove rigidity by direct computations of the same type as those of example below one immediately checks that p with p t x p t y p t and p t is an abnormal path such that p t exp tx the rigidity of t exp tx then follows from theorem thanks to the relation qp y an extension of the previous construction can be used to exhibit for every n a carnot group of topological dimension n and step n having straight rigid curves it suffices nicolas juillet and mario sigalotti to consider the carnot group with goursat distribution that is the group such that dim dim gi for i n and there exist two generators x y of whose only nontrivial bracket relations are x y and y wi for i n where span wi for i n the following definition introduces the notion of pliable horizontal curve in contrast to a rigid one definition we say that a curve ch a b g is pliable if for every neighborhood v of in ch a b g the set b b v a a is a neighborhood of b b in g gh a vector x gh is said to be pliable if the curve t exp tx is pliable we say that g is pliable if every vector x gh is pliable by metric equivalence of all distances it follows that the pliability of a horizontal vector does not depend on the norm k kgh considered on gh notice that by definition of pliability in every ch neighborhood of a pliable curve a b g there exists a curve with a a b b w and w b this shows that pliable curves are not rigid it should be noticed however that the converse is not true in general as will be discussed in example in this example we show that there exist horizontal straight curves that are neither rigid nor pliable example we consider the carnot algebra g of step that is spaned by x y z x z y z y y z where x y z is a basis of and except from permutations all brackets different from the ones above are zero according to chapter there is a group structure on with coordinates x y z isomorphic to the corresponding carnot group g such that the vectors of are the leftinvariant vector fields x y z y consider the straight curve t t exp tz first notice that is not pliable since for all horizontal curves in a small enough c neighbourhood of the component of the derivative along z is positive which implies that the coordinate is 
nondecreasing no endpoint of a horizontal curve starting from and belonging to a small enough c neighbourhood of can have negative component let us now show that is not rigid either consider the solution of t z t u t x t notice that the y component of is identically equal to zero as a consequence the same is true r t for the components r t r and while the x z and components of t are respectively u t and u in order to disprove the rigidity it is then sufficient to take a nontrivial continuous u r such that u u whitney theorem for curves in carnot groups let us list some useful manipulations which transform horizontal curves into horizontal curves let be a horizontal curve defined on and such that for every the curve t t is horizontal and its velocity at time t is t for every the curve t is horizontal and its velocity at time t is t the curve defined by t t is horizontal it starts in and finishes in its velocity at time t is t if one composes the commuting transformations with and one obtains a curve with derivative t at time it is possible to define the concatenation of two curves g and g both starting from as follows the concatenated curve g satisfies has the same velocity as on and the velocity of on we have as a consequence of the invariance of the the lie algebra for the a consequence of and is that x gh is rigid if and only if is rigid for every r similarly x gh is pliable if and only if is pliable for every r proposition below gives a characterization of pliable horizontal vectors in terms of a condition which is apriori easier to check than the one appearing in definition before proving the proposition let us give a technical lemma from now on we write bg x r to denote the ball of center x and radius r in g for the distance dg and similarly bgh x r to denote the ball of center x and radius r in gh for the norm k kgh lemma for any x g and r r there exists such that if y z g and satisfy dg y dg z then bg x r y bg x r z proof assume by contradiction that for every n n there exist xn bg x r yn zn bg and such that xn yn bg x r zn equivalently xn bg x r however lim dg x xn r leading to a contradiction proposition a vector v gh is pliable if and only if for every neighborhood v of the curve t exp tv in the space ch g the set v v is a neighborhood of exp v nicolas juillet and mario sigalotti proof let f ch g g gh and denote by g gh g the canonical projection one direction of the equivalence being trivial let us take and assume that f is a neighbourhood of exp v in g where ch g v v gh we should prove that f is a neighborhood of exp v v in g gh step as an intermediate step we first prove that there exists such that bg exp v v is contained in f let be a real parameter in using the transformations among horizontal curves described earlier in this section let us define a map ch g associating with a curve the concatenation transformation of t t on obtained by transformation and a curve defined as follows consider t t again the curve is defined from by t t see transformation the derivative of at time t is t its derivative at time t is t for t hence is continuous and has derivative at limit times and is a map from into ch g moreover has the same derivative v at times and and its derivative at any time in is in the set of the derivatives of in particular notice now that by construction the endpoint of the curve is a function of and only it is actually equal to x x x where x see and let exp v and t exp tv we have because both curves having derivative constantly equal to v we prove now that for close 
enough to the differential of at is invertible let us use the coordinate identification of g with rn for every y g the limits of y and y as tends to are y and respectively while y and y converge to id and respectively one can check see proposition that the inverse function has derivative at finally the left and right translations are global diffeomorphisms collecting these informations and applying the chain rule we get that tends to an invertible operator as goes to hence for great enough is a local diffeomorphism we know by assumption on v that for any the endpoints of the curves of form a neighborhood of we have shown that this is also the case if we replace by for close to the curves of are in and have moreover derivative v at time he have thus proved that for every there exists such that bg v is contained in f step let us now prove that f is a neighborhood of v in g gh let be a curve in with v and consider for every w bgh v and every the curve w defined as follows w t on transformation whitney theorem for curves in carnot groups and w is the linear interpolation between v and w on notice that w is in let u g be the endpoint at time of the curve in g starting at whose derivative is the linear interpolation between v and w on then w w u w and u depends only on v and w and not on the curve moreover u tends to as goes to uniformly with respect to w bgh v lemma implies that for sufficiently close to for every w bgh v it holds bg u bg we proved that bg bgh v f concluding the proof of the proposition the main result of this section is the following theorem which constitutes the necessity part of the characterization of ch extendability stated in theorem theorem let g be a carnot group if r g has the ch extension property then g is pliable proof suppose by contradiction that there exists v gh which is not pliable we are going to prove that r g has not the ch extension property let t exp tv for t since v is not pliable it follows from proposition that there exist a neighborhood v of in the space ch g and a sequence xn converging to such that for every n no curve in v satisfies v and xn in particular there exists a neighborhood of v in gh such that for every ch g with v and t we have xn v for every n since xn we can assume without loss of generality that for every n max d xn exp tv exp tv t by homogeneity and we deduce that for every y g and every for every c g with y v and t we have y xn v for every n define n and xn for every n it follows from that max d exp tv exp tv t we introduce the sequence defined recursively by and yn notice that yn is a cauchy sequence and denote by its limit as n by construction for every n n and every ch g with yn v and t for all t we have v the proof that the r g nicolas juillet and mario sigalotti has not the ch extension property is then concluded if we show that the ch condition holds for f x on k where k n and f k g and x k gh are defined by f yn x v n for i j let d i j dg f f j exp j x j dg yi yj exp j v we have to prove that d i j o j as i j that is for every there exists such that d i j for i j with i j by triangular inequality we have max i j d i j x dg exp k v yk exp k v i j notice that dg exp k v yk exp k v dg exp k v yk exp k v dg exp k v exp k v where the last equality follows from and the invariance of dg by thanks to one then concludes that dg exp k v yk exp k v pmax i j hence d i j i j o j and this concludes the proof of theorem sufficient condition for the ch extension property we have seen in the previous section that differently from the classical 
case for a general carnot group g the suitable whitney condition for f x on k is not sufficient for the existence of an extension f of f x on more precisely it follows from theorem that if g has horizontal vectors which are not pliable then there exist triples k f x such that the ch condition holds for f x on k but there is not a ch of f x in this next section we prove the converse to the result above showing that the ch extension property holds when all horizontal vectors are pliable when g is pliable we start by introducing the notion of locally uniformly pliable horizontal vector whitney theorem for curves in carnot groups definition a horizontal vector x is called locally uniformly pliable if there exists a neighborhood u of x in gh such that for every there exists so that for every w ch g w w gh bg exp w bgh w remark as it happens for pliability if x is locally uniformly pliable then for every r is locally uniformly pliable we are going to see in the following remark that pliability and local uniform pliability are not equivalent properties the following proposition however establishes the equivalence between pliability and local uniform pliability of all horizontal vectors proposition if g is pliable then all horizontal vectors are locally uniformly pliable proof assume that g is pliable for every v gh and denote by v a positive constant such that ch g v v gh bg exp v v bgh v v we are going to show that there exists v such that for every w bgh v v g w w gh v v bgh w bg exp w the proof of the local uniform pliability of any horizontal vector x is then concluded by simple compactness arguments taking any compact neighborhood u of x using the notation of definition first fix v in such a way that exp w bg exp v v for every w bgh v v for every w gh every and every curve ch g such that v define ch g as follows t v t w for t t t for t in particular and if kv w kgh w gh v gh kw v kgh we then have w gh such that v gh nicolas juillet and mario sigalotti since depends on v w and but not on we conclude that for every w bgh v ch g w w gh bgh v v bg exp v v notice that dg max kv kgh kw kgh thanks to lemma for sufficiently small v bg exp v v bg exp v now bg v exp v bg whenever w bgh v v similarly bgh bgh v v v exp w v w provided that kv w kgh v the proof of is concluded by taking v min v v we are now ready to prove the converse of theorem concluding the proof of theorem theorem let g be a pliable carnot group then r g has the ch extension property proof by proposition we can assume that all vectors in gh are locally uniformly pliable note moreover that it is enough to prove the extension for maps defined on compact sets the generalisation to closed sets is immediate because the source carnot group is let f x satisfy the ch condition on k where k is compact we have to define on the complementary open set r k which is the countable and disjoint union of open intervals for the unbounded components of r k we simply define as the curve with constant speed x i or x j where i min k and j max k for the finite components a b we proceed as follows we consider y f a f b we let be the smallest number such that ch g x a x a gh contains y x b for every we consider an extension ch of f on a b such that a x a b x b and x a gh by definition of the ch condition there exists a function r r tending to at such that r a b whitney theorem for curves in carnot groups dg f b f a x a is smaller than and kx b a kgh since r a b is equal to b a dg exp x a y we can conclude that dg exp x a y b a using the corresponding 
estimates for r b a we deduce that dg exp b y b a by construction extends f and x on the interior of we prove now that is ch and that f x on the boundary of it is clear that f is ch on r in order to conclude the proof we are left to pick x let xn tend to x and we must show that xn and xn tend to f x and x x respectively as f and x are continuous on k we can assume without loss of generality that each xn is in r assume for now that xn x for every the connected component an bn of r k containing xn is either constant for n large in this case x bn or its length goes to zero as n in the first case we simply notice that an bn is c by construction in the second case we can assume that an xn bn and bn an goes to zero as f and x are continuous f an and x an converge to f x and x x respectively inequality guarantees that dg exp x an bn f an f bn bn an as n and the local uniform pliability of x x implies that an bn x an gh goes to zero as n it follows that xn x an kgh and dg xn f an go to zero proving that xn and xn tend to f x and x x respectively the situation where xn x for infinitely many n can be handled similarly replacing by application to the lusin approximation of an absolutely continuous curve in a recent paper le donne and speight prove the following result theorem proposition le let g be a carnot group of step and consider a horizontal curve a b for any there exist k a b and a ch a b g such that l a b k and on in the case in which g is equal to the heisenberg group hn such result had already been proved in theorem see also corollary in speight also identifies a horizontal curve on the engel group such that the statement of proposition is not satisfied theorem the name lusin approximation for the property stated in proposition comes from the use of the classical theorem of lusin in the proof let us sketch a proof when g is replaced by a vector space rn the derivative of an absolutely continuous curve is an integrable function lusin s theorem states that coincides with a continuous function x k rn on a set k of measure arbitrarily close to b a thanks to the inner continuity of the lebesgue measure one can assume that k is compact moreover k can be nicolas juillet and mario sigalotti chosen so that the whitney condition is satisfied by x on this is a consequence of the mean value inequality x h x x k o h where o h depends on x by usual arguments of measure theory inequality can be made uniform with respect to x if one slightly reduces the measure of the classical whitney extension theorem provides a c defined on a b with and x on the proof in and also in follows the same scheme as the one sketched above we show here below how the same scheme can be adapted to any pliable carnot group the fact that all carnot groups of step are pliable and that not all pliable carnot groups are of step or is proved in the next section theorem and proposition so that our paper actually provides a nontrivial generalization of proposition the novelty of our approach with respect to those in is to replace the classical rademacher differentiablility theorem for lipschitz or absolutely continuous curves from r to rn by the more adapted theorem proposition lusin approximation of a horizontal curve let g be a pliable carnot group and a b g be a horizontal curve then for any there exist k a b with l a b k and a curve a b g of class ch such that the curves and coincide on proof we are going to prove that for any there exists a compact set k a b with l a b k such the three following conditions are satisfied t exists and 
it is a horizontal vector at every t k is uniformly continuous for every there exists such that for every t k and with t h a b it holds dg t h t exp t with these conditions the ch condition holds for on since g is pliable according to theorem the ch extension property holds for r g yielding as in the statement of proposition case is lipschitz continuous let be a lipschitz curve from a b to the rademacher theorem proposition states that there exists a a b of full measure such that for any t a the curve admits a derivative at t and it holds dg t h t exp t o h as h goes to zero let be positive by lusin s theorem one can restrict a to a compact set a such that t t is uniformly continuous on and l a moreover by classical arguments of measure theory the functions h dg t h t exp can be bounded by a function that is o as h goes to zero uniformly in t on some compact set with l a in other words for every there exists such that for t and h t t it holds dg t h t exp t whitney theorem for curves in carnot groups with k the three conditions listed above hold true case general horizontal curve let be absolutely continuous on a b it admits a pathlength parametrisation there exists a lipschitz continuous curve t g and a function f a b t absolutely continuous and such that f moreover has norm at almost every time as f is absolutely continuous for every there exists such that for any measurable k the inequality l t k implies l a b f k let be positive and let be a number corresponding to in the previous sentence applying to f the scheme of proof sketched after proposition for n there exists a compact set kf a b with l a b kf such that f is differentiable with a continuous derivative on kf and the bound in the mean value inequality is uniform on kf for the lipschitz curve and for every case provides a compact set t with the listed properties with in place let k be the compact kf f and note that l a b k for t k it holds t h f t hf t o h and dg f t h f t exp h f t o h as h and h go to zero uniformly with respect to t we also know that t f t and t f t gh exist and are continuous on it is a simple exercise to compose the two taylor expansions and obtain the wanted conditions for f note that the derivative of on k is f t f t which is continuous on remark a set e rn is said rectifiable if there exists a countable family of lipschitz curves fk r rn such that e fk r k the usual lusin approximation of curves in rn permits one to replace lipschitz by c in this classical definition of rectifiability when rn is replaced by a pliable carnot group the two definitions still make sense and according to proposition are still equivalent rectifiability in metric spaces and carnot groups is a very active research topic in geometric measure theory see for references conditions ensuring pliability the goal of this section is to identify conditions ensuring that g is pliable let us first focus on the pliability of the zero vector proposition for every carnot group g the vector g is pliable proof according to proposition we should prove that for every the set g ch g gh nicolas juillet and mario sigalotti is a neighborhood of in recall that there exist k n vk gh and tk such that the map vk has rank equal to dim g at tk and satisfies tk see notice that for every the function vk vk v has also rank equal to dim g at and satisfies hence up to replacing tj by and vj by vj for j k and small enough we can assume that tk and kvj kgh for j let o be a neighborhood of tk such that for every o we have and notice that o is a neighborhood of in we 
complete the proof of the proposition by constructing for every o a curve ch g such that gh for every x gh p g and r let us exhibit a curve ch r g such that r p r x and gh kxkgh the curve can be constructed by imposing and by extending on and r by convex interpolation it is also possible to reverse such a curve by transformation and connect on any segment r the p x with the p by a ch curve respecting moreover gh kxkgh finally just concatenating transformation curves of this type it is possible for every r to connect p x and p y on r with a curve x y ch r g with x y gh max kxkgh ky kgh p we then construct as follows we fix r we impose and we define to be the concatenation of the following continuous curves in gh first take then the constant equal to for a time then then the constant equal to for a time and so on up to vk and finally the constant equal to vk for a time by construction ch g and satisfies remark let us show that as a consequence of the previous proposition pliability and local uniform pliability are not equivalent properties albeit we know from proposition that pliability of all horizontal vectors is equivalent to local uniform pliability of all horizontal vectors recall that local uniform pliability of a horizontal vector x implies pliability of all horizontal vectors in a neighborhood of x cf definition therefore if is locally uniformly pliable for a carnot group g then every horizontal vector of g is pliable remark hence whitney theorem for curves in carnot groups can not be locally uniformly pliable if g is not pliable the remark is concluded by recalling that carnot groups exist see examples and let g be a carnot group and let xm be an orthonormal basis of gh let us consider the control system in g rm given by m x ui xi v where both u um and the control v vm vary in rm let us rewrite x u pm ui xi fi x for i m x ei where em denotes the canonical basis of rm system can then be rewritten as m x x vi fi x for every rm let rm be the endpoint map at time for system with initial condition notice that if x u is a solution of with m initial condition pm corresponding to a control v l r then ch g and xi gh we can then state the following criterium for pliability proposition if the map rm g rm is open at then the horizontal pm vector xi is pliable as a consequence if the restriction of to rm is open at when the p m topology is considered on rm then xi is pliable we deduce the following property if a straight pmcurve is not pliable then it admits an abnormal lift in t indeed if a horizontal vector xi is not pliable then the differential of rm at must be singular hence see for instance section or proposition there exist t g and pu rm with pu such that t h t t pu t t h t t pu t h t t pu t p for t where t exp t m xi and m x h u pu v ui xi pu nicolas juillet and mario sigalotti from it follows that pu t for all t equation then implies that t xi t for every i m and every t moreover must be different from zero comparing and it follows that is an abnormal path the control literature proposes several criteria for testing the openness at of an endpoint map of the type rm the test presented here below taken from generalizes previous criteria obtained in and theorem bianchini and stefani corollary let m be a c manifold and vm be c vector fields on assume that the family of vector fields j vj k j m is lie bracket generating denote by h the iterated brackets of elements in j and recall that the length of an element of h is the sum of the number of times that each of the elements vm appears in its 
expression assume that every element of h in whose expression each of the vector fields vm appears an even number of times is equal at every q m to the linear combination of elements of h of smaller length evaluated at q fix m and a neighborhood of in p rm let u be the set of those controls v such that the solution of q m vi vi q q is defined up to time and denote by v the endpoint q of such a solution then u is a neighborhood of the following two results show how to apply theorem to guarantee that a carnot group g is pliable and hence that r g has the ch extension property theorem let g be a carnot group of step then g is pliable and r g has the ch extension property proof in order to prove that for every horizontal vector pm we are going to apply theorem u x the endpoint map f l rm g rm is open at zero u i i notice that xi fi w i m and w x x j i j fi w i moreover for every i j m xi xj fi fj fi fj w and all other lie bracket in and between elements of j fi k i m is zero since g is of step in particular all lie brackets between elements of j in which each of the vector fields fm appears an even number of times is zero according to theorem we are left to prove that j is lie bracket generating this is clearly true since span fi w fi w fi fj w i j m whitney theorem for curves in carnot groups is equal to t w g rm for every w g rm we conclude the paper by showing how to construct pliable carnot groups of arbitrarily large step proposition for every s there exists a pliable carnot group of step proof fix s and consider the free nilpotent stratified lie algebra a of step s generated by s elements zs for every i s denote by ii the ideal of a generated by zi and by ji the ideal ii ii then j ji is also an ideal of a then the factor algebra g is nilpotent and inherits the stratification of denote by g the carnot group generated by let xs be the elements of gh obtained by projecting zs by construction every bracket of xs in g in which at least one of the xi s appears more than once is zero moreover g has step s since xs is different from zero let us now apply theorem in order to prove that for every x gh thependpoint map fu rs g rs is open at zero where u rs is such that x ui xi following the same computations as in the proof of theorem k adx xi fi u k i in particular the family j fi k i s is lie bracket generating moreover every lie bracket of elements of jb fi k i s in which at least one of the elements fs appears more than once is zero consider now a lie bracket w between h elements of j let ks be the number of times in which each of the elements fs appears in w let us prove by induction on h that w is the linear combination of brackets between elements of jb in which each fi appears ki times i consider the case h any bracket of the type fi fj k i j s is the linear combination of brackets between elements of jb in which fi and fj appear once as it can easily be proved by induction on k thanks to the jacobi identity the induction step on h also follows directly from the jacobi identity we can therefore conclude that every lie bracket of elements of j in which at least one of the elements fs appears more than once is zero this implies in particular that the hypotheses of theorem are satisfied concluding the proof that g is pliable acknowledgment we warmly thank chapoton and massuyeau for the suggestions leading us to proposition we are also grateful to artem kozhevnikov dario prandi luca rizzi and andrei agrachev for many stimulating discussions this work has been initiated during the ihp trimester 
geometry analysis and dynamics on manifolds and we wish to thank the institut henri and the fondation sciences de paris for the welcoming working conditions nicolas juillet and mario sigalotti the second author has been supported by the european research council erc stg gecomethods contract number by the grant of the anr and by the fmjh program gaspard monge in optimization and operation research references a agrachev and sachkov control theory from the geometric viewpoint volume of encyclopaedia of mathematical sciences berlin control theory and optimization ii a agrachev and sarychev abnormal geodesics morse index and rigidity ann inst anal non balogh and rectifiability and lipschitz extensions into the heisenberg group math balogh lang and pansu lipschitz extensions of maps between heisenberg groups ann inst fourier grenoble barilari boscain and sigalotti editors dynamics geometry and analysis on manifolds volumes ems series of lectures in mathematics european mathematical society ems bianchini and stefani graded approximations and controllability along a trajectory siam j control bonfiglioli lanconelli and uguzzoni stratified lie groups and potential theory for their sublaplacians springer monographs in mathematics springer berlin brudnyi and shvartsman generalizations of whitney s extension theorem int math res bryant and hsu rigidity of integral curves of rank distributions invent evans and gariepy measure theory and fine properties of functions textbooks in mathematics crc press boca raton fl revised edition fefferman a sharp form of whitney s extension theorem ann of math fefferman israel and luli sobolev extension by linear operators amer math folland and stein hardy spaces on homogeneous groups volume of mathematical notes princeton university press princeton franchi serapioni and serra cassano rectifiability and perimeter in the heisenberg group math franchi serapioni and serra cassano on the structure of finite perimeter sets in step carnot groups geom and karidi a note on carnot geodesics in nilpotent lie groups dynam control systems hermes control systems which generate decomposable lie algebras j differential equations special issue dedicated to lasalle huang and yang extremals in some classes of carnot groups sci china kirchheim and serra cassano rectifiability and parameterization of intrinsic regular surfaces in the heisenberg group ann sc norm super pisa cl sci kozhevnikov metric properties of level sets of differentiable maps on carnot groups doctoral thesis paris sud paris xi may whitney theorem for curves in carnot groups le donne ottazzi and warhurst ultrarigid tangents of nilpotent groups ann inst fourier grenoble le donne and speight lusin approximation for horizontal curves in step carnot groups calc var partial differential equations liu and sussman shortest paths for metrics on distributions mem amer math lusin sur les des fonctions mesurables acad paris magnani towards differential calculus in stratified groups aust math malgrange ideals of differentiable functions tata institute of fundamental research studies in mathematics no tata institute of fundamental research bombay oxford university press london montgomery a survey of singular curves in geometry dynam control systems rigot and wenger lipschitz theorems into jet space carnot groups int math res not imrn serra cassano some topics of geometric measure theory in carnot groups in dynamics geometry and analysis on manifolds volume i ems series of lectures in mathematics european mathematical society ems 
speight lusin approximation and horizontal curves in carnot groups to appear in revista matematica iberoamericana sussmann some properties of vector field systems that are not altered by small perturbations j differential equations sussmann a general theorem on local controllability siam j control optimal concrete mathematics vuibert paris applications theory and applications vodop yanov and pupyshev theorems on the extension of functions on carnot groups sibirsk mat vodop yanov and pupyshev theorems on the extension of functions on the carnot group dokl akad nauk wenger and young lipschitz extensions into jet space carnot groups math res whitney analytic extensions of differentiable functions defined in closed sets trans amer math whitney differentiable functions defined in closed sets trans amer math zimmerman the whitney extension theorem for c horizontal curves in hn geom to appear institut de recherche umr de strasbourg et cnrs rue descartes strasbourg france address inria team geco cmap polytechnique cnrs palaiseau france address
| 4 |
oct the generalized traveling salesman problem solved with ant algorithms pintea pop camelia chira north university baia mare university romania cmpintea pop petrica cchira abstract a well known n problem called the generalized traveling salesman problem gtsp is considered in gtsp the nodes of a complete undirected graph are partitioned into clusters the objective is to find a minimum cost tour passing through exactly one node from each cluster an exact exponential time algorithm and an effective algorithm for the problem are presented the proposed is a modified ant colony system acs algorithm called reinforcing ant colony system racs which introduces new correction rules in the acs algorithm computational results are reported for many standard test problems the proposed algorithm is competitive with the other already proposed heuristics for the gtsp in both solution quality and computational time introduction many combinatorial optimization problems are n and the theory of n has reduced hopes that n problems can be solved within polynomial bounded computation times nevertheless solutions are sometimes easy to find consequently there is much interest in approximation and heuristic algorithms that can find near optimal solutions within reasonable running time heuristic algorithms are typically among the best strategies in terms of efficiency and solution quality for problems of realistic size and complexity in contrast to individual heuristic algorithms that are designed to solve a specific problem are strategic problem solving frameworks that can be adapted to solve a wide variety of problems algorithms are widely recognized as one of the most practical approaches for combinatorial optimization problems the most representative include genetic algorithms simulated annealing tabu search and ant colony useful references regarding methods can be found in the generalized traveling salesman problem gtsp has been introduced in and the gtsp has several applications to location and telecommunication problems more information on these problems and their applications can be found in several approaches were considered for solving the gtsp a algorithm for symmetric gtsp is described and analyzed in in is given a approach for asymmetric gtsp in is described a genetic algorithm for the gtsp in it is proposed an efficient composite heuristic for the symmetric gtsp etc the aim of this paper is to provide an exact algorithm for the gtsp as well as an effective algorithm for the problem the proposed is a modified version of ant colony system acs introduced in ant system is a heuristic algorithm inspired by the observation of real ant colonies acs is used to solve hard combinatorial optimization problems including the traveling salesman problem tsp definition and complexity of the gtsp let g v e be an undirected graph whose edges are associated with nonnegative costs we will assume that g is a complete graph if there is no edge between two nodes we can add it with an infinite cost let vp be a partition of v into p subsets called clusters v and vl vk for all l k p we denote the cost of an edge e i j e by cij the generalized traveling salesman problem gtsp asks for finding a tour h spanning a subset of nodes such that h contains exactly one node from each cluster vi i p the problem involves two related decisions choosing a node subset s v such that vk for all k p and finding a minimum cost hamiltonian cycle in the subgraph of g induced by such a cycle is called a hamiltonian tour the gtsp is called symmetric if 
and only if the equality c i j c j i holds for every i j v where c is the cost function associated to the edges of an exact algorithm for the gtsp in this section we present an algorithm that finds an exact solution to the gtsp given a sequence vkp in which the clusters are visited we want to find the best feasible hamiltonian tour h cost minimization visiting the clusters according to the given sequence this can be done in polynomial time by solving shortest path problems as described below we construct a layered network denoted by ln having p layers corresponding to the clusters vkp and in addition we duplicate the cluster the layered network contains all the nodes of g plus some extra nodes v for each v there is an arc i j for each i vkl and j l p having the cost cij and an arc i h i h vkl l p having cost cih moreover there is an arc i j for each i vkp and j having cost cij for any given v we consider paths from v to that visits exactly two nodes from each cluster vkp hence it gives a feasible hamiltonian tour conversely every hamiltonian tour visiting the clusters according to the sequence vkp corresponds to a path in the layered network from a certain node v to therefore the best cost minimization hamiltonian tour h visiting the clusters in a given sequence can be found by determining all the shortest paths from each v to each with the property that visits exactly one node from cluster the overall time complexity is then m n log n o nm nlogn in the worst case we can reduce the time by choosing as the cluster with minimum cardinality it should be noted that the above procedure leads to an o p nm nlogn time exact algorithm for the gtsp obtained by trying all the possible cluster sequences therefore we have established the following result the above procedure provides an exact solution to the gstp in o p nm nlogn time where n is the number of nodes m is the number of edges and p is the number of clusters in the input graph clearly the algorithm presented is an exponential time algorithm unless the number of clusters p is fixed ant colony system ant system proposed in is a approach used for various combinatorial optimization problems the algorithms were inspired by the observation of real ant colonies an ant can find shortest paths between food sources and a nest while walking from food sources to the nest and vice versa ants deposit on the ground a substance called pheromone forming a pheromone trail ants can smell pheromone and when choosing their way they tend to choose paths marked by stronger pheromone concentrations it has been shown that this pheromone trail following behavior employed by a colony of ants can lead to the emergence of shortest paths when an obstacle breaks the path ants try to get around the obstacle randomly choosing either way if the two paths encircling the obstacle have the different length more ants pass the shorter route on their continuous pendulum motion between the nest points in particular time interval while each ant keeps marking its way by pheromone the shorter route attracts more pheromone concentrations and consequently more and more ants choose this route this feedback finally leads to a stage where the entire ant colony uses the shortest path there are many variations of the ant colony optimization applied on various classical problems ant system make use of simple agents called ants which iterative construct candidate solution to a combinatorial optimization problem the ants solution construction is guided by pheromone trails and problem dependent 
heuristic information an individual ant constructs candidate solutions by starting with an empty solution and then iterative adding solution components until a complete candidate solution is generated each point at which an ant has to decide which solution component to add to its current partial solution is called a choice point after the solution construction is completed the ants give feedback on the solutions they have constructed by depositing pheromone on solution components which they have used in their solution solution components which are part of better solutions or are used by many ants will receive a higher amount of pheromone and hence will more likely be used by the ants in future iterations of the algorithm to avoid the search getting stuck typically before the pheromone trails get reinforced all pheromone trails are decreased by a factor ant colony system acs was developed to improve ant system making it more efficient and robust ant colony system works as follows m ants are initially positioned on n nodes chosen according to some initialization rule for example randomly each ant builds a tour by repeatedly applying a stochastic greedy rule the state transition rule while constructing its tour an ant also modifies the amount of pheromone on the visited edges by applying the local updating rule once all ants have terminated their tour the amount of pheromone on edges is modified again by applying the global updating rule as was the case in ant system ants are guided in building their tours by both heuristic information and by pheromone information an edge with a high amount of pheromone is a very desirable choice the pheromone updating rules are designed so that they tend to give more pheromone to edges which should be visited by ants the ants solutions are not guaranteed to be optimal with respect to local changes and hence may be further improved using local search methods based on this observation the best performance are obtained using hybrid algorithms combining probabilistic solution construction by a colony of ants with local search algorithms as opt etc in such hybrid algorithms the ants can be seen as guiding the local search by constructing promising initial solutions because ants preferably use solution components which earlier in the search have been contained in good locally optimal solutions reinforcing ant colony system for gtsp an ant colony system for the gtsp it is introduced in order to enforces the construction of a valid solution used in acs a new algorithm called reinforcing ant colony system racs it is elaborated with a new pheromone rule as in and pheromone evaporation technique as in let vk y denote the node y from the cluster vk the racs algorithm for the gtsp works as follows initially the ants are placed in the nodes of the graph choosing randomly the clusters and also a random node from the chosen cluster at iteration t every ant moves to a new node from an unvisited cluster and the parameters controlling the algorithm are updated each edge is labeled by a trail intensity let t represent the trail intensity of the edge i j at time an ant decides which node is the next move with a probability that is based on the distance to that node cost of the edge and the amount of trail intensity on the connecting edge the inverse of distance from a node to the next node is known as the visibility at each time unit evaporation takes place this is to stop the intensity trails increasing unbounded the rate evaporation is denoted by and its value is between and 
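The layered-network computation described in the exact-algorithm section above can equivalently be read as a dynamic program over the layers: fix one node of the first cluster, propagate cheapest partial paths layer by layer according to the given cluster sequence, and close the tour back to the fixed node. The sketch below is a minimal Python illustration of that view, not the authors' implementation; the function names, the dense cost-matrix representation, and the choice of fixing cluster 0 as the first cluster of every sequence (any cluster, preferably the smallest one, could be fixed) are assumptions made for clarity. Looping the fixed-sequence routine over all (p-1)! cluster orders gives the exponential-time exact algorithm.

from itertools import permutations

def best_tour_for_sequence(cost, clusters, seq):
    """Cheapest tour visiting the clusters in the order given by 'seq'.

    cost[i][j] -- edge cost between nodes i and j (complete graph)
    clusters   -- list of lists of node indices, one list per cluster
    One dynamic-programming pass per possible start node of the first
    cluster, mirroring the shortest paths in the layered network.
    """
    first = clusters[seq[0]]
    best_cost, best_tour = float("inf"), None
    for start in first:                      # fix the copy of the first cluster
        dp = {start: 0.0}                    # node -> cost of best partial path
        parents = [{} for _ in seq]
        for layer, k in enumerate(seq[1:], start=1):
            new_dp = {}
            for j in clusters[k]:
                c, i = min((dp[i] + cost[i][j], i) for i in dp)
                new_dp[j] = c
                parents[layer][j] = i
            dp = new_dp
        # close the tour back to the duplicated first node
        closing = {j: dp[j] + cost[j][start] for j in dp}
        last = min(closing, key=closing.get)
        if closing[last] < best_cost:
            tour = [last]
            for layer in range(len(seq) - 1, 0, -1):
                tour.append(parents[layer][tour[-1]])
            best_cost, best_tour = closing[last], list(reversed(tour))
    return best_cost, best_tour

def exact_gtsp(cost, clusters):
    """Try every cluster sequence starting at cluster 0 (exponential time)."""
    p = len(clusters)
    return min(best_tour_for_sequence(cost, clusters, (0,) + perm)
               for perm in permutations(range(1, p)))

For realistic instance sizes only the fixed-sequence routine is practical, which is why the paper turns to the RACS heuristic for the full problem.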
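The next sketch assembles the ACS ingredients just described, together with the cluster tabu list and the transition and pheromone-update rules detailed below, into a runnable GTSP heuristic. It is a hedged approximation rather than the exact RACS of the paper: the parameter names (beta, rho, q0, tau0), the omission of the evaporation bounds and of any 2-opt or 3-opt local search, and the default values are assumptions made for brevity.

import random

def racs_like_gtsp(cost, clusters, n_ants=10, n_iters=200,
                   beta=2.0, rho=0.1, q0=0.9, tau0=1e-3):
    """ACS-style heuristic for the GTSP (a simplified sketch, not the exact RACS)."""
    n = len(cost)
    cluster_of = {v: k for k, cl in enumerate(clusters) for v in cl}
    tau = [[tau0] * n for _ in range(n)]          # pheromone trails
    eta = [[0.0 if i == j else 1.0 / cost[i][j] for j in range(n)] for i in range(n)]

    def tour_cost(tour):
        return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    best_tour, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            start = random.choice(random.choice(clusters))
            tour, visited = [start], {cluster_of[start]}   # tabu list of clusters
            while len(visited) < len(clusters):
                i = tour[-1]
                cand = [j for j in range(n) if cluster_of[j] not in visited]
                scores = {j: tau[i][j] * (eta[i][j] ** beta) for j in cand}
                if random.random() < q0:                    # exploitation
                    j = max(scores, key=scores.get)
                else:                                       # biased exploration
                    r, acc = random.random() * sum(scores.values()), 0.0
                    for j, s in scores.items():
                        acc += s
                        if acc >= r:
                            break
                tour.append(j)
                visited.add(cluster_of[j])
                # local pheromone update on the edge just used
                tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0
            c = tour_cost(tour)
            if c < best_cost:
                best_tour, best_cost = tour[:], c
        # global update on the edges of the best tour found so far
        for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
            tau[a][b] = (1 - rho) * tau[a][b] + rho / best_cost
    return best_tour, best_cost

In the paper the trails are initialized from the length of a nearest-neighbour tour and are additionally kept within bounds by the evaporation technique discussed below; both refinements are omitted here for brevity.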
in order to stop ants visiting the same cluster in the same tour a tabu list is maintained this prevents ants visiting clusters they have previously visited the ant tabu list is cleared after each completed tour to favor the selection of an edge that has a high pheromone value and high visibility value a probability function pk iu is considered j k i are the unvisited neighbors of node i by ant k and u j k i u vk y being the node y from the unvisited cluster vk this probability function is defined as follows pk iu t t t k i t t where is a parameter used for tuning the relative importance of edge cost in selecting the next node pk iu is the probability of choosing j u where u vk y is the next node if q the current node is i if q the next node j is chosen as follows j k t t i where q is a random variable uniformly distributed over and is a parameter similar to the temperature in simulated annealing after each transition the trail intensity is updated using the correction rule from t t n where is the cost of the best tour in ant colony system only the ant that generate the best tour is allowed to globally update the pheromone the global update rule is applied to the edges belonging to the best tour the correction rule is t t t where t is the inverse cost of the best tour in order to avoid stagnation we used the pheromone evaporation technique introduced in when the pheromone trail is over an upper bound the pheromone trail is the pheromone evaporation is used after the global pheromone update rule the racs algorithm computes for a given time timemax a solution the optimal solution if it is possible and can be stated as follows in the description representation and computational results a graphic representation of reinforcing ant colony system for solving gtsp is show in fig at the beginning the ants are in their nest and will start to search food in a specific area assuming that each cluster has specific food and the ants are capable to recognize this they will choose each time a different cluster the pheromone trails will guide the ants to the shorter path a solution of gtsp as in fig to evaluate the performance of the proposed algorithm the racs was compared to the basic acs algorithm for gtsp and furthermore to other heuristics from literature nearest neighbor nn a composite heuristic gi and a random algorithm the numerical experiments that compare racs with other heuristics used problems from tsp library tsplib provides optimal objective values for each of the problems several problems with euclidean distances have been considered the exact algorithm proposed in section is clearly outperformed by heuristics including racs because his running time is exponential while heuristics including racs are polynomial time algorithms and provide good solution for reasonable sizes of the problem to divide the set of nodes into subsets we used the procedure proposed in this procedure sets the number of clusters m identifies the m farthest nodes from each other called centers and assigns each remaining node to its nearest center obviously some real world problems may have different cluster structures but the solution procedure presented in this paper is able to handle any cluster structure the initial value of all pheromone trails n lnn where lnn is the result of nearest neighbor nn algorithm in nn algorithm the rule is always to go next to the nearest location the corresponding tour traverses the nodes in the constructed order for the pheromone evaporation phase let denote the upper bound with lnn the 
decimal values can be treated as parameters and can be changed if it is necessary the parameters for the algorithm are critical as in all other ant systems figure the reinforcing ant colony system racs figure a graphic representation of the generalized traveling salesman problem gtsp solved with an heuristic called reinforcing ant colony system racs is illustrated the first picture shows an ant starting from the nest to find food going once through each cluster and returning to the nest all the ways are initialized with the same pheromone quantity after several iterations performed by each ant from the nest the solution is visible the second picture shows a solution of generalized traveling salesman problem gtsp represented by the largest pheromone trail thick lines the pheromone is evaporating on all the other trails gray lines currently there is no mathematical analysis developed to give the optimal parameter in each situation in the acs and racs algorithm the values of the parameters were chosen as follows in table from figure we compare the computational results for solving the gtsp using the acs and racs algorithm with the computational results obtained using nn gi and random algorithm mentioned above the columns in table from figure are as follows the name of the test problem the first digits give the number of clusters nc and the last ones give the number of nodes n optimal objective value for the problem acs racs nn gi the objective value returned by acs racs nn gi and genetic algorithm the best solutions are in table in the bold format all the solutions of acs and racs are the average of five successively run of the algorithm for each problem termination criteria for acs and racs is given by the timemax the maximal computing time set by the user in this case ten minutes table shows that reinforcing ant colony system performed well finding the optimal solution in many cases the results of racs are better than the results of acs the racs algorithm for the generalized traveling salesman problem can be improved if more appropriate values for the parameters are used also an efficient combination with other algorithms can potentially improve the results figure reinforcing ant colony system racs versus acs nn gi and conclusion the basic idea of acs is that of simulating the behavior of a set of agents that cooperate to solve an optimization problem by means of simple communications the algorithm introduced to solve the generalized traveling salesman problem called reinforcing ant colony system is an algorithm with new correction rules the computational results of the proposed racs algorithm are good and competitive in both solution quality and computational time with the existing heuristics from the literature the racs results can be improved by considering better values for the parameters or combining the racs algorithm with other optimization algorithms some disadvantages have also been identified and they refer the multiple parameters used for the algorithm and the high hardware resources requirements references colorni dorigo maniezzo distributed optimization by ant colonies proc of conf on artif life paris france elsevier publishing dorigo optimization learning and natural algorithms in italian thesis dipart di elettronica politecnico di milano italy glover kochenberger handbook of metaheuristics kluwer fischetti gonzales toth a algorithm for the symmetric generalized travelling salesman problem oper res fischetti gonzales toth the generalized traveling salesman and orienteering 
problem kluwer laporte nobert generalized traveling salesman problem through n sets of nodes an integer programming approach infor noon bean a lagrangian based approach for the asymmetric generalized traveling salesman problem oper res pintea dumitrescu improving ant systems using a local updating proceedings ieee computer society international symposium on symbolic and numeric algorithms for scientific computing synasc bixby reinelt library of travelling salesman and related problem instances http renaud boctor an efficient composite heuristic for the symmetric generalized traveling salesman problem euro j snyder daskin a genetic algorithm for the generalized traveling salesman problem informs san antonio tx hoos the ant system and local search for the traveling salesman problem proc int conf on evol ieee press piscataway nj
| 9 |
notes on pure dataflow matrix machines oct programming with matrix transformations michael bukatin steve matthews andrey radul here north america llc burlington massachusetts usa bukatin department of computer science university of warwick coventry uk project fluid cambridge massachusetts usa abstract the streams associated with all neuron outputs using the matrix controlling the dmm this computation is linear and is potentially quite global as any neuron output in the net can contribute to any neuron input in the net dmms described in the literature are heavily typed one normally defines a finite collection of allowed kinds of linear streams and a finite collection of allowed types of neurons these two collections are called the dmm signature one considers a particular fixed signature then one assumes the address space accommodating a countable number of neurons of each type and then a dmm is determined by a matrix of connectivity weights one normally assumes that only a finite number of those weights are at any given moment of time in particular dmms can be equipped with powerful reflection facilities include in the signature the kind of streams of matrices shaped in such a fashion as to be capable of describing a dmm over this signature then designate a particular neuron self working as an accumulator of matrices of this shape and agree that the most recent output of this neuron will be used at the down movement of each step as the matrix controlling the calculations of all neuron inputs from all neuron outputs dataflow matrix machines are generalized recurrent neural nets the mechanism is provided via a stream of matrices defining the connectivity and weights of the network in question a natural question is what should play the role of untyped for this programming architecture the proposed answer is a discipline of programming with only one kind of streams namely the streams of appropriately shaped matrices this yields pure dataflow matrix machines which are networks of transformers of streams of matrices capable of defining a pure dataflow matrix machine categories and subject descriptors guages general terms keywords software d programming dataflow continuous deformation of software introduction the purpose of these notes is to contribute to the theoretical understanding of dataflow matrix machines dataflow matrix machines dmms arise in the context of synchronous dataflow programming with linear streams streams equipped with an operation of taking a linear combinations of several streams this is a new programming architecture with interesting properties one of these properties is that large classes of programs are parametrized by matrices of numbers in this aspect dmms are similar to recurrent neural nets and in fact they can be considered to be a very powerful generalization of recurrent neural nets just like recurrent neural nets dmms are essentially twostroke engines on the up movement the neuron transformations compute the next elements of the streams associated with the neuron outputs from the streams associated with neuron inputs this computation is local to the neuron in question and is generally nonlinear on the down movement the next elements of the streams associated with all neuron inputs are computed from pure dataflow matrix machines version one kind of streams dmms seem to be a powerful programming platform in particular it is convenient to manually write software as dmms at the same time the options to automatically synthesize dmms by synthesizing the matrices in 
question are available however dmms are a bit too unwieldy for a theoretical investigation from the theoretical viewpoint it is inconvenient that there are many kinds of streams it is also inconvenient that one needs to fix a signature and that the parametrization by matrices is valid only for this fixed signature so a question naturally arises what would be the equivalent of untyped for dataflow matrix machines one of the principles of untyped one data type is enough namely the type of programs all data can be expressed as programs the equivalent of this principle for dmms would be to have only one kind of streams streams of matrices where a matrix is so shaped as to be able to define a dmm which would be a network of transformers of streams of matrices see section for details instead of string rewriting a number of streams of matrices are unfolding in time in this approach so all data are to be expressed as matrices of numbers under this approach see section just like all data must be expressed as in the untyped one signature order constructions continuous in particular in making spaces of programs continuous denotationally the continuous domains representing the meaning of programs are common but operationally we tend to fall back onto discrete schemas dataflow matrix machines are seeking to change that and to provide programming facilities using continuous programs and continuous deformations of programs on the level of operational semantics and of implementation this can be done both for discrete time and discrete index spaces matrices of computational elements and potentially for continuous time and continuous index spaces for computational elements the oldest electronic continuous platform is electronic analog computers the analog program itself however is very discrete because this kind of machine has a number of sockets and for every pair of such sockets there is an option to connect them via a patch cord or not to connect them among dataflow architectures oriented towards handling the streams of continuous data one might mention labview and pure data in both cases the programs themselves are quite discrete the computational platform which should be discussed in more details in this context is recurrent neural networks turing universality of recurrent neural networks is known for at least years however together with many other useful and elegant turinguniversal computational systems recurrent neural networks do not constitute a convenient programming platform but belong to the class of esoteric programming languages see for detailed discussion of that interestingly enough whether recurrent neural networks understood as programs are discrete or continuous depends on how one approaches the representation of network topology if one treats the network connectivity as a graph and thinks about this graph as a discrete data structure then recurrent neural networks themselves are discrete if one states instead that the network connectivity is always the complete graph and that the topology is defined by some of the weights being zeros then recurrent neural networks themselves are continuous the most frequent case is borderline one considers a recurrent neural net to be defined by the matrix of weights and therefore to be continuous however there are auxiliary discrete structures the matrix of weights is often a sparse matrix and so a dictionary of nonzero weights comes into play also a language used in describing the network or its implementation comes into play as an auxiliary discrete 
structure dataflow matrix machines belong to this borderline case in particular the use of sparse matrices is inevitable because the matrices in question are matrices with finite number of nonzero elements choosing a fixed selection of types of neurons seems too difficult at the moment for the time being we would like to retain the ability to add arbitrary types of neurons to our dmms so instead of selecting a fixed canonical signature we assume that there is an underlying language allowing to describe countable collection of neuron types in such a fashion that all neuron types of interest can be expressed in that language then assume that all neuron types described by all neuron type expressions in the underlying language are in the signature assume that our address space is structured in such a way as to accommodate countable number of neurons for each type of neurons see section since we have a countable collection of expressions describing neuron types our overall collection of neurons is still countable and the matrix describing the rules to recompute neuron inputs from the neuron outputs is still countable so now we have a parametrization by countable matrices of numbers across all dmms and not just across dmms with a particular fixed signature accumulators revised the notion of accumulator plays a key role in a number of dmm constructions including the reflection facility self the most standard version is a neuron performing an identity transform of its vector input x to its vector output y of the same kind one sets the weight of the recurrent connection from y to x to and then the neuron accumulates contributions of other neurons connected to x with nonzero weights so at each step the accumulator neuron in effect performs v v operation however it is somewhat of abuse of the system of kinds of streams to consider v and as belonging to the same space and we ll see evidence that to do so is a suboptimal convention later in the paper so what we do first of all is that we equip the accumulator neuron with another input where is collected then the body of the neuron computes the sum of v and instead of just performing the identity transform see section for more details in the situations where one has multiple kinds of linear streams one would often want to assign different kinds to v and to although in other situations one would still use the same kind for the both of them effectively considering to be structure of the paper in section we discuss continuous models of computation and their aspects in section we juxtapose string rewriting with approaches to programming in section we discuss the language of indexes of the network matrix and how to accommodate countable number of neuron types within one signature in section we discuss representation of constants and vectors as matrices section provides two examples where it is natural to split the accumulator input into v and one such example comes from the neuron self controlling the network matrix another example section is more involved and requires us to revisit domain theory in the context of linear models of computation this is a bitopological setting more specifically domains allowing for both monotonic and inference and this is the setting where approximations spaces tend to become embedded into vector spaces which is where the connection with linear models of computation comes into play programming string rewriting approach there are several approaches to programming the most popular approach starts with standard higherorder 
functional programming and focuses on integrating streambased programming into that standard paradigm the theoretical underpinning of this approach is and string rewriting the dataflow community produced purely approaches to programming one of those approaches which should be mentioned is an approach based on multidimensional streams continuous models of computation the history of continuous models of computation is actually quite long where the progress was more limited was in making pure dataflow matrix machines version for any field name iti the concatenation is the name of the corresponding neuron input for any field name otj the concatenation is the name of the corresponding neuron output for every such pair of indices i j there is a matrix element in our matrices under consideration to summarize in this approach the class of pure dataflow matrix machines is implicitly parametrized by a sufficiently universal language lt describing all types of neurons taken to be of potential interest together with their associated stream transformations for details of dmm functioning see sections and of the approach which we adopt in this paper is based on the notion of streams of programs an early work which should be mentioned in connection with this approach is an argument in favor of this approach for programming with linear streams was presented in section of among recent papers exploring various aspects of the approach based on the notion of streams of programs are one of the goals of the present paper is to show that this approach can play the role in synchronous dataflow programming with linear streams comparable to the role played by untyped in functional programming dmm address space language of indices constants and vectors as matrices when one has a matrix it is often more convenient to index its rows and columns by finite strings over a fixed finite alphabet than by numbers there is no principal difference but this choice discourages focusing on an arbitrary chosen order and encourages semantically meaningful names for the indices here we explain how the construction promised in section works to implement the program outlined in section one needs to express the most important linear streams such as streams of numbers scalars streams of matrix rows and streams of matrix columns and other frequently used streams of vectors as streams of matrices as indicated in one of the key uses of scalars and also of matrix rows and columns is their use as multiplicative masks the ability to use scalars as multiplicative masks needs to be preserved when those scalars are represented by matrices for example if we have a neuron which takes an input stream of scalars a and an input stream of matrices m and produces an output stream of matrices a m then we still need to be able to reproduce this functionality when scalars a are represented by matrices of the same shape as matrix m the most straightforward way to do this is to have a neuron which takes two input streams of matrices and performs their multiplication hadamard product sometimes also called the schur product if we chose the hadamard product as our main bilinear operation on matrices then the scalar x must be represented by the matrix all elements of which are equal to x neuron types define the notion of a type of neurons following the outline presented in section of for multiple kinds of linear streams we only have one kind of linear streams in the present paper so the definition is simplified a neuron type consists of a integer input arity m a 
positive integer output arity n and a transformation describing how to map m input streams of matrices into n output streams of matrices namely associate with the neuron type in question a transformation f taking as inputs m streams of length t and producing as outputs n streams of length t for integer time t require the obvious prefix condition that when f is applied to streams of length t the first t elements of the output streams of length t are the elements which f produces when applied to the prefixes of length t of the input streams the most typical situation is when for t the t s elements of the output streams are produced solely on the basis of elements number t of the input streams but our definition also allows neurons to accumulate unlimited history if necessary matrices admitting finite descriptions one particular feature of this approach is that we can no longer limit ourselves by matrices containing finite number of elements but we also need at least some infinite matrices admitting finite descriptions this means that one needs a convention of what should be done in case of incorrect operations such as taking a scalar product of two infinite vectors of all ones or adding a matrix consisting of all ones to self it seems likely that the technically easiest convention in such cases would be to output zeros or to reset the network matrix to all zeros on the other hand it is of interest to consider and study the limits of sequences of finitely describable matrices and a network might be computing such a limit when t language lt in this section we are going to use several alphabets assume that the following special symbols don t belong to any of the other alphabets assume that there is a language lt over alphabet such that finite strings from ltt lt describe all neuron types of interest call a string t the name of the neuron type it defines we are not worried about uniqueness of names for a type here assume that the input arity of the type in question is mt and the output arity of the type in question is nt that for every integer i such that i mt associate field name iti from lt and for every integer j such that j nt associate field name otj from lt so that implies and implies also assume that there is an alphabet with more than one letter in it and any finite string s over is a valid simple name representing matrix rows and columns as matrices streams of matrix rows and streams of matrix columns also play important roles in represent element y of a row by the corresponding matrix column all elements of which equal y represent element z of a column by the corresponding matrix row all elements of which equal z hence rows are represented by matrices with equal values along each column and columns are represented by matrices with equal values along each row given matrix row denote by its representation as a matrix given matrix column denote by its representation as a matrix given scalar x denote by its representation as a matrix respecting the matlab convention to denote the hadamard product by we denote the hadamard product of two matrices language of indices the following convention describes the address space for a countable number of neurons for each of the countable number of neuron types of interest the indexes are expressed by strings over the alphabet for any name of neuron type t ltt and for any simple name s the concatenation is a name of a neuron pure dataflow matrix machines version while omitting the infix for matrix multiplication at b by or ab t note that because matrix 
rows correspond to neuron inputs and matrix columns correspond to neuron outputs one should always think about these matrices as rectangular and not as square matrices so the transposition is always needed when performing the standard matrix multiplication on these matrices in a standard matrix update operation generalized from several natural examples is proposed given a row two columns and with the constraint that both and have finite number of nonzero elements p the matrix is updated by the formula aij aij k akj in terms of matrix representations what gets added to the t a work matrix a is in section of matrix rows and columns are used for subgraph selection consider a subset of neurons and take to be a row with values at the positions corresponding to the neuron outputs of the subset in question and zeros elsewhere and take to be a column with values at the positions corresponding to the neuron inputs of the subset in question and zeros elsewhere denote the matrix maximum as the overall connectivity of the subgraph in question is while the internal connecpressed by the matrix tivity of this subgraph is partial inconsistency landscape and warmus numbers another example of why it is natural to have separate inputs for v and in an accumulator comes from considering a scheme of computation with warmus numbers we have to explain first what are warmus numbers and why considering them and a particular scheme of computation in question is natural in this context partial inconsistency and vector semantics in the presence of partial inconsistency approximation spaces tends to become embedded into vector spaces one example of this phenomenon is that if one allows negative values for probabilities then probabilistic powerdomain is embedded into the space of signed measures which is a natural setting for denotational semantics of probabilistic programs warmus numbers another example involves algebraic extension of interval numbers with respect to addition interval numbers don t form a group with respect to addition however one can extend them with pseudosegments b a with the contradictory property a b for example is a pseudosegment expressing an interval number with the contradictory constraint that x and at the same time x the so extended space of interval numbers is a group and a vector space over reals the first discovery of this construction known to us was made by warmus since then it was rediscovered many times for a rather extensive bibliography related to those rediscoveries see other vectors as matrices the most straightforward way to represent other vectors or vectors with finite number of nonzero elements in this setup is to represent them as matrix rows as well this means reserving a finite or countable number of appropriately typed neurons to represent coordinates for example to describe vectors representing characters in the encoding which is standard in neural nets one would need to reserve neurons to represent the letters of the alphabet in question partial inconsistency landscape there are a number of common motives which appear multiple times in various studies of partial inconsistency in particular bilattices bitopology bicontinuous domains facilities for nonmonotonic and inference involutions etc together these motives serve as focal elements of the field of study which has been named the partial inconsistency landscape in in particular the following situation is typical in the context of bitopological groups the two topologies t and t are group dual of each other that is the 
group inverse induces a bijection between the respective systems of open sets and the antimonotonic group inverse is an involution which is a bicontinuous map from x t t to its bitopological dual x t t because approximation domains tend to become embedded into vector spaces in this context the setting of bicontinuous domains equipped with two scott topologies which tend to be group dual of each other seems to be natural for semantic studies of computations with linear streams accumulators revised here we continue the line of thought started in section we give a couple of examples illustrating why it is natural to have separate inputs for v and in an accumulator the main example is the neuron self itself producing the matrix controlling the network on the output and taking additive updates to that matrix on the input this is a matrix with finite number of nonzero elements so it has to be represented as a sparse matrix via a dictionary of nonzero elements a typical situation is that the additive update on each time step is small compared to the matrix itself more specifically the update is typically small in the sense that the number of affected matrix elements is small compared to the overall number of nonzero matrix elements so it does not make much sense to actually copy the output of self to its input of self and perform the additive update there which is what should be done if the definition of an accumulator with one input is to be taken literally what should be done instead is that additive updates should be added together at an input of self and then on the up movement the self should add the sum of those updates to the matrix it accumulates so instead of hiding this logic as implementation details it makes sense to split the inputs of self into x with the output of self connected to x with weight nothing else connected to x with weight and the copying of the output of self to x being a and accumulating the additive updates to self pure dataflow matrix machines version computing with warmus numbers section of provides a detailed overview of the partial inconsistency landscape including the bitopological and bilattice properties of warmus numbers it turns out that warmus numbers play a fundamental role in mathematics of partial inconsistency in particular section of that paper proposes a schema of computation via monotonic evolution punctuated by involutive steps computations with warmus extension of interval numbers via monotonic evolution punctuated by involutive steps are a good example of why the accumulators should have the asymmetry between v and if an accumulator neuron is to accumulate a monotonically evolving warmus number by accepting additive updates to that number then the can not be an arbitrary warmus number but it must be a pseudosegment b a such that a b the case of a b is allowed given that there is a constraint of this kind it is natural to want to accumulate contributions at a separate input on the down movement and to let the accumulator enforce the constraint on the up movement by ignoring requests for updates yet another input might be added to trigger involutive steps an involutive step in this context transforms d c into c d alternatively requests for updates might trigger the involutions normally the involution would be triggered only if the accumulated number is already a pseudosegment in which case the involution is an step karpathy the unreasonable effectiveness of recurrent neural networks http keimel bicontinuous domains and some old problems in domain 
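To make the split-input accumulator concrete, here is a minimal sketch of a Self-like neuron: its first input receives the neuron's own previous output (recurrent weight 1 and nothing else), its second input collects the additive updates contributed by the rest of the network, and on the up movement the body simply adds the two. The class names and the dictionary-based sparse-matrix representation are illustrative assumptions, not the paper's implementation.

class SparseMatrix(dict):
    """Matrix with finitely many nonzero elements, keyed by (row, column) index strings."""
    def add(self, other):
        out = SparseMatrix(self)
        for key, val in other.items():
            out[key] = out.get(key, 0.0) + val
        return out

class SelfNeuron:
    """Accumulator with separate inputs for the accumulated value and for the delta.

    x1 receives the previous output of Self (recurrent weight 1, nothing else),
    x2 collects the additive updates contributed by other neurons.
    """
    def __init__(self):
        self.x1 = SparseMatrix()   # accumulated network matrix
        self.x2 = SparseMatrix()   # sum of additive updates for this step
        self.y = SparseMatrix()    # used as the network matrix on the down movement

    def up(self):
        # up movement: add the collected delta to the accumulated matrix
        self.y = self.x1.add(self.x2)
        return self.y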
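Since Warmus numbers may be unfamiliar, the short sketch below spells out the extension introduced above and used in the next section: an element is an ordered pair of endpoints that is not required to be ordered, addition is componentwise, and every element therefore has an additive inverse (a pseudosegment). The class name and the involution helper are assumptions chosen for illustration.

class Warmus:
    """Warmus (directed) interval: an ordered pair (lo, hi) of reals.

    When lo <= hi this is an ordinary interval; when lo > hi it is a
    pseudosegment carrying the contradictory constraint lo <= x <= hi.
    Componentwise addition makes these elements a group (and a vector
    space over the reals), unlike ordinary intervals.
    """
    def __init__(self, lo, hi):
        self.lo, self.hi = float(lo), float(hi)

    def __add__(self, other):
        return Warmus(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        # additive inverse: [a, b] + (-a, -b) = [0, 0]
        return Warmus(-self.lo, -self.hi)

    def involution(self):
        # swap the endpoints: maps pseudosegments to ordinary intervals and back
        return Warmus(self.hi, self.lo)

    def is_pseudosegment(self):
        return self.lo > self.hi

# Example: the additive inverse of the interval [1, 3] is the pseudosegment
# (-1, -3); adding the two gives the zero interval [0, 0].
zero = Warmus(1, 3) + (-Warmus(1, 3))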
theory electronic notes in theoretical computer science kozen semantics of probabilistic programs journal of computer and system sciences krishnaswami reactive programming without spacetime leaks acm sigplan notices lawson stably compact spaces mathematical structures in computer science matthews adding second order functions to kahn data flow technical report research report university of warwick http pollack on connectionist models of natural language processing phd thesis university of illinois at chapter is available at http open problem bicontinuous reflexive domains despite impressive progress in studies of bicontinuity and bitopology in the context of partial inconsistency landscape the issues related to reflexive domains and solutions of recursive domain equation in the context of bicontinuous domains and vector semantics don t seem to be well understood given that dataflow matrix machines equipped with facilities work directly on the level of vector spaces one would hope that the gap between operational and denotational descriptions would be more narrow in this case than for more traditional situations such as untyped popova the arithmetic on proper improper intervals a repository of literature on interval algebraic extensions http siegelmann and sontag on the computational power of neural nets journal of computer and system sciences wadge lucid in jagannathan editor proceedings of the international symposium on lucid and intensional programming pages http conclusion dataflow matrix machines work with arbitrary linear streams in this paper we focus on the case of pure dataflow matrix machines which work with the single kind of linear streams namely the streams of matrices defining the connectivity patterns and weights in pure dmms themselves this allows us to pinpoint the key difference between pure dmms and recurrent neural networks instead of working with streams of numbers pure dataflow matrix machines work with streams of programs with programs being represented as network connectivity matrices warmus calculus of approximations bull acad pol cl iii http zhou wu zhang and zhou minimal gated unit for recurrent neural networks http monotonic evolution by additions warmus numbers vs conventional interval numbers consider a sequence x x x of elements which are monotonically increasing and are obtained by additive corrections from previous elements of the sequence andima kopperman and nickolas an asymmetric ellis if these are conventional interval numbers this situation is only theorem topology and its applications possible for the trivial case of as addition bukatin and matthews linear models of computation and can not reduce the degree of imprecision for conprogram learning in gottlob et editors gcai easychair ventional interval numbers it is not possible to perform nontrivial proceedings in computing vol pages http monotonic evolution of conventional interval numbers by adding other interval numbers to previous elements of the sequence in bukatin matthews and radul dataflow matrix machines as question programmable dynamically expandable generalized for warmus numbers monotonic evolution by additive correcrecurrent neural networks http tions is possible provided that every additive correction summand bukatin matthews and radul programming patterns in xi ai bi is a zero dataflow matrix machines and generalized recurrent neural nets ai bi that is bi ai http references farnell designing sound mit press rectifiers and fluid project fluid github repository https rectified linear unit 
relu is a neuron with the activation function f x max x in the recent years relu became the most popular neuron in the context of deep networks whether it is equally good for recurrent networks remains to be seen the activation function max x is an integral of the heaviside step function lack of smoothness at does not seem to interfere with gradient methods used during neural net training interestingly enough the standard on reals associated with upper and lower topologies on reals are closely related to relu x y f x y y x goodman mansinghka roy bonawitz and tenenbaum church a language for generative models in proc of uncertainty in artificial intelligence http johnston hanna and millar advances in dataflow programming languages acm computing surveys jung and moshier on the bitopological nature of stone duality technical report school of computer science university of birmingham http pure dataflow matrix machines version linear and bilinear neurons in lstm and gated recurrent unit networks bers but a vector of m matrices m n this is what accounts for factoring m n dimension out various schemas of recurrent networks with gates and memory were found to be useful in overcoming the problem of vanishing gradients in the training of recurrent neural networks starting with lstm in and now including a variety of other schemas for a convenient compact overview of lstm gated recurrent units networks and related schemas see section of the standard way to describe lstm and gated recurrent unit networks is to think about them as networks of sigmoid neurons augmented with external memory and gating mechanisms however it is long understood and is used in the present paper that neurons with linear activation functions can be used as accumulators to implement memory it is also known for at least years that bilinear neurons such as neurons multiplying two inputs each of those inputs accumulating linear combinations of output signals of other neurons can be used to modulate signals via multiplicative masks gates and to implement conditional constructions in this fashion see also section of looking at the formulas for ltsm and gated recurrent unit networks in table of one can observe that instead of thinking about these networks as networks of sigmoid neurons augmented with external memory and gating mechanisms one can describe them simply as recurrent neural networks built from sigmoid neurons linear neurons and bilinear neurons without any external mechanisms when ltsm and gated recurrent unit networks are built as recurrent neural networks from sigmoid neurons linear neurons and bilinear neurons some weights are variable and subject to training and some weights are fixed as zeros or ones to establish a particular network topology software prototypes we prototyped lightweight pure dmms in processing in the lightweight pure dmms directory of project fluid which is our open source project dedicated to experiments with the computational architectures based on linear streams for simplicity we used numbers to index rows and columns of the matrices instead of using semantically meaningful strings we recommend to use as indices for work in particular we demonstrated during those experiments that it is enough to consider a set of several constant update matrices together with our network update mechanism described in the present paper to create oscillations of network weights and waves of network connectivity patterns the aug experiment directory assume that the neuron self adds matrices x and x on the up movement 
to obtain matrix y assume that at the starting moment t j for all j j for all j assume that y is a constant matrix such that j for all j j for all j the network starts with a down movement after the first down movement x becomes a copy of y x becomes a copy of y and after the first up movement at the time t changes sign after the second down movement x becomes minus y and after the second up movement at the time t changes sign again etc here we have obtained a simple oscillation of a network weight the network matrix is y at any given moment of time lightweight pure dataflow matrix machines the aug experiment directory pure dataflow matrix machines are networks with a finite part of the network being active at any given moment of time they process streams of matrices with finite number of elements sometimes it is convenient to consider the case of networks of finite size with fixed number of inputs m and fixed number of outputs n if we still would like those networks to process streams of matrices describing network weights and topology those matrices would be finite rectangular matrices m n we call the resulting class of networks lightweight pure dmms if we work with reals of limited precision and consider fixed values of m and n the resulting class is not as its memory space is finite however it is often useful to consider this class for didactic purposes as both theoretical constructions and software prototypes tend to be simpler in this case while many computational effects can already be illustrated in this generality here instead of y we take a collection of constant update matrices y y jn just like in the previous example make sure that the first rows indexed by of those matrices are for the second rows indexed by take j j j j n jn jn j y and the rest of the elements of the second n rows of these matrices are start at t with y matrix having the first row as before and the second row containing the only element j then one can easily see or verify by downloading and running under processing the open source software in the lightweight pure experiment directory of the project fluid that at the moment t the only element in the second row of y is j at the moment t the only element in the second row of y is j and so on until at the moment t n this wave of network tivity pattern loops back to j and then continues looping indefinitely through these n states dimension of the network operators the network has n outputs each of which is a matrix m n hence the overall dimension of the output space is m n the network has m inputs each of which is a matrix m n hence the overall dimension of the input space is m n so overall the dimension of space of all possible linear operators from outputs to inputs which could potentially be used during the down movement is m n however our model actually uses matrices of the dimension m n during the down movement so only a subspace of dimension m n of the overall space of all possible linear operators of the dimension m n is allowed the matrix is applied not to a vector of numbers but to a vector of n matrices m n and yields not a vector of pure dataflow matrix machines version final remarks a the actual implementation of self in the prototype enforces the constraint that j for all j b by making the update matrices dynamically dependent upon input symbols one could embed an arbitrary deterministic finite automaton into this control mechanism in this fashion
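As a summary of the two-stroke cycle and of the first lightweight experiment above, the sketch below simulates a two-neuron lightweight pure DMM in NumPy: the down movement forms every neuron input as a linear combination of all neuron outputs with coefficients taken from the current network matrix, and the up movement lets Self add its two inputs while a constant neuron keeps emitting a fixed update matrix. The concrete index layout and the value -2 used to flip the sign of the oscillating weight are assumptions reconstructed from the description; this is not the Processing code of the project-fluid repository.

import numpy as np

# Index conventions (assumptions): rows = neuron inputs, columns = neuron outputs.
# Inputs:  0 = Self.x1 (accumulated matrix), 1 = Self.x2 (additive update)
# Outputs: 0 = Self.y  (the network matrix), 1 = Const.y (a fixed update matrix)
M_IN, N_OUT = 2, 2

C = np.zeros((M_IN, N_OUT))
C[1, 1] = -2.0                  # the constant update matrix emitted by Const

W = np.zeros((M_IN, N_OUT))     # initial network matrix, also Self's output
W[0, 0] = 1.0                   # x1 <- 1 * Self.y  (copy the matrix back, kept fixed)
W[1, 1] = 1.0                   # x2 <- w * Const.y (w is the oscillating weight)

outputs = [W.copy(), C.copy()]  # current output matrices of the two neurons

for t in range(6):
    # down movement: each input is a linear combination of all outputs,
    # with coefficients read from the current network matrix W = outputs[0]
    W = outputs[0]
    inputs = [sum(W[i, j] * outputs[j] for j in range(N_OUT)) for i in range(M_IN)]
    # up movement: Self adds its two inputs; Const keeps emitting C
    outputs = [inputs[0] + inputs[1], C.copy()]
    print(t, outputs[0][1, 1])  # this weight oscillates between +1 and -1

Running the loop, the entry W[1, 1] alternates between +1 and -1 at every step while the rest of the matrix stays fixed, which is the single-weight oscillation described in the first experiment.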
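The appendix above observes that LSTM and gated recurrent unit networks can be described as ordinary recurrent networks built only from sigmoid, linear and bilinear neurons. The toy step below illustrates that reading on scalars: a sigmoid neuron computes a gate, a linear neuron proposes new content, two bilinear (product) neurons apply the gate and its complement as multiplicative masks, and a linear accumulator sums the results. The wiring and parameter names are assumptions; this is not the exact LSTM or GRU update.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_memory_step(c_prev, x, w_gate=1.0, u_gate=1.0, w_in=1.0):
    g = sigmoid(w_gate * x + u_gate * c_prev)   # sigmoid neuron: the gate
    candidate = w_in * x                        # linear neuron: proposed new content
    # two bilinear neurons apply the multiplicative masks g and (1 - g);
    # a linear accumulator then sums the two gated contributions
    return g * candidate + (1.0 - g) * c_prev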
accepted as a workshop contribution at iclr l earning l onger m emory in r ecurrent n eural n etworks apr tomas mikolov armand joulin sumit chopra michael mathieu marc aurelio ranzato facebook artificial intelligence research broadway new york city ny usa tmikolov ajoulin spchopra myrhev ranzato a bstract recurrent neural network is a powerful model that learns temporal patterns in sequential data for a long time it was believed that recurrent networks are difficult to train using simple optimizers such as stochastic gradient descent due to the vanishing gradient problem in this paper we show that learning longer term patterns in real data such as in natural language is perfectly possible using gradient descent this is achieved by using a slight structural modification of the simple recurrent neural network architecture we encourage some of the hidden units to change their state slowly by making part of the recurrent weight matrix close to identity thus forming a kind of longer term memory we evaluate our model on language modeling tasks on benchmark datasets where we obtain similar performance to the much more complex long short term memory lstm networks hochreiter schmidhuber i ntroduction models of sequential data such as natural language speech and video are the core of many machine learning applications this has been widely studied in the past with approaches taking their roots in a variety of fields goodman young et koehn et in particular models based on neural networks have been very successful recently obtaining performances in automatic speech recognition dahl et language modeling mikolov and video classification simonyan zisserman these models are mostly based on two families of neural networks feedforward neural networks and recurrent neural networks feedforward architectures such as neural networks usually represent time explicitly with a window of the recent history rumelhart et while this type of models work well in practice fixing the window size makes dependency harder to learn and can only be done at the cost of a linear increase of the number of parameters the recurrent architectures on the other hand represent time recursively for example in the simple recurrent network srn elman the state of the hidden layer at a given time is conditioned on its previous state this recursion allows the model to store complex signals for arbitrarily long time periods as the state of the hidden layer can be seen as the memory of the model in theory this architecture could even encode a perfect memory by simply copying the state of the hidden layer over time while theoretically powerful these recurrent models were widely considered to be hard to train due to the vanishing and exploding gradient problems hochreiter bengio et mikolov showed how to avoid the exploding gradient problem by using simple yet efficient strategy of gradient clipping this allowed to efficiently train these models on large datasets by using only simple tools such as stochastic gradient descent and through time williams zipser werbos nevertheless simple recurrent networks still suffer from the vanishing gradient problem as gradients are propagated back through time their magnitude will almost always exponentially shrink this makes memory of the srns focused only on short term patterns practically ignoring longer accepted as a workshop contribution at iclr term dependencies there are two reasons why this happens first standard nonlinearities such as the sigmoid function have a gradient which is close to zero almost 
everywhere this problem has been partially solved in deep networks by using the rectified linear units relu nair hinton second as the gradient is backpropagated through time its magnitude is multiplied over and over by the recurrent matrix if the eigenvalues of this matrix are small less than one the gradient will converge to zero rapidly empirically gradients are usually close to zero after steps of backpropagation this makes it hard for simple recurrent neural networks to learn any long term patterns many architectures were proposed to deal with the vanishing gradients among those the long short term memory lstm recurrent neural network hochreiter schmidhuber is a modified version of simple recurrent network which has obtained promising results on hand writing recognition graves schmidhuber and phoneme classification graves schmidhuber lstm relies on a fairly sophisticated structure made of gates which control flow of information to hidden neurons this allows the network to potentially remember information for longer periods another interesting direction which was considered is to exploit the structure of the hessian matrix with respect to the parameters to avoid vanishing gradients this can be achieved by using secondorder methods designed for objective functions see section in lecun et al unfortunately there is no clear theoretical justification why using the hessian matrix would help nor there is to the best of our knowledge any conclusive thorough empirical study on this topic in this paper we propose a simple modification of the srn to partially solve the vanishing gradient problem in section we demonstrate that by simply constraining a part of the recurrent matrix to be close to identity we can drive some hidden units called context units to behave like a cache model which can capture long term information similar to the topic of a text kuhn de mori in section we show that our model can obtain competitive performance compared to the sequence prediction model lstm on language modeling datasets m odel s imple recurrent network yt yt r u ht u r v p ht a a st b xt xt a b figure a simple recurrent network b recurrent network with context features we consider sequential data that comes in the form of discrete tokens such as characters or words we assume a fixed dictionary containing d tokens our goal is to design a model which is able to predict the next token in the sequence given its past in this section we describe the simple recurrent network srn model popularized by elman and which is the cornerstone of this work a srn consists of an input layer a hidden layer with a recurrent connection and an output layer see figure the recurrent connection allows the propagation through time of information about the state of the hidden layer given a sequence of tokens a srn takes as input the encoding xt of the current token and predicts the probability yt of next one between the current token representation and the prediction there is a hidden layer with m units which store additional information about the previous tokens seen in the sequence more precisely at each time t the state accepted as a workshop contribution at iclr of the hidden layer ht is updated based on its previous state and the encoding xt of the current token according to the following equation ht axt where x exp x is the sigmoid function applied coordinate wise a is the d m token embedding matrix and r is the m m matrix of recurrent weights given the state of these hidden units the network then outputs the probability vector yt 
of the next token according to the following equation yt f u ht where f is the function and u is the m d output matrix in some cases the size d of the dictionary can be significant more than tokens for standard language modeling tasks and computing the normalization term of the function is often the of this type of architecture a common trick introduced in goodman is to replace the function by a hierarchical we use a simple hierarchy with two levels by binning the tokens into d clusters with same cumulative word frequency this reduces the complexity of computing the from o hd to about o h d but at the cost of lower performance around loss in perplexity we will mention explicitly when we use this approximation in the experiments the model is trained by using stochastic gradient descent method with through time rumelhart et williams zipser werbos we use gradient renormalization to avoid gradient explosion in practice this strategy is equivalent to gradient clipping since gradient explosions happen very rarely when reasonable are used the details of the implementation are given in the experiment section it is generally believed that using a strong nonlinearity is necessary to capture complex patterns appearing in data in particular the class of mapping that a neural network can learn between the input space and the output space depends directly on these nonlinearities along with the number of hidden layers and their sizes however these nonlinearities also introduce the socalled vanishing gradient problem in recurrent networks the vanishing gradient problem states that as the gradients get propagated back in time their magnitude quickly shrinks close to zero this makes learning longer term patterns difficult resulting in models which fail to capture the surrounding context in the next section we propose a simple extension of srn to circumvent this problem yielding a model that can retain information about longer context c ontext features in this section we propose an extension of srn by adding a hidden layer specifically designed to capture longer term dependencies we design this layer following two observations the nonlinearity can cause gradients to vanish a fully connected hidden layer changes its state completely at every time step srn uses a fully connected recurrent matrix which allows complex patterns to be propagated through time but suffers from the fact that the state of the hidden units changes rapidly at every time step on the other hand using a recurrent matrix equal to identity and removing the nonlinearity would keep the state of the hidden layer constant and every change of the state would have to come from external inputs this should allow to retain information for longer period of time more precisely the rule would be st bxt where b is the d s context embedding matrix this solution leads to a model which can not be trained efficiently indeed the gradient of the recurrent matrix would never vanish which would require propagation of the gradients up to the beginning of the training set many variations around this type of memory have been studied in the past see mozer for an overview of existing models most of these models are based on srn with no accepted as a workshop contribution at iclr recurrent connections between the hidden units they differ in how the diagonal weights of the recurrent matrix are constrained recently pachitariu sahani showed that this type of architecture can achieve performance similar to a full srn when the size of the dataset and of the model are small 
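A minimal sketch may help fix notation for these update rules. The matrix shapes below follow a column-vector convention, the layer sizes are illustrative rather than the paper's experimental settings, and the decayed context update mentioned in the comment is the structurally constrained variant introduced in the next part of this section.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes (not the paper's experimental settings)
d, m, s_dim = 1000, 50, 40                  # vocabulary, hidden units, context units
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((m, d))       # token embedding matrix
R = 0.1 * rng.standard_normal((m, m))       # fully connected recurrent matrix
U = 0.1 * rng.standard_normal((d, m))       # output matrix
B = 0.1 * rng.standard_normal((s_dim, d))   # context embedding matrix

def srn_step(x_t, h_prev):
    """Simple recurrent network: h_t = sigma(A x_t + R h_{t-1}), y_t = softmax(U h_t)."""
    h_t = sigmoid(A @ x_t + R @ h_prev)
    return h_t, softmax(U @ h_t)

def linear_context_step(x_t, s_prev, alpha=None):
    """Context layer with identity recurrence and no nonlinearity: s_t = s_{t-1} + B x_t.
    Passing a decay alpha gives the constrained variant s_t = (1 - alpha) B x_t + alpha s_{t-1}
    used by the structurally constrained network described below."""
    if alpha is None:
        return s_prev + B @ x_t
    return (1.0 - alpha) * (B @ x_t) + alpha * s_prev
```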
this type of architecture can potentially retain information about longer term statistics such as the topic of a text but it does not scale well to larger datasets pachitariu sahani besides it can been argued that purely linear srns with learned weights will perform very similarly to a combination of cache models with different rates of information decay kuhn de mori cache models compute probability of the next token given a unordered representation of longer history they are well known to perform strongly on small datasets goodman mikolov zweig show that using such contextual features as additional inputs to the hidden layer leads to a significant improvement in performance over the regular srn however in their work the contextual features are using standard nlp techniques and not learned as part of the recurrent model in this work we propose a model which learns the contextual features using stochastic gradient descent these features are the state of a hidden layer associated with a diagonal recurrent matrix similar to the one presented in mozer in other words our model possesses both a fully connected recurrent matrix to produce a set of quickly changing hidden units and a diagonal matrix that that encourages the state of the context units to change slowly see the detailed model in figure the fast layer called hidden layer in the rest of this paper can learn representations similar to models while the slowly changing layer called context layer can learn topic information similar to cache models more precisely denoting by st the state of the p context units at time t the update rules of the model are st ht yt bxt p st axt f u ht v st where is a parameter in and p is a matrix note that there is no nonlinearity applied to the state of the context units the contextual hidden units can be seen as an exponentially decaying bag of words representation of the history this exponential trace memory as denoted by mozer has been already proposed in the context of simple recurrent networks jordan mozer a close idea to our work is to use leaky integration neurons jaeger et which also forces the neurons to change their state slowly however without the structural constraint of scrn it was evaluated on the same dataset as we use further penn treebank by bengio et al interestingly the results we observed in our experiments show much bigger gains over stronger baseline using our model as will be shown later alternative model interpretation if we consider the context units as additional hidden units with no activation function we can see our model as a srn with a constrained recurrent matrix m on both hidden and context units r p where ip is the identity matrix and m is a square matrix of size m p the sum of the number of hidden and context units this reformulation shows explicitly our structural modification of the elman srn elman we constrain a diagonal block of the recurrent matrix to be equal to a reweighed identity and keep an block equal to for this reason we call our model structurally constrained recurrent network scrn adaptive context features fixing the weight to be constant in eq forces the hidden units to capture information on the same time scale on the other hand if we allow this weight to be learned for each unit we can potentially capture context from different time delays pachitariu sahani more precisely we denote by q the recurrent matrix of the contextual hidden layer and we consider the following update rule for the state of the contextual hidden layer st accepted as a workshop 
contribution at iclr st i q bxt where q is a diagonal matrix with diagonal elements in we suppose that these diagonal elements are obtained by applying a sigmoid transformation to a parameter vector diag q this parametrization naturally forces the diagonal weights to stay strictly between and we study in the following section in what situations does learning of the weights help interestingly we show that learning of the weights does not seem to be important as long as one uses also the standard hidden layer in the model e xperiments we evaluate our model on the language modeling task for two datasets the first dataset is the penn treebank corpus which consists of words in the training set the of data and division to training validation and test parts are the same as in mikolov et the performance on this dataset has been achieved by zaremba et al using combination of many big regularized lstm recurrent neural network language models the lstm networks were first introduced to language modeling by sundermeyer et al the second dataset which is moderately sized is called it is composed of a version of the first million characters from wikipedia dump we did split it into training part first characters and development set last characters that we use to report performance after that we constructed the vocabulary and replaced all words that occur less than times by unk token the resulting vocabulary size is about to simplify reproducibility of our results we released both the scrn code and the scripts which construct the datasets in this section we compare the performance of our proposed model against standard srns and lstm rnns which are becoming the architecture of choice for modeling sequential data with dependencies i mplementation d etails we used torch library and implemented our proposed model following the graph given in figure note that following the alternative interpretation of our model with the recurrent matrix defined in eq our model could be simply implemented by modifying a standard srn we fix at unless stated otherwise the number of backpropagation through time bptt steps is set to for our model and was chosen by parameter search on the validation set for normal srn we use just bptt steps because the gradients vanish faster we do a stochastic gradient descent after every forward steps our model is trained with a batch gradient descent of size and a learning rate of we divide the learning rate by after each training epoch when the validation error does not decrease r esults on p enn t reebank c orpus we first report results on the penn treebank corpus using both small and moderately sized models with respect to the number of hidden units table shows that our structurally constrained recurrent network scrn model can achieve performance comparable with lstm models on small datasets with relatively small numbers of parameters it should be noted that the lstm models have significantly more parameters for the same size of hidden layer making the comparison somewhat unfair with the input forget and output gates the lstm has about more parameters than srn with the same size of hidden layer comparison to leaky neurons is also in favor of scrn bengio et al report perplexity reduction from srn to srn leaky neurons while for the same dataset we observed much bigger improvement going from perplexity srn down to scrn table also shows that scrn outperforms the srn architecture even with much less parameters this can be seen by comparing performance of scrn with hidden and contextual units test 
the scrn code can be downloaded at http accepted as a workshop contribution at iclr perplexity versus srn with hidden units perplexity this suggests that imposing a structure on the recurrent matrix allows the learning algorithm to capture additional information to obtain further evidence that this additional information is of a longer term character we did further run experiments on the dataset that contains various topics and thus the longer term information affects the performance on this dataset much more model ngram ngram cache srn srn srn lstm lstm lstm scrn scrn scrn scrn hidden context validation perplexity test perplexity table results on penn treebank corpus baseline simple recurrent nets srn long short term memory rnns lstm and structurally constrained recurrent nets scrn note that lstm models have more parameters than srns for the same size of hidden layer l earning s elf ecurrent w eights we evaluate influence of learning the diagonal weights of the recurrent matrix for the contextual layer for the following experiments we used a hierarchical with classes on the penn treebank corpus to speedup the experiments in table we show that when the size of the hidden layer is small learning the diagonal weights is crucial this result confirms the findings in pachitariu sahani however as we increase the size of our model and use sufficient number of hidden units learning of the weights does not give any significant improvement this indicates that learning the weights of the contextual units allows these units to be used as representation of the history some contextual units can specialize on the very recent history for example for close to the contextual units would be part of a simple bigram language model with various learned weights the model can be seen as a combination of cache and bigram models when the number of standard hidden units is enough to capture short term patterns learning the weights does not seem crucial anymore keeping this observation in mind we fixed the diagonal weights when working with the corpus model scrn scrn scrn scrn scrn scrn hidden context fixed weights adaptive weights table perplexity on the test set of penn treebank corpus with and without learning the weights of the contextual features note that in these experiments we used a hierarchical r esults on t ext our next experiment involves the corpus which is significantly larger than the penn treebank as this dataset contains various articles from wikipedia the longer term information such as current topic plays bigger role than in the previous experiments this is illustrated by the gains when cache is added to the baseline model the perplexity drops from to reduction accepted as a workshop contribution at iclr we report experiments with a range of model configurations with different number of hidden units in table we show that increasing the capacity of standard srns by adding the contextual features results in better performance for example when we add contextual units to srn with hidden units the perplexity drops from to reduction such model is also much better than srn with hidden units perplexity model scrn scrn scrn hidden context context context context context table structurally constrained recurrent nets perplexity for various sizes of the contextual layer reported on the development set of dataset in table we see that when the number of hidden units is small our model is better than lstm despite the lstm model with hidden units being larger the scrn with hidden and contextual features achieves 
better performance on the other hand as the size of the models increase we see that the best lstm model is slightly better than the best scrn perplexity versus as the perplexity gains for both lstm and scrn over srn are much more significant than in the penn treebank experiments it seems likely that both models actually model the same kind of patterns in language model srn srn srn lstm lstm lstm scrn scrn scrn hidden context perplexity on development set table comparison of various recurrent network architectures evaluated on the development set of c onclusion in this paper we have shown that learning longer term patterns in real data using recurrent networks is perfectly doable using standard stochastic gradient descent just by introducing structural constraint on the recurrent weight matrix the model can then be interpreted as having quickly changing hidden layer that focuses on short term patterns and slowly updating context layer that retains longer term information empirical comparison of scrn to long short term memory lstm recurrent network shows very similar behavior in two language modeling tasks with similar gains over simple recurrent network when all models are tuned for the best accuracy moreover scrn shines in cases when the size of models is constrained and with similar number of parameters it often outperforms lstm by a large margin this can be especially useful in cases when the amount of training data is practically unlimited and even models with thousands of hidden neurons severely underfit the training datasets we believe these findings will help researchers to better understand the problem of learning longer term memory in sequential data our model greatly simplifies analysis and implementation of recurrent networks that are capable of learning longer term patterns further we published the code that allows to easily reproduce experiments described in this paper at the same time it should be noted that none of the above models is capable of learning truly long term memory which has a different nature for example if we would want to build a model that can accepted as a workshop contribution at iclr store arbitrarily long sequences of symbols and reproduce these later it would become obvious that this is not doable with models that have finite capacity a possible solution is to use the recurrent net as a controller of an external memory which has unlimited capacity for example in joulin mikolov a memory is used for such task however a lot of research needs to be done in this direction before we will develop models that can successfully learn to grow in complexity and size when solving increasingly more difficult tasks r eferences bengio yoshua simard patrice and frasconi paolo learning dependencies with gradient descent is difficult neural networks ieee transactions on bengio yoshua nicolas and pascanu razvan advances in optimizing recurrent networks in icassp dahl george e yu dong deng li and acero alex deep neural networks for speech recognition audio speech and language processing ieee transactions on elman jeffrey finding structure in time cognitive science goodman joshua classes for fast maximum entropy training in acoustics speech and signal processing icassp ieee international conference on volume pp ieee goodman joshua a bit of progress in language modeling computer speech language graves alex and schmidhuber juergen offline handwriting recognition with multidimensional recurrent neural networks in advances in neural information processing systems pp graves alex 
and schmidhuber framewise phoneme classification with bidirectional lstm and other neural network architectures neural networks hochreiter sepp the vanishing gradient problem during learning recurrent neural nets and problem solutions international journal of uncertainty fuzziness and systems hochreiter sepp and schmidhuber long memory neural computation jaeger herbert mantas popovici dan and siewert udo optimization and applications of echo state networks with neurons neural networks jordan michael attractor dynamics and parallelism in a connectionist sequential machine proceedings of the eighth annual conference of the cognitive science society pp joulin armand and mikolov tomas inferring algorithmic patterns with recurrent nets arxiv preprint koehn philipp hoang hieu birch alexandra chris federico marcello bertoldi nicola cowan brooke shen wade moran christine zens richard et al moses open source toolkit for statistical machine translation in proceedings of the annual meeting of the acl on interactive poster and demonstration sessions pp association for computational linguistics kuhn roland and de mori renato a natural language model for speech recognition pattern analysis and machine intelligence ieee transactions on lecun yann bottou leon orr genevieve and klaus efficient backprop neural networks tricks of the trade pp accepted as a workshop contribution at iclr mikolov statistical language models based on neural networks phd thesis ph thesis brno university of technology mikolov tomas and zweig geoffrey context dependent recurrent neural network language model in slt pp mikolov tomas kombrink stefan burget lukas cernocky jh and khudanpur sanjeev extensions of recurrent neural network language model in acoustics speech and signal processing icassp ieee international conference on pp ieee mozer michael a focused algorithm for temporal pattern recognition complex systems mozer michael neural net architectures for temporal sequence processing in santa fe institute studies in the sciences of complexity volume pp publishing co nair vinod and hinton geoffrey rectified linear units improve restricted boltzmann machines in proceedings of the international conference on machine learning pp pachitariu marius and sahani maneesh regularization and nonlinearities for neural language models when are they needed arxiv preprint rumelhart david e hinton geoffrey e and williams ronald j learning internal representations by error propagation technical report dtic document simonyan karen and zisserman andrew convolutional networks for action recognition in videos in advances in neural information processing systems pp sundermeyer martin ralf and ney hermann lstm neural networks for language modeling in interspeech werbos paul generalization of backpropagation with application to a recurrent gas market model neural networks williams ronald j and zipser david learning algorithms for recurrent networks and their computational complexity theory architectures and applications pp young steve evermann gunnar gales mark hain thomas kershaw dan liu xunying moore gareth odell julian ollason dave povey dan et al the htk book volume entropic cambridge research laboratory cambridge zaremba wojciech sutskever ilya and vinyals oriol recurrent neural network regularization arxiv preprint
convex regularization for apr tensor regression garvesh ming and han university of abstract in this paper we present a general convex optimization approach for solving highdimensional multiple response tensor regression problems under structural assumptions we consider using convex and weakly decomposable regularizers assuming that the underlying tensor lies in an unknown subspace within our framework we derive general risk bounds of the resulting estimate under fairly general dependence structure among covariates our framework leads to upper bounds in terms of two very simple quantities the gaussian width of a convex set in tensor space and the intrinsic dimension of the tensor subspace to the best of our knowledge this is the first general framework that applies to multiple response problems these general bounds provide useful upper bounds on rates of convergence for a number of fundamental statistical models of interest including regression vector models tensor models and pairwise interaction models moreover in many of these settings we prove that the resulting estimates are minimax optimal we also provide a numerical study that both validates our theoretical guarantees and demonstrates the breadth of our framework departments of statistics and computer science and optimization group at wisconsin institute for discovery university of university avenue madison wi the research of garvesh raskutti is supported in part by nsf grant morgridge institute for research and department of statistics university of university avenue madison wi the research of ming yuan and han chen was supported in part by nsf frg grant and nih grant introduction many modern scientific problems involve solving statistical problems where the sample size is small relative to the ambient dimension of the underlying parameter to be estimated over the past few decades there has been a large amount of work on solving such problems by imposing structure on the parameter of interest in particular sparsity and other subspace assumptions have been studied extensively both in terms of the development of fast algorithms and theoretical guarantees see buhlmann and van de geer and hastie et al for an overview most of the prior work has focussed on scenarios in which the parameter of interest is a vector or matrix increasingly common in practice however the parameter or object to be estimated naturally has a higher order tensor structure examples include hyperspectral image analysis li and li computed tomography semerci et radar signal processing sidiropoulos and nion audio classification mesgarani et and text mining cohen and collins among numerous others it is much less clear how the low dimensional structures inherent to these problems can be effectively accounted for the main purpose of this article is to fill in this void and provide a general and unifying framework for doing so consider a general tensor regression problem where covariate tensors x i and response tensors y i rdm are related through y i hx i t i i i here t is an unknown parameter of interest and i s are independent and identically distributed noise tensors whose entries are independent and identically distributed centred normal random variables with variance further for simplicity we assume the covariates x i are gaussian but with fairly general dependence assumptions the notation will refer throughout this paper to the standard inner product taken over appropriate euclidean spaces hence for a and b ha bi x dm x jm jm r jm is the usual inner product if m n and 
if m n then ha bi rdm such that its jm jn entry is given by ha bi jm jn x dm x jm jm jm jn jm the goal of tensor regression is to estimate the coefficient tensor t based on observations x i y i i n in addition to the canonical example of tensor regression with y a scalar response m n many other commonly encountered regression problems are also special cases of the general tensor regression model regression see anderson vector autoregressive model see and pairwise interaction tensor model see rendle and are some of the notable examples in this article we provide a general treatment to these seemingly different problems our main focus here is on situations where the dimensionality dk s are large when compared with the sample size in many practical settings the true regression coefficient tensor t may have certain types of structure because of the high ambient dimension of a regression coefficient tensor it is essential to account for such a structure when estimating it sparsity and are the most common examples of such low dimensional structures in the case of tensors sparsity could occur at the level level or level depending on the context and leading to different interpretations there are also multiple ways in which may be present when it comes to higher order tensors either at the original tensor level or at the matricized tensor level in this article we consider a general class of convex regularization techniques to exploit either type of structure in particular we consider the standard convex regularization framework tb arg min n x i ky ha x i a where the regularizer r is a norm on and is a tuning parameter hereafter for a tensor a kakf ha we derive general risk bounds for a family of weakly decomposable regularizers under fairly general dependence structure among the covariates these general upper bounds apply to a number of concrete statistical inference problems including the aforementioned regression vector models tensor models and pairwise interaction tensors where we show that they are typically optimal in the minimax sense in developing these general results we make several contributions to a fast growing literature on high dimensional tensor estimation first of all we provide a unified and principled approach to exploit the low dimensional structure in these tensor problems in doing so we incorporate an extension of the notion of decomposability originally introduced by negahban et al for vector and matrix models to weak decomposability previously introduced in van de geer which allows us to handle more delicate tensor models such as the nuclear norm regularization for tensor models moreover we provide for the regularized least squared estimate given by a general risk bound under an easily interpretable condition on the design tensor the risk bound we derive is presented in terms of merely two geometric quantities the gaussian width which depends on the choice of regularization and the intrinsic dimension of the subspace that the tensor t lies in we believe this is the first general framework that applies to multiple responses and general dependence structure for the covariate tensor x finally our general results lead to novel upper bounds for several important regression problems involving tensors regression models and pairwise interaction models for which we also prove that the resulting estimates are minimiax rate optimal with appropriate choices of regularizers our framework incorporates both tensor structure and multiple responses which present a number of challenges 
compared to previous approaches these challenges manifest themselves both in terms of the choice of regularizer r and the technical challenges in the proof of the main result firstly since the notion of is more generic for tensors meaning there are a number of choices of convex regularizer r and these must satisfy a form of weak decomposability and provide optimal rates multiple responses and the flexible dependence structure among the covariates also present significant technical challenges for proving restricted strong convexity a key technical tool for establishing rates of convergence in particular a uniform law lemma is required instead of classical techniques as developed in negahban and wainwright raskutti et al that only apply to univariate responses the remainder of the paper is organized as follows in section we introduce the general framework of using weakly decomposable regularizers for exploiting structures in high dimensional tensor regression in section we present a general upper bound for weakly decomposable regularizers and discuss specific risk bounds for commonly used sparsity or regularizers for tensors in section we apply our general result to three specific statistical problems namely regression multivariate autoregressive model and the pairwise interaction model we show that in each of the three examples appropriately chosen weakly decomposable regularizers leads to minimax optimal estimation of the unknown parameters numerical experiments are presented in section to further demonstrate the merits and breadth of our approach proofs are provided in section methodology recall that the regularized estimate is given by n x ky i ha x i a tb arg min d n for brevity we assume implicitly hereafter that the minimizer on its left hand side is uniquely defined our development here actually applies to the more general case where tb can be taken as an arbitrary element from the set of the minimizers of particularly interest here is the weakly decomposable convex regularizers extending a similar concept introduced by negahban et al for vectors and matrices let a be an arbitrary linear subspace of and its orthogonal complement a ha bi for all b a we call a regularizer r weakly decomposable with respect to a pair a b where b a if there exist a constant cr such that for any a and b b r a b r a cr r b in particular if holds for any b b a we say r is weakly decomposable with respect to a a more general version of this concept was first introduced in van de geer because r is a norm by triangular inequality we also have r a b r a r b many of the commonly used regularizers for tensors are weakly decomposable or decomposable for short when cr our definition of decomposability naturally extends from similar notion for vectors n and matrices n introduced by negahban et al we also allow for more general choices of cr here to ensure a wider applicability for example as we shall see the popular tensor nuclear norm regularizer is decomposable with respect to appropriate linear subspaces with cr but not decomposable if cr we have now described a catalogue of commonly used regularizers for tensors and argue that they are all decomposable with respect to appropriately chosen subspaces of to fix ideas we shall focus in what follows on estimating a tensor t that is n although our discussion can be straightforwardly extended to tensors sparsity regularizers an obvious way to encourage sparsity is to impose the vector penalty on the entries of a r a x x x following the same idea as the lasso for linear 
regression see tibshirani this is a canonical example of decomposable regularizers for any fixed i where d d write a i b i a for all it is clear that i a for all i and r a defined by is decomposable with respect to a with cr in many applications sparsity arises with a more structured fashion for tensors for example a fiber or a slice of a tensor is likely to be zero simultaneously fibers of a tensor a are the collection of vectors and fibers can be defined in the same fashion to fix ideas we focus on fibers sparsity among fibers can be exploited using the regularizer r a x x k similar to the group lasso see yuan and lin where k k stands for the usual vector norm similar to the vector regularizer the group regularizer is also decomposable for any fixed i write a i b i a for all it is clear that i a for all i and r a defined by is decomposable with respect to a with cr note that in defining the regularizer in instead of vector norm other q q norms could also be used see turlach et al sparsity could also occur at the slice level the slices of a tensor a are the collection of matrices let k k be an arbitrary norm on matrices then the following group regularizer can be considered r a x typical examples of the matrix norm that can be used in include frobenius norm and nuclear norm among others in the case when k kf is used r is again a decomposable regularizer with respect to a i b i a for all for any i now consider the case when we use the matrix nuclear norm k in let and j be two sequences of projection matrices on and respectively let j a j a and b j a j by pinching inequality see bhatia it can be derived that r is decomposable with respect to a j and b j regularizers in addition to sparsity one may also consider tensors with there are multiple notions of rank for tensors see koldar and bader for a recent review in particular the cp rank is defined as the smallest number r of tensors needed to represent a tensor a r x uk vk wk where uk vk and wk to encourage a low rank estimate we can consider the nuclear norm regularization following yuan and zhang we define the nuclear norm of a through its dual norm more specifically let the spectral norm of a be given by kaks ha u v wi max kuk kvk kwk then its nuclear norm is defined as max ha bi kbks we shall then consider the regularizer r a we now show this is also a weakly decomposable regularizer let pk be a projection matrix in rdk define a r x uk vk wk write q and where i pk lemma for any a and projection matrices pk in rdk k we have k lemma is a direct consequence from the characterization of for tensor nuclear norm given by yuan and zhang and can be viewed as a tensor version of the pinching inequality for matrices write a a qa a b a a a and by lemma r defined by is weakly decomposable with respect to a and b with cr we note that a counterexample is also given by yuan and zhang which shows that for the tensor nuclear norm we can not take cr another popular way to define tensor rank is through the tucker decomposition recall that the tucker decomposition of a tensor a is of the form x x x so that u v and w are orthogonal matrices and the core tensor s is such that any two slices of s are orthogonal the triplet are referred to as the tucker ranks of a it is not hard to see that if holds then the tucker ranks can be equivalently interpreted as the dimensionality of the linear spaces spanned by uk k r vk k r and wk k r respectively the following relationship holds between cp rank and tucker ranks max r min a convenient way to encourage low tucker 
ranks in a tensor is through matricization let denote the matricization of a tensor that is a is a matrix whose column vectors are the the fibers of a and can also be defined in the same fashion it is clear rank mk a rk a a natural way to encourage is therefore through nuclear norm regularization kmk a r a by the pinching inequality for matrices r defined by is also decomposable with respect to a and b with cr risk bounds for decomposable regularizers we now establish risk bounds for general decomposable regularizers in particular our bounds are given in terms of the gaussian width of a suitable set of tensors recall that the gaussian width of a set s is given by wg s e supha gi where g is a tensor whose entries are independent n random variables see gordon for more details on gaussian width note that the gaussian width is a geometric measure of the volume of the set s and can be related to other volumetric characterizations see pisier also define the unit ball for the r as follows br a r a we impose the mild assumption that kakf r a which ensures that the regularizer r encourages structure now we define a quantity that relates the size of the norm r a to the frobenius norm kakf over the the subspace a following negahban et al for a subspace a of define its compatibility constant s a as s a a kakf sup which can be interpreted as a notion of intrinsic dimensionality of a now we turn our attention to the covariate tensor denote by x i vec x i the vectorized covariate from the ith sample with slight abuse of notation write x vec x x n the concatenated covariates from all n samples for convenience let dm dm further for brevity we assume a gaussian design so that x n where cov x x rndm with more technical work our results may be extended beyond gaussian designs we note that we do not require that the sample tensors x i be independent we shall assume that has bounded eigenvalues which we later verify for a number of statistical examples let and represent the smallest and largest eigenvalues of a matrix respectively in what follows we shall assume that for some constants c cu note that in particular if all covariates x i i n are independent and identically distributed then has a block diagonal structure and boils down to similar conditions on cov x i x i however is more general and applicable to settings in which the x i s may be dependent such as models which we shall discuss in further detail in section we are now in position to state our main result on the risk bounds in terms of both frobenius norm k kf and the empirical norm k kn where for a tensor a which we define as n kha x i n the main reason we focus on random gaussian design is so that we can prove a uniform law that relates the empirical norm defined above to the frobenius norm of a tensor in a see lemma lemma is analogous to restricted strong convexity defined in negahban et al but since we are dealing with multiple responses a more refined technique is required to prove lemma theorem suppose that holds for a tensor t from a linear subspace where holds let tb be defined by where the regularizer r is decomposable with respect to a and for some linear subspace a if cr wg br cr n then there exists a constant c such that with probability at least exp br n o c r u max ktb t ktb t s a cr when n is sufficiently large assuming that the right hand side converges to zero as n increases as stated in theorem our upper bound boils down to bounding two quantities s a and wg br which are both purely geometric quantities to provide some intuition wg 
br captures how large the r norm is relative to the k kf norm and s a captures the low dimension of the subspace a several technical remarks are in order note that wg br can be expressed as expectation of the dual norm of according to r see rockafellar for details the dual norm is given by b sup ha bi where the supremum is taken over tensors of the same dimensions as b it is straightforward to see that wg br e g to the best of our knowledge this is the first general result that applies to multiple responses as mentioned earlier incorporating multiple responses presents a technical challenge see lemma which is a uniform law analogous to restricted strong convexity while theorem focusses on gaussian design results can be extended to random design using more sophisticated techniques see mendelson zhou or for fixed design by assuming covariates deterministically satisfy the condition in lemma since the focus of this paper is on general dependence structure we assume random gaussian design one important practical challenge is that cu and c are typically unknown and these clearly influence the choice of this is a common challenge for statistical inference and we don t address this issue in this paper in practice is typically chosen through a more sophisticated choice of based on estimation of and other constants remains an open question another important and open question is for what choices of is the upper bound optimal up to a constant in section we provide specific examples in which we provide minimax lower bounds which match the upper bounds up to constant however as we see for tensor regression for tensor regression discussed in section we are not aware of a convex regularizer that matches the minimax lower bound now we develop upper bounds on both quantities in different scenarios as in the previous section we shall focus on third order tensor in the rest of the section for the ease of exposition sparsity regularizers we first consider sparsity regularizers described in the previous section and sparsity recall that vectorized regularizer a x x x could be used to exploit sparsity clearly a max it can then be shown that lemma there exists a constant c such that p wg c log let s a x x x i s for an arbitrary a s write i a then is decomposable with respect to a i a as defined by it is easy to verify that for any a s a i b i a kbkf sup in light of and theorem implies that n o s log d d d b b sup max t kn t kf n t s with high probability by taking r log n where is the regularized least squares estimate defined by when using regularizer a similar argument can also be applied to sparsity to fix ideas we consider here only sparsity among fibers in this case we use a group lasso type of regularizer a x x k then a max k lemma there exists a constant c such that wg c let p s a max log x x i s similar to the previous case for an arbitrary a s write i a then is decomposable with respect to a i a as defined by it is easy to verify that for any a s a i b i a kbkf sup in light of and theorem implies that n o s max d log d d b b sup max t kn t kf n t s with high probability by taking r max log n where is the regularized least squares estimate defined by when using regularizer comparing with the rates for and sparsity regularization we can see the benefit of using group lasso type of regularizer when sparsity is likely to occur at the fiber level more specifically consider the case when there are a total of nonzero entries from nonzero fibers if an regularization is applied we can achieve the risk bound log n 
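Both regularizers can be plugged into the regularized least-squares program through their proximal operators; the sketch below is a minimal illustration under simplifying assumptions (scalar responses, a plain proximal-gradient loop, illustrative function names and step-size rule) rather than the implementation used for the paper's experiments.

```python
import numpy as np

def soft_threshold(A, tau):
    """Entrywise prox of the vectorized l1 penalty."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def fiber_group_threshold(A, tau):
    """Blockwise prox of the group penalty on mode-3 fibers A[i, j, :]."""
    norms = np.linalg.norm(A, axis=2, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * A

def regularized_tensor_ls(X, y, lam, prox, n_iter=500):
    """Proximal-gradient sketch of the regularized least-squares estimator for
    scalar responses y_i = <T, X_i> + eps_i, with X of shape (n, d1, d2, d3)."""
    n = X.shape[0]
    T = np.zeros(X.shape[1:])
    step = n / np.linalg.norm(X.reshape(n, -1), 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        resid = np.tensordot(X, T, axes=3) - y            # <T, X_i> - y_i
        grad = np.tensordot(resid, X, axes=(0, 0)) / n
        T = prox(T - step * grad, step * lam)
    return T

# Entrywise sparsity: T_hat = regularized_tensor_ls(X, y, lam, soft_threshold)
# Fiber sparsity:     T_hat = regularized_tensor_ls(X, y, lam, fiber_group_threshold)
```

With the tuning parameter chosen at the scale indicated above (constants omitted), the two choices of prox correspond to the entrywise and fiber-wise risk bounds being compared here.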
on the other hand if group regularization is applied then the risk bound t becomes max log n when nonzero entries are clustered in fibers we may expect in this case enjoys performance superior to that of since log is larger than max log t sparsity and structure now we consider sparsity and structure again to fix ideas we consider here only sparsity among slices as discussed in the previous section two specific types of regularizers could be employed a x kf and a x where recall that denotes the nuclear norm of a matrix that is the sum of all singular values note that a max kf then we have the following result lemma there exists a constant c such that wg c let p max log s a x i s for an arbitrary a s write i a then is decomposable with respect to a i a as defined by it is easy to verify that for any a s a i a b i a kbkf sup based on and theorem implies that n o s max d d log d sup max t t n t s with high probability by taking r max log n where is the regularized least squares estimate defined by when using regularizer alternatively for a max ks we have the following lemma there exists a constant c such that wg c now consider p max log r a x rank r for an arbitrary a r denote by and the projection onto the row and column space of respectively it is clear that a b j as defined by in addition recall that is decomposable with respect to b j and a j as defined by it is not hard to see that for any a r a j from which we can derive that lemma for any a r a j b kbkf sup in light of and theorem implies that n o r max d d log d sup max t t n t r with high probability by taking r max log n where is the regularized least squares estimate defined by when using regularizer comparing with the rates for estimates with regularizers and we can see the benefit of using when the nonzero slices are likely to be of in particular consider the case when there are nonzero slices and each nonzero slice has rank up to then applying leads to risk bound t max log n whereas applying leads to t r max log n it is clear that is a better estimator when r regularizers we now consider regularizers that encourages low rank estimates we begin with the tensor nuclear norm regularization a recall that a kaks lemma there exists a constant c such that p wg c now let r a max a a a r for an arbitrary a r denote by the projection onto the linear space spanned by the and fibers respectively as we argued in the previous section is weakly decomposable with respect to a and b and a b where a and b are defined by and respectively lemma for any a r b a sup kbkf lemmas and show that n o d d d b b sup max t kn t kf n t r with high probability by taking r n where is the regularized least squares estimate defined by when using regularizer next we consider the regularization via matricization a a a a it is not hard to see that a max a ks a ks a ks lemma there exists a constant c such that p wg c max on the other hand lemma for any a r a b kbkf sup lemmas and suggest that n o r max d d d d d d sup max t t n t r with high probability by taking r max n where is the regularized least squares estimate defined by when using regularizer comparing with the rates for estimates with regularizers and we can see the benefit of using for any t r if we apply regularizer then t n this is to be compared with the risk bound for matricized regularization t r max n obviously always outperform since r min the advantage of is typically rather significant since in general r min on the other hand is more amenable for computation both upper bounds on frobenius error on 
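As a rough numerical check on how this dual norm drives the choice of the tuning parameter, one can estimate the expectation of the dual norm of a standard Gaussian tensor by simulation. The routine below is an illustrative sketch, not part of the paper.

```python
import numpy as np

def dual_norm_expectation(p, m, n_mc=200, seed=0):
    """Monte Carlo estimate of E max_j ||G_{j.}||_F for a p x m standard Gaussian G,
    i.e. the dual norm of the feature-wise group penalty defined above."""
    rng = np.random.default_rng(seed)
    draws = rng.standard_normal((n_mc, p, m))
    return float(np.linalg.norm(draws, axis=2).max(axis=1).mean())

# For large p this quantity grows roughly like sqrt(m) + sqrt(2 log p), which is
# consistent with a tuning parameter of order sigma * sqrt(max(m, log p) / n),
# the scaling appearing in the theorem that follows.
print(dual_norm_expectation(p=500, m=10))
```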
and are novel results and complement the existing results on tensor completion gandy et al mu et al and yuan and zhang neither nor is minimax optimal and remains an interesting question as to whether there exists a convex regularization approach that is minimax optimal specific statistical problems in this section we apply our results to several concrete examples where we are attempting to estimate a tensor under certain sparse or low rank constraints and show that the regularized least squares estimate tb is typically minimiax rate optimal with appropriate choices of regularizers in particular we focus on the aspect of the general framework to provide novel upper bounds and matching minimax lower bounds regression with large p the first example we consider is the regression model i yk p m x x i i xj tj k where i n represents the index for each sample k m represents the index for each response and j p represents the index for each feature for the regression problem we have n m m which represents the total number of responses and p which represent the total number of parameters since we are in the setting where p is large but only a small number s are relevant we define the subspace p x i kf s furthermore for each i we assume x i where each entry of x i x i k j corresponds to the j th feature for the k th response for simplicity we assume the x i s are independent e the penalty function we are considering is gaussian with covariance r a p x kf and the corresponding dual function applied to the gaussian tensor g is g max kg j kf theorem under the regression model with t and independent gause e if sian design where u r such that max log p n converges to zero as n increases then there exist some constants such that with probability at least n o c max ktb t ktb t c when n is sufficiently large where tb is the regularized least squares estimate defined by with regularizer given by in addition s max log min max kte t n te t for some constant with probability at least where the minimum is taken over all estimators te based on data x i y i i n theorem shows that when taking r max log p n the regularized least squares estimate defined by with regularizer given by achieves minimax optimal rate of convergence over the parameter space alternatively there are settings where the effect of covariates on the multiple tasks may be of low rank structure in such a situation we may consider p x a r rank a j r an appropriate penalty function in this case is r a p x ka j and the corresponding dual function applied to g is g max kg j ks theorem under the regression model with t and independent gause e if sian design where r such that max m log p n converges to zero as n increases then there exist some constants such that with probability at least n o c max ktb t ktb t c when n is sufficiently large where tb is the regularized least squares estimate defined by with regularizer given by in addition r max m log min max kte t n te t for some constant with probability at least where the minimum is taken over all estimators te based on data x i y i i n again theorem shows that by taking r max m log p n the regularized least squares estimate defined by with regularizer given by achieves minimax optimal rate of convergence over the parameter space comparing with optimal rates for estimating a tensor from one can see the benefit and importance to take advantage of the extra low rankness if the true coefficient tensor is indeed from as far as we are aware these are the first results that provide upper bounds and matching 
minimax lower bounds for regression with sparse or slices as pointed out earlier the challenge in going from scalar to multiple response is proving lemma which is an analog of restricted strong convexity multivariate sparse models now we consider the setting of vector models in this case our generative model is x p x aj x t where t n represents the time index j p represents the lag index x t is an vector t n represents the additive noise note that the parameter tensor t is an m m p tensor so that aj and tk j represents the of the k th variable on the th variable at lag j this model is studied by basu and michailidis where p is relatively small to avoid introducing dependence and m is large our main results allow more general structure and regularization schemes than those considered in basu and michailidis since we assume the number of series m is large and there are possible interactions between the series we assume there are only s interactions in total m x m x a i ak s the penalty function we are considering is r a m x m x kak k and the corresponding dual function applied to g is g max kgk k the challenge in this setting is that the x s are highly dependent and we use the results developed in basu and michailidis to prove that is satisfied prior to presenting the main results we introduce concepts developed in basu and michailidis that play a role in determining the constants and which relate to the stability of the processes a gaussian time series is defined by its matrix function h cov x t x for all t h z further we define the spectral density function x fx to ensure the spectral density is bounded we make the following assumption m fx ess sup fx further we define the matrix polynomial a z p x aj z j where aj denote the matrices and z represents any point on the complex plane note that for a stable invertible ar p process fx a e a e we also define the lower extremum of the spectral density m fx ess inf fx note that m fx and m fx satisfy the following bounds m fx a and m fx a where a min a z a z and a max a z a z from a straightforward calculation we have that for any fixed e hence and now we state our main result for models theorem under the vector model defined by with t if s max p log m such that converges to zero as n increases then there exist some constants such that with probability at least n o max b b max kt t kn kt t kf when n is sufficiently large where tb is the regularized least squares estimators defined by with regularizer given by in addition s max p log s e min max kt t kf n te t for some constant with probability at least where the minimum is taken over all estimators te based on data x t t n p theorem provides to our best knowledge the only lower bound result for multivariate time series the upper bound is also novel and is different from proposition in basu and michailidis since we impose sparsity only on the large m directions and not over the p lags whereas basu and michailidis impose sparsity through vectorization note that proposition in basu and michailidis follows directly from lemma with p and using the sparsity regularizer basu and michailidis vectorize the problem and prove restricted strong convexity whereas since we leave the problem as a problem we requried the more refined technique used for proving lemma pairwise interaction tensor models finally we consider the tensor regression where t follows a pairwise interaction model more specifically x i y i i n are independent copies of a random couple x and y r such that y hx t i and here a such that a a and 
the pairwise interaction was used originally by rendle et al rendle and schmidtthieme for personalized tag recommendation and later analyzed in chen et al hoff briefly introduced a single index additive model amongst other tensor models which is a of the pairwise interaction model the regularizer we consider is r a ka ka ka it is not hard to see that r defined above is decomposable with respect to a for any projection matrices let a a a and a max rank a r for simplicity we assume gaussian design so theorem under the pairwise interaction model with t if r max n such that converges to zero as n increases then there exist constants such that with probability at least min n o b b max kt t kn kt t kf when n is sufficiently large where tb is the regularized least squares estimate defined by with regularizer given by in addition r max min max kte t n te t for some constant with probability at least where the minimum is taken over all estimate te based on data x i y i i n as in the other settings theorem establishes the minimax optimality of the regularized least squares estimate when using an appropriate convex decomposable regularizer since this is single response and the norm involves matricization this result is a straightforward extension to earlier results numerical experiments in this section we provide a series of numerical experiments that both support our theoretical results and display the flexibility of our general framework in particular we consider several different models including tensor regression with a scalar response section tensor regression section regression with both group sparsity and regularizers section sparse autoregressive models section and pairwise interaction models section to perform the simulations in a computationally tractable way we adapt the block coordinate descent approaches in case developed by simon et al and those developed by qin et al for univariate response settings to capture group sparsity and regularizers to fix ideas in all numerical experiments the covariate tensors x i s were independent standard gaussian ensembles except for the multivariate models and the noise i s are random tensors with elements following n independently as to the choice of tuning parameter we adopt grid search on to find the one with the least estimation error in terms of mean squared error in all our numerical examples tensor regression first we consider a tensor regression model y i hb x i i i where b y i i r x i the regression coefficient tensor b was generated as follows the first s slices are standard normal ensembles and the remaining slices are set to be zero naturally we consider here the regularizer min n d x x i i ky ha x ikf kf j figure shows the mean squared error of the estimate averaged over runs with standard n mse mse mse n d d n s figure mean squared error of the regularization for third order tensor regression the plot was based on simulation runs and the error bars in each panel represent one standard deviation deviation versus d n and s respectively in the left and middle panels we set s whereas in the right panel we fixed d as we can observe the mean squared error increases approximately according to s and which agrees with the risk bound given in lemma we also considered a setting where b is more specifically the s nonzero slices were random matrices in this case the lowrankness regularizer can be employed n x x i ky ha x i min j the performance of the estimate averaged over simulation runs is summarized by figure n d mse n mse mse d n r figure mean 
squared error for third order tensor regression with slices tensor coefficients the plot was based on simulation runs and the error bars in each panel represent one standard deviation where in the left and middle panels r and in the right panel d once again our results are consistent with our theoretical results tensor regression although we have focused on third order tensors for brevity our treatment applies to higher order tensors as well as an illustration we now consider fourth order models where b y i i r x i to generate tensors we impose low cp rank as follows generate four independent groups of r independent random vectors of unit length and via performing an svd of gaussian random matrix two times and keeping the r pairs of leading singular vectors and then compute the yielding p a tensor b we consider two different regularization schemes first we impose structure through matricization min n x i ky ha x i a secondly we use the square matricization as follows n x i ky ha x i a min where reshape a fourth order tensor into a matrix by collapsing its first two indices and last two indices respectively table shows the average error rmse for short for both approaches as we can see the approach appears superior to the approach which is also predicted by the theory n d r snr rmse matricization rmse square matricization table tensor regression with fourth order tensor covariates and scale response based on matricization rmse were computed based on simulations runs numbers in parentheses are standard errors regression our general framework can handle in a seamless fashion for demonstration we consider here regression with both group sparsity and regularizer more specifically the following model was considered y i hb x i i i where b y i i x i rd as before to impose group sparsity the first s slices of b were generated as gaussian ensembles and the remaining slices were set to zero for both the group sparsity and regularizers we used the algorithm for regression in simon et al for each block of the coordinate descent the with both and nuclear norm penalty have solutions n n mse mse d mse d n s figure matrix response regression with sparse slices tensor coefficients the plot was based on simulation runs and the error bars in each panel represent one standard deviation figure shows the average with standard deviation mean squared error over runs versus the d n and s parameter here d as we observe the error increase approximately according to log d s and which supports our upper bound in theorem we also generated b in the same fashion as before figure plots the average with standard deviation mean squared error against d n and r respectively these results are consistent with the main result in theorem n d mse n mse mse d n r figure matrix response regression with slices tensor coefficients the plot was based on simulation runs and the error bars in each panel represent one standard deviation multivariate sparse models now we consider dependent covariates and responses through the multivariate model recall that the generative model is x p x x t where t n represents the time index j p represents the lag index x t is an vector t represents the additive noise we consider four different structures for b and we choose the entries of b to be sufficiently small to ensure the time series is stable sparsity are s slices of diagonal matrix where diagonal elements are constants with are zero slices sparse slices are s slices which are independent random matrix truncated matrix with elements from n are zero 
slices here for m and for m group sparsity by lag sparse normal slices are s slices where elements follow n with are zero slices group sparsity by coordinate sparse normal fibers is a vector of normal elements following n when s which is a random sample of size s from m m and zero otherwise table shows the average rmse for runs of each case as a function of m p s and in general the smaller the n is or the larger the m or p is the harder it is to recover the coefficient b these findings are consistent with our theoretical developments pairwise interaction tensor models finally we consider the pairwise interaction tensor models as described in section to implement this regularization scheme we kept iterating among the matrix slices and and updating one of the three at a time while assuming the other two components are fixed for the update of we conducted an approximated projection onto the subspace after each generalized gradient descent soft thresholding step i where is the step size for the gradient step is the gradient of the least square objective function is the singular space operator with threshold and is the approximated projection operator that make any given matrix have zero row sums by shifting rows and zero column sums by shifting columns we simulated independent random matrix s and make them have zero column sums and row sums by table shows the average with standard deviation rmse under different r d n combinations under runs in general the rmse in estimating the tensor coefficient increases as s and d increases s diagonal slices s diagonal slices s slices s slices s gaussian slices s gaussian slices s gaussian fibers s gaussian fibers vectorized sparsity slices group sparsity by lag group sparsity by coordinate m p n s s s r s r snr sd rmse sd simulations runs numbers in parentheses are standard errors table multivariate model with various rmse were computed based on coefficient tensor regularizer n s rmse snr table pairwise interaction model rmse were computed based on simulations runs numbers in parentheses are standard errors proofs in this section we present the proofs to our main results we begin with the proof of theorem proof of theorem our proof involves the following main steps in the initial step we use an argument similar to those developed in negahban et al to exploit weak decomposability and properties of the empirical risk minimizer and convex duals to upper bound ktb t kn in terms of r tb t and next we use properties of gaussian random variables and supremum of gaussian processes to express the lower bound on in terms of the gaussian width e g lemma below the final and most challenging step involves proving a uniform law relating ktb t kn to ktb t kf lemma below which is analogous to restricted strong convexity the proof for lemma uses a novel truncation argument and is similar in spirit to that of lemma in raskutti et al lemma is necessary to incorporate multiple responses as existing results relating the k kn to the population k kf norm dasgupta and gupta raskutti et van de geer only apply to univariate functions throughout r a refers to the weakly decomposable regularizer over the tensor a for a tensor a we shall write and as its projections onto and with respect to the frobenius norm respectively since tb is the empirical minimizer n n x i x i ky hx i tb ky hx i t t substituting y i hx i t i i and tb t n n x x i i khx x i r t r tb n n x i x i r r t r cr r n n x i x i r r cr r n where the second inequality follows from the decomposability and the last one 
follows from triangular inequality let g be an tensor where each entry is n recall the definition of gaussian width wg br e g for simplicity let cr and recall that e g we have the following lemma lemma if e g then n x i x i n with probability at least exp e g the proof relies on gaussian comparison inequalities and concentration inequalities proof of lemma recall that we have set cu e g n first we show that g with high probability using concentration of lipschitz functions for gaussian random variables see theorem in appendix a first we prove that f g g hg ai is a function in terms of in particular note that f g f sup ai sup hg ai a r a a r a e arg maxa r a hg ai then let a sup hg ai a r a e sup ai sup ai hg ai r a a r a e ai e hg ai e hg ai sup hg ai a r a sup hg ai a kakf kg g kf where recall that kakf r a which implies the second last inequality therefore f g is a function with respect to the frobenius norm therefore by applying theorem in appendix a p sup hg ai e sup hg ai wg br therefore g n with probability at least exp br exp wg br to complete the proof we use a gaussian comparison inequality between the supremum p of the process cu hg ai and i x i ai over the set br recall that n n x x i x i sup a i x i n n recall that each i rdm is an standard centered gaussian tensor with each entry having variance and vec x is a gaussian vector covariance r ndm ndm first we condition on all the i s which are indepedendent of the x i s further let w i i n be standard normal gaussian tensors where w i first we condition on all the i s which are indepedendent of the x i s assuming and using a standard gaussian comparison inequality due to lemma in appendix a proven earlier in anderson if we condition on the i s we get n n x x x i x i ai x p sup i w i ai p sup n n cu a r a a r a since cov vec x i ndm ndm now we define the wj rn as the standard random vector where j dm and wj n wj wj conditioning on the w i s and dealing with the randomness in the i s n x i kwj k hg ai w i ai max n n n where g is an standard normal tensor since the i s are standard normal now we upper bound max kwj k n using standard tail bounds since kwj is a random variable with n degrees of freedom for each j p kwj exp n using tail bounds provided in appendix a presented in laurent and massart now taking the union bound over dm kwj k p max exp log dm n n and provided n log dm it follows that with probability greater than exp kwj k n therefore with probability at least exp n x i w i ai hg ai n n now we apply slepian s lemma slepian to complete the proof slepian s lemma is stated in appendix a applying slepian s lemma lemma in appendix a n x i i w ai x p sup hg ai x p sup n r a r a n for all x substituting x by means that n x i i w ai x p r g x p r n n for any x this completes the proof in light of lemma for the remainder of the proof we can condition on the event that n x i x i n under this event n x r cr r khx i cr since n x x i we get r r cr hence we define the cone c r r r and know that hence n x cr cr p khx i s a cr cr recall that n khx i n thus cr p s a cr for convenience in the remainder of this proof let cr p s a cr now we split into three cases i if then max on the other hand if ii and max cu c n then cu c hence the only case we need to consider is iii and cu now we follow a similar proof technique to the proof for theorem in raskutti et al let us define the following set c r r r further let us define the event cu e c c let us define the alternative event cu e c c we claim that it suffices to show that e holds with 
probability at least exp for some constant c in particular given an arbitrary c consider the tensor e c u c e c and e f cu by since c and c is we have construction consequently it is sufficient to prove that e holds with high probability lemma assume that for any c there exists an n such that then there exists a such that p e exp proof of lemma denote by dn dn and dm dm now we define the random variable zn c sup n x x i n n then it suffices to show that zn c recall that the norm n x i n to expand this out reacall m m and define an extension of the standard matricization m m rdm which groups together the first m modes with a slight abuse of notation it follows that n dn x vec x i n m rdm and clearly vec x i rdm in order to complete the proof we make use where of a truncation argument for a constant to be chosen later consider the truncated quadratic function u min and define q m vec x i m vec x i x sign where x is an input tensor further let x x x x and e x pdn and similarly n pdn x i n by construction for any c and hence sup the remainder of the proof consists of showing that for a suitable of for all c and p zn where zn by definition m vec x m vec x e q q m vec x e vec x i p m vec x e where the second last inequality follows from the inequality and the m vec x i is a gaussian random nal inequality follows from markov s inequality since variable m vec x m vec x e f therefore setting summing over m implies which implies now to prove the high probability bound on zn by first upper bounding e zn a standard symmetrization argument see pollard shows that n dn x x i m vec x i i ex zn z sup zm n i i i where zm are rademacher random variables that is p zm p zm m vec x i i is a lipschitz function with lipschitz constant the since contrtaction inequality ledoux and talagrand implies that dn n x x i m vec x i i ex zn z sup zm n n dn x x i ex z sup zm vec x i i n using standard comparisons between rademacher and guassian complexities see lemma of bartlett and mendelson there exists a c such that n dn x x i zm vec x i i ex z sup n dn n x i wm vec x i i cex w sup n i where wm s i n and m dn are independent standard normal random variables next we upper bound the gaussian complexity ew n x i sup hw x i n clearly n x i hw x i n n x r w i x i r n by the definition of and our earlier argument since c p p cu r r r r r s a s a kf c therefore ew n p cu x i r s a hw x i sup c n since we have ew p cu r s a c c n x cu sup hw i x i c n n finally we need a concentration bound to show that p zn in particular using talagrand s theorem for empirical processes talagrand by construction and dn x m vec x i var dn x m vec x i e dn x m consequently talagrand s inequality implies that p zn e zn u exp since e zn cu c n the claim follows by setting u cu c r n finally we return to the main proof on the event e it now follows easily that max this completes the proof for theorem s a proof of other results in section in this section we present proofs for the other main results from section deferring the more technical parts to the appendix proof of lemmas and we prove these three lemmas together since the proofs follow a very similar argument first let s denote the directions in which sparsity is q applied and ds dk denote the total dimension in all these directions for example in lemma s and ds for lemma s and ds and for lemma s and ds recall n and dn note that g can be represented by the variational form g where u r and v r hg u vi sup kvec u k kvkf ds c c n now we express the supremum of this gaussian process as sup vec u ms g 
vec v u v where recall ms is the matricization involving either slice or fiber the remainder of the proof follows from lemma in appendix b proof of lemma recall that g max kg ks for each lemma in appendix b with n satisfies the concentration inequality e kg ks p applying standard bounds on the maximum of functions of independent gaussian random variables e max kg ks p this completes the proof log proof of lemma using the standard nuclear norm upper bound for a matrix in terms of rank and frobenius norm a x q x rank kf x rank x x rank where the final inequality follows from the inequality finally note that for any a r x rank r which completes the proof proof of lemma note that g kgks we can directly apply lemma with n from appendix b proof of lemma from tucker decomposition it is clear that for any a r we can find sets of vectors uk k vk k and wk k such that r x uk vk w k and in addition u k vk wk for any k k it is not hard to see that r x kuk kvk kwk on the other hand as shown by yuan and zhang r x kuk k kvk k kwk k the claim then follows from an application of inequality proof of lemma recall that we are considering the regularizer a max a ks a ks a ks and our goal is to upper bound g max kmk g ks once again apply lemma in appendix b with n for each matricization implies p p p e g max proof of lemma it is not hard to see that a a a max a a a a which completes the proof proof of results in section in this section we prove the results in section first we provide a general minimax lower result that we apply to our main results let t be an arbitrary subspace of tensors theorem assume that holds and there exists a finite set am t of tensors such that log m such that u ka a kf cu for all m and all then min max kte t u te t with probability at least for some c proof we use standard techniques developed in ibragimov and has minskii and extended in yang and barron let am be a set such that ka a u for all and let m e be a random variable uniformly distributed over the index set m m now we use a standard argument which allows us to provide a minimax lower bound in terms of the probability of error in a multiple hypothesis testing problem see yang and barron yu then yields the lower bound u e e inf sup p kt t kf inf p te am te t te where the infimum is taken over all estimators te that are measurable functions of x and y let x x i i n y y i i n and e i i n using fano s inequality see cover and thomas for any estimator te we have e p te am e ix am y log log m taking expectations over x on both sides we have e p te am e ex ix am y log log m for m let q denote the condition distribution of y conditioned on x and the event t a and dkl q denote the divergence between q and q from the convexity of mutual information see cover and thomas we have the upper bound ix t y m x m dkl q given our linear gaussian observation model n x nka a i i x i x i ha ha dkl q further if holds then ex ix t y n x m ex ka a n x m ka a based on our construction there exists a set am where each a t such that log m and u ka a kf for all m if holds then ex ka a ka a and we can conclude that ex ix t y and from the earlier bound due to fano s inequality for and such that log log m we are guaranteed that n o e p te am the proof is now completed because log m and log proof of theorem the proof for the upper bound follows directly from lemma with m and p and noting that the overall covariance r ndm ndm is e since each of the samples is independent hence with blocks to prove the lower bound we use theorem and construct a suitable packing 
set for the way we construct this packing is to construct two separate packing sets and select the set with the higher packing number using a similar argument to that used in raskutti et al which also uses two separate packing sets the first packing set we consider involves selecting the slice a s where a and s s consider vectorizing each slice so v vec a s rsm hence in order to apply theorem we define the set t to be slices which is isomorphic to the vector space rsm using lemma in appendix c there exists a packing set v v v n rsm such that log n and for all v v where kv v for any if we choose c n then theorem implies the lower bound sm min max kte t u n te t with probability greater than the second packing set we construct is for the slice rp since in the third direction only s of the p are the packing number for any slice is analogous to the packing number for vectors with ambient dimension letting v we need to construct a packing set for v rp kvk s using lemma in appendix c there exists a discrete set v v v n such that log n cs log for some c and kv k v for k for any setting log s log min max kte t u t n te with probability greater than taking a maximum over lower bounds involving both packing sets completes the proof of the lower bound in theorem proof of theorem the upper bound follows directly from lemma with m and p and noting that the overall covariance r ndm ndm is with e since each of the samples is independent blocks to prove the lower bound we use theorem and construct a suitable packing set for once again we construct two separate packings and choose the set that leads to the larger minimax lower bound for our first packing set we construct a packing along one slice let us assume a where rank r and if we let m where m then a m using lemma in appendix c there exists a set an such that log n crm and ka a p for all and any here we set therefore using theorem min max kte t u te t rm n with probability greater than the second packing set for involves a packing in the space of singular values since p x rank let k m be the singular values of the matrix under our rank constraint we have p m x x i mp let v r where v vec note that p m x x i r implies kvk using lemma there exists a set v v v n such that log n cr log and for all kv v for any if we set log therefore using theorem r log min max kte t u n te t with probability greater than hence taking a maximum over both bounds r max m log log m r max m log min max kte t u u n n te t with probability greater than proof of theorem the upper bound with s max p log m n follows directly from lemma with p and m and is satisfied with and according to to prove the lower bound is similar to the proof for the lower bound in theorem once again we use theorem and construct a two suitable packing sets for the first packing set we consider involves selecting an arbitrary subspace te a s s p now if we let v vec a then v comes from an vector space for any a te using lemma in appendix c there exists a packing set v v v n rsp such that log n csp and for all v v where kv v for any if we choose p then theorem implies the lower bound sp min max kte t u n te t with probability greater than further the second packing set we construct is for the slice for any since in the second and third direction only s of the are we consider the vector space v rm kvk s once again using the standard standard hypercube construction in lemma in appendix c there exists a discrete set v v v n such that log n cs log for some c and kv v for for any setting log yields s log s e min 
max kt t kf cu n te t with probability greater than taking a maximum over lower bounds involving both packing sets completes the proof of of our lower bound proof of theorem the upper bound follows from a slight modification of the statement in lemma in particular since r a ka ka ka the dual norm is a max ka ks hence following the same technique as used in lemma r r max d d max e g c max n n it is also straightforward to see that s to prove the lower bound we construct three packing sets and select the one with the largest packing number recall that a a a and a max rank a r therefore our three packings are for a a and a assuming each has rank we focus on packing in a since the approach is similar in the other two cases using lemma from appendix b in combination with theorem r min min max kte t u n te t with probability greater than repeating this process for packings in a and a assuming each has rank r and taking a maximum over all three bounds yields the overall minimax lower bound r max min max kte t u n te t with probability greater than references agarwal negahban and wainwright noisy matrix decomposition via convex relaxation optimal rates in high dimensions the annals of statistics anderson the integral of a symmetric convex set and some probability inequalities proc of american mathematical society anderson an introduction to multivariate statistical analysis wiley series in probability and mathematical statistics wiley new york bartlett and mendelson gaussian and rademacher complexities risk bounds and structural results journal of machine learning research basu and michailidis regularized estimation in sparse time series models annals of statistics bhatia matrix analysis springer new york buhlmann and van de geer statistical for data springer series in statistics springer new york chen lyu i king and xu exact and stable recovery of pairwise interaction tensors in advances in neural information processing systems cohen and collins tensor decomposition for fast parsing with pcfgs in advances in neural information processing systems cover and thomas elements of information theory john wiley and sons new york dasgupta and gupta empirical processes with a bounded diameter geometric and functional analysis gandy recht and yamada tensor completion and rank tensor recovery via convex optimization inverse problems gordon on milmans inequality and random subspaces which escape through a mesh in rn geometric aspects of functional analysis israel seminar lecture notes hastie tibshirani and wainwright statistical learning with sparsity the lasso and generalizations monographs on statistics and applied probability crc press new york hoff multilinear tensor regression for longitudinal relational data technical report department of statistics university of washington i ibragimov and has minskii statistical estimation asymptotic theory springerverlag new york koldar and bader tensor decompositions and applications siam review laurent and massart adaptive estimation of a quadratic functional by model selection the annals of statistics ledoux the concentration of measure phenomenon mathematical surveys and monographs american mathematical society providence ri ledoux and talagrand probability in banach spaces isoperimetry and processes new york ny li and li tensor completion for compression of hyperspectral images in ieee international conference on image processing icip pages new introduction to multiple time series analysis springer new york massart concentration inequalties and model selection 
ecole d de springer new york shahar mendelson upper bounds on product and multiplier empirical processes technical report technion mesgarani slaney and shamma audio classification based on multiscale features ieee transactions on speech and audio processing mu huang wright and goldfarb square deal lower bounds and improved relaxations for tensor recovery in international conference on machine learning negahban and wainwright estimation of near matrices with noise and scaling the annals of statistics negahban and wainwright restricted strong convexity and weighted matrix completion jmlr negahban ravikumar wainwright and yu a unified framework for highdimensional analysis of with decomposable regularizers statistical science pisier the volume of convex bodies and banach space geometry volume of cambridge tracts in mathematics cambridge university press cambridge uk pollard convergence of stochastic processes new york qin scheinberg and d goldfarb efficient descent algorithms for the group lasso math program raskutti wainwright and yu restricted eigenvalue conditions for correlated gaussian designs journal of machine learning research raskutti wainwright and yu minimax rates of estimation for linear regression over q ieee transactions of information theory raskutti wainwright and yu rates for sparse additive models over kernel classes via convex programming journal of machine learning research rendle and pairwise interaction tensor factorization for personalized tag recommendation in icdm rendle marinho nanopoulos and learning optimal ranking with tensor factorization for tag recommendation in sigkdd rockafellar convex analysis princeton university press princeton semerci hao kilmer and miller tensor based formulation and nuclear norm regularizatin for multienergy computed tomography ieee transactions on image processing sidiropoulos and nion tensor algebra and harmonic retrieval in signal processing for mimo radar ieee transactions on signal processing simon friedman and hastie a blockwise coordinate descent algorithm for penalized multiresponse and grouped multinomial regression technical report georgia november slepian the barrier problem for gaussian noise bell system tech j talagrand new concentration inequalities in product spaces invent tibshirani regression shrinkage and selection via the lasso journal of the royal statistical society series b turlach venables and wright simultaneous variable selection technometrics van de geer empirical processes in cambridge university press van de geer weakly decomposable regularization penalties and structured sparsity scandivanian journal of statistics theory and applications yang and barron determination of minimax rates of convergence the annals of statistics yu assouad fano and le cam research papers in probability and statistics festschrift in honor of lucien le cam pages yuan and lin model selection and estimation in regression with grouped variables journal of the royal statistical society b yuan and zhang on tensor completion via nuclear norm minimization foundation of computational mathematics to appear shuheng zhou restricted eigenvalue conditions on subgaussian random matrices technical report eth zurich a results for gaussian random variables in this section we provide some standard concentration bounds that we use throughout this paper first we provide standard tail bounds due to laurent and massart laurent and massart lemma let z be a centralized random variable with m degrees of freedom then for all x p z m mx exp and p z m mx exp 
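The two tail bounds in the lemma above can be read as P(Z >= m + 2*sqrt(m*x) + 2*x) <= exp(-x) and P(Z <= m - 2*sqrt(m*x)) <= exp(-x) for a chi-square variable Z with m degrees of freedom. They are easy to sanity-check numerically; the following sketch (written in Python purely for illustration, and not part of the paper) estimates the upper-tail probability by Monte Carlo and compares it with exp(-x). The degrees of freedom and deviation levels are arbitrary example values.

```python
import numpy as np

def chi_square_upper_tail(m, x, trials=200_000, seed=0):
    """Monte Carlo estimate of P(Z >= m + 2*sqrt(m*x) + 2*x) for Z ~ chi-square(m),
    compared against the Laurent-Massart bound exp(-x)."""
    rng = np.random.default_rng(seed)
    z = rng.chisquare(m, size=trials)
    threshold = m + 2.0 * np.sqrt(m * x) + 2.0 * x
    return (z >= threshold).mean(), np.exp(-x)

# The empirical tail probability should sit below the bound in each case.
for m, x in [(10, 1.0), (10, 3.0), (100, 3.0)]:
    print(m, x, chi_square_upper_tail(m, x))
```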
gaussian comparison inequalities the first result is a classical result from anderson lemma anderson s comparison inequality let x and y be gaussian random vectors with covariance and respectively if is positive then for any convex symmetric set c p x c p y c the following lemma is slepian s inequality slepian which allows to upper bound the supremum of one gaussian process by the supremum of another gaussian process lemma slepian s lemma let gs s s and hs s s be two centered gaussian processes defined over the same index set suppose that both processes are almost surely bounded for each s t s if e gs gt e hs ht then e gs e hs further if e e for all s s then p sup gs x p sup hs x for all x finally we require a standard result on the concentration of lipschitz functions over gaussian random variables theorem theorem from massart let g n be a gaussian random variable then for any function f rd r such that x f y lkx yk for all x y rd we have p g e f g t exp for all t b suprema for gaussian tensors in this section we provide important results on suprema of gaussian tensors over different sets the group norm let g be an gaussian matrix and define the set v u v kuk kvk using this notation let us define the define the random quantity m g v sup u gv u v then we have the following overall bound lemma p p e m g v log proof our proof user similar ideas to the proof of theorem in raskutti et al we need to upper bound e m g v we are taking the supremum of the gaussian process u gv sup kuk kvk eu v over the set v and apply slepian s inwe now construct a second gaussian process g equality see lemma in appendix to upper bound u gv sup kuk kvk eu v in particular let us define the by the supremum over our second gaussian process g process as eu v g u h v g where the vectors g h are standard normals also independent of each other it is straightforward to show that both u gv and g u h v are further it is straightforward to show that eu v g ku kv v var g now we show that var u gv gv ku kv v to this end observe that var u gv gv kuv v k u v v v ku k kv v u k kuk v v kv k kvk first note that for all v v and by the inequality v v kv k kvk and u k kuk therefore var u gv gv ku k kv v consequently using lemma e m g v e sup g u sup h v kuk kvk therefore e m g v e sup g u sup h v kuk kvk e sup g u e sup h v kuk kvk e kgk e khk by known results on gaussian maxima see ledoux and talagrand p e khk log and e kgk p p o dj therefore e m g v p log spectral norm of tensors our proof is based on an extension of the proof techniques used for the proof of proposition in negahban and wainwright lemma let g be a random sample from an gaussian tensor ensemble then we have e kgks log n p x dk proof recall the definition of kgks kgks un gi sup un dn since each entry un gi is a gaussian random variable kgks is the supremum of a gaussian process and therefore the concentration bound follows from theorem in ledoux we use a standard covering argument to upper bound e kgks let um be a covering number of the sphere s in terms of vector similarly for all dk k therefore k n let um k be a covering number of the sphere s un un gi un ujn gi un un ujn gi taking a supremum over both sides kgks max un ujn gi mn kgks repeating this argument over all n directions kgks max jn mn ujnn gi by construction each variable gi is a gaussian with variance at most so by standard bounds on gaussian maxima p p p e kgks log mn log log mn there exist a of s dk with log mk dk log which completes the proof c hypercube packing sets in this section we provide important 
results for the lower bound results one key concept is the hamming hamming distance is between two vectors v rd and v rd is defined by dh v v d x i vj lemma let c d where d then there exists a discrete subset v v v m c such that log m cd for some constant c and for all kv v for any proof let v d d a member of the hypercube by d recall the definition of hamming distance provided above in this case amounts to the places either vj or is negative but both or not negative then according to lemma in yu there exists a subset of this hypercube v v v m such that dh v v d and log m cd clearly kv v dh v v kv v further this completes the proof next we provide a hupercube packing set for the sparse subset of vectors that is the set v v rd kvk s this follows from lemma in raskutti et al which we state here for completeness lemma let c d where d then there exists a discrete subset v v v m v c such that log m cs log for some c and for all kv v for any finally we present a packing set result from lemma in agarwal et al that packs into the set of matrices lemma let min and let then for each r min there exists a set of matrices am with with cardinality log m cr min for some constant c such that ka a for all
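The hypercube packing used in the lemmas of this appendix can be constructed explicitly. The sketch below (Python, for illustration only) builds a subset of {-1, +1}^d by greedy random selection so that every pair of points differs in at least d/4 coordinates, which is the standard Varshamov-Gilbert-type construction behind such packing lemmas; rescaling the points by a factor delta then gives pairwise Euclidean separation proportional to delta*sqrt(d). The dimensions and the 1/4 separation fraction are illustrative choices rather than the exact constants appearing in the lemma.

```python
import numpy as np

def hypercube_packing(d, min_frac=0.25, max_tries=5_000, seed=0):
    """Greedy random construction of a subset of {-1, +1}^d whose pairwise
    Hamming distance is at least min_frac * d (a Varshamov-Gilbert style packing)."""
    rng = np.random.default_rng(seed)
    points = rng.choice([-1, 1], size=(1, d))
    for _ in range(max_tries):
        v = rng.choice([-1, 1], size=d)
        # Accept the candidate only if it is far (in Hamming distance) from all kept points.
        if (points != v).sum(axis=1).min() >= min_frac * d:
            points = np.vstack([points, v])
    return points

for d in (16, 32, 48):
    P = hypercube_packing(d)
    # The log of the packing size grows roughly linearly in d, matching log M >= c*d.
    print(d, len(P), np.log(len(P)) / d)
```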
stochastic power system simulation using the adomian decomposition method nan duan student member ieee and kai sun senior member ieee abstract for dynamic security assessment considering uncertainties in grid operations this paper proposes an approach for simulation of a power system having stochastic loads the proposed approach solves a stochastic differential equation model of the power system in a way using the adomian decomposition method the approach generates solutions expressing both deterministic and stochastic variables explicitly as symbolic variables so as to embed stochastic processes directly into the solutions for efficient simulation and analysis the proposed approach is tested on the new england system with different levels of stochastic loads the approach is also benchmarked with a traditional stochastic simulation approach based on the eulermaruyama method the results show that the new approach has better time performance and a comparable accuracy index decomposition method stochastic differential equation stochastic load stochastic simulation u i introduction ncertainties exist in operations of power grids many factors such as random load consumptions and unanticipated relay protection actions contribute to the randomness of grid operations it can be foreseen that a future power grid will have more uncertainties and stochastic behaviors in system operations due to the increasing penetrations of responsive loads and intermittent renewable generations thus dynamic security assessment dsa of power systems should be conducted in both deterministic and stochastic manners however most of today s power system simulation software tools are still based on solvers of deterministic equations daes that do not involve stochastic variables to model uncertainties in system operating conditions in literature there are three major approaches for the modeling of a dynamic system having stochastic effects as shown in fig the master equation the equation and gillespie method the master equation and the equation are widely applied in the field of computational biology which both focus on the evolution of probability distribution the gillespie method focuses on individual stochastic trajectories the first two approaches provide a more comprehensive understanding of stochastic effects with a dynamic system but require solving this work was supported by nsf grant nan duan and kai sun are with the department of eecs at the university of tennessee knoxville nduan kaisun high dimensional partial differential equations so they are computationally difficult to be applied to simulations of realistic power systems there have been works using the gillespie method for power system simulation stochastic modeling master equation multiple runs gillespie algorithm fokkerplanck equation multiple runs eulermaruyama method adomian decomposition method fig stochastic modeling approaches in recent years some researchers have contributed to power system simulation in a manner reference proposed a systematic method to simulate the system behaviors under the influence of stochastic perturbations on loads bus voltages and rotor speeds this approach introduces stochastic differential equations sdes to represent stochastic perturbations and solves the equations by ito calculus and then a mean trajectory with the envelope on trajectory variations is yielded by repeating simulations for many times papers utilize a similar approach to study power system stability under random effects to analyze long term stability of a 
power system with wind generation a new sde model is developed in which also applies the singular perturbation theory to investigate the slow dynamics of the system with stochastic wind generation however the time performance of such an approach based on method can hardly meet the requirements for online power system simulation especially when the penetration of distributed energy resources ders reaches a high level the distribution network behaves in a more stochastic manner as seen from the transmission network and hence a large number of sdes need to be included in the power system model which will significantly influence the simulation speed also the nature of the gillespie method requires a large number of simulations on the same model to yield the mean trajectory as well as the envelope on variations therefore adding any extra sde to the existing set of sdes will result in multiplying computing time by a factor of hundreds or even thousands in our previous works a new approach for power system simulation has been proposed that approach applies the adomain decomposition method adm to power system daes to derive a solution sas for each state variable as an explicit function of symbolic variables including time the initial system state and other selected parameters on the system condition then each function is evaluated by plugging in values of its symbolic variables over consecutive small time windows to make up a desired simulation period so as to obtain the simulated trajectory of each state variable since the form of every sas is a summation of finite terms for approximation its evaluation can be fast and parallelized among terms thus compared to traditional numerical integration based power system simulation this approach decomposes the computation into offline derivation and online evaluation of an sas and is better fit for online power system simulation and a parallel computing environment in fact such a approach also suggests a viable alternative paradigm for fast stochastic simulation for example early works by adomian in the utilized the adm to solve nonlinear sdes by embedding explicitly stochastic processes into the terms of an sas for power system simulation in a stochastic manner this paper proposes an approach as an extension of the adm based approach proposed in utilizing the nature of an sas yielded by the adm this new approach embeds a stochastic model a stochastic load model into the sas evaluation of an sas with the stochastic model whose parameters are represented symbolically will not increase many computational burdens compared to evaluation of an sas for deterministic simulation thus an expected number of simulation runs for one single case are achieved by evaluating one sas for the same number of times the rest of this paper is organized as follows section ii presents the sde model of a power system that integrates stochastic loads section iii gives the approach for solving the power system sdes for stochastic simulation section iv uses a smib system to compare the fundamental difference between the admbased approach and the approach in mathematics section v introduces a criterion for defining the stability of a general stochastic dynamical system which is also applied to power systems section vi validates the proposed approach using the ieee system with the stochastic loads and compares the results and time performance with those from the approach finally conclusions are drawn in section vii ii power system sde model with stochastic loads synchronous generator 
modeling for a power system having k synchronous generators consider the model to model each generator having saliency ignored all generators are coupled through nonlinear algebraic equations about the network k r k h pm k pek d k k x dk x dk i dk e fdk e qk qk t k iqk e x qk x qk dk dk t sin k e qk cos k j e qk sin k e dk cos k e k e dk def i tk i r k ji ik y k e pek e qk iqk e dk idk iqk i ik sin k i r k cos k idk i r k sin k i ik cos k x dk idk e dk e dk x qk iqk e qk e qk in and is the rated angular frequency hk and dk are respectively the rotor angle rotor speed inertia and damping coefficient of the machine k yk is the kth row of the reduced admittance matrix y e is the column vector of all generator s electromotive forces emfs and ek is the kth element pmk and pek are the mechanical and electric powers efdk is the internal field voltage iqk idk xqk xdk x qk and x dk are transient voltages stator currents opencircuit time constants synchronous reactances and transient reactances in and respectively stochastic load modeling a stochastic model can be built based on analysis on real data and assumptions on probabilistic characteristics of the stochastic variables traditionally uncertainties in loads of a power system are ignored in simulation for the sake of simplicity however their stochastic behaviors are wellrecognized in taking stochastic loads into consideration will enable more realistic power system stability assessment this paper uses the process in to model the stochastic variations of a load in these sdes pl y pl bp w t ql yql bq w t where w t is the white noise vector whose dimension equals the number of load buses a and b parameters are drifting and diffusion parameters of the sdes operator is the hadamard product multiplication and ypl and yql are the stochastic variations in normal distributions the stochastic dynamic of the load is therefore modeled by pl y pl ql yql where and are the mean values of the active and reactive loads respectively periodicities and autocorrelations have been observed in historical data of loads on the daily basis however in the time frame of seconds loads at different substations have much lower autocorrelations refer to this paper sets the drifting parameter on the autocorrelations of loads as t iii proposed approach to solving power system sdes a modeling stochastic variables consider s stochastic variables t ys t which could be stochastic loads following s different distributions each yi t can be transformed by function gi in from some in a normal distribution for example if yi t is a load represented by a normal distribution with certain mean value then specifies a normal distribution as in and gi shifts it to around the desired mean value like in and t y t g g g s s the process is utilized to generate each where a n n n am n the next step is to apply inverse laplace transform to both sides of and to calculate the order sas of n x sas t y x n t y n in the resulting sas stochastic variables in y appear explicitly as symbolic variables iv comparison between the approach and the proposed approach this section applies both the approach and the proposed approach to the smib system with a stochastic load shown in fig to illustrate the fundamental difference between the two approaches e from t a t b w t where ra jxd r jx t t t s t t rl a t t t as t t jxl b t t t bs t t i ai i s b solving sdes using the adm consider a nonlinear system modeled by sde having m deterministic state variables xm such as the state variables of generators exciters and 
speed governors and s stochastic variables ys t f x t y t x t t t xm t t f f f m t to solve x t the procedure in can be used first apply laplace transformation to to obtain x f x y x s s then use and to calculate the adomian polynomials under the assumption of x t xn t n f k x y ak n x n y n k n fk xk y n k recursive formulas and can be derived by matching terms of x t and f x x s an s n the stochastic load is connected to the generator bus and has its resistance rl and reactance xl modeled by stochastic variables thus the whole system is now modeled by des and sdes pm cos sin t bw x b w t l l where gl jbl gs jbs rl jx l ra jx r jx gr jbr n ak n fig smib system with constant impedance load at generator bus gl gr gs bl br bs bl br bs bl br bs gl gr gs gl gr gs g b br bs gl gr bs bl br gs gl gr e s l bs gr gs br bs br gs gr bs br gs gr bs gr gs br since rl and xl change stochastically gl and bl can not be treated as constants in the variances of rl and xl depend on the values of drifting parameters and and diffusion parameters and respectively to find the sas of this system the first step is to apply adm to des and once the sas of the system s des is derived the sas of the sdes can be derived and incorporated into it for instance the order sas for rotor speed is t t n where t t d pm therefore the infinite order sas of is s t s i t i b si dsi i i i rl t rl apply maclaurin expansion of an exponential function and lemma in to the solution becomes t k e e cos sin e cos the order sas of rl can be derived using adm as rl t rl n t n where rl t rl b t t rl t rl t b rl t rl e b t b s ds then apply the integration by parts formula t r d r t k e k e d pm cos sin k e k e h r cos h r sin h r and for some forms of sdes an analytical solution may exist which can be incorporated into the des sas to directly derive the sas of the entire system for example the general expression of the sas terms of can be written as t tn rl n t n rl n b sn dsn n t t as at as e db s e b t b s ds the close form solution can be found as t rl t rl db s in this case the symbolic variable rl in can be replaced by instead of on the other hand for the approach since the deterministic model described by and does not permit a close form solution the sample trajectories of have to be numerically computed the numerical scheme for rl is shown in and the same scheme also applies to xl r l nt r l nt r l nt t r l nt w in practice the value of is dependent of the step size for integration w t t s rl t rl b here b t is the brownian motion starting at origin and db t t dt similarly the order sas of xl is x l t x l n t n where x l t x l b t t x l t rl t b x l t rl t t b to derive the sas of the entire system considering both the des and sdes replace the symbolic variables in the des sas representing the stochastic variables with the sdes sas the order sas of the system can be derived by replacing the symbolic variables rl and xl in with sas stability of stochastic systems there are a variety of definitions on the stability of a stochastic dynamical system in literature the definition of asymptotic stability in probability in can be directly applied to a power system with stochastic variables that definition is a counterpart of the asymptotic lyapunov stability of a deterministic system definition stability in probability an equilibrium point is said to be stable in probability if for given and r there exists r such that p x t xeq r t whenever definition asymptotic stability in probability an equilibrium point is said to be asymptotic stable in 
probability if it is stable in probability and for given there exists such that p lim x t x eq t whenever to analyzed the stability of numerical simulation results this paper modifies to so that the stability can be accessed using the results of finite time period simulations p x t xeq ts where ts is a predefined time instant is a small positive number vi case studies the proposed approach is tested on the ieee new england system as shown in fig selected loads are assumed to change stochastically while all generators are represented by deterministic models in each case study the stochastic simulation result by the eulermaruyama approach is used as the benchmark and the order sass are used and evaluated every the value of each stochastic variable is changed every for each case sample trajectories are generated the fault applied in all cases is a fault at bus cleared by tripping line all simulations are performed in matlab on a desktop computer with an intel core cpu and gb ram a result from the approach b result from the approach fig simulation results of generator rotor angle with loads connecting to bus and represented by stochastic variable with load variation from the simulation results the deterministic system response is indicated by the mean value and is asymptotically stable use the stochastic system stability definition introduced in section when the loads at buses and have small variances the system behaves similar to a deterministic system which is asymptotically stable with a probability of s fig ieee system stochastic loads at with low variances in the first case model the loads at buses and about of the system load by process the variances of the loads are of their mean values the results from the approach and the approach are shown in fig among all the generators generator has the shortest the electrical distance to bus and hence the rotor angle of it is presented in the following results stochastic loads at with low variances in the second case extend stochastic loads to all buses with variances equal to of their mean values as shown in fig the simulation results from two approaches agree with each other which reveal a less stable system response due to increased uncertainties when all the system loads are stochastic the system is asymptotically stable with a probability of s compared to the first case having only two stochastic loads with the same value the probability of the system being asymptotically stable reduces from to therefore when the percentage of stochastic loads increases even though the load uncertainties are low and the equilibrium point of the system is almost the same as its deterministic model the asymptotic stability of the system in probability downgrades that justifies the necessity of using stochastic load models to study the stability of power systems with a high penetration of stochastic loads a result from the approach b result from the approach fig simulation results of generator rotor angle with all loads represented by stochastic variable with load variation in the third case all the loads are represented by stochastic loads and the variances of the loads are increased to of the mean values this case may represent a scenario having ders widely deployed in distribution networks which make the aggregated bus load seen from each transmission or subtransmission substation behave more stochastically the simulation results from the approach and eulermaruyama approach are shown in fig the approach agrees with the approach on the simulation results both 
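The finite-horizon stability criterion introduced above, namely the probability that the trajectory remains within a ball of radius r around the equilibrium for all t up to t_s, can be estimated directly from the ensemble of sample trajectories produced by either simulation approach. The following is a minimal sketch in Python (the paper's own simulations were run in MATLAB); the trajectory data are synthetic stand-ins, and the radius, horizon, and number of runs are illustrative rather than the values used in the case studies.

```python
import numpy as np

def stability_probability(trajectories, x_eq, r):
    """Fraction of sample runs that stay within distance r of the equilibrium x_eq
    over the entire simulated window.  trajectories: (n_runs, n_steps, n_states)."""
    deviation = np.linalg.norm(trajectories - x_eq, axis=2)   # (n_runs, n_steps)
    stays_inside = (deviation <= r).all(axis=1)               # True if the run never leaves the ball
    return stays_inside.mean()

# Synthetic stand-in for the ensemble: damped oscillations plus small random fluctuations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1001)
runs = np.exp(-0.3 * t) * np.cos(5.0 * t) + 0.02 * rng.standard_normal((1000, t.size))
print(stability_probability(runs[:, :, None], x_eq=0.0, r=1.2))
```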
of them show that the system loses its stability when the variance of the loads increases to of their mean values the instability is due to the cumulative effect of stochastic load variations the confidence envelope can be utilized as an indicator of the system stability unlike fig the confidence envelope in fig is not bounded any more indicating a probability of the system losing stability a approach stochastic loads at with high variances b approach fig simulation results of bus voltage at bus with all loads represented by stochastic variable with load variation a result from the approach a result from the approach fig simulation results of generator rotor angle with all loads represented by stochastic variable with load variation bus voltages also reflect the impact from high load uncertainties as shown in fig about the voltage magnitude of bus denoted by with loads of high uncertainties the system has an increased risk of and issues because the imbalance between generation and load is magnified by increased load uncertainties that also indicates the importance of stochastic power system simulation when penetration of ders becomes high from results of stochastic power system simulation how the probability distribution function pdf of a system variable evolves in time during a period can be estimated and fit into an anticipated probability distribution for analysis as an example if we assume to follow a normal distribution at each time instant with the mean value and variance varying with time fig shows the evolutions of its pdf using simulation results from both the approach and approach for comparison fig basically matches fig indicating the accuracy of the proposed approach in reflecting the evaluation of the pdf from as time elapses the pdf of the bus voltage not only shifts the mean value but also increases the variance indicated by the increasing width of the shape such information is not available from deterministic power system simulation the longer the system is subjected to the effect of stochastic variables the bigger variance and larger uncertainty the system has in dynamics fig mean value of generator s rotor angle for case a fig standard deviation of generator s rotor angle for case a as more loads are modeled as stochastic the variance of state variables grows accordingly the mean value and standard deviation of the rotor angle of generator for case b are shown in fig and fig in case b the standard deviation reaches its largest value during the first swing which is larger than the largest standard deviation in case a a approach fig mean value of generator s rotor angle for case b b approach fig evolution of the pdf of the voltage magnitude at bus from s to variances of state variables to compare the accuracy of the numerical results from the approach and approach the mean value and standard deviation of the trajectories are compared for case a as shown in fig and fig the admbased approach achieves comparable accuracy as the eulermaruyama approach in terms of both mean value and standard deviation value fig standard deviation of generator s rotor angle for case comparison on time performances the time performances for cases a b and c of the admbased approach and approach are compared in table i from which the approach takes less than of the time cost of the approach the advantage of the approach in time performance is more prominent when many simulation runs are required as discussed in the approach is inherently suitable for parallel implementation which could help further 
improve the time performance if parallel computers are available table i time performance comparison of stochastic load cases time costs s ito calculus single run ito calculus runs adm single run adm runs stochastic loads at all buses case b c stochastic loads at buses and case a vii conclusion this paper proposes an alternative approach for stochastic simulation of power systems using the sas derived from the adm the stochastic effects from load uncertainties can be taken into considerations the result from the proposed approach is benchmarked with that from the approach since the evaluation of sass is faster than the integration with the approach the proposed approach has an obviously advantage in time performance this is critical when a large number of simulation runs need to be performed for simulating stochastic behaviors of a future power grid having a high penetration of ders the simulation results on different levels of stochastic loads show that when the level of load uncertainty is low the deterministic simulation is still trustworthy compared to the trajectory from stochastic simulation but once the level of load uncertainty becomes high the trajectory no longer represents the true behavior of the system viii references hiskens alseddiqui sensitivity approximation and uncertainty in power system dynamic simulation ieee trans power systems vol no pp tatari dehghan razzaghi application of the adomian decomposition method for the equation math and comput modelling vol no mar spencer bergman on the numerical solution of the fokkerplanck equation for nonlinear stochastic systems nonlinear dynamics vol no pp saito mitsui simulation of stochastic differential equations ann inst stat vol no pp higham an algorithmic introduction to numerical simulation of stochastic differential equations soc ind and appl math review wang crow fokker planck equation application to analysis of a simplified wind turbine model north american power symposium champaign il milano a systematic method to model power systems as stochastic differential algebraic equations ieee trans power systems vol no pp wu wang li hu a stochastic model for power system transient stability with wind power ieee pes general meeting national harbor md wang crow numerical simulation of stochastic differential algebraic equations for power system transient stability with random loads ieee pes general meeting detroit mi yuan zhou li zhang stochastic small signal stability of power system with wind power generation ieee trans power systems vol no jul wang chiang wang liu wang stability analysis of power systems with wind power based on stochastic differential equations model development and foundations ieee trans sustainable energy vol no pp duan sun application of the adomian decomposition method for solutions of power system differential algebraic equations ieee powertech eindhoven netherlands duan sun finding solutions of power system equations for fast transient stability simulation arxiv preprint duan sun power system simulation using the multistage adomian decomposition method ieee trans power systems no pp adomian nonlinear stochastic differential equations math anal and vol no pp qi sun kang optimal pmu placement for power system dynamic state estimation by using empirical observability gramian ieee trans power systems vol pp jul galiana handschin fiechter identification of stochastic electric load models from physical data ieee trans automat control vol no pp sauer numerical solution of stochastic differential equations in 
finance in handbook of mathematical functions springer berlin heidelberg pp nouri study on stochastic differential equations via modified adomian decomposition method sci series a vol no pp cao liu z fan of the method for stochastic differential delay equations appl math and vol no pp hutzenthaler jentzen kloeden strong convergence of an explicit numerical method for sdes with lipschitz continuous coefficients the ann of appl probability vol pp adomian a review of the decomposition method in applied mathematics of math anal and vol pp thygesen a survey of lyapunov techniques for stochastic differential equations dept of math modelling tech univ of denmark lyngby denmark imm technical report nr mao stochastic differential equations and applications edition chichester uk horwood burrage burrage mitsui numerical solutions of stochastic differential equations implementation and stability issues of computational and appl vol no pp kozin a survey of stability of stochastic systems automatica vol no pp
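As a companion to the comparison above, the Euler-Maruyama side of the benchmark can be outlined in a few lines of NumPy: integrate a swing-equation model whose electrical load is perturbed by an Ornstein-Uhlenbeck process, repeat over many sample paths, and take the per-time-step mean and standard deviation (the parameters of the normal fit used when tracking the PDF evolution). Every value below is an illustrative assumption — the machine constants, the noise model, the horizon and the number of runs are not the paper's test system — and the ADM-based semi-analytical solutions are not reproduced.

```python
import numpy as np

# Minimal Euler-Maruyama Monte Carlo sketch. All values are assumptions chosen
# only for illustration, not the test system or settings used in the paper.
H, D = 3.5, 1.0            # inertia and damping constants (p.u.)
omega_s = 2 * np.pi * 60   # synchronous speed (rad/s)
Pm, Pe_max = 0.9, 1.4      # mechanical power and peak electrical power (p.u.)
sigma, tau = 0.10, 1.0     # std and correlation time of the load fluctuation
dt, T, n_runs = 1e-3, 5.0, 100

rng = np.random.default_rng(0)
n_steps = int(T / dt)

def sample_path():
    """One Euler-Maruyama trajectory of the rotor angle delta."""
    delta = np.arcsin(Pm / Pe_max)   # start at the deterministic equilibrium
    omega, eta = 0.0, 0.0            # speed deviation and load perturbation
    traj = np.empty(n_steps + 1)
    traj[0] = delta
    for k in range(n_steps):
        # Ornstein-Uhlenbeck load noise: d eta = -(eta/tau) dt + sigma*sqrt(2/tau) dW
        eta += -(eta / tau) * dt + sigma * np.sqrt(2 / tau) * np.sqrt(dt) * rng.standard_normal()
        Pe = Pe_max * np.sin(delta) * (1.0 + eta)
        delta += omega_s * omega * dt
        omega += (Pm - Pe - D * omega) / (2 * H) * dt
        traj[k + 1] = delta
    return traj

paths = np.vstack([sample_path() for _ in range(n_runs)])

# Per-time-instant normal fit of the rotor angle: these mean/std curves are the
# kind of quantities compared between Euler-Maruyama and ADM-based results.
mean_delta, std_delta = paths.mean(axis=0), paths.std(axis=0)
print(f"largest standard deviation over the horizon: {std_delta.max():.4f} rad")
```

The same per-time-step mean and standard deviation arrays are what one would plot to reproduce the growing-uncertainty behaviour discussed above; the advantage claimed for the semi-analytical approach is purely in how each sample path is evaluated, not in how these statistics are formed.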
| 3 |
magnifyme aiding cross resolution face recognition via identity aware synthesis maneet singh shruti nagpal richa singh mayank vatsa and angshul majumdar india feb maneets shrutin rsingh mayank angshul abstract enhancing low resolution images via or image synthesis for face recognition has been well studied several image processing and machine learning paradigms have been explored for addressing the same in this research we propose synthesis via deep sparse representation algorithm for synthesizing a high resolution face image from a low resolution input image the proposed algorithm learns sparse representation for both high and low resolution gallery images along with an identity aware dictionary and a transformation function between the two representations for face identification scenarios with low resolution test data as input the high resolution test image is synthesized using the identity aware dictionary and transformation which is then used for face recognition the performance of the proposed sdsr algorithm is evaluated on four databases including one real world dataset experimental results and comparison with existing seven algorithms demonstrate the efficacy of the proposed algorithm in terms of both face identification and image quality measures low resolution face image bicubic interpolation low resolution face image high resolution gallery image bicubic interpolation figure images captured few minutes before boston marathon bombing of suspect dzhokhar tsarnaev circled the resolution of the circled image is less than which is interpolated to covariate of face recognition with widespread applications several researchers have shown that the performance of sota algorithms reduces while matching face images in order to overcome this limitation an intuitive approach is to generate a high resolution image for the given low resolution input which can be provided as input to the face recognition engine figure shows a sample real world image captured before the boston bombing since the person of interest is at a distance the face captured is thus of low resolution upon performing bicubic interpolation to obtain a high resolution image results in an image suffering from blur and poor quality with the ultimate aim of high recognition performance the generated high resolution image should have good quality while preserving the identity of the subject as elaborated in the next subsection while there exist multiple synthesis or super resolution techniques we hypothesize that utilizing a domain model for face synthesis should result in improved recognition performance especially for recognition scenarios to this effect this work presents a novel domain specific identity aware synthesis via deep sparse coding algorithm for synthesizing a high resolution face image from a given low resolution input image introduction group images are often captured from a distance in order to capture multiple people in the image in such cases the resolution of each face image is relatively smaller thereby resulting in errors during automated tagging similarly in surveillance and monitoring applications cameras are often designed to cover the maximum field of view this often limits the size of face images captured especially for individuals at a distance if we use these images to match against high resolution images profile images on social media or mugshot images captured by law enforcement then resolution gap between the two may lead to incorrect results this task of matching a low resolution input image against 
a database of high resolution images is referred to as cross resolution face recognition and it is a challenging literature review in literature different techniques have been proposed to address the problem of cross resolution face recognition these can broadly be divided into transformation based techniques and based techniques transformation based techniques address the resolution difference between images by explicitly introducing a transformation function either at the image or at the feature level techniques propose to resolution invariant features or classifiers in order to address the resolution variations in wang et al present an exhaustive review of the proposed techniques for addressing cross resolution face recognition peleg and elad propose a statistical model that uses minimum mean square error estimator on high and low resolution image pair patches for prediction lam propose a singular value decomposition based approach for super resolving low resolution face images researchers have also explored the domain of representation learning to address the problem of cross resolution face recognition yang et al propose learning dictionaries for low and high resolution image patches jointly followed by learning a mapping between the two yang et al propose a sparse classification approach in which the face recognition and hallucination constraints are solved simultaneously gu et al propose convolutional sparse coding where an image is divided into patches and filters are learned to decompose a low resolution image into features a mapping is learned to predict high resolution feature maps from the low resolution features mundunuri and biswas propose a scaling and stereo cost technique to learn a common transformation matrix for addressing the resolution variations a parallel area of research is that of where research has focused on obtaining a high resolution image from a given low resolution image with the objective of the visual quality of the input there has been significant advancement in the field of over the past several years including recent representation learning architectures being proposed for the same it is important to note that while such techniques can be utilized for addressing cross resolution face recognition however they are often not explicitly trained for face images or for providing results research contributions this research focuses on cross resolution face recognition by proposing a image synthesis algorithm capable of handling large magnification factors we propose a deep sparse representation based transfer learning approach termed as synthesis via deep sparse representation sdsr the proposed identity aware thesis algorithm can be incorporated as a module prior to any existing face recognition engine to enhance the resolution of a given low resolution input in order to ensure synthesis the proposed model is trained using a gallery database having a single image per subject the results are demonstrated with four databases and the effectiveness is evaluated in terms of both image quality measure of the synthesized images and face identification accuracies with existing face recognition models synthesis via deep sparse representation dictionary learning algorithms have an inherent property of representing a given sample as a sparse combination of it s basis functions this property is utilized in the proposed sdsr algorithm to synthesize a high resolution image from a given low resolution input the proposed model learns a transformation between the 
representations of low and high resolution images that is instead of interpolating the pixel values this work focuses on interpolating a more abstract representation further motivated by the abstraction capabilities of deep learning we propose to learn the transformation from deeper levels of representation unlike traditional dictionary learning algorithms we propose to learn the transformation at deeper levels of representation this leads to the key contribution of this work synthesis via deep sparse representation sdsr a transfer learning approach for synthesizing a high resolution image for a given low resolution input preliminaries let x be the input training data with n samples dictionary learning algorithms learn a dictionary d and sparse representations a using data x the objective function of dictionary learning is written as n x i x f min d a n where a are the sparse codes represents and is the regularizing constant that controls how much weight is given to induce sparsity in the representations in eq the first term minimizes the reconstruction error of the training samples and the second term is a regularization term on the sparse codes in literature researchers have proposed extending a single level dictionary to a dictionary to learn multiple levels of representations of the given data a deep dictionary learns k dictionaries d dk and sparse coefficients a ak for a given input n x i x dk i f i min d a n the architecture of deep dictionary is inspired from the deep learning techniques where deeper layers of feature learning enhance the level of abstraction learned by the network thereby learning meaningful latent variables in real world scenarios of surveillance or image tagging the task is to match the low resolution test images probe to the database of high resolution images known as gallery images without loss of generality we assume that the target comprises of high resolution gallery images while the source domain consists of low resolution images in the proposed model for low resolution face images xl and high resolution face images xh k level deep dictionaries are learned in both source gl gkl and target domain gh gkh it is important to note that the dictionaries are generated using the preacquired gallery images corresponding sparse representations al akl and ah akh are also k n learned for all k levels where akh h are the representations learnt corresponding to the high k n tion deep dictionary and akl l are the th representations learnt from the k level dictionary for the low resolution images the proposed algorithm learns a transformation m between akh and akl the optimization formulation for synthesis via deep sparse representation sdsr a deep dictionary is written as n x i xh gkh i h gh ah n g a m min f l i x gkl i l l k i i h f k x k x j i j i f where are regularization parameters which control the amount of sparsity in the learned representations of the j th layer while is the regularization constant for learning the transformation function gh and gl correspond to the deep dictionaries learned for the high and low resolution gallery images respectively the sdsr algorithm learns multiple levels of dictionaries and corresponding representations for low and high resolution face images along with a transformation between the features learned at the deepest layer training sdsr algorithm without loss of generality training of the proposed sdsr algorithm is explained with k shown in figure for l i l i x i l l i h i l i l f f since the number of variables in eq is large 
even more for deeper dictionaries directly solving the optimization problem may provide incorrect estimates and lead to overfitting therefore greedy layer by layer training is applied it is important to note that since there is a regularizer on the coefficients of the first and the second layer the dictionaries and can not be collapsed into one dictionary in order to learn the estimates eq is split into learning the first level representation second level representation and the transformation from eq the optimization function for two level deep dictionary is as follows n x i x i f i i min n assuming an intermediate variable i i such that n the above equation can be modeled as a optimization of the following two equations n x i min x i f i a n n x i i min n h i h sdsr algorithm l a two level deep dictionary eq can be written as n x i i min xh i h f gh gl n a a m f i a deep dictionary of two levels eq requires two steps for learning eq upon extending the formulation to k level deep dictionary it would require exactly k steps for optimization the proposed sdsr algorithm eq builds upon the above and utilizes k steps based greedy learning for a k level deep dictionary k steps are for learning representations using the deep dictionary architecture and the k step is for learning the transformation between the final representations therefore eq is solved using an independent three step approach i learn first level source low resolution and target high resolution domain dictionaries ii learn second level low and high resolution image dictionaries and iii learn a transformation between the final representations using the concept in eq in the first step two separate k dictionaries are learned from the given input data for the low resolution and high resolution face images independently given the training data consisting of low xl and high xh resolution face learn sparse representations learn sparse representations high resolution gallery images level dictionary level sparse representations level dictionary level sparse representations a learn sparse representations learn sparse representations low resolution gallery images level dictionary level dictionary level sparse representations b xltest learn transformation m using transformation m level sparse representations face recognition engine test h test h test l test l xhtest figure synthesis via deep sparse representation algorithm for deep dictionary a refers to the training of the model while b illustrates the high resolution synthesis of a low resolution input images the following minimization is applied for the two domains respectively n i min xl i i l l n gl a l n min i x i h n h i h n here and l n ah refer to the sparse codes learned for the low and high resolution images respectively each of the above two equations can be optimized independently using an alternating minimization dictionary learning technique over the dictionary and representation after this step dictionaries and representations are obtained for the two varying resolution data in the second step a deep dictionary is created by learning the second level dictionaries using the representations obtained from the first level and that is two separate dictionaries one for low resolution images and one for high resolution images are learned using the representations obtained at the first level as input features the equations for this step can be written as follows n i min i i l gl l n gl al n i min i h gh n gh ah i h n here l is the final tation obtained for the low resolution images 
and n h refers to the representation obtained for the high resolution images similar to the previous step the equations can be solved independently using alternating minimization over the dictionary and representations after this step and are obtained in order to synthesize from one resolution to another the third step of the algorithm involves learning a transformation between the deep representations of the two resolutions and the following minimization is solved to obtain a transformation min f m the above equation is a least square problem having a closed form solution after training the dictionaries and the transformation function m are obtained which are then used at test time testing synthesizing high resolution face image from low resolution image during testing a low resolution test image xtest l is input to the algorithm using the trained gallery based dictionaries and first and second level representations test test are obtained for the given image l l xtest test test test l l l l the transformation function m learned in eq is then used to obtain the second level high resolution representation test h test test h l table summarizing the characteristics of the training and testing partitions of the databases used in experiments dataset cmu real world scenarios scface training subjects training images testing subjects testing images gallery resolution probe resolutions probe resolution original image bicubic interp dong et al kim et al gu et al dong et al peleg et al yang et al proposed sdsr cmu multipie scface table identification accuracies obtained using verilook for cross resolution face recognition the target resolution is the algorithms which do not support the required magnification factor are presented as using eq and eq and the second level representation for the given image in the target domain a synthesized output of the given image is obtained first test is calcuh lated with the help of and then xtest h is obtained using which is the synthesized image in the target domain test test test xtest h gh h h it is important to note that the synthesized high resolution image is a sparse combination of the basis functions of the learned high resolution dictionary in order to obtain a good quality high resolution synthesis the dictionary is trained with the high resolution database this ensures that the basis functions of the trained dictionaries span the latent space of the images as will be demonstrated via experiments as well a key highlight of this algorithm is to learn good quality representative dictionaries with a single sample per subject as well the high resolution synthesized output image xtest h can then be used by any face identification engine for recognition databases and experimental protocol the effectiveness of the proposed sdsr algorithm is demonstrated by evaluating the face recognition performance with original and synthesized images two face recognition systems cots verilook and luxand are used on four different face databases for verilook the face quality and confidence thresholds are set to minimum in order to reduce enrollment errors the performance of the proposed algorithm is compared with six recently proposed and synthesis techniques by kim et al kernel ridge regression peleg et al sparse representation based statistical prediction model gu et al convolutional sparse coding yang et al dictionary learning dong et al deep convolutional networks and dong et al deep convolutional networks along with one of the most popular technique bicubic interpolation 
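The three training steps and the synthesis step described above can be summarized in a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it substitutes scikit-learn's DictionaryLearning for their alternating-minimization solver, uses random toy data in place of the gallery, picks arbitrary dictionary sizes and sparsity weights, and stores samples as rows (the paper's equations use column vectors).

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Toy stand-in for the gallery: one HR face vector per subject (16x16 -> 256-dim)
# and its LR counterpart (8x8 -> 64-dim) from 2x2 block averaging. Real use would
# vectorize the actual HR gallery images and their downsampled versions.
n_subjects, hr_side = 100, 16
X_h = rng.random((n_subjects, hr_side * hr_side))
X_l = X_h.reshape(n_subjects, hr_side // 2, 2, hr_side // 2, 2).mean(axis=(2, 4)).reshape(n_subjects, -1)

def fit_level(X, k):
    """Learn one dictionary level; return the fitted model and the sparse codes."""
    dl = DictionaryLearning(n_components=k, alpha=0.5, max_iter=15,
                            transform_algorithm='lasso_lars', random_state=0)
    return dl, dl.fit_transform(X)

# Step 1: first-level dictionaries for the LR and HR gallery images.
dl1_l, A1_l = fit_level(X_l, 48)
dl1_h, A1_h = fit_level(X_h, 48)

# Step 2: second-level dictionaries learned on the first-level representations.
dl2_l, A2_l = fit_level(A1_l, 32)
dl2_h, A2_h = fit_level(A1_h, 32)

# Step 3: closed-form least-squares map between the deepest LR and HR codes.
M, *_ = np.linalg.lstsq(A2_l, A2_h, rcond=None)

# Test time: encode an LR probe, map its deep code to the HR domain, decode.
x_l_test = X_l[:1] + 0.01 * rng.standard_normal((1, X_l.shape[1]))
a1_l = dl1_l.transform(x_l_test)
a2_l = dl2_l.transform(a1_l)
a2_h = a2_l @ M
a1_h = a2_h @ dl2_h.components_
x_h_synth = a1_h @ dl1_h.components_   # synthesized HR vector, handed to the matcher
print(x_h_synth.shape)                 # (1, 256)
```

Because the synthesized vector is a sparse combination of the high-resolution dictionary atoms, the quality of the output in this sketch depends entirely on how representative the (here random) gallery data are; with real gallery faces the same pipeline yields a face-like reconstruction rather than noise.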
the results of the existing algorithms are computed by using the models provided by the authors at the links provided in the footnotes it is to be noted that not all the algorithms support all the levels of magnification for instance the algorithm proposed by kim et al supports up to levels of magnification whereas yang et s algorithm supports up to levels of magnification face databases table summarizes the statistics of the databases in terms of training and testing partitions along with the resolutions details of the databases are provided below cmu dataset images pertaining https http http http http http to subjects are selected with frontal pose uniform illumination and neutral expression subjects are used for training while the remaining are in the test set dataset consists of face images of subjects all subjects have a single normal image and the dataset contains images of different covariates such as lighting expression and distance for this research normal images are used as the high resolution gallery database while face images under the distance covariate are downsampled and used as probe images scface dataset it consists of subjects each having one high resolution frontal face image and multiple low resolution images captured from three distances using surveillance cameras real world scenarios dataset contains images of seven subjects associated with the london bombing boston bombing and mumbai attacks each subject has one high resolution gallery image and multiple low resolution test images the test images are captured from surveillance cameras and are collected from multiple sources from the internet since the number of subjects are just seven in order to mimic a real world scenario the gallery size is increased to create an extended gallery of subjects images from the nd human identification and meds datasets are used for the same protocol for all the datasets a real world matching protocol is followed for each subject multiple low resolution images are used as probe images which are matched against the database of high resolution gallery images only a single high resolution image per subject is used as gallery the proposed and comparative algorithms are used to synthesize or a high resolution image from a given low resolution input the magnification factor varies from for probes of to for probes of to match it against the gallery database of size for all the databases except the scface test images are of sizes varying from to for the scface database predefined protocol is followed and probe resolutions are and face detection is performed using face if provided or using viola jones face detector and synthetic downsampling is performed to obtain lower resolutions all the experiments are performed with five times random to ensure consistency implementation details the sdsr algorithm is trained using the gallery database for each dataset the regularization constant for sparsity is kept at different dictionaries have different dimensions based on the input data for instance the dictionaries created for scface dataset contain and atoms in the first and second dictionary respectively the source code input bicubic proposed input bicubic proposed figure sample images from scface dataset incorrectly synthesized by the sdsr algorithm for input of the algorithm will be made publicly available in order to ensure reproducibility of the proposed approach results and analysis the proposed algorithm is evaluated with three sets of experiments i face recognition performance with resolution 
variations ii image quality measure and iii face identification analysis with different dictionary levels the resolution of the gallery is set to for the first experiment the probe resolution varies from to while it is fixed to for the next two experiments face recognition across resolutions for all datasets and resolutions results are tabulated in tables to the key observations pertaining to these set of experiments are presented below and probe resolutions except bicubic interpolation none of the existing super resolution or synthesis algorithms used in this comparison support a magnification factor of for or for therefore the results on these two resolutions are compared with original resolution when the probe is used as input to cots as it is without any resolution enhancement and bicubic interpolation only as shown in the third and fourth columns of the two tables on the cmu and databases matching with original and bicubic interpolated images results in an accuracy of whereas the images synthesized using the proposed algorithm provide accuracy of and respectively and probe resolutions as shown in table on cmu and databases with test resolution of and the synthesized images obtained using the proposed sdsr algorithm yield a accuracy of other approaches yield a accuracy of less than except bicubic interpolation on size which provides accuracy of as shown in table similar performance trends are observed using on the two databases for scface the accuracy with sdsr is significantly higher than the existing approaches however due to the challenging nature of the database both commercial matchers provide low accuracies fig presents sample images from the scface dataset incorrectly synthesized via the proposed sdsr algorithm varying acquisition devices of the training and testing partitions along with the covariates of pose and illumination creates the problem further challenging probe resolution original image bicubic interp dong et al kim et al gu et al dong et al peleg et al yang et al proposed sdsr cmu scface table identification accuracies obtained using luxand for cross resolution face recognition the target resolution is the algorithms which do not support the required magnification factor are presented as caspeal cmu real world scface a b c d e f g figure probe images of are to a corresponds to the original probe b f correspond to different techniques bicubic interpolation kim et al gu et al dong et al dong et al and the proposed sdsr algorithm probe resolution using the proposed algorithm achieves improved performance than other techniques except on the cmu dataset where it does not perform as well on all other databases the proposed algorithm yields the best results upon analyzing both the tables it is clear that the proposed algorithm is robust to different recognition systems and performs well without any bias for a specific kind of recognition algorithm another observation is that with images superresolved using bicubic interpolation yield best results on the first two databases however it should be noted that these results are only observed for a magnification factor of and for images which were synthetically in real world surveillance datasets such as scface the proposed approach performs best with both commercial systems real world scenarios dataset table summarizes the results of on real world scenarios dataset since the gallery contains images from subjects we marize the results in terms of the identification performance with top retrieved matches it is interesting to 
observe that for all test resolutions the proposed algorithm significantly outperforms existing approaches sdsr achieves a identification accuracy on probe resolution of and an accuracy of for test resolution cross dataset experiments the sdsr algorithm was trained on the cmu dataset and tested on the scface dataset for a probe resolution of a identification accuracy of was obtained using whereas a identification accuracy of and was obtained respectively the results showcase that the proposed model is still able to achieve better recognition performance as compared to other techniques however the drop in accuracy strengthens our hypothesis that using an model for performing synthesis is more beneficial for achieving higher classification performance quality analysis fig shows examples of images from multiple databases generated using the proposed and existing algorithms in this figure images of are synthesized from low resolution images of it can be observed that the output images obtained using existing algorithms columns b f have artifacts in terms of blockiness blurriness however the quality of the images obtained using the proposed algorithm column g are significantly better than the other algorithms to compare the visual quality of the outputs a no reference image quality measure brisque is utilized image spatial quality evaluator brisque computes the distortion in the image by using the statistics of locally normalized luminance coefficients it is calculated in the spatial domain and is used to estimate the losses of naturalness in an image lower the value less table real world scenarios recognition accuracy obtained in top ranks against a gallery of subjects using verilook having resolution of probe resolution original image bicubic interpolation dong et al kim et al gu et al dong et al peleg et al yang et al proposed sdsr table average no reference quality measure brisque for probe resolution of synthesized to obtained over five folds a lower value for brisque corresponds to lesser distortions in the image database bicubic interp dong et al kim et al gu et al dong et al proposed sdsr cmu scface real world table accuracies for varying levels of sdsr algorithm with probe and gallery database cots cmu verilook luxand verilook luxand verilook luxand scface dictionary levels distorted is an image from table it can be seen that images obtained using the proposed sdsr algorithm have a better lower brisque score as compared to images generated with existing algorithms a difference of at least points is observed in the brisque scores effect of dictionary levels as explained in the algorithm section synthesis can be performed at different levels of deep dictionary with varying values of this experiment is performed to analyze the effect of different dictionary levels on identification performance the proposed algorithm is used to synthesize high resolution images magnification factor of from input images of size with varying dictionary levels k first level dictionary k is equivalent to shallow dictionary learning whereas two and three levels correspond to synthesis with deep dictionary learning table reports the identification accuracies obtained with the two commercial matchers for four databases the results show that the proposed approach with k generally yields the best results in some cases the proposed approach with k yields better results generally abstraction capability of deeper layers and overfitting are two effects in deep learning based approaches in table we observe the between 
these two most of the datasets are moderately sized therefore we observe good results in the second layer in the third layer overfitting offsets the abstraction hence we see none to marginal changes further computational complexity with deep dictionary features is higher and the improvements in accuracy are not consistent across databases on the other hand paired on the results obtained by the shallow dictionary and deep dictionary demonstrate statistical significance even with a confidence level of for verilook specifically for a single image synthesis with dictionary requires ms requires ms and requires conclusion the key contribution of this research is a recognitionoriented module based on dictionary learning algorithm for synthesizing a high resolution face image from low resolution input the proposed sdsr algorithm learns the representations of low and high resolution images in a hierarchical manner along with a transformation between the representations of the two the results are demonstrated on four databases with test image resolutions ranging from to matching these requires generating synthesized high resolution images with a magnification factor of to results computed in terms of both image quality measure and face recognition performance illustrate that the proposed algorithm consistently yields good recognition results computationally the proposed algorithm requires less than millisecond for generating a synthesized high resolution image which further showcases the efficacy and usability of the algorithm for low resolution face recognition applications references luxand https verilook http baker and kanade hallucinating faces in ieee international conference on automatic face and gesture recognition fg pages bhatt singh vatsa and ratha improving face matching using cotransfer learning ieee transactions on image processing december dahl norouzi and shlens pixel recursive super resolution in ieee international conference on computer vision dong loy he and tang image using deep convolutional networks ieee transactions on pattern analysis and machine intelligence dong loy and tang accelerating the superresolution convolutional neural network in european conference on computer vision pages springer flynn bowyer and phillips assessment of time dependency in face recognition an initial study in international conference on biometric person authentication pages founds orlans whiddon and watson nist special database encounter dataset ii medsii national institute of standards and technology tech rep gao cao shan chen zhou zhang and zhao the chinese face database and baseline evaluations ieee transactions on systems man and cybernetics part a systems and humans january grgic delac and grgic scface surveillance cameras face database multimedia tools application february gross matthews cohn kanade and baker image vision computing may gu zuo xie meng feng and zhang convolutional sparse coding for image in ieee international conference on computer vision december jian and lam simultaneous hallucination and recognition of faces based on singular value decomposition ieee transactions on circuits and systems for video technology november kim and kwon using sparse regression and natural image prior ieee transactions on pattern analysis and machine intelligence june ledig theis huszar caballero cunningham acosta aitken tejani totz wang and shi single image using a generative adversarial network in ieee conference on computer vision and pattern recognition lee a battle raina and ng efficient sparse 
coding algorithms in advances in neural information processing systems pages mittal moorthy and bovik image quality assessment in the spatial domain ieee transactions on image processing mudunuri and biswas low resolution face recognition across variations in pose and illumination ieee transactions on pattern analysis and machine intelligence ngiam chen bhaskar koh and ng sparse filtering in advances in neural information processing systems pages peleg and elad a statistical prediction model based on sparse representations for single image ieee transactions on image processing june polatkan zhou carin blei and daubechies a bayesian nonparametric approach to image superresolution ieee transactions on pattern analysis and machine intelligence rubinstein zibulevsky and elad double sparsity learning sparse dictionaries for sparse signal approximation ieee transactions on signal processing march tariyal majumdar singh and vatsa greedy deep dictionary learning corr thiagarajan ramamurthy and spanias multilevel dictionary learning for sparse representation of images in digital signal processing and signal processing education meeting pages tong li liu and gao image using dense skip connections in ieee international conference on computer vision viola and jones robust face detection international journal computer vision wang tao gao li and li a comprehensive survey to face hallucination international journal of computer vision wang zhang liang and pan dictionary learning with applications to image and synthesis in ieee conference on computer vision and pattern recognition pages wang chang yang liu and huang studying very low resolution recognition using deep networks in the ieee conference on computer vision and pattern recognition june wang miao jonathan wu wan and tang face recognition a review the visual computer yang wright huang and ma image superresolution via sparse representation ieee transactions on image processing november yang wei yeh and wang recognition at a long distance very low resolution face recognition and hallucination in international conference on biometrics pages may
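A note on the evaluation protocol used above: probes are obtained by synthetically downsampling detected faces, and bicubic interpolation back to the gallery resolution serves as the simplest comparison baseline. A hypothetical sketch of that preprocessing step, with assumed image sizes (the actual gallery and probe resolutions vary per database and are not taken from the protocol), could look as follows.

```python
import numpy as np
from PIL import Image

# Assumed sizes for illustration only; the paper evaluates several probe
# resolutions and magnification factors, none of which are reproduced here.
gallery_size, probe_size = 96, 24

# Stand-in for a detected, cropped HR face; real use would load a gallery image.
hr = Image.fromarray(
    (np.random.default_rng(0).random((gallery_size, gallery_size)) * 255).astype(np.uint8))

# Synthetic downsampling to create the low-resolution probe.
probe = hr.resize((probe_size, probe_size), Image.BICUBIC)

# Bicubic-interpolation baseline: upscale the probe back to the gallery
# resolution before handing it to the face-recognition engine.
baseline = probe.resize((gallery_size, gallery_size), Image.BICUBIC)
print(baseline.size)   # (96, 96); magnification factor 4 in each dimension
```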
| 1 |
ttp tool for tumor progression johannes ivana krishnendu martin mar ist austria institute of science and technology austria klosterneuburg austria program for evolutionary dynamics harvard university cambridge usa department of mathematics harvard university cambridge usa department of organismic and evolutionary biology harvard university cambridge usa abstract in this work we present a flexible tool for tumor progression which simulates the evolutionary dynamics of cancer tumor progression implements a branching process where the key parameters are the fitness landscape the mutation rate and the average time of cell division the fitness of a cancer cell depends on the mutations it has accumulated the input to our tool could be any fitness landscape mutation rate and cell division time and the tool produces the growth dynamics and all relevant statistics introduction cancer is a genetic disease which is driven by the somatic evolution of cells where driver mutations for cancer increase the reproductive rate of cells through different mechanisms evading growth suppressors sustaining proliferative signaling or resisting cell death tumors are initiated by some genetic event which increases the reproductive rate of previously normal cells the evolution of cancer malignant tumor is a process where cells need to receive several mutations subsequently this phase of tumor progression is characterized by the uncontrolled growth of cells the requirement to accumulate multiple mutations over time explains the increased risk of cancer with age there are several mathematical models to explain tumor progression and the age incidence of cancer the models have also provided quantitative insights in the evolution of resistance to cancer therapy the models for tumor progression are branching processes which represent an exponentially growing heterogeneous population of cells where the key parameters for the process are i the fitness landscape of the cells which determine the reproductive rate ii the mutation rate which determines the accumulation of driver mutations and iii the average cell division time or the generation time for new cells the fitness landscapes allow the analysis of the effects of interdependent driver mutations on the evolution of cancer in this work we present a very flexible tool namely ttp tool for tumor progression to study the dynamics of tumor progression the input to our tool could be any fitness landscape mutation rate and cell division time and the tool generates the growth dynamics and all relevant statistics such as the expected tumor detection time or the expected appearance time of surviving mutants etc our stochastic computer simulation is an efficient simulation of a multitype branching process under all possible fitness landscapes driver mutation rates and cell division times our tool provides a quantitative framework to study the dynamics of tumor progression in different stages of tumor growth currently the data to understand the effects of complex fitness landscapes can only be obtained from patients or animals suffering the disease with our tool playing with the parameters once the data is reproduced the computer simulations can provide many simulation examples that would aid to understand these complex effects moreover once the correct mathematical models for specific types of cancer are identified where the simulations match the data verification tools for probabilistic systems can be used to further analyze and understand the tumor progression process such an 
approach has been followed in for the verification of biological models in this direction results of specific fitness landscapes of our tool have already been used in a biological application paper while we present our tool for the process which provides a good approximation of the process results of our tool for the special case of a uniform fitness landscape in the process have also been shown to have excellent agreement with the data for the time to treatment failure for colorectal cancer model tumor progression is modeled as a branching process galtonwatson process at each time step a cell can either divide or die the phenotype i of a cancerous cell determines its division probability bi and is encoded as a bit string of length four the death probability di follows from bi as di bi if a cell divides one of the two daughter cells can receive an additional mutation a bit flips from wildtype to the mutated type with probability u in one of the wildtype positions cells of phenotype can receive an additional mutation only at positions two and four cells of phenotype can not receive any additional mutations the branching process is initiated by a single cell of phenotype i resident cell the resident cells are wildtype at all four positions and have a strictly positive growth rate fitness landscapes our tool provides two predefined fitness landscapes for driver mutations in tumor progression multiplicative fitness landscape mfl and path fitness landscape pfl additionally the user can also define its own general fitness landscape gfl a fitness landscape defines the birth probability bi for all possible phenotypes i following the convention of the standard modeling approaches we let be the birth probability of the resident cells cells of phenotype the growth coefficient sj indicates the selective advantage provided by an additional mutation at position j in the phenotype multiplicative fitness landscape in the mfl a mutation at position j of the phenotype i of a cell results in a multiplication of its birth probability by specifically the birth probability bi of a cell with phenotype i is given by bi y sbj where sbj if the position of i is otherwise sbj sj hence each additional mutation can be weighted differently and provides a predefined effect or on the birth probability of a cell additional mutations can also be costly or neutral which can be modeled by a negative sj or sj if the fitness landscape reduces to the model studied by bozic et al which we call emfl equal multiplicative fitness landscape and is also predefined in our tool path fitness landscape the pfl defines a certain path on which additional mutations need to occur to increase the birth probability of a cell the predefined path can be and again the growth coefficients sj determine the multiplicative effect of the new mutation on the birth probability see appendix for more details mutations not on this path are deleterious for the growth rate of a cell and its birth probability is set to v the parameter v v specifies the disadvantage for cells of all phenotypes which do not belong to the given path general fitness landscapes our tool allows to input any fitness landscape as follows for bi for i our tool can take as input the value of bi in this way any fitness landscape can be a parameter to the tool density limitation in some situations a tumor needs to overcome current geometric or metabolic constraints when the tumor needs to develop blood vessels to provide enough oxygen and nutrients for further growth such growth 
limitations are modeled by a density limit carrying capacity for various phenotypes hence the cells of a phenotype i grow first exponentially but eventually reach a steady state around a given carrying capacity ki only cells with another phenotype additional mutation can overcome the density limit logistic growth is modeled with variable growth coefficients sej sj xi where xi is the current number of cells of phenotype i in the tumor in this model initially sej sj xi ki however if xi is on the order of ki sej becomes approximately zero details are given in the appendix tool implementation experimental results our tool provides an efficient implementation of a very general tumor progression model essentially the tool implements the above defined branching processes to simulate the dynamics of tumor growth and to obtain statistics about the expected tumor detection time and the appearance of additional driver mutations during different stages of disease progression ttp can be downloaded from here http for an efficient processing of the branching process the stochastic simulation samples from a multinomial distribution for each phenotype at each time step the sample returns the number of cells which divided with and without mutation and the number of cells which died in the current generation see the appendix for details from the samples for each phenotype the program calculates the phenotype distribution in the next generation hence the program needs to store only the number of cells of each phenotype during the simulation this efficient implementation of the branching process allows the tool to simulate many patients within a second and to obtain very good statistical results in a reasonable time frame a b number of cells number of cells tumor detection size cells time years time years c d probability density number of cells emfl time years mfl path time years fig experimental results illustrating the variability of tumor progression in panels a and b we show examples for two particular simulation runs where the cells grow according to the emfl and resident cells blue are constrained by a carrying capacity of in panel c the cells grow according to the pfl in panel d we show statistical results for the probability density of tumor detection when cells grow according to different fitness landscapes parameter values growth coefficients and v mutation rate u cell division time t days tumor detection size cells modes the tool can run in the following two modes individual or statistics in the individual mode the tool produces the growth dynamics of one tumor in a patient see panels a b and c in fig furthermore both the growth dynamics and the phenotype distribution of the tumor are depicted graphically in the statistics mode the tool produces the probability distribution for the detection time of the tumor see panel d in fig both graphically and quantitatively additionally the tool calculates for all phenotypes the appearance times of the first surviving lineage the existence probability and the average number of cells at detection time features ttp provides an intuitive graphical user interface to enter the parameters of the model and shows plots of the dynamics during tumor progression the phenotype distribution or the probability density of tumor detection these plots can also be saved as files in various image formats furthermore the tool can create data files values of the tumor growth history and the probability distribution of tumor detection for any set of input parameters details on the 
format are given in the appendix input parameters in both modes the tool takes the following input parameters i growth coefficients and and v in the case of pfl ii mutation rate u iii cell generation time t iv fitness landscape mfl pfl emfl or gfl with the birth probability for each phenotype and optional v density limits for some phenotypes in the individual mode additionally the user needs to provide the number of generations which have to be simulated in the statistics mode the additional parameters are the tumor detection size and the number of patients tumors which survive the initial stochastic fluctuations which have to be simulated experimental results in panels a b and c of fig we show examples of the growth dynamics during tumor progression although we used exactly the same parameters in panels a and b we observe that the time from tumor initiation until detection can be very different in panel d we show the probability density of tumor detection under various fitness landscapes further experimental results are given in the appendix case studies several results of these models have shown excellent agreement with different aspects of data in results for the expected tumor size at detection time using a emfl fit the reported polyp sizes of the patients very well similarly using a branching process and a uniform fitness landscape results for the expected time to the relapse of a tumor after start of treatment agree thoroughly with the observed times in patients future work in some ongoing work we also investigate mathematical models for tumor dynamics occurring during cancer treatment modeled by a continuoustime branching process thus an interesting extension of our tool would be to model treatment as well another interesting direction is to model the seeding of metastasis during tumor progression and hence simulate a full patient rather than the primary tumor alone once faithful models of the evolution of cancer have been identified verification tools such as prism and theoretical results such as might contribute to the understanding of these processes acknowledgments this work is supported by the erc start grant graph games the fwf nfn grant no rise the fwf grant no p a microsoft faculty fellow award the foundational questions in evolutionary biology initiative of the john templeton foundation and the joint program in mathematical biology nih grant references vogelstein kinzler cancer genes and the pathways they control nature medicine hanahan weinberg hallmarks of cancer the next generation cell jones chen parmigiani diehl beerenwinkel antal traulsen nowak siegel velculescu kinzler vogelstein willis markowitz comparative lesion sequencing provides insights into tumor evolution pnas nowak evolutionary dynamics exploring the equations of life the belknap press of harvard university press cambridge ma komarova sengupta nowak networks of cancer initiation tumor suppressor genes and chromosomal instability journal of theoretical biology iwasa michor nowak stochastic tunnels in evolutionary dynamics genetics nowak michor komarova iwasa evolutionary dynamics of tumor suppressor gene inactivation pnas diaz williams wu kinde hecht berlin allen bozic reiter nowak kinzler oliner vogelstein the molecular evolution of acquired resistance to targeted egfr blockade in colorectal cancers nature bozic antal ohtsuki carter kim chen karchin kinzler vogelstein nowak accumulation of driver and passenger mutations during tumor progression pnas sadot fisher barak admanit stern hubbard harel toward 
verified biological models computational biology and bioinformatics transactions on reiter bozic allen chatterjee nowak the effect of one additional driver mutation on tumor progression evolutionary applications haccou jagers vatutin branching processes variation growth and extinction of populations cambridge university press kerbel tumor angiogenesis past present and the near future carcinogenesis march hinton kwiatkowska norman parker prism a tool for automatic verification of probabilistic systems in tacas etessami stewart yannakakis polynomial time algorithms for multitype branching processesand stochastic grammars in stoc a appendix details of the tool ttp is available for download at http the tool is implemented in java and runs on all operating systems which run a java virtual machine jvm of version or above all the necessary libraries are included in the tool features our tool supports various features in two running modes in the individual mode ttp simulates the tumor growth dynamics for a given number of generations plots of the growth dynamics over time and the current phenotype distribution are produced simultaneously both plots can be saved in a or the full growth history for all cell types can also be stored in a format is described in section in the statistics mode ttp simulates the given number of patients with the same parameters and simultaneously shows the probability density of tumor detection for a given detection size cells correspond to a tumor volume of approximately the average tumor detection time and the average fraction of resident cells at detection are also shown during the simulations after all patients have been simulated the existence probability at detection the average number of cells and the average appearance year of the first surviving cell for all phenotypes are calculated and shown in a new window in addition the tool shows the number of detected and died tumors per year in a separate window all these data is stored in a format is described in section installation and implementation details ttp is written in java and makes use of several other libraries the tool requires the java runtime environment jre of version or above to start ttp on or on the command line type java make sure that you have the permission to execute on mac os invoking the tool from the command line can overcome the security restrictions the tool is composed of the following components the model implementation the statistics thread the graphical user interface and the plot generator model implementation the core component of the tool is the efficient implementation of the branching process following bozic et al the number of cells xi of phenotype i in the next generation t is calculated by sampling from the multinomial distribution prob where xi t p k xi t bi u dyi bi u mik and xi t xi t x k mki the number of cells which give birth to an identical daughter cell is denoted by the number of cells which die is denoted by the number of cells which divide with an additional mutation is given by and the number of cells mutated from phenotype k to i is given by mki in general one can define a mutation matrix to encode the probabilities mki that a cell of phenotype k mutates to a cell of phenotype i in our case this matrix is defined by the sequential accumulation of mutations a cell of some phenotype can receive an additional mutation only on the positions in its encoding which are wildtype only bit flips from to are allowed mutations on all allowed positions are equally likely back 
mutations are not considered fitness landscapes our tool supports four fitness landscapes for additional driver and passenger mutations i mfl ii emfl iii pfl and iv gfl in principal driver mutations increase the birth rate of a cell whereas passenger mutations have no effect on the cell s birth rate in the tables and we present the complete definition of the mfl and the pfl respectively the definition of emfl and gfl have been given in section table multiplicative fitness landscape additional mutations phenotype birth probability s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s s density limit our tool allows a separate carrying capacity ki for each phenotype i when the gfl is used in the beginning of the simulation the growth coefficients si are calculated from the given bi for all phenotypes i since the density limiting effects are based on the values for as a technical detail for sizes xi for which the birth probability bi would fall below or for which equivalently would fall below we set bi statistics thread the statistics thread handles the simulation of many identical branching processes to obtain the statistical results these simulations run table path fitness landscape additional mutations phenotype birth probability s v v v v v v v v s s s s v v v s s s s s in a separate thread such that the gui keeps responsive for user requests after completing all the necessary simulations a with all relevant results is automatically generated and stored to the execution directory of the tool graphical user interface the graphical user interface gui component contains frames and forms required for the functionality of the tool it also handles all the user requests and distributes them to the other components within the gui the plots for the tumor progression dynamics in individual mode and for the probability density of tumor detection in statistics mode are displayed multiple screenshots of the gui are shown in section plot generator the plot generation is based on the free jfreechart library for the generation of the scalable vector graphics svg the apache xml graphics library http is used an example for a plot generated by our tool is shown in figure data files ttp produces various data files which can be used for further analysis and processing the data are given as values where each record is one line of the text files in listing we show an example of a data file generated in the statistics mode average results are given as comments which start with a hash in the individual mode the data file contains the number of cells xi for each phenotype i in all generations fig example for a generated plot of the tumor growth dynamics listing generated data file in the statistics mode used f i t n e s s l a n d s c a p e m u l t i p l i c a t i v e growth c o e f f i c i e n t s mutation r a t e generation time days s i m u l a t i o n f o r p a t i e n t s bin s i z e i s g e n e r a t i o n s died cumul died generation detected cumul det a b s have been d e t e c t e d w i t h i n a t most g e n e r a t i o n s a b s tumors went e x t i n c t average y e a r o f d e t e c t i o n mutant appearance t i m e s mutant appeared i n a v e r a g e i n g e n e r a t i o n y e a r e x i s t a n c e p r o b a b i l i t y a v e r a g e number o f c e l l s mutant appeared i n a v e r a g e i n g e n e r a t i o n y e a r e x i s t a n c e p r o b a b i l i t y a v e r a g e number o f c e l l s runs have been performed t o c r e a t e p a t i e n t s user manual 
ttp is invoked by a on or by the command java after the tool has started the gui can be used for all operations see figure for a screenshot of the gui input parameters in the control panel the tool takes all the main parameters for tumor progression the fitness landscape the mutation rate and the cell division time if one of the prespecified fitness landscapes mfl pfl or emfl is used the relevant growth coefficients have to be defined if the general fitness landscape is used a window appears after the selection of the gfl and the specific birth probabilities for all the phenotypes can be defined to add density limits for specific phenotypes a window appears after density limit has been checked for each phenotype a different density limit can be given indicates that there is no limitation on this phenotype to obtain statistical results the number of patients the number of tumors with a surviving lineage and the tumor detection size the number of cells when a tumor can be detected need to be provided modes after all the parameter values have been specified the tool can either run in the individual or the statistics mode to simulate the growth dynamics of a single tumor click on new simulation and the tool runs in the individual mode then any number of cell generations can be simulated until the tumor consists of more than cells the statistics mode can be started by clicking on obtain statistics the tool simulates the given number of tumors until they reach the detection size and calculates all relevant statistics output in the individual mode the tool generates plots for the growth dynamics and the phenotype distribution during the simulation furthermore the entire tumor growth dynamics for each phenotype can be stored as a data file and the plots can be saved as png and svg files plots are stored to the folder charts in the execution directory of the tool in the statistics mode the tool generates the plot for the probability density of tumor detection statistics about the appearance time of the mutants and the detection and extinction year are shown in separate windows see figures and for screenshots all the generated statistics are automatically saved to a data file see listing experimental results screenshots in this section we present some additional experimental results and multiple screenshots of the tool in table we compare the probability of tumor detection for the fitness landscapes emfl mfl and pfl in average our tool needs approximately to simulate a tumor with cells on a dual core processor table cumulative probability of tumor detection for different fitness landscapes results are averages over runs parameter values growth coefficients v mutation rate u detection size m generation emfl mfl pfl fig graphical user interface of ttp in the individual mode fig graphical user interface of ttp in the statistics mode fig statistical results of the average detection and extinction year fig statistical results of the average appearance year the existence probability and the number of cells at detection time
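To make the model-implementation description above concrete, the following Python sketch runs one generation of the multi-type branching process it outlines: a cell of phenotype i divides into two identical cells with probability b_i(1-u), dies with probability d_i, and divides with an additional mutation with probability b_i*u, with the per-phenotype counts drawn from a multinomial distribution. The function and variable names are illustrative, and the simple "mutate from phenotype i to i+1" rule is only a stand-in for TTP's sequential-accumulation mutation matrix, not the tool's actual code.

```python
import numpy as np

def branching_step(x, birth, death, u, rng):
    # One generation of the multi-type branching process sketched above:
    # a cell of phenotype i divides without mutation with probability
    # birth[i]*(1-u), dies with probability death[i], divides with an
    # additional mutation with probability birth[i]*u, and otherwise idles.
    x_next = x.copy()
    mutants = np.zeros_like(x)
    for i in range(len(x)):
        if x[i] == 0:
            continue
        p = [birth[i] * (1.0 - u), birth[i] * u, death[i]]
        p.append(max(0.0, 1.0 - sum(p)))          # remaining cells stay quiescent
        divide, mutate, die, _ = rng.multinomial(x[i], p)
        x_next[i] += divide - die                  # clean divisions add, deaths remove
        if mutate and i + 1 < len(x):              # toy rule: phenotype i -> i + 1
            mutants[i + 1] += mutate
    return x_next + mutants

rng = np.random.default_rng(0)
population = np.array([1000, 0, 0], dtype=np.int64)   # start from wild-type cells only
for _ in range(100):
    population = branching_step(population,
                                birth=np.array([0.25, 0.26, 0.27]),
                                death=np.array([0.25, 0.25, 0.25]),
                                u=0.01, rng=rng)
print(population)
```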
| 5 |
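The density-limit rule described in the TTP text above (a separate carrying capacity K_i per phenotype, with the birth probability clamped once it would otherwise fall below a small threshold) is given only in prose, so the snippet below should be read as an assumed, logistic-style illustration of how such a limit could be implemented rather than as TTP's actual formula; the names and the floor value are hypothetical.

```python
def density_limited_birth(b, x, k, floor=1e-6):
    # Hypothetical logistic-style throttle: the effective birth probability of a
    # phenotype shrinks as its count x approaches the carrying capacity k, and is
    # clamped at a small positive floor rather than being allowed to go negative.
    # The exact rule TTP applies is not reproduced here; this is an assumption.
    if k is None:                      # no density limit set for this phenotype
        return b
    return max(floor, b * (1.0 - x / float(k)))

print(density_limited_birth(0.27, x=9_000, k=10_000))   # throttled near capacity
print(density_limited_birth(0.27, x=9_000, k=None))     # unconstrained phenotype
```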
composition of gray isometries sierra marie lauresta and virgilio sison institute of mathematical sciences and physics university of the philippines los college laguna vpsison vpsison abstract in coding theory gray isometries are usually defined as mappings between finite frobenius rings which include the ring of integers modulo m and the finite fields in this paper we derive an isometric mapping from to from the composition of the gray isometries on and on the image under this composition of a block code of length n with homogeneous distance d is a not necessarily linear quaternary block code of length with lee distance introduction a block code of length n over a ring r is a set of over r called codewords it is said to be linear of if it is a not necessarily free submodule and is completely determined by a matrix g over codes over rings gained more attention when hammons kumar calderbank sloane and discovered in that certain very good but peculiar nonlinear codes over the binary field can be viewed as images of linear codes over the integer ring under the gray map from onto defined by and the map is an isometry or is that is the lee weight of an element of is equal to the hamming weight of its image under the hamming weight of a binary vector is the number of nonzero components in the vector carlet introduced a generalization of to the ring of integers modulo let k be a positive integer u an element of and its expansion where the image of u by the generalized gray map is the boolean function on given by we identify this boolean function with a binary word of length by simply listing its values thus the generalized gray map is seen as a nonsurjective mapping from to and its image is the code of order rm the generalized gray map is naturally extended to the when k rm is the set of boolean functions on and we obtain the usual gray map from to when k the generalized gray map which we denote by takes onto rm which is the set of boolean functions on that give all the binary words in with even hamming weight methodology we extend the usual gray isometry as a bijective mapping from onto table shows the binary image of an element of under clearly the lee weight of an element of is equal to the hamming weight of its binary image table the isometric gray map on we restrict as a mapping from rm to as follows table the map on for we apply the following homogeneous weight and extend it coordinatewisely table shows the image of an element of in rm under the generalized gray map if has the expansion then the mapping is weight preserving such that the homogeneous weight of an element of is equal to the hamming weight of its image in rm table the isometric gray map on results and discussion we take the composition table shows the quaternary image of an element of under if then table the isometric map the mapping is weight preserving such that the homogeneous weight of an element of is equal to the lee weight of its image in it is extended naturally to the let c be a linear block code of length n over with minimum homogeneous distance the image of c under is the set for proposition the set has the following properties ii iii is a not necessarily linear block code of length over the lee distance of is equal to every codeword of has even lee weight to illustrate consider the linear block code over generated by the matrix this code has codewords minimum hamming distance and minimum homogeneous distance the codewords and generated by the information words and respectively have quaternary images and whose 
superimposition is not in the code this example also shows that is not an additive homomorphism conclusion and recommedation this paper offers a simple way to define isometric mappings from to in general for and to take the code of a block code sufficient and necessary conditions for the linearity of the can be determined extension of this construction to galois rings is inevitable references hammons kumar calderbank j sloane and the linearity of kerdock preparata goethals and related codes ieee trans inform theory vol no pp january carlet codes ieee trans inform theory vol no pp july greferath and schmidt gray isometries for finite chain rings and a nonlinear ternary code ieee trans vol no pp november
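Because the construction rests on the classical Gray isometry from Z4 to F2^2 and on the fact that it carries Lee weight to Hamming weight, a short self-contained check of exactly that property may be useful; the coordinate-wise extension to vectors mirrors how a block code is mapped. Variable names are illustrative.

```python
GRAY_Z4 = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # the classical Gray map on Z4

def lee_weight(u):
    # Lee weight on Z4: distance to 0 around the cycle, i.e. min(u, 4 - u).
    u %= 4
    return min(u, 4 - u)

def gray_image(codeword):
    # Coordinate-wise extension of the Gray map to Z4 vectors.
    return tuple(bit for symbol in codeword for bit in GRAY_Z4[symbol % 4])

# The map is an isometry: the Lee weight of u equals the Hamming weight of its image.
assert all(lee_weight(u) == sum(GRAY_Z4[u]) for u in range(4))

word = (2, 1, 3, 0)
print(gray_image(word))                                          # (1, 1, 0, 1, 1, 0, 0, 0)
print(sum(lee_weight(u) for u in word), sum(gray_image(word)))   # both equal 4
```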
| 7 |
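The generalized Gray map and the homogeneous weight it preserves can be checked the same way. The sketch below uses a Carlet-style map on Z8, listing the Boolean function u1*y1 XOR u2*y2 XOR u3 over F2^2, together with the standard normalized homogeneous weight on Z8 (0 for 0, 4 for 4, and 2 otherwise); the truth-table listing order is a convention and may differ from the one adopted in the paper.

```python
from itertools import product

def generalized_gray_z8(u):
    # Carlet-style generalized Gray map on Z8, written as a truth-table listing:
    # with u = u1 + 2*u2 + 4*u3, list f(y1, y2) = u1*y1 XOR u2*y2 XOR u3 over all
    # (y1, y2) in F_2^2.  The listing order is a convention, not taken from the paper.
    u %= 8
    u1, u2, u3 = u & 1, (u >> 1) & 1, (u >> 2) & 1
    return tuple((u1 & y1) ^ (u2 & y2) ^ u3 for y1, y2 in product((0, 1), repeat=2))

def homogeneous_weight_z8(u):
    # Standard normalized homogeneous weight on Z8: 0 for 0, 4 for 4, 2 otherwise.
    u %= 8
    return 0 if u == 0 else (4 if u == 4 else 2)

# Weight preservation: homogeneous weight of u equals Hamming weight of its image.
assert all(sum(generalized_gray_z8(u)) == homogeneous_weight_z8(u) for u in range(8))
print([generalized_gray_z8(u) for u in range(8)])
```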
video enhancement with flow tianfan google research baian chen mit csail jiajun wu mit csail donglai wei harvard university nov william freeman mit csail google research frame interpolation input videos epic flow interp by epic flow flow interp by flow video denoising input noisy videos epic flow denoise by epic flow flow denoise by flow figure many video processing tasks temporal top and video denoising bottom rely on flow estimation in many cases however precise optical flow estimation is intractable and could be suboptimal for a specific task for example although epicflow predicts precise movement of objects the flow field aligns well with object boundaries small errors in estimated flow fields result in obvious artifacts in interpolated frames like the obscure fingers in with the flow proposed in this work those interpolation artifacts disappear as in similarly in video denoising our flow deviates from epicflow but leads to a cleaner output frame flow visualization is based on the color wheel shown on the corner of abstract many video processing algorithms rely on optical flow to register different frames within a sequence however a precise estimation of optical flow is often neither tractable nor optimal for a particular task in this paper we propose taskoriented flow toflow a flow representation tailored for specific video processing tasks we design a neural network with a motion estimation component and a video processing component these two parts can be jointly trained in a manner to facilitate learning of the proposed toflow we demonstrate that toflow outperforms the traditional optical flow on three different video processing tasks frame interpolation video and video we also introduce a video dataset for video processing to better evaluate the proposed algorithm this work was done when tianfan xue was a student in mit csail introduction motion estimation is a key component in video processing tasks like temporal frame interpolation video denoising and video most video processing algorithms use a approach they first estimate motion between input frames and register them based on estimated flow fields and then process the registered frames to generate the final output therefore the accuracy of flow estimation greatly affects the performance of these approaches however precise flow estimation can be challenging and slow the brightness constancy assumption which many motion estimation algorithms rely on may fail due to variations in lighting or pose and the presence of motion blur or occlusion also many motion estimation algorithms involve solving a optimization problem making it inefficient for applications for example the widely used epicflow algorithm takes about seconds for each frame of a million pixels moreover most motion estimation algorithms aim to solve for a motion field that matches the actual objects in motion however this may not be the best motion representation for video processing figure shows an example in frame interpolation even though epicflow calculates a precise motion field whose boundary is with the fingers in the image the interpolated frame based on it contains obvious artifacts due to occlusion in contrast using the flow introduced in this work the model generates better interpolation result though the estimated motion field differs from optical flow in magnitude and does not align with object boundaries similarly in video denoising although epicflow shown in matches the boundary of the girl s hair the frame denoised with epicflow is much noisier than the 
one with our flow this suggests that for specific video processing tasks there exist flow representations that do not match the actual object movement but lead to better results in this paper we propose to learn flow toflow by performing motion analysis and video processing jointly in an trainable convolutional network our network consists of three modules the first one estimates the motion fields between input frames the second one registers all input frames based on estimated motion fields and the third one generates target output from registered frames these three modules are jointly trained to minimize the loss between output frames and ground truth unlike other flow estimation networks the flow estimation module in our framework predicts a motion field tailored to a specific task frame interpolation or video denoising as it is trained together with the corresponding video processing module the proposed toflow has several advantages first it significantly outperforms optical flow algorithms on three video processing tasks second it is highly efficient taking only for an input image with a resolution of third it can be learning from unlabeled video frames to evaluate toflow we build a video dataset for video processing most existing large video datasets like are designed for vision tasks like event classification the videos are often of low resolutions with significant motion blurs making them less useful for video processing to evaluate video processing algorithms systematically we introduce a new dataset which consists of video clips or higher downloaded from we build three benchmarks from these videos for interpolation and respectively we hope this dataset will contribute to future research in video processing through its videos and diverse examples the contributions of this paper are first we propose toflow a flow representation tailored to specific https video processing tasks significantly outperforming standard optical flow second we propose a and trainable video processing framework that can handle various tasks including frame interpolation video denoising and video third we also build a video dataset for video processing related work optical flow estimation dated back to horn and schunck most optical flow algorithms have sought to minimize energy terms for image alignment and flow smoothness current methods like epicflow and dc flow further exploit image boundary and segment cues to improve the flow interpolation among sparse matches recently deep learning methods were proposed for faster inference some trained with supervision and some without in this work we used the same flow network as spynet but instead of training it to minimize the flow estimation error as spynet does we train it jointly with a video processing network to learn a flow representation that is the best for a specific task video processing we focus on three video processing tasks frame interpolation video denoising and video most existing algorithms in these areas explicitly estimate the dense motion among input frames and then reconstruct the reference frame according to image formation models for frame interpolation video and denoising we refer readers to survey articles for comprehensive literature reviews on these flourishing research topics deep learning for video enhancement inspired by the success of deep learning researchers have directly modeled video enhancement tasks as regression problems without representing motions and have designed deep networks for frame interpolation and recently with 
differentiable image sampling layers in deep learning motion information can be incorporated into networks and trained jointly with the video enhancement task such approaches have been applied to video interpolation video interpolation object novel view synthesis eye gaze manipulation and superresolution although many of these algorithms also jointly train the flow estimation with the rest parts of network there is no systematical study on the advantage of joint training in this paper we illustrate the advantage of the trained flow through toy examples and also demonstrate its superiority over general flow algorithm on various tasks we also present a general framework that can easily adapt to different video processing tasks tasks in the paper we explore three video enhancement tasks frame interpolation video and video temporal frame interpolation given a low frame rate video a temporal frame interpolation algorithm generates a high frame rate video by synthesizing additional frames between two temporally neighboring frames specifically let and be two consecutive frames in an input video the task is to estimate the missing middle frame temporal frame interpolation doubles the video frame rate and can be recursively applied for even higher frame rates video given a degraded video with artifacts from either the sensor or compression video aims to remove the noise or compression artifacts to recover the original video this is typically done by aggregating information from neighboring frames specifically let in be frames in an input video the task of video denoising is to estimate the middle frame iref given degraded frames in as input for the ease of description in the rest of paper we simply call both tasks as video denoising video sr similar to video denoising given n consecutive frames as input the task of video is to recover the middle frame in this work we first upsample all the input frames to the same resolution of the output using bicubic interpolation and our algorithm only needs to recover the component in the output image flow for video processing most video processing algorithms has two steps motion estimation and image processing for example in temporal frame interpolation most algorithms first estimate how pixels move between input frames frame and and then move pixels to the estimated location in the output frame frame similarly in video denoising algorithms first register different frames based on estimated motion fields between them and then remove noises by aggregating information from registered frames in this paper we propose to use flow toflow to integrate the two steps to learn the flow we design an trainable network with three parts figure a flow estimation module that estimates the movement of pixels between input frames an image transformation module that warps all the frames to a reference frame and a image processing module that performs video interpolation denoising or on registered frames because the flow estimation module is jointly trained with the rest of the network it learns to predict a flow field that fits to a particular task toy example before discussing the details of network structure we first start with two synthetic sequences to demonstrate why our toflow can outperform traditional optical flows the left video denoising frame interpolation input frames input frames case i with ground truth flows gt flow warped by interpolated gt flow frame case ii with flows toflow warped by interpolated frame toflow case i with ground truth flows gt flow warped by 
toflow denoised frame case ii with flows toflow warped by toflow denoised frame figure a toy example that demonstrates the effectiveness of task oriented flow over the traditional optical flow see section for details of figure shows an example of frame interpolation where a green triangle is moving to the bottom in front of a black background if we warp both the first and third frames to the second even using the ground truth flow case i left column there is an obvious doubling artifact in the warped frames due to occlusion case i middle column which is a problem in the optical flow literature the final interpolation result based on these two warp frames still contains this artifact case i right column in contrast toflow does not stick to object motion the background should be static but it has motion case ii left column with toflow however there is barely any artifact in the warped frames case ii middle column and the interpolated frame looks clean case ii right column the hallucinated background motion actually helps to reduce the doubling artifacts this shows that toflow can reduce errors and synthesize frames better than the ground truth flow similarly on the right of figure we show an example of video denoising the random small boxes in the input frames are synthetic noises if we warp the first and the third frames to the second using the ground truth flow the noisy patterns random squares remain and the denoised frame still contains some noise case i right column there are some shadows of boxes on the bottom but if we warp these two frames using toflow case ii left column those noisy patterns are also reduced or eliminated case ii middle column and the final denoised frame base on them contains almost no noise this also shows that toflow learns to reduce the noise in input frames by inpainting them with neighboring pixels which flow network input at diff scales frame spn flow net reference motion not used in interp frame t motion motion image processing network for interpolation with mask improc net output frame warped frame motion mask masked frame motion mask masked frame spn flow net input frames motion fields flow estimation warped input transformation image processing interpolated frame warped frame figure left our model using flow for video processing given an input video we first calculate the motion between frames through a flo estimation network we then warp input frames to the reference using spatial transformer networks and aggregate the warped frames to generate a output image right top the detailed structure of flow estimation network the orange network on the left right bottom the detailed structure of image processing network for interpolation the gray network on the left traditional flow can not do now we discuss the details of each module as follows the later modules of the network can transform the first and the third frames to the second frame for synthesis flow estimation module image transformation module the flow estimation module calculates the motion fields between input frames for a sequence with n frames n for interpolation and n for denoising and we select the middle frame as the reference the flow estimation module consists of n flow networks all of which have the same structure and share the same set of parameters each flow network the orange network in figure takes one frame from the sequence and the reference frame as input and predicts the motion between them we use the motion estimation framework proposed by to handle the large displacement between 
frames the network structure is shown in the top right subfigure of figure the input to the network are gaussian pyramids of both the reference frame and another frame rather than the reference at each scale a takes both frames at that scale and upsampled motion fields from previous prediction as input and calculates a more accurate motion fields we uses in a flow network three of which are shown figure the yellow networks there is a small modification for frame interpolation where the reference frame frame is not an input to the network but what it should synthesize to deal with that the motion estimation module for interpolation consists of two flow networks both taking both the first and third frames as input and predict the motion fields from the second frame to the first and the third respectively with these motion fields using the predicted motion fields in the previous step the image transformation module registers all the input frames to the reference frame we use the spatial transformer networks for registration which synthesizes a new frame after transformation using bilinear interpolation one important property of this module is that it can the gradients from the image processing module to the flow estimation module so we can learn a flow representation that adapts to different video processing tasks image processing module we use another convolutional network as the image processing module to generate the final output for each task we use a slightly different architecture please refer to our appendix for details occluded regions in warped frames as mentioned section occlusion often results in doubling artifacts in the warped frames to solve this some interpolation algorithms estimate occlusion masks and only use pixels that are not occluded in interpolation inspired by this we also design an optional mask prediction network for frame interpolation in addition to the image processing module the mask prediction network takes the two estimated motion fields as input one from frame to frame and the other from frame to frame and in the bottom right of figure it predicts two occlusion masks is the mask input input warp by epicflow warp by epicflow warp by toflow toflow interp no mask warp by toflow toflow interp use mask figure comparison between epicflow interpolation and toflow interpolation both with and without mask of the warped frame from frame and is the mask of the warped frame from frame the invalid regions in the warped frames and are masked out by multiplying them with their corresponding masks the middle frame is then calculated through another convolutional neural network with both the warped frames and and the masked warped frames and as input please refer to our appendix for details of the network structure even without the mask prediction network our flow estimation is mostly robust to occlusion as shown in the third column of figure the warped frames using toflow has little doubling artifacts therefore just from two warped frames without the learned masks the network synthesizes a decent middle frame the top image of the right most column the mask network helps to remove some tiny artifacts such as the faint ghost of the bottom thumb circled by white visible when zoomed in training to accelerate the training procedure we first some modules of the network and then all of them together details are described below the flow estimation network the flow network consists of two steps first for all tasks we the motion estimation network on the sintel dataset a realistically 
rendered video dataset with ground truth optical flow by minimizing the difference between estimated optical flow and the ground truth in the second step for video denoising and we it with noisy or blurry input frames to improve its robustness to these input for video interpolation we it with frames and from video triplets as input minimizing the difference between the estimated optical flow and the ground truth flow or this enables the flow network to calculate the motion from the unknown frame to frame given only frames and as input empirically we find that this can improve the convergence speed the mask network we also our occlusion mask estimation network for video interpolation as an optional component of video processing network before joint training two occlusion masks and are estimated together with the same network and only optical flow as input the network is trained by minimizing the loss between the output masks and occlusion masks joint training after we train all the modules jointly by minimizing the loss between recovered frame and the ground truth without any supervision on estimated flow fields for optimization we use adam with a weight decay of we run epochs with batch size for all tasks the learning rate for and superresolution is and the learning rate for interpolation is the dataset to acquire high quality videos for video processing previous works take videos by themselves resulting in video datasets that are small in size and limited in terms of content alternatively we resort to vimeo where many videos are taken with professional cameras on diverse topics in addition we only search for videos without compression so that each frame is compressed independently avoiding artificial signals introduced by video codecs as many videos are composed of multiple shots we use a simple shot detection algorithm to break each video into consistent shots and further use gist feature to remove shots with similar scene background as a result we collect a new video dataset from vimeo consisting of videos with independent shots that are different from each other in content to standardize the input we resize all frames to the fixed resolution as shown in figure frames sampled from the dataset contain diverse content for both indoor and outdoor scenes we keep consecutive frames when the average motion magnitude is between pixels the right column of figure shows the histogram of flow magnitude over the whole dataset where the flow fields are calculated using spynet we further generate three benchmarks from the dataset for the three video enhancement tasks studied in this paper vimeo interpolation benchmark we select frame triplets from video clips with the following three criteria for the interpolation task first more than pixels should have motion larger than pixels between neighboring frames this criterion removes static videos second difference between the reference and the warped frame using optical flow calculated using spynet should be at most pixels the maximum intensity level of an image is this removes frames with large intensity change which are too hard for frame interpolation third the average difference between motion fields of neighboring frames and should be less than pixel this removes motion frequency flow magnitude a sample frames frequency b flow frequency flow magnitude c image mean flow frequency figure the dataset a sampled frames from the dataset demonstrating the high quality and wide coverage of our dataset b the histogram of flow magnitude of all pixels in c the 
histogram of mean flow magnitude of all images the flow magnitude of an image is the average flow magnitude of all pixels in that image as most interpolation algorithms including ours are based on linear motion assumption vimeo benchmark we select frame septuplets from video clips for the denoising task using the first two criteria introduced for the interpolation benchmark for video denoising we consider two types of noises a gaussian noise with a standard deviation of and mixed noises including a noise in addition to the gaussian noise for video deblocking we compress the original sequences using ffmpeg with codec format and quality value q vimeo benchmark we also use the same set of septuplets for denoising to build the vimeo benchmark with factor of the resolution of input and output images are and respectively to generate the videos from input we use the matlab imresize function which first blurs the input frames using cubic filters and then downsamples videos using bicubic interpolation methods vimeo interp dvf dataset psnr ssim psnr ssim spynet epicflow dvf adaconv sepconv fixed flow fixed flow mask toflow toflow mask table quantitative comparison between different frame interpolation algorithms on vimeo interpolation test set and dvf test set frame interpolation datasets we evaluate on two datasets vimeo interpolation benchmark and the dataset used by evaluation metrics we use two quantitative measure to evaluate the performance of interpolation algorithms peak ratio psnr and structural similarity ssim index in this section we evaluate two variations of the proposed network the first one is to train each module separately we first motion estimation and then train video processing while fixing the flow module this is similar to the video processing algorithms and we refer to it as fixed flow the other one is to jointly train all modules as described in section and we refer to it as toflow both networks are trained on vimeo benchmarks we collected we evaluate these two variations on three different tasks and also compare with other image processing algorithms baselines we first compare our framework with interpolation algorithms for the motion estimation we use epicflow and spynet to handle occluded regions as mentioned in section we calculate the occlusion mask for each frame using and only use regions to interpolate the middle frame further we compare with models deep voxel flow dvf adaptive convolution adaconv and separable convolution sepconv at last we also compare with fixed flow which is another baseline interpolation algorithm epicflow adaconv sepconv fixed flow toflow ground truth figure comparison between different frame interpolation algorithms views are shown in lower right dataset dataset dataset input noisy frame fixed flow toflow ground truth toflow ground truth figure comparison between different algorithms on video denoising the differences are clearer when results table shows our quantitative results on vimeo interpolation benchmark toflow in general outperforms the others interpolation algorithms both the traditional interpolation algorithms epicflow and spynet and recent based algorithms dvf adaconv and sepconv with a significant margin moreover even our model is trained on our dataset it also dvf on dvf dataset in both psnr and ssim there is also a significant boost over fixed flow showing that the network does learn a better flow representation for interpolation during joint training figure also shows qualitative results all the algorithms epicflow and fixed 
flow generate a doubling artifacts like the hand in the first row or the head in the second row adaconv on the other sides does not have the doubling artifacts but it tends to generate blurry output by directly synthesizing interpolated frames without a motion module sepconv increases the sharpness of output frame compared with adaconv but there are still artifacts see the hat on the bottom row compared with these methods toflow correctly recovers sharper boundaries and fine details even in presence of large motion baselines we compare our framework with the with the standard deviation of gaussian noise as its additional input on two grayscale datasets and as before we also compare with the fixed flow variant of our framework on two rgb datasets and results on two rgb datasets and vimeomixed toflow beats fixed flow in both two measurements as shown in table the output of toflow also contains less noise the differences are clearer when as shown on the left side of figure this shows that toflow learns a motion field for denoising video on two grayscale datasets and toflow outperforms in ssim even we did not finetuned on dataset note that even though toflow only achieves a comparable performance with in psnr the output of toflow is much sharper than as shown in figure the words on the billboard are kept in the denoised frame by toflow the top right of figure and leaves on the tree are also clearer the bottom right of figure therefore toflow beats in ssim which better reflects human s perception than psnr setup we first train and evaluate our framework on vimeo denoising benchmark with either gaussian noise or mixture noise to compare our network with which is a monocular video denoising algorithm we transfer all videos in vimeo denoising benchmark to grayscale to create vimeobw gaussian noise only and retrain our network on it we also evaluate our framework on the dataset for video deblocking table shows that toflow outperforms figure also shows the qualitative comparison between toflow fixed flow and note that the compression artifacts around the girl s hair top and the man s nose bottom are completely removed by toflow the vertical line around the man s eye bottom due to a blocky compression is also removed by our algorithm input compressed frames ground truth toflow fixed flow figure comparison between our algorithm and on video deblocking the difference are clearer when psnr ssim psnr ssim psnr ssim psnr ssim fixed flow toflow methods table quantitative comparisons on video denoising input methods vimeo sr bayessr psnr ssim psnr ssim full clip deepsr bayessr frame bicubic deepsr bayessr frames fixed flow toflow table results of video each clip in vimeo sr contains frames and each clip in bayessr contains frames video datasets we evaluate our algorithm on two dataset vimeo benchmark and the dataset provided by bayessr the later one consists of sequences each of which has to frames baselines we compare our framework with bicubic upsampling two video sr algorithms bayessr we use the version provided by ma et al and deepsr as well as a baseline with a fixed flow estimation module both bayessr and deepsr can take various number of frames as input therefore on bayessr datset we report two numbers one is using the whole sequence the other is to only use seven frames in the middle as toflow and fixed flow only take frames as input methods psnr ssim fixed flow toflow table results on video deblocking results table shows our quantitative results our algorithm performs better than baseline 
algorithms when using frames as input and it also achieves comparison performance to bayessr when bayessr uses all frames as input while our framework only uses frames we show qualitative results in figure compared with either deepsr or fixed flow the jointly trained toflow generates sharper output notice the words on the cloth top and the tip of the knife bottom are clearer in the frame synthesized by toflow this shows the effectiveness of joint training in all the experiments we train and evaluate our network on a nvidia titan x gpu for an input clip with resolution our network takes about ms for interpolation and for denoising or the input resolution to the network is where the flow module takes ms for each estimated motion field at last figure also visualizes the motion fields learned by different tasks even using the same network structure and taking the same input frames the estimated flows for different tasks are very different the flow field for interpolation is very smooth even on the occlusion boundary while the flow field for has artificial movements along the texture edges this is indicates that the network may learn to encode different information that is useful for different tasks in the learned motion fields conclusion in this work we propose a novel video processing model that exploits motion cues traditional video bicubic deepsr fixed flow toflow ground truth figure comparison between different algorithms a is shown on the top left of each result the differences are clearer when input flow for interpolation flow for denoising flow for sr flow for deblocking figure visualization of motion fields for different tasks processing algorithms normally consist of two steps motion estimation and video processing based on estimated motion fields however a genetic motion for all tasks might be suboptimal and the accurate motion estimation would be neither necessary nor sufficient for these tasks our framework bypasses this difficulty by modeling motion signals in the loop to evaluate our algorithm we also create a new dataset for video processing extensive experiments on temporal frame interpolation video and video demonstrate that our algorithm achieves performance acknowledgements this work is supported by nsf nsf facebook shell research and toyota research institute references kothari lee natsev toderici varadarajan and vijayanarasimhan a video classification benchmark baker scharstein lewis roth j black and szeliski a database and evaluation methodology for optical flow ijcv brox bruhn papenberg and weickert high accuracy optical flow estimation based on a theory for warping in eccv butler wulff stanley and j black a naturalistic open source movie for optical flow evaluation in eccv caballero ledig aitken acosta totz wang and shi video with temporal networks and motion compensation in cvpr fischer dosovitskiy ilg golkov van der smagt cremers and brox flownet learning optical flow with convolutional networks in iccv ganin kononenko sungatullina and lempitsky deepwarp photorealistic image resynthesis for gaze manipulation in eccv ghoniem chahir and elmoataz nonlocal video denoising simplification and inpainting using discrete regularization on graphs signal horn and schunck determining optical flow artif huang wang and wang bidirectional recurrent convolutional networks for in nips jaderberg simonyan zisserman et al spatial transformer networks in nips kappeler yoo dai and katsaggelos video with convolutional neural networks ieee tci kingma and ba adam a method for stochastic 
optimization in iclr liao tao li ma and jia video via deep learning in cvpr liu and freeman a video denoising algorithm based on reliable motion estimation in eccv liu and sun a bayesian approach to adaptive video super resolution in cvpr liu and sun on bayesian adaptive video super resolution ieee tpami liu yeh tang liu and agarwala video frame synthesis using deep voxel flow in iccv ma liao tao xu jia and wu handling motion blur in in cvpr maggioni boracchi foi and egiazarian video denoising deblocking and enhancement through separable nonlocal spatiotemporal transforms ieee tip mathieu couprie and lecun deep video prediction beyond mean square error in iclr and dense estimation and segmentation of the optical flow with robust techniques ieee tip nasrollahi and moeslund a comprehensive survey mva niklaus mai and liu video frame interpolation via adaptive convolution in cvpr niklaus mai and liu video frame interpolation via adaptive separable convolution in iccv oliva and torralba modeling the shape of the scene a holistic representation of the spatial envelope ijcv ranjan and j black optical flow estimation using a spatial pyramid network in cvpr revaud weinzaepfel harchaoui and schmid epicflow interpolation of correspondences for optical flow in cvpr tao gao liao wang and jia deep video in iccv varghese and wang video denoising based on a spatiotemporal gaussian scale mixture model ieee tcsvt wang zhu kalantari a efros and ramamoorthi light field video capture using a hybrid imaging system in siggraph wedel cremers pock and bischof regularization for high accuracy optic flow in cvpr werlberger pock unger and bischof optical flow guided video interpolation and restoration in emmcvpr xu ranftl and koltun accurate optical flow via direct cost volume processing in cvpr yu harley and derpanis back to basics unsupervised learning of optical flow via brightness constancy and motion smoothness in eccv workshop yu li wang hu and chen video frame interpolation exploiting the interaction among different levels ieee tcsvt zhou tulsiani sun malik and a efros view synthesis by appearance flow in eccv zitnick kang uyttendaele winder and szeliski video view interpolation using a layered representation acm tog appendices additional qualitative results in addition to the qualitative results shown in the main text figures we show additional results on the following benchmarks vimeo interpolation benchmark figure vimeo denoising benchmark figure for grayscale videos vimeo deblocking benchmark figure and vimeo benchmark figure to avoid we randomly select testing images from test datasets and do not show as figures in the main text differences between different algorithms are more clearer when zoomed in flow estimation module we used spynet as our flow estimation module it consists of with the same network structure but each has an independent set of parameters each consists of sets of convolutional with zero padding batch normalization and relu layers the number of channels after each convolutional layer is and respectively the input motion to the first network is a zero motion field image processing module we use slight different structures in the image processing module for different tasks for temporal frame interpolation both with and without masks we build a residual network that consists of a averaging network and a residual network the averaging network simply averages the two transformed frames from frame and frame respectively the residual network also takes the two transformed frames as input but 
calculates the difference between the actual second frame and the average of two transformed frames through a convolutional network consists of three convolutional layers each of which is followed by a relu layer the kernel sizes of three layers are respectively with zero padding and the numbers of output channels are respectively the final output is the summation of the outputs of these two networks averaging network and residual network for video the image processing module uses the same convolutional structure convolutional layers and relu layers as interpolation but without the residual structure we have also tried the residual structure for but there is no significant improvement for video the image processing module consists of pairs of convolutional layers and relu layers the kernel sizes for these four layers are and respectively with zero padding and the numbers of output channels are and respectively mask network similar to our flow estimation module our mask estimation network is also a convolutional neural network pyramid as in figure each level consists of the same structure with sets of convolutional with zero padding batch normalization and relu layers but an independent set of parameters output channels are and respectively for the first level the input to mask network estimated flow at diff scales masks masks masks figure the structure of our mask network the network is a concatenation of two estimated optical flow fields channels after concatenation and the output is a concatenation of two estimated masks channel per mask from the second level the inputs to the network switch to a concatenation of two estimated optical flow fields at that resolution and masks from the previous level the resolution is twice of the previous level in this way the first level mask network estimates a rough mask and the rest refines high frequency details of the mask epicflow adaconv sepconv fixed flow toflow ground truth figure qualitative results for video interpolation samples are randomly selected from vimeo interpolation benchmark the differences between different algorithms are clear only when zoomed in input fixed flow toflow ground truth input toflow ground truth figure qualitative results for video denoising the top five rows are results on color videos and bottom rows are grayscale videos samples are randomly selected from vimeo denoising benchmark the differences between different algorithms are clear only when zoomed in input toflow ground truth figure qualitative results for video deblocking samples are randomly selected from vimeo deblocking benchmark the differences between different algorithms are clear only when zoomed in bicubic deepsr fixed flow toflow ground truth figure qualitative results for video samples are randomly selected from vimeo benchmark the differences between different algorithms are clear only when zoomed in deepsr was originally trained on images but evaluated on frames in this experiment so there are some artifacts
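As a companion to the image-transformation module described in the paper and its appendix, here is a minimal differentiable backward-warping sketch in the spirit of a spatial transformer: the frame is resampled at locations displaced by the estimated flow using bilinear interpolation, which is what lets the reconstruction loss propagate gradients back into the flow network during joint training. It relies on PyTorch's grid_sample; the function and tensor names are illustrative and this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    # Resample `frame` (N, C, H, W) at locations displaced by `flow` (N, 2, H, W),
    # where flow[:, 0] is the horizontal and flow[:, 1] the vertical displacement
    # in pixels.  Bilinear sampling keeps the whole operation differentiable, so a
    # reconstruction loss on the warped frame can push gradients into the flow
    # estimator -- the property the joint training described above relies on.
    n, _, h, w = frame.shape
    xs = torch.arange(w, dtype=frame.dtype, device=frame.device).view(1, 1, w).expand(n, h, w)
    ys = torch.arange(h, dtype=frame.dtype, device=frame.device).view(1, h, 1).expand(n, h, w)
    x_new = xs + flow[:, 0]                       # sampling locations in pixel units
    y_new = ys + flow[:, 1]
    # grid_sample expects coordinates normalized to [-1, 1], in (x, y) order
    grid = torch.stack((2.0 * x_new / max(w - 1, 1) - 1.0,
                        2.0 * y_new / max(h - 1, 1) - 1.0), dim=-1)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

frame = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64, requires_grad=True)
loss = (backward_warp(frame, flow) - frame).abs().mean()
loss.backward()                                   # gradients reach the flow field
print(loss.item(), flow.grad.shape)
```

Border padding is just one reasonable choice for samples that fall outside the frame; zero padding would also work.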
| 1 |
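The quantitative comparisons above report PSNR and SSIM. As a reference point, PSNR is simply 10*log10(peak^2 / MSE) between a restored frame and its ground truth; the small script below computes it for a synthetically degraded image (illustrative names, not the paper's evaluation code).

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio, 10 * log10(peak^2 / MSE), the figure of merit
    # reported (together with SSIM) in the interpolation/denoising/SR tables.
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(clean + rng.normal(0.0, 15.0, clean.shape), 0, 255)
print(round(psnr(clean, noisy), 2))   # roughly 24-25 dB for sigma = 15 Gaussian noise
```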
world of computer science and information technology journal wcsit issn vol no intelligent emergency message broadcasting in vanet using pso ghassan samara tareq alhmiedat department of computer science zarqa university zarqa jordan department of information technology tabuk university tabuk saudi arabia the new type of mobile ad hoc network which is called vehicular ad hoc networks vanet created a fertile environment for research in this research a protocol particle swarm optimization contention based broadcast pcbb is proposed for fast and effective dissemination of emergency messages within a geographical area to distribute the emergency message and achieve the safety system this research will help the vanet system to achieve its safety goals in intelligent and efficient way pso vanet message broadcasting emergency system safety system the new techniques in this system should aim to make the intelligent vehicle to think communicate with other vehicles and act to prevent hazards i introduction recent year s rapid development in wireless communication networks has made car to car and car to infrastructure communications possible in mobile ad hoc networks manets this has given birth to a new type of high mobile manet called vehicular ad hoc networks vanet creating a fertile area of research aiming for road safety efficient driving experience and infotainment information and entertainment and vanet safety applications depend on exchanging the safety information among vehicles communication or between vehicle to infrastructure communication using the control channel see figure creating a safety system on the road is a very important and critical concern for human today each year nearly million people die as a result of road traffic accidents more than deaths each day and more than half of these people are not travelling in a car the injuries are about fifty times of this number the number of cars in is approximately estimated as million cars around the world with an annually constant increase by million car around the world with this constant raise the estimated number of cars nowadays exceeding one billion this raise the possibility to increase the number of crashes and deaths on the roads road traffic accidents are predicted to become the fifth leading cause of death in the world resulting in an estimated million death each year as stated by the world health organization who besides traffic congestion makes a huge waste of time and fuel this makes developing an efficient safety system an urgent need on the road figure vanet structure vanet safety communication can be made by two means periodic safety message called beacon in this paper and event driven message called emergency message in this paper both sharing only one control channel the beacon messages are status messages containing status information about the sender vehicle like position speed heading beacons provide fresh this research is funded by the deanship of research and graduate studies in zarqa jordan wcsit information about the sender vehicle to the surrounding vehicles in the network helping them to know the status of the current network and predict the movement of vehicles beacons are sent aggressively to neighboring vehicles messages each second depending on just one forwarder is not enough in a high mobile network like vanet furthermore authors didn t depend on beacons to gain the information they proposed to use hello message which creates a chance to increase the channel load emergency messages are messages sent by a 
vehicle detect a potential dangerous situation on the road this information should be disseminated to alarm other vehicles about a probable danger that could affect the incoming vehicles vanet is a high mobile network where the nodes are moving in speeds that may exceed which means that this vehicle move even if these vehicles are very far from the danger they will reach it very soon here milliseconds will be very important to avoid the danger and the contention period schemes which is a waiting time that the receiver waits before rebroadcasting the original message received from the sender are proposed by many researchers and in authors proposed the distributed broadcast ldmb in which all the receivers of the emergency message are potential forwarders each forwarder computes and waits for contention time using equation if the contention time ends the forwarder will start to rebroadcast the emergency message emergency messages in vanet are sent in broadcast fashion where all the vehicle inside the coverage area of the sender should receive the message the coverage area is not enough as it is hardly reaches a which is the dsrc communication range due to attenuation and fading effects away vehicles from the danger should receive this critical information to avoid the danger furthermore the probability of message reception can reach in short distances and can be as low as at half of the communication range moreno therefore there should be a technique to increase the emergency message reception with high reliability and availability in and where authors proposed message forwarding strategy by sending the emergency message in a broadcast fashion and selecting the best forwarder available all vehicles receiving that message are potential forwarders in order to decide which node forwards the message all receivers will be assigned a contention window waiting time the contention window size will be the smallest for the farthest node and the biggest size for the nearest node in other words this protocol will give priority for the farthest node to be the next forwarder the problem of the last two protocols that all the message receivers will compute the waiting time and wait to make the rebroadcast even the closest vehicles to the sender will do and this will make the entire network vehicles busy for any message received duo to the high mobility of vehicles the distribution of nodes within the network changes rapidly and unexpectedly that wireless links initialize and break down frequently and unpredictably therefore broadcasting of messages in vanets plays a crucial rule in almost every application and requires novel solutions that are different from any other form of networks broadcasting of messages in vanets is still an open research challenge and needs some efforts to reach an optimum solution another protocol proposed by called emergency message dissemination for vehicular emdv protocol by enabling the farthest vehicle within the transmission range to make the rebroadcasting of the emergency message choosing one forwarder vehicle is not appropriate in a high mobile network like vanet as the position is always changing and the receiver vehicle may become out of range when sending the message or simply the receiver can t receive the message because of the channel problems like jam or denial of service see figure broadcasting requirements are high reliability and high dissemination speed with short latency in as well as communications problems associated with regular broadcasting algorithms are 
the high probability of collision in the broadcasted messages the lack of feedback and the hidden node problem in this paper we concerned with proposing a new intelligent broadcasting technique for the emergency message in vanet aiming to increase the reception of the emergency information ii research background emergency message rebroadcast in authors proposed a broadcast scheme that utilizes neighbor s information by exchanging hello messages among vehicles when any probable danger is detected a warning message is broadcasted to all neighbors the farthest vehicle is selected as a forwarder depending on the information gained from the hello message if the preselected forwarder receives the message it will rebroadcast it figure sender utilizing emdv in authors proposed that the receivers of the message will select random waiting times and make acknowledgment wcsit to avoid the from nodes closer to the original sender emergency message rebroadcast by network segments another way to rebroadcast the message is to divide the network into segments proposed in and the acknowledgment scheme causes delay to the rebroadcast in authors proposed a protocol called urban hop broadcast umb aiming to maximize the message progress and avoid broadcast storm hidden node and reliability problems the protocol assigns the duty of forwarding and acknowledging the broadcast packets to only one vehicle by dividing the road portion inside the transmission range into segments and choosing the vehicle in the furthest segment without prior topology information the source node transmits a broadcast control packet called request to broadcast rtb which contains the position of the source and the segment size on receiving the rtb packet nodes compute the distance between the sender and the receiver then nodes transmit a channel jamming signal called that contains several equal to their distance from the source in number of segments the farther the distance the longer the burst each node transmits its and senses the channel if there is no other in the channel it concludes that it is the farthest node from the source then the node returns a ctb control packet containing its identifier id to the source in authors proposed the forwarding cbf protocol where a vehicle sends a packet as a broadcast message to all its neighbors on receiving the packet neighboring vehicle will contend for forwarding the packet the node having the maximum progress to the destination will have the shortest contention time and will first rebroadcast the packet if other nodes receive the rebroadcast message they will stop their contention and delete the previously received message this protocol mainly proposed for forwarding the periodic safety message beacons the problem of this protocol that there should be a management technique to manage the contention for all the neighboring vehicles and there is a chance that the nearest vehicle to the sender may not hear the rebroadcast of another vehicle here this vehicle will rebroadcast the message and this called hidden node problem tobagi and kleinrock also it may lead to broadcast storm problem that makes the protocol useless in authors suggested that the emergency message will be rebroadcasted by the receivers located at farther distances from the sender by the selection of shorter waiting times see equation the smart broadcasting protocol addressed the same objective as umb using a different methodology upon reception of a rtb message each vehicle should determine its segment and set a random time each 
segment has its own contention window size if this segment has contention window size ts vehicles in the furthest segment should randomly choose a time between to ts vehicles in the next nearer segment choose a value between to ts and so on as vehicles near the sender should wait for longer time in authors proposed the contention based broadcasting cbb protocol for increasing the emergency message reception and performance the emergency message will be broadcasted in fashion and the forwarders will be selected before the original message is sent cbb proven to achieve superiority over the emdv protocol as it choses more than one forwarder to rebroadcast the emergency information and this gives the message a chance to overcome the preselected forwarder failure vehicles will decrement their backoff timers by one in each while listening to the physical channel while waiting if any vehicle receives a valid ctb message it will exit the contention time phase and listen to the incoming broadcast on the contrary if any node finishes its backoff timer it will send the ctb containing its identity and rebroadcast any incoming broadcast the criteria of choosing the forwarders depends on the progress and on the segment localization see figure where all the vehicles located in the final segment are a potential forwarder while in authors proposed the geographic random forwarding geraf protocol which divides the network into equally adjacent sectors the transmitter source elects the sectors starting from the farthest one by sending rtb message all the nodes in the elected sectors reply by ctb message if one node reply the ctb message then this node will become the next forwarder if there are more than one node sent the ctb message the source issue a collision message and make a procedure to elect the next forwarder depending on a probabilistic rule many other approaches are discussed in details in our previous paper emergency message broadcasting iii the proposed protocol figure emergency message sending and transmission range this section presents a detailed design description for the pcbb protocol which aims to increase the percentage of wcsit reception for the emergency information by utilizing a contention window position based forwarding scheme and pso intelligent technique sending the message in single hop enables it to reach a number of vehicles within a limited distance up to m for the best cases however this number should be increased in order to warn more vehicles of possible dangers before they reach the danger area beacons and the emergency messages should be received by all the neighboring vehicles with high probability and reliability because of the critical nature of the information both provide when a vehicle detects danger it issues an emergency message to warn other vehicles within the network and all the vehicles in the opposite direction of the sender movement located in its transmission range must receive such message covering the whole area does not guarantee that all vehicles will receive the message because of channel collisions and fading effects the percentage of emergency message reception for the network vehicles must be as high as possible sen id this paper proposes to categorize any emergency message before sending it to make it easier for the receiver vehicle to recognize the importance of the message being received table lists the codes for each category for example when a vehicle receives two messages containing categories and it processes the message that contains category 
first because it contains more critical information after assigning the message code the sender should add data to the message such as the coordinates of the danger zone or what the receiver should do however this aspect would not be discussed in detail in the current study the proposed structure of the emergency message is shown in figure where three inputs namely cid minb and maxb are added to help the receiver vehicle determine what action to take after receiving the emergency message safety of life cooperative collision warning safety intersection warning safety transit vehicle signal priority toll collection service announcement movie download hours of mpeg minb maxb as mentioned earlier the network is divided to several segments to help the vehicle determine the next forwarder of the emergency message as proposed in the transmission range of the sender is divided into segments to make it easier for the sender to determine the last vehicle in the last segment which is eventually selected as the next forwarder for this paper the distance is between the sender and the forwarder authors in established a fixed distance of for the current study however if the distance between the sender and the farthest vehicle is application emergency break cid as mentioned earlier assigning the forwarding job of the emergency message to all the receiver vehicles of the message may cause a broadcast storm problem and assigning the forwarding job just for one receiver vehicle may not be appropriate sometimes as this specific forwarder may not receive the emergency message hence vehicles in the last segment which should be the furthest one will make the forwarding of the emergency message if the forwarder fails to receive and forward the message table emergency message classification safety of life data choosing the next candidate forwarder is a process which begins by gathering the information obtained from beacons received from neighbors this information is inserted and ordered into nt the sender vehicle chooses the farthest vehicle and assigns it to be the candidate forwarder the process of forwarding the emergency message to increase the probability of reception so that the forwarded signal can communicate with more vehicles on the road and reach longer distances is the option used in this paper choosing only one forwarder is inappropriate in high mobile networks such as vanet because the forwarder might not receive the emergency message to solve this problem dividing the network to several segments is proposed vehicles inside the last segment the farthest segment from the sender wait for a period of time and determine whether or not the candidate forwarder rebroadcasted the emergency message if none made the rebroadcast the vehicles located in the farthest segment forwards the message as mentioned earlier assigning the forwarding job of the emergency message to all the receiver vehicles of the message may cause a broadcast storm problem and assigning the forwarding job to just one receiver vehicle may be inappropriate because the specific forwarder may not receive the emergency message hence vehicles in the last segment must forward the emergency message if the forwarder fails to receive and forward the message every beacon received by a vehicle provides important information about the sender status this information is utilized to form a rich and real time image about the current network topology which facilitates better network vehicle communication it also helps to be informed about the potential 
dangers when they occur when a vehicle has a problem or detects a problem it determines if the problem is life critical or not the life critical safety of life messages will be given the highest priority and are then processed and sent before any other kind of messages msg id where sen id sender id code message code ts time stamp msg id message id data data sent cid forwarder candidate id minb minimum boundary maxb maximum boundary preparing to send priority ts figure emergency message illustration in order to cover a wider area for message reception some neighboring vehicles can serve as potential forwarders and each forwarder has to wait for a certain period of time contention time before forwarding the message code code wcsit m anything beyond m is not considered the distance between the last vehicle and the sender is computed using equation segment should be expanded to include more vehicles and this could be determined using equation where dis is the distance between last vehicle and the sender senpos is the position of the sender obtained from gps and forpos is the forwarder position or the last vehicle in the last segment this calculation doubles the size of the last segment and increases the number of the potential forwarders if the calculated number remains to be sucper nmax the minb could be recalculated by multiplying dif by and so on this technique increases the number of the potential forwarders and solves the preselected forwarder rebroadcast failure determining the boundaries of the last segment must be set dynamically depending on the channel status and the network topology available because it would be pointless if this segment does not contain enough number of vehicles for forwarding at the same time determining the number of sufficient vehicles located in the last segment must also depend on the channel status and network topology the sender vehicle has all the information required to analyze the channel and draw the network topology to compute the boundaries the cbb and pcbb protocols are proposed the cbb protocol depends on the selection of boundaries of the last segment based on the number of the vehicles located in this segment and the number of segments in the network suggested that the number of the segments should be segments computing the segments and the boundaries could be done using equations and equation assigns the distance between the sender and the farthest vehicle forwarder is the boundary of the last segment equation computes the length of each segment and equation finds the location of the minimum boundary minb is the minimum boundaries borders where the last segment starts nmax is the maximum number of the segments and dif is the length of the segment the pcbb is an enhancement of the cbb and works when the sender vehicle analyzes the dense locations of the vehicles along its transmission range in figure the vehicle analyzes the location density in the network to form groups for dense locations the resulting network is then divided into several groups figure figure vehicle analysis location density this means that the vehicles located in the area between minb and maxb from the sender are considered as potential forwarders of the emergency message they are rebroadcast if no vehicle makes the rebroadcast sometimes the last segment may have an insufficient number of vehicles the number of the potential forwarders must have a threshold to determine if it is sufficient or not figure analyzed network depending on network density and progress a dense or 
concentrated area has a number of vehicles within just a small area thus sending a message to vehicles in concentrated areas increases the chance of receiving and rebroadcasting the message the probability of receiving the rebroadcasted messages in this segment is also high thus eliminating the hidden node problem which is considered one of the most difficult problems encountered in rebroadcasting emergency message in vanet equation is used to calculate how vehicles compute the dense locations the progress represents the upper bound of the last segment and the length of the segment is also the distance between the farthest vehicle in the segment and the first vehicle located in the segment for example if the segment that has progressed to m has vehicles in meters between m and m from the location of the sender the number could be generated and tested using equation where sucper is the success percentage that the last segment must fulfill before agreeing on the values of maxb and minb and nein is the total number of neighbor vehicles in nt if sucper nmax this means that the last segment holds enough number of potential forwarder vehicles the result of is subtracted by one vehicle because the last segment also holds the preselected potential forwarder if the sucper nmax this means that the area of the last wcsit the segments with the higher progress and high number of vehicles with smaller segments give higher fitness function where maxb is the highest boundary border for the last segment and minb is the minimum boundary borders from which the segment starts after performing equation the vehicle inserts it in a progress list pl which then helps the vehicle in making quicker analysis and decisions table this means that the vehicles located in the area between minb and maxb from the sender are considered as potential forwarders of the emergency message and would have to wait to rebroadcast in case no vehicle forwards the message when the sender decides to broadcast an emergency message it should examine the number of neighbors within the back end of the coverage area if the number is more than one then this protocol could be carried out if the number of the neighbors is zero then the sender broadcasts the message without specifying any forwarder if the number of the neighbors is equal to one then the sender broadcasts the message and specifies the forwarder without adding any detail about the boundaries table progress list progress m length of segment no of vehicles fitness function to compute for the contention time equation is performed where the vehicles with the largest distance from the sender have the shortest contention time to wait before testing the channel to rebroadcast each vehicle tests its progress from the sender by dividing its current position on the maximum distance computed by the sender the result of this equation gives the waiting time for the contending vehicles inside the last segment giving the opportunity for all the vehicles inside the last segment to recover from the failure of the chosen forwarder thus the protocol increases the probability of resending the emergency message consequently increasing the percentage of sending the emergency message and reaching longer distances at the same time after performing equation on all the vehicles in the segments the sender vehicle takes the upper boundary of the segment scores the higher fitness function then the pso optimization is applied fitv lbestv w pbestv lbestv gbestv lbestv lbestv pbestv fitv where w is random number 
between w to rand random number to pbest is the last lbest computed by the vehicle w is the inertia weight of the particles random and random are two uniformly distributed random numbers in the range and and are specific parameters which control the relative effect of the individual and global best particles enabling just the last segment to contend eliminates the hidden node problem because all the potential forwarders have high probabilities to sense the rebroadcasted message when a forwarder resends the message all potential forwarders are located in a small and limited area the probability of reception can reach at short distances but it could be as low as at half the distance of the communication range the lbest for the vehicle is obtained from the fitness function computed using which represents the best area segment dimension the results indicate the sufficient number of vehicles depending on the sender analysis pbest is the previous fitness function computed by the vehicle while the gbest is best fitness function computed by the vehicle obtained from the analysis of the information from crnt obtained from our previous paper the crnt gives extended information received from other vehicles located in the neighborhood of the sender which reduces the error possibility that the vehicle might make during the channel dense location analysis because pso depends on taking the neighbor s information and history from the crnt the vehicle can conclude another global fitness function from the neighboring vehicles analysis which influences the current analysis done by the current vehicle sending steps the sender dispatches an emergency message warning other vehicles about any potential danger the sender analyzes the danger and selects the code the sender creates nt for its neighbors and then selects the next forwarder depending on the distance of the farthest vehicle the sender analyzes the dense location computing the fitness function using equation the sender analyzes the information gained from neighbors about dense locations from crnt and concludes the gbest the pso algorithm is then applied to obtain the minb which represents the lower bound of the segment the sender creates the message and inserts the values derived from steps and in the message after which it broadcasts the message to the network to compute for the boundaries of the last segment the sender carries out the following equations the following illustrates the calculations using equation which computes the fitness function for each segment this formula ensures that the vehicles having high progress from the sender and having a large number of vehicles in a small area can produce better fitness function this is because vehicles concentrated in a small area and are located wcsit far from the sender vehicle have better opportunities to rebroadcast the emergency message with little chance of failure tc is the contention time that the segment vehicle has to wait before checking the system to see if the emergency message has been rebroadcasted by other vehicle or not tslot is the system time slot from table and after employing equation receiving emergency message steps this section represents the steps for receiving the emergency message which must be done efficiently the receiver accepts a message then checks the code if it is or the receiver also checks if the message has been received before and if its id is the same as the forwarder id then it rebroadcasts the message immediately if the id is not the same the receiver calculates 
the distance between its current position and the sender and tests if the current location falls within minb and maxb the receiver then prepares to forward the message the best value for the fitness function is this value means that the lbest is which represents the lower boundary for that segment the gbest is taken from the crnt because this protocol provides the sender vehicle with information from other vehicles this information also enables the sender to analyze the channel depending on the information from the other vehicle giving more accurate data about the network rebroadcasting steps the rebroadcasting job is only assigned to a limited number of vehicles it is not appropriate to assign this job to all the receiver vehicles because this can lead to a broadcast storm problem or the hidden node problem the first forwarder should be the first candidate selected by the sender and the forwarding steps are as follows pbest is the channel analysis history of the network made from the sender vehicle lbest is the boundary of the fitness function and gbest is the best analysis from the neighbors lbest apply pso equation x x x lbest eq the forwarder waits for a random back off time depending on its contention window the back off time is used to avoid channel collision the forwarder then senses the channel and tests whether or not this message has been transmitted from others if no other vehicle rebroadcasts the message forwarder reserves the channel and the forwarder broadcasts the message the contention window for the first candidate forwarder is lbest the minb for this example is m and the maxb is m as obtained from equation the results imply that the vehicles between m and m from the sender can be considered as potential forwarders they would have to contend by means of the time to rebroadcast and be able to overcome the preselected forwarder failure each vehicle inside the last segment is required to contend before trying to send the emergency message and this could be done using the following steps the forwarder computes the contention window using equation the vehicle contends for a random back off time depending on the contention window the vehicle senses the channel and determines whether or not this message has been transmitted from others if no other vehicle rebroadcasts the message the vehicle reserves the channel and broadcasts the message receiving a message vehicles in vanet receive messages all the time these messages could be beacon emergency or service messages each message should be analyzed to determine the level of importance involved if the message is holding safety critical information then the message code should be or and the message is given higher priority for processing the receiver vehicle also has to ensure that this message has not been received before to eliminate duplication the receiver checks the forwarder id if the current receiver id is the same as that of the forwarder the receiver rebroadcasts the message immediately if the receiver is not a forwarder then it must compute the distance between its position and that of the sender to determine if the receiver is located within the last segment if the receiver is located in the last nonempty segment it starts contending and prepares itself to rebroadcast each vehicle inside the last segment must contend the time and compute the cw time waiting time using equation cw depends on the progress and the vehicles having the largest progress from the sender has the shortest contention time before testing the channel 
to make the rebroadcast pcbb and cbb both have the same goal of increasing the percentage reception of the emergency information the main difference between them however is the potential forwarder selection where cbb depends on choosing the last segment boundaries depending on the number of the vehicles in the segment against a predefined threshold whereas the pcbb depends on selecting the boundaries on vehicle saturation areas and utilizes the pso intelligent technique iv simulation simulation setup in order to test correctness of our protocol we made the simulation using the commercial program the distribution used is nakagami distribution wcsit parameters used in our simulation are summarized in table all the simulations in this paper will adopt these parameters pcbb can achieve better performance past the first m distance representing the dsrc communication range for the sender the signal gets very weak in the last meters of the sender communication range but when the forwarder rebroadcasts the message the signal becomes stronger and reaches greater distances we made our simulation for including vehicles in km road consisting of lanes simulation parameters another difference between and emdv is that after several tries cbb and pcbb never fail to rebroadcast the emergency message but emdv sometimes fails to do so proving the effectiveness of the cbb protocol the pcbb protocol has also been shown to select forwarders more carefully than cbb which depends on a threshold while pcbb depends on traffic saturation and progress and on the analysis made by neighboring vehicles furthermore pso adopts the pso intelligent algorithm which takes other vehicles analysis into consideration allowing for more accurate selection of preselected forwarders parameters used in the simulation experiment are summarized in table all the simulations in this paper will adopt these parameters table simulation configuration parameters parameter radio propagation model value m ieee data rate plcp header length symbol duration noise floor snr db cw min cw max slot time sifs time difs time message size beacon message rate number of vehicles road length car speed simulation time road type number of lanes neighbor entry size bytes message s km s highway lanes bytes description model is fixed value recommended by fixed value fixed value fixed value fixed value adjustable to add noise to the signal fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value figure probability of message reception of emergency message with respect to the distance to the sender figure shows the message delay for cbb and emdv compared with pcbb emergency message delay the simulation computes the delay for broadcasting and rebroadcasting of the original message showing that the emdv during the time has a slightly higher delay than cbb but not exceeding the delay shows a slight increase at about m away from the sender where the rebroadcast starts to take effect if the cbb has a shorter delay starting from this point it means that its rebroadcast efficiency and decisions are made faster than those in emdv pcbb has a slightly shorter delay about shorter than cbb and shorter than emdv at the ninth second because the pso is an intelligent technique that has quick performance and response in safety systems with a highly mobile network like vanet a few microseconds are critical in saving life or avoiding danger results in order to enhance 
emergency message dissemination in vanet two and protocols have been proposed and implemented namely cbb and pcbb an enhancement of the proposed cbb in this section the emdv protocol which is the outcome of the now project and is compared with the cbb and pcbb the results of the experiment are shown in figures and the test performed concentrated on the probability of emergency message reception channel collision and the delay that the protocols may cause the emdv and dfpav protocols both widely used in vanet today are the results of the now project which is a collaboration between and karlsruhe university figure shows the simulation results for the proposed cbb and pcbb protocols these have been simulated and tested in terms of probability of emergency message reception afterwards their performances are compared with that of the emdv protocol the results show that all the protocols can increase the performance and probability of emergency message reception more noteworthy is the fact that cbb and wcsit references figure delay measured after sending the emergency message with respect to distance figure shows the collision produced by the three protocols all of which generated the same collision when broadcasting emergency information it is worth noting that the collisions produced by cbb pcbb and emdv at the beginning of the experiment do not increase however after a period of time sending a large number of emergency messages resulted in an increase in the number of collision for all the three protocols with the difference between them reaching at the ninth second figure collision measured after sending the emergency message vi conclusion this research has proposed the pcbb aiming to improve road safety by achieving fast and efficient emergency message transmission and delivery utilizing the efficient and newest intelligent technique pso which helped to make more accurate analysis and performance and increased the percentage of the emergency message reception without affecting the channel collision ghassan samara wafaa ah r sures security issues and challenges of vehicular ad hoc networks vanet international conference on new trends in information science and service science niss ghassan samara wafaa ah r sures security analysis of vehicular ad hoc nerworks vanet second international conference on network applications protocols and services netapps who world health organization http visited on april m raya p papadimitratos i aad jp hubaux certificate revocation in vehicular networks laboratory for computer communications and applications lca school of computer and communication sciences epfl switzerland worldometers real time world statistics visited on april ghassan samara waha alsalihy s ramadass increase emergency message reception in vanet journal of applied sciences volume pages ghassan samara wafaa alsalihy sureswaran ramadass increasing network visibility using coded repitition beacon piggybacking world applied sciences journal wasj volume number pp y street broadcast with smart relay for emergency messages in vanet international conference on advanced information networking and applications workshops waina ieee qiong y lianfeng a broadcast scheme for propagation of emergency messages in vanet ieee international conference on communication technology icct ieee biswas tatchikou dion wireless communication protocols for enhancing highway traffic safety ieee communications magazine ieee communications assessing information dissemination under safety constraints annual conference on 
wireless on demand network systems and services wons ieee mittag santi hartenstein communication fair transmit power control for information transactions on vehicular technology ieee communications achieving safety in a distributed wireless systems and protocols paper universitatsverlag karlsruhe isbn widmer kasemann mauve hartenstein forwarding for mobile ad hoc networks ad hoc networks briesemeister schafers hommel disseminating messages among highly mobile hosts based on communication ieee intelligent vehicles symposium iv ieee korkmaz ekici urban broadcast protocol for communication systems acm international workshop on vehicular ad hoc networks acm fasolo zanella zorzi an effective broadcast scheme for alert message propagation in vehicular ad hoc networks ieee int conf on communications ieee zorzi rao geographic random forwarding geraf for ad hoc and sensor networks energy and latency performance ieee transactions on mobile computing ieee ieee white paper dsrc technology and the dsrc industry consortium dic prototype team wcsit neo project http malaga university visited on march ni tseng chen sheu the broadcast storm problem in a mobile ad hoc network annual international conference on mobile computing and networking acm now network on wheels project http accessed may mendes population topologies and their influence in particle swarm performance phd thesis universidade do minho wait contention time if channel idle if no rebroadcast rebroadcast end if else backoff backoff end if end while end if end if end if end procedure appendix procedure detectdanger gather neighbor information select main forwarder maxb forwarderlocation senderlocation minb pso insert emergency information in the message send message end procedure procedure rebroadcast rebroadcast emergency message end procedure procedure computevehicleconcentration mindist computemindist arrange nt descending for nt size take the average between two successive vehicles if average last average or number of vehicles compared is if this vehicle is the first vehicle add to current segment add the current vehicle to this segment segment vehicles segment vehicles increase the number of vehicles by vehicle location location i takes the location of the current vehicle segment segment count dist vehicle location first element location compute the width of the procedure pso currentsegment computevehicleconcentration to compute the current concentration of vehicles pbest cfitness cfitness for to currentsegment size calculate the best result fitness function if fitness cfitness cfitness fitness end if end for lbest cfitness segment segment segment count segment vehicles segment vehicles to store the number of vehicles for the neighborsegment computevehicleconcentration to compute the current concentration of vehicles gfitness for to currentsegment size calculate the best result current segment else if the new average is more than the double of previous value segment count segment count vehicle location location i segment vehicles segment segment count progress vehicle location return segment fitness function if fitness cfitness gfitness fitness end if end for gbest gfitness end procedure procedure receiveemermessage if code or code if preselctedforwarder rebroadcast else if candidate forwarder compute cw choose random backoff while back figure particle swarm optimization contention based broadcast protocol pcbb the procedure detectdanger works when the vehicle detects any danger the first step for the sender is to order the neighbors 
information in NT and select the first forwarder. Afterwards, it calls the PSO procedure, which implements the PSO algorithm to select the vehicles that can overcome the preselected forwarder's failure. The procedure receiveEmerMessage runs when a vehicle receives an emergency message and checks whether the receiver is a forwarder or not. The procedure computeVehicleConcentration analyzes the neighbors to discover where the vehicles are concentrated.
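For readers who want a concrete view of the forwarding flow described above, the following is a minimal Python sketch of the PCBB logic (segment boundaries, last-segment expansion, contention time, and the receive-side decision), loosely following the appendix procedures detectDanger, computeVehicleConcentration and receiveEmerMessage. The message fields sen_id, code, ts, msg_id, data, cid, minB and maxB come from the message format in this paper; the specific fitness and contention-time formulas, the constants NMAX, SUCPER and TSLOT, and the helper channel_already_served are illustrative assumptions rather than the authors' exact definitions, and the PSO refinement of minB is omitted.

```python
import random
from dataclasses import dataclass

NMAX = 10        # assumed number of segments the transmission range is divided into
SUCPER = 0.3     # assumed fraction of neighbours the last segment must contain
TSLOT = 13e-6    # assumed slot time in seconds (taken in the paper from the parameter table)

@dataclass
class EmergencyMessage:
    sen_id: int    # sender id
    code: int      # message category (0/1 = safety of life, processed first)
    ts: float      # time stamp
    msg_id: int    # message id
    data: dict     # danger-zone coordinates, advised action, ...
    cid: int       # preselected candidate-forwarder id
    minb: float    # lower boundary of the last (forwarding) segment
    maxb: float    # upper boundary of the last segment

def segment_fitness(progress, n_vehicles, seg_length):
    """Higher progress and more vehicles packed into a shorter span score higher."""
    return progress * n_vehicles / max(seg_length, 1e-9)

def cbb_boundaries(distances, nmax=NMAX):
    """CBB: maxB is the distance to the farthest neighbour, dif one segment length,
    and minB is where the last segment starts."""
    maxb = max(distances)
    dif = maxb / nmax
    return maxb - dif, maxb, dif

def expand_last_segment(minb, maxb, dif, distances, sucper=SUCPER):
    """Widen the last segment (minB -= dif) until it holds enough potential
    forwarders, not counting the preselected forwarder itself."""
    while minb > 0:
        inside = sum(1 for d in distances if minb <= d <= maxb) - 1
        if distances and inside / len(distances) >= sucper:
            break
        minb -= dif
    return max(minb, 0.0)

def contention_time(own_distance, maxb, tslot=TSLOT):
    """Vehicles with the largest progress from the sender wait the least."""
    progress = own_distance / maxb if maxb else 0.0
    return (1.0 - progress) * NMAX * tslot

def on_receive(msg, own_id, own_pos, sender_pos, channel_already_served, seen_ids):
    """receiveEmerMessage: only safety-critical codes are forwarded; the preselected
    forwarder rebroadcasts at once, last-segment vehicles contend first."""
    if msg.code not in (0, 1) or msg.msg_id in seen_ids:
        return False
    seen_ids.add(msg.msg_id)
    if own_id == msg.cid:
        return True
    dist = abs(own_pos - sender_pos)
    if msg.minb <= dist <= msg.maxb:
        wait = contention_time(dist, msg.maxb) + random.random() * TSLOT
        return not channel_already_served(wait)
    return False
```

A vehicle for which on_receive returns True would then call the rebroadcast procedure; positions are treated here as one-dimensional distances along the road purely to keep the sketch short.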
| 9 |
generating the ideals defining unions of schubert varieties may anna bertiger a bstract this note computes a basis for the ideal defining a union of schubert varieties more precisely it computes a basis for unions of schemes given by northwest rank conditions on the space of all matrices of a fixed size schemes given by northwest rank conditions include classical determinantal varieties and matrix schubert of schubert varieties lifted from the flag manifold to the space of matrices i ntroduction we compute a basis and hence ideal generating set for the ideal defining a union of schemes each given by northwest rank conditions with respect to an antidiagonal term a scheme defined by northwest rank conditions is any scheme whose defining equations are of the form all k k minors in the northwest i j of a matrix of variables where i j and k can take varying values these schemes represent a generalization of classical determinantal varieties with defining equations all minors of a matrix of variables one geometrically important collection of schemes defined by northwest rank conditions is the set of matrix schubert varieties matrix schubert varieties are closures of the lift of schubert varieties from the complete flag manifold to matrix space in general a matrix schubert variety for a partial permutation is the subvariety of matrix space given by the rank conditions that the northwest i j must have rank at most the number of in the northwest i j of the partial permutation matrix for notice that the set of matrix schubert varieties contains the set of classical determinantal varieties which are the zero locus of all minors of a fixed size on the space of all matrices of fixed size matrix schubert varieties associated to honest that is permutations are the closures of the lifts of the corresponding schubert varieties in the flag manifold if is the matrix schubert variety for an honest permutation the projection full rank matrices gln c f cn sends gln c onto the schubert variety f cn schubert varieties orbits of stratify f cn and give a basis for f cn it is this application that led to the introduction of matrix schubert varieties in knutson and miller showed that matrix schubert varieties have a rich structure corresponding to beautiful combinatorics fulton s generators are a basis with respect to any antidiagonal term order and their initial ideal is the ideal of the pipe dream further knutson and miller show that the pipe dream complex is shellable hence the original ideal is pipe dreams the elements of the pipe dream complex were originally called rc graphs and were developed by bergeron and billey to describe the monomials in polynomial representatives for the classes corresponding to schubert varieties in f cn the importance of schubert varieties and hence matrix schubert varieties to other areas of geometry has become increasing evident for example zelevinsky showed that certain quiver varieties sequences of vector space maps with fixed rank conditions are isomorphic to date may schubert varieties knutson miller and shimozono produce combinatorial formulae for quiver varieties using many combinatorial tools reminiscent of those for schubert varieties notation and background much of the background surveyed here can be found in let respectively denote the group of invertible lower triangular respectively upper triangular n n matrices let m mi j be a matrix of variables in what follows will be a possibly partial permutation written in notation n with entries for i undefined are written we 
shall write permutation even when we mean partial permutation in cases where there is no confusion a matrix schubert variety is the closure in the affine space of all matrices where is a permutation matrix and and act by downward row and rightward column operations respectively notice that for an honest permutation is the closure of the lift of c to the space of n n matrices the rothe diagram of a permutation is found by looking at the permutation matrix and crossing out all of the cells weakly below and the cells weakly to the right of each cell containing a the remaining empty boxes form the rothe diagram the essential boxes of a permutation are those boxes in the rothe diagram that do not have any boxes of the diagram immediately south or east of them the rothe diagrams for and are given in figure in both cases the essential boxes are marked with the letter e e e e e f igure the rothe diagrams and essential sets of left and right the rank matrix of a permutation denoted r gives in each cell r ij the rank of the i j of the permutation matrix for for example the rank matrix of is theorem matrix schubert varieties have radical ideal i given by determinants representing conditions given in the rank matrix r that is the r ij r ij determinants of the northwest i j of a matrix of variables in fact it is sufficient to impose only those rank conditions r ij such that i j is an essential box for hereafter we call the determinants corresponding the to essential rank conditions or the analogous determinants for any ideal generated by northwest rank conditions the fulton generators one special form of ideal generating set is a basis to define a basis we set a total ordering on the monomials in a polynomial ring such that m and m n implies mp np for all monomials m n and let init f denote the largest monomial that appears in the polynomial a basis for the ideal i is a set fr i such that init i hinit f f ii hinit init fr i notice that a basis for i is necessarily a generating set for i the antidiagonal of a matrix is the diagonal series of cells in the matrix running from the most northeast to the most southwest cell the antidiagonal term or antidiagonal of a determinant is the product of the entries in the antidiagonal for example the antidiagonal of ac db is the cells occupied by b and c and correspondingly in the determinant ad bc the antidiagonal term is bc term orders that select antidiagonal terms from a determinant called antidiagonal term orders have proven especially useful in understanding ideals of matrix schubert varieties there are several possible implementations of an antidiagonal term order on an matrix of variables any of which would suit the purposes of this paper one example is weighting the top right entry highest and decreasing along the top row before starting deceasing again at the right of the next row monomials are then ordered by their total weight theorem the fulton generators for form a basis under any antidiagonal term order typically we will denote the cells of a matrix that form antidiagonals by a or b in what follows if a is the antidiagonal of a of m we will use the notation det a to denote the determinant of this we shall be fairly liberal in exchanging antidiagonal cells and the corresponding antidiagonal terms thus for any antidiagonal term order a init det a statement of result let ir be ideals defined by northwest rank conditions we will produce a basis and hence ideal generating set for ir for each list of antidiagonals ar where ai is the antidiagonal of a fulton 
generator of ii we will produce a basis element ar for the generators ar will be products of determinants though not simply the product of the r determinants corresponding to the ai for a fixed list of antidiagonals ar build the generator ar by begin with ar draw a diagram with a dot of color i in each box of ai and connect the consecutive dots of color i with a line segment of color i break the diagram into connected components two dots are connected if they are either connected by lines or are connected by lines to dots that occupy the same box for each connected component remove the longest series of boxes b such that there is exactly one box in each row and column and the boxes are all in the same connected component if there is a tie use the most northwest of the longest series of boxes note that b need not be any of ar multiply ar by det b remove this antidiagonal from the diagram of the connected component break the remaining diagram into components and repeat theorem ar ai is an antidiagonal of a fulton generator of ii i r form a basis and hence a generating set for ii acknowledgements this work constitutes a portion of my phd thesis completed at cornell university under the direction of allen knutson i wish to thank allen for his help advice and encouragement in completing this project thanks also go to jenna rajchgot for helpful discussions in the early stages of this work i d also like to thank the authors of computer algebra system gs which powered the computational experiments nessecary to do this work i m especially grateful to mike stillman who patiently answered many of my questions over the course of this work kevin purbhoo gave very helpful comments on drafts of this manuscript for which i can not thank him enough e xamples we delay the proof of theorem to section and first give some examples of the generators produced for given sets of antidiagonals these examples are given by pictures of the antidiagonals on the left and corresponding determinantal equations on the right note that we only give particular generators rather than entire generating sets which might be quite large we then give entire ideal generating sets for two smaller intersections if r then for each fulton generator with antidiagonal a the algorithm produces the generator ga det a therefore if we intersect only one ideal the algorithm returns the original set of fulton generators the generator for the antidiagonal shown is exactly the determinant of the one antidiagonal pictured the generator for two disjoint antidiagonals is the product of the determinants corresponding to the two disjoint antidiagonals in general if ar are disjoint antidiagonals then the then the algorithm looks at each ai separately as they are part of separate components and the result is that ar det det ar if ar overlap to form one antidiagonal x then the last step of the algorithm will occur only once and will produce ar det x for example in this example there are two longest possible antidiagonals the three cells occupied by the green dots and the three cells occupied by the red dots the ones occupied by the green dots are more northwest hence the generator for the three antidiagonals shown below is in the picture below the longest possible anti diagonal uses all of the cells in the green anti diagonal but only some of the cells in the red antidiagonal however there is only one possible longest antidiagonal thus the generator is we now give two examples where the complete ideals are comparatively small firstly we calculate i i i i 
i and i i the antidiagonals and corresponding generators are shown below with antidiagonals from generators of i shown in red and antidiagonals of generators of i shown in blue note that the antidiagonals are only one cell each in this case theorem results in i i i i as a slightly larger example consider i i i these generators are given below in the order that the antidiagonals are displayed reading left to right and top to bottom the antidiagonals for i are shown in red while the antidigaonals i are shown in blue for note that the full grid is not displayed but only the northwest portion where antidiagonals for these two ideals may lie here theorem produces p roof of t heorem we now prove the main result of this paper theorem which states that the ar generate ir we begin with a few fairly general statements theorem knu if ii i s are ideals generated by northwest rank conditions then init ii init ii lemma if j k are homogeneous ideals in a polynomial ring such that init j init k then j lemma let ia and ib be ideals that define schemes of northwest rank conditions and let det a ia and det b ib be determinants with antidiagonals a and b respectively such that a b x and a b then det x is in ia ib proof let vx v det x va v ia and vb v ib be the varieties corresponding to the ideals hdet x i ia and ib it is enough to show that va vx and vb vx we will show that given a matrix with antidiagonal x with a with antidiagonal a x where the northwest of the cells occupied by a has rank at most length a then the full matrix has rank at most length x the corresponding statement for with antidiagonal b can be proven by replacing a with b everywhere the basic idea of this proof is that we know the rank conditions on the rows and columns northwest of those occupied by a the rank conditions given by a then imply other rank conditions as adding either a row or a column to a can increase its rank by at most one column t column c rank at most l northwest of row k t column c rank at most l k c k t northwest of column k row k t rank at most k row k t northwest of column k row k row k f igure the proof of lemma the antidiagonal cells in a are marked in black and the antidiagonal cells in x a b are marked in white let k be the number of rows also the number of columns in the antidiagonal x let the length of a be l so the rank condition on all rows and columns northwest of those occupied by a is at most assume that the rightmost column of a is c and the leftmost column of a is t notice that this implies that the bottom row occupied by a is k t as the antidiagonal element in column t is in row k thus the northwest k t c of matrices in va has rank at most notice c with equality if a occupies a continuous set of columns so matrices in va have rank at most l in the northwest adding columns to this gives a with rank at most further by the same principle moving down t rows the northwest k k the whole matrix with antidiagonal x has rank at most k t t k hence has rank at most k and so is in vx for a visual explanation of the proof of lemma see figure lemma ar ii for i r and hence ar ai ranges over all antidiagonals for fulton generators of ii i ii proof fix i let s be the first antidiagonal containing a box occupied by a box contained in ai added to ar we shall show that det s is in ii and hence ar ii as it is a multiple of det s if ai s then det s ii either because s ai or s ai in which case we apply lemma otherwise and s is weakly to the northwest of ai therefore there is a subset b of s such that and b is weakly 
northwest of ai hence b is an antidiagonal for some determinant in ii and again by lemma det s ii lemma init ar ar under any antidiagonal term order proof init ar is a product of determinants with collective antidiagonals ar when we combine lemma and theorem we see that ar i init then lemmas and combine to complete the proof of theorem note that theorem may produce an oversupply of generators for example if then inputting the same set of p fulton generators twice results in a basis of polynomials for r eferences nantel bergeron and sara billey and schubert polynomials experiment math no mr william fulton flags schubert polynomials degeneracy loci and determinantal formulas duke math j no mr gs daniel grayson and michael stillman a software system for research in algebraic geometry available at http allen knutson and ezra miller geometry of schubert polynomials ann of math no mr allen knutson ezra miller and mark shimozono four positive formulae for type a quiver polynomials invent math no mr knu allen knutson frobenius splitting and degeneration preprint ezra miller and bernd sturmfels combinatorial commutative algebra graduate texts in mathematics vol new york mr two remarks on graded nilpotent classes uspekhi mat nauk no mr
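To supplement the worked examples given earlier, the sketch below produces the generator in the two simple situations spelled out there: pairwise disjoint antidiagonals give the product of their determinants, and antidiagonals whose union is again a single antidiagonal X give the determinant over X. It uses SymPy on a generic matrix of variables; the matrix size, the cell coordinates and the helper names are illustrative, and the general connected-component step of the full algorithm is not implemented here.

```python
import sympy as sp

n = 4
M = sp.Matrix(n, n, lambda i, j: sp.symbols(f"m{i + 1}{j + 1}"))

def antidiag_det(cells):
    """Determinant of the square submatrix spanned by an antidiagonal.

    `cells` lists (row, col) pairs from northeast to southwest; the product of the
    corresponding entries of M is the antidiagonal term selected by the term order."""
    rows = sorted({r for r, _ in cells})
    cols = sorted({c for _, c in cells})
    assert len(rows) == len(cols) == len(cells), "cells must form an antidiagonal"
    return M.extract(rows, cols).det()

def simple_generator(antidiagonals):
    """Only the two easy cases: disjoint antidiagonals, or antidiagonals merging into one."""
    union = sorted({c for a in antidiagonals for c in a})
    if len(union) == sum(len(a) for a in antidiagonals):
        # disjoint components: the generator is the product of the determinants
        return sp.Mul(*[antidiag_det(a) for a in antidiagonals])
    # overlapping antidiagonals whose union is a single antidiagonal X: det over X
    return antidiag_det(union)

g_disjoint = simple_generator([[(0, 1), (1, 0)], [(2, 3), (3, 2)]])
g_merged = simple_generator([[(0, 2), (1, 1)], [(1, 1), (2, 0)]])
```

Here g_disjoint is the product of two 2x2 determinants, while g_merged is the single 3x3 determinant of the northwest submatrix, matching the disjoint and overlapping cases described in the examples.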
| 0 |
an o log k n randomized algorithm for the problem oct wenbin abstract in this paper we show that there is an o log k n randomized algorithm for the problem on any metric space with n points which improved the previous best competitive ratio o k n log log n by nikhil bansal et al focs pages keywords problem online algorithm method randomized algorithm introduction the problem is to schedule k mobile servers to serve a sequence of requests in a metric space with the minimum possible movement distance in manasse et al introduced the ksever problem as a generalization of several important online problems such as paging and caching problems its conference version is in which they proposed a algorithm for the problem and a n algorithm for the n sever problem in a metric space they still showed that any deterministic online algorithm for the problem is of competitive ratio at least they proposed the conjecture for the problem on any metric space with more than k different points there exists a deterministic online algorithm with competitive ratio it was in shown that the conjecture holds for two special cases k and n k the conjecture also holds for the problem on a uniform metric the special case of the problem on a uniform metric is called the paging also known as caching problem slator and tarjan have proposed a algorithm for the paging problem for some other special metrics such as line tree there existed online algorithms yair bartal email department of computer science guangzhou university china state key laboratory for novel software technology nanjing university china and elias koutsoupias show that the work function algorithm for the problem is of kcompetitive ratio in the following special metric spaces the line the star and any metric space with k points marek chrobak and lawrence larmore proposed the algorithm for the problem on trees for the problem on the general metric space the conjecture remain open fiat et al were the first to show that there exists an online algorithm of competitive ratio that depends only on k for any metric space its competitive ratio is k the bound was improved later by grove who showed that the harmonic algorithm is of competitive ratio o the result was improved to log k by and grove a significant progress was achieved by koutsoupias and papadimitriou who proved that the work function algorithm is of competitive ratio generally people believe that randomized online algorithms can produce better competitive ratio than their deterministic counterparts for example there are several o log k algorithms for the paging problem and a log k lower bound on the competitive ratio in although there were much work the log k lower bound is still best lower bound in the randomized case recently bansal et al propose the first randomized algorithm for the problem on a general metrics spcace their randomized algorithm is of competitive ratio o k n log log n for any metric space with n points which improves on the deterministic competitive ratio of koutsoupias and papadimitriou whenever n is for the problem on the general metric space it is widely conjectured that there is an o log k randomized algorithm which is called as the randomized conjecture for the paging problem it corresponds to the problem on a uniform metric there is o log k algorithms for the weighted paging problem it corresponds to the problem on a weighted star metric space there were also o log k algorithms via the online method more extensive literature on the problem can be found in in this paper we show that 
there exists a randomized algorithm of o log k n competitive ratio for any metric space with n points which improved the previous best competitive ratio o k n log log n by nikhil bansal et al in order to get our results we use the online method which is developed by buchbinder and naor et al in recent years buchbinder and naor et al have used the method to design online algorithms for many online problems such as covering and packing problems the problem and so on first we propose a formulation for the fraction problem on a weighted hierarchical tree hst then we design an o log k online algorithm for the fraction problem on a weighted hst with depth since any hst with n leaves can be transformed into a weighted hst with depth o log n with any leaf to leaf distance distorted by at most a constant thus we get an o log k log n online algorithm for the fraction problem on an hst based on the known relationship between the fraction problem and the randomized problem we get that there is an o log k log n randomized algorithm for the problem on an hst with n points by the metric embedding theory we get that there is an o log k n randomized algorithm for the problem on any metric space with n points preliminaries in this section we give some basic definitions definition competitive ratio adapted from for a deterministic online algorithm dalg we call it if there exists a constant c such that for any request sequence costdalg r costop t c where costdalg and costop t are the costs of the online algorithm dalg and the best offline algorithm op t respectively for a randomized online algorithm we have a similar definition of competitive ratio definition adapted from for a randomized online algorithm ralg we call it rcompetitive if there exists a constant c such that for any request sequence e costralg r costop t c where e costralg is the expected cost of the randomized online algorithm ralg in order to analyze randomized algorithms for the problem introduce the fractional problem on the fractional problem severs are viewed as fractional entities as opposed to units and an online algorithm can move fractions of servers to the requested point definition fractional problem adapted from suppose that there are a metric space s and a total of k fractional severs located at the points of the metric space given a sequence of requests each request must be served by providing one unit server at requested point through moving fractional servers to the requested point the cost of an algorithm for servicing a sequence of requests is the cumulative sum of the distance incurred by each sever where moving a w fraction of a server for a distance of costs in bartal introduce the definition of a hierarchical tree hst into which a general metric can be embedded with a probability distribution for any internal node the distance from it to its parent node is times of the distance from it to its child node the number is called the stretch of the hst an hst with stretch is called a in the following we give its formal definition definition hierarchically trees hsts for a tree is a rooted tree t v e whose edges length function d satisfies the following properties for any node v and any two children of v d v d v for any node v d p v v d v w where p v is the parent of v and w is a child of for any two leaves and d p d p fakcharoenphol et al showed the following result lemma if there is a randomized algorithm for the problem on an with all requests at the n leaves then there exists an o log n competitive randomized online algorithm 
for the problem on any metric space with n points we still need the definition of a weighted hierarchically tree introduced in definition weighted hierarchically trees weighted hsts a weighted is a rooted tree satisfying the property of the definition and the property d p v v d v w for any node v which is not any leaf or the root where p v is the parent of v and w is a child of in banasl et al show that an arbitrary depth with n leaves can be embedded into an o log n depth weighted with constant distortion which is described as follows lemma let t be a t with n leaves which is of possibly arbitrary depth then t can be transformed into a weighted with depth o log n such that the leaves of and t are factor the same and leaf to leaf distance in t is distorted in by a at most an o k randomized algorithm for the problem on an hst when n k in this paper we view the problem as the weighed caching problem such that the cost of evicting a page out of the cache using another page satisfies the triangle inequality a point is viewed as a page the set of k points that are served by k severs is viewed as the cache which holds k pages the distance of two points i and j is viewed the cost of evicting the corresponding page pi out of the cache using the corresponding page pj let n pn denotes the set of n pages and d pi pj denotes the cost of evicting the page pi out of the cache using the page pj for any pi pj n which satisfies the triangle inequality for any pages i j s d pi pi d pi pj d pj pi d pi pj d pi ps d ps pj let pm be the requested pages sequence until time m where pt is the requested page at time at each time step if the requested page pt is already in the cache then no cost is produced otherwise p the page pt must be fetched into the cache by evicting some other pages p in the cache and a cost d p pt is produced in this section in order to clearly describe our algorithm design idea we consider the case n k first we give some notations let denote a hierarchically trees with stretch factor let n be the number of nodes in a and leaves be pn let v denote the depth of a node let r denote the root node thus r for any leaf p let denote its depth p let p v denote the parent node of a node c v denote the set of children of a node let d denote the distance from the root to its a child d v denote the d distance from a node v to its parent d v d v p v it is easy to know that d v v let tv denote the subtree rooted at v and l tv denote the set of leaves in tv let denote the number of the leaves in tv for a leaf pi let a pi j denote the ancestor node of pi at the depth j thus a pi is pi a pi is the root r and so on at time t let variable xpi t denote the fraction of pi that is in pthe cache and upi t denote the fraction pof pi that is out of cache obviously xpi t upi t and xp t for a node v let uv t up t it is the total fraction of n tv p pages in the subtree tv which is out of the cache it is easy to see that uv t uw t suppose v that at time the set of initial k pages in the cache is i pik at time t when the request pt arrives if page pt is fetched mass pt p into the cache by evicting out the page p in the cache then the evicting cost is d p pt pt p for a metric suppose the path from pt to p in it is pt vj v p where v is the first common ancestor node of pt and by the definition of a we have d pt d p and d vi d j j j p p p for any i j thus d p pt d pt d vi d p d pt vi so the evicting cost is pt cost incurred at time t is n p j p vi pt p since p can be any page in n pt the evicting v max uv uv t thus we give 
the lp formulation for the fractional problem on a as follows m p n m p p p minimize v zv t upt t p up t k subject to and s n with k p up up t and a subtree tv v r zv t tv and node v zv t uv t for t and any leaf node p i for t and any leaf node p i the first primal constraintp states that pat any time t if we take any set s of vertices with p xp t k the total number of pages out xp t up t k then n of the cache is at lease the variables zv t denote the total fraction mass of pages in tv that are moved out of the subtree tv obviously it is not needed to define a variable zr t for the root node the fourth and fifth constraints and enforce the initial k pages in the cache are pik the first term in the object function is the sum of the moved cost out of the cache and the second term enforces the requirement that the page pt must be in the cache at time t upt t its dual formulation is as follows m p p p k as t d maximize n k subject to and p n pt p as t s and n p p b a p j b a p j t ba p j and any subtree tv bv t v and v and k as t bv t in the dual formulation the variable as t corresponds to the constraint of the type the variable bv t corresponds to the constraint of the type the variable corresponds to the constraint of the type and based on above formulation we extend the design idea of bansal et s primaldual algorithm for the metric task system problem on a to the problem on a the design idea of our online algorithm is described as follows during the execution of our algorithm it always maintains the following relation between the primal variable uv t and dual bv ln k when the request pt arrives at variable bv uv t f bv exp v time t the page pt is gradually fetched into the cache and other pages are gradually moved out of the cache by some rates until pt is completely fetched into the cache upt t is decreased at some rate and other up t is increased at some rate for any p n pt until upt t becomes it can be viewed that we move mass upt t out of leaf pt through its ancestor nodes and distribute it to other leaves p n pt in order to compute the exact distributed amount at each page p n pt the online algorithm should maintain the following invariants satisfying dual constraints it is tight for all dual constraints of type on other leaves n pt p node identity property uv t uw t holds for each node v v we give more clearer description of the online algorithm process at time t when the request pt arrives we initially set upt t upt if upt t then we do nothing thus the primal cost and the dual profit are both zero all the invariants continue to hold if upt t then we start to increase variable as at rate at each step we would like to keep the dual constraints tight and maintain the node identity property however increasing variable as violates the dual constraints on leaves in n pt hence we increase other dual variables in order to keep these dual constraints tight but increasing these variables may also violate the node identity property so it makes us to update other dual variables this process results in moving initial upt t mass from leaf pt to leaves n pt we stop the updating process when upt t become in the following we will compute the exact rate at which we should move mass upt t from pt through its ancestor nodes at time t to other leaves in n pt in the because of the space limit we put proofs of the following some claims in the appendix first we show one property of the function f lemma duv t dbv proof since uv t claim ln v uv t bv k exp v k ln k we take the derivative over bv and get the 
in order to maintain the node identity property uv t p uw t for each node v at any v time t when uv t is increased or decreased it is also required to increase or decrease the children of v at some rate the connection between these rates is given lemma for a node v if we increase variable bv at rate h then we have the following equality p dbw dbv uw t uv t k dh dh v we need one special case of lemma when the variable bv is increased decreased at rate h it is required that the increasing decreasing rate of all children of v is the same by above lemma we get lemma for v a node assume that we increase or decrease the variable bv at rate if the increasing or decreasing rate of each w c v is the same then in order to keep the node identity property we should set the increasing or decreasing rate for each child w c v as follows db dbw v dh dh repeatedly applying this lemma we get the following corollary corollary for a node v with v j and a path p from leaf pi tv to v if bv is increased or decreased at rate h and the increasing decreasing rate of all children of any v p is the p db v j where j same then dh dh we still require the following special case of lemma let be the first child of the node assume that is increased or decreased at some rate and the rate of increasing or decreasing is the same for every c v if bv is unchanged then the following claim should hold lemma let wm be the children of a node assume that we increase or decrease db db i at rate h and also increase to wm at the same rate for i let wdh be wdh if we would like to maintain the amount uv t unchanged then we should have dh k v uv t k t dh dh theorem when request pt arrives at time t in order to keep the dual constraints tight and node identity property if as t is increased with rate we should decrease every ba pt j j with rate dba pt j das t j ua pt j t a pk t j ua pt t a ptk for each sibling w of a pt j increase bw with the following rate dbw das t j ua pt t pt k thus we design an online algorithm for the fractional problem as follows see algorithm at time t we set for all p and set ba p j for any j at time t when a request pt arrives initially we set up t up for all p and bp is initialized to bp t if upt t then do nothing otherwise do the following p xp t k n so s n let s p up t since k n while upt t increasing as t with rate for each j decrease every ba pt j with rate dba pt j das t j ua pt j t a pk t j ua pt t a ptk for each sibling w of a pt j increase bw with the following rate dbw das t for any pt k j ua pt t node v in the path from w to a leaf dh in tw if be the child of v dh algorithm the online algorithm for the fractional problem on a theorem the online algorithm for the fractional problem on a is of competitive ratio k in duru study the relationship between fractional version and randomized version of the problem which is given as follows lemma the fractional problem is equivalent to the randomized problem on the line or circle or if k or k n for arbitrary metric spaces thus we get the following conclusion theorem there is a randomized algorithm with competitive ratio k for the problem on a when n k by lemma we get the following conclusion theorem there is an o k log n competitive randomized algorithm for the problem on any metric space when n k an o log k fractional algorithm for the problem on a weighted hst with depth in this section we first give an o log k fractional algorithm for the problem on a weighted with depth we give another some notations for a weighted hst let be a weighted for a node d v d p v v 
whose depth is j let d w d v w where w is a child of by the definition of a weighted for all j for a node v if any leaf p l t v such that up t we call it a full node by this definition for a full node uv t tv otherwise we call it node let n f c v is the set of children node of v n f c v c v and w is a node for a node v let n l tv denote the set of leaf nodes in let s t let p denotes the path from pt to root r a pt pt a pt a pt a pt r for a node v p if there exists a p pt such that v is the first common ancestor of pt and p we call it a common ancestor node in p let ca pt s denote the set of common ancestor nodes in p suppose thatp ca pt s a pt a pt a pp t where for a node uw t thus for a full node v uv t up t it is easy to know that uv t v let uv t f c v for any j ua pt j t ua pt t for any j ua pt t ua pt j t ua pt t for any j ua pt j t ua pt t the formulation for the fractional problem on a weighted hst is the same as that on a hst in section based on the formulation the design idea of our online algorithm is similar to the design idea in section during the execution of our algorithm it keeps the following relation between the primal variable uv t and dual variable bv uv t f bv bv l tv exp v ln k this relation determines how much mass of upt t should be k gradually moved out of leaf pt and how it should be distributed among other leaves s pt until pt is completely fetched into the cache upt t thus p at any time t the algorithm maintains up t n a distribution t upn t on the leaves such that n in order to compute the the exact rate at which we should move mass upt t from pt through its ancestor nodes at time t to other leaves s pt in the weighted using similar argument to that in section we get following several claims because of the space limit we put their proofs in the appendix lemma duv t dbv proof since uv t claim ln v uv t l tv k bv l tv exp v k ln k we take the derivative over bv and get the lemma for a node v with v j if we increase variable bv at rate h then we have the following equality p dbw db l tv w v uw t l t uv t k dh dh k f c v lemma for v a node with v j assume that we increase or decrease the variable bv at rate if the increasing or decreasing rate of each w n f c v is the same then in order to keep the node identity property we should set the increasing or decreasing rate for each child w n f c v as follows dbw db v dh dh repeatedly applying this lemma we get the following corollary corollary for a node v with v j and a path p from leaf pi tv to v if bv is increased or decreased at rate h and the increasing decreasing rate of all children of any v p is the same p db v j where j then dh dh lemma let wm be the children node of a node v any wi n f c v assume that we increase or decrease at rate h and also increase to wm at the same rate db db i if we would like to maintain the amount uv t unchanged then for i let wdh be wdh we should have dh k v uv t t k t dh dh theorem when request pt arrives at time t in order to keep the dual constraints tight and node identity property if as t is increased with rate we should decrease every ba pt j for each j with rate dba pt j das t ur t l t l t a pt a pt j j k ua pt j t ua pt t k k for each sibling w n f c v of a pt j increase bw with the following rate dbw das t ur t k j ua pt t l ta pt k thus we design an online algorithm for the fractional problem on a weighted as follows see algorithm theorem the online algorithm for the fractional problem on a weighted with depth is of competitive ratio ln k by lemma we get theorem there exists 
an o log k log n fractional algorithm for the problem on any in nikhil bansal et al show the following conclusion at time t we set for all at time t when a request pt arrives initially we set up t up for all p and bp is initialized to bp t if upt t then do nothing otherwise do the following let s p up t suppose that ca pt s a pt a pt a pt where while upt t increasing as t with rate for each j decrease every ba pt j with rate dba pt j ur t k das t j ua pt j t l ta pt k l ta pt j k ua pt t for each sibling w n f c v of a pt j increase bw with the following rate dbw das t ur t l t a pt j k ua pt t k for any node v in the path from w to a leaf in n l tw if n f c v and db db v j wdh vdh for p s pt if some up t reaches the value of then we update s s p and the set n f c v for each ancestor node v of algorithm the online algorithm for the fractional problem on a weighted lemma let t be a with then any online fractional algorithm on t can be converted into a randomized algorithm on t with an o factor loss in the competitive ratio thus we get the following conclusion by theorem theorem let t be a with there is a randomized algorithm for the problem with a competitive ratio of o log k log n on t by lemma we get the following conclusion theorem for any metric space there is a randomized algorithm for the problem with a competitive ratio of o log k n conclusion in this paper for any metric space with n points we show that there exist a randomized algorithm with o log k n ratio for the problem which improved the previous best competitive ratio o k n log log n acknowledgments we would like to thank the anonymous referees for their careful readings of the manuscripts and many useful suggestions wenbin chen s research has been partly supported by the national natural science foundation of china nsfc under grant the research projects of guangzhou education bureau under grant no and the project from state key laboratory for novel software technology nanjing university references dimitris achlioptas marek chrobak and john noga competitive analysis of randomized paging algorithms theoretical computer science avrim blum carl burch and adam kalai paging proceedings of the annual symposium on foundations of computer science page nikhil bansal niv buchbinder aleksander madry joseph naor a polylogarithmiccompetitive algorithm for the problem focs pages nikhil bansal niv buchbinder and joseph seffi naor a randomized algorithm for weighted paging proceedings of the annual ieee symposium on foundations of computer science pages buchbinder jain and naor online algorithms for maximizing revenue proc european symp on algorithms esa pp buchbinder and naor online algorithms for covering and packing problems proc european symp on algorithms esa volume of lecture notes in comput pages springer buchbinder and naor improved bounds for online routing and packing via a approach proc symp foundations of computer science pages niv buchbinder joseph naor the design of competitive online algorithms via a approach foundations and trends in theoretical computer science nikhil bansal niv buchbinder and joseph seffi naor towards the randomized conjecture a approach proceedings of the annual siam symposium on discrete algorithms pp nikhil bansal niv buchbinder and joseph seffi naor metrical task systems and the ksever problem on hsts in icalp proceedings of the international colloquium on automata languages and programming yair bartal probabilistic approximations of metric spaces and its algorithmic applications proceedings of the 
annual ieee symposium on foundations of computer science pages yair bartal on approximating arbitrary metrices by tree metrics proceedings of the annual acm symposium on theory of computing pages yair bartal bela bollobas and manor mendel a theorem for metric spaces and its applications for metrical task systems and related problems proceedings of the annual ieee symposium on foundations of computer science pages yair bartal and eddie grove the harmonic algorithm is competitive journal of the acm yair bartal nathan linial manor mendel and assaf naor on metric phenomena proceedings of the annual acm symposium on theory of computing pages yair bartal elias koutsoupias on the competitive ratio of the work function algorithm for the problem theoretical computer science avrim blum howard karloff yuval rabani and michael saks a decomposition theorem and bounds for randomized server problems proceedings of the annual ieee symposium on foundations of computer science pages allan borodin and ran online computation and competitive analysis cambridge university press chrobak and larmore an optimal algorithm for on trees siam journal on computing meyerson and poplawski randomized on hierarchical binary trees proceedings of the annual acm symposium on theory of computing pages csaba and lodha a randomized algorithm for the problem on a line random structures and algorithms jittat fakcharoenphol satish rao and kunal talwar a tight bound on approximating arbitrary metrics by tree metrics proceedings of the annual acm symposium on theory of computing pages fiat rabani and ravid competitive algorithms journal of computer and system sciences amos fiat richard karp michael luby lyle mcgeoch daniel dominic sleator and neal young competitive paging algorithms journal of algorithms edward grove the harmonic online algorithm is competitive proceedings of the annual acm symposium on theory of computing pages elias koutsoupias the problem computer science review vol no pages elias koutsoupias and christos papadimitriou on the conjecture journal of the acm manasse mcgeoch and sleator competitive algorithms for online problems proceedings of the annual acm symposium on theory of computing pages manasse mcgeoch and sleator competitive algorithms for server problems journal of algorithms lyle mcgeoch and daniel sleator a strongly competitive randomized paging algorithm algorithmica daniel sleator and robert tarjan amortized efficiency of list update and paging rules communications of the acm duru the problem and fractional analysis master s thesis the university of chicago http appendix proofs for claims in section the proof for lemma is as follows p proof since it is required to maintain uv t uw t we take the derivative of both sides and v get that duv t dbv dbv dh p v by lemma we get duw t dbw dbw dh ln v uv t d v since d w we get p dbw dbv uv t k dh dh v k p v uw t dbw dh ln w uw t k k the proof for lemma is as follows proof by above lemma if the increasing or decreasing rate of each w c v is the same we get that p dbv db db w uv t uw t w uv t k dh dh dh so we get that dbw dh v dbv dh the proof for lemma is as follows proof by lemma in order to keep the amount uv t unchanged we get p dbw t w uw t w dh k dh k thus db wdh db db dh v t w k so wdh wdh t hence we get the claim k dh dh p uw t v uv t v k w k the proof for theorem is as follows proof when request pt arrives at time t we move mass upt t from pt through its ancestor nodes to other leaves n pt upt t is decreased and up t is increased for any p n pt since 
these mass moves out of each subtree ta pt j for each j ua pt j t is decreased by b t j ua pt j t f ba pt j exp a p ln k we need to keep this relation during v the algorithm ba pt j also decreases for each j on the other hand up t is increased for each p n pt thus for each node v whose tv doesn t contain pt its mass uv t is also increased for each node v whose tv doesn t contain pt it must be a sibling of some node a pt j for each j we assume that all siblings v of node a pt j increase at the same rate in the following we will compute the increasing or decreasing rate of all dual variables in the b t j be the decreasing rate of ba pt j regarding as for j let a pda s b regarding as for j let w das be the increasing rate of bw for any siblings w of a pt j regarding as using from top to down method we can get a set of equations about the quantities and first we consider the siblings of a pt those nodes are children of root r but they are not a pt let w be one of these siblings if bw is raised by by corollary the sum of on any path from a leaf in tw to w must be since as is increasing with rate it forces in order to maintain the dual constraint tight for leaves in tw this considers the dual constraints for these leaves now this increasing mass must be canceled out by decreasing the mass in ta pt since the mass ur t in tr is not changed thus in order to maintain the node identity property of root by lemma we must set such that a p t k ua pt t n for siblings of node a pt we use the similar argument let w be a sibling of a pt consider a path from a leaf in tw to the their dual constraint already grows at rate this must be canceled out by increasing bw and if bw is raised by by corollary the sum of on any path from a leaf in tw to w must be thus must be set such that again this increasing mass must be canceled out by decreasing the mass in ta pt in order to keep the node identity property of a pt by lemma we must set such that p t k a p t ua pt t k ua pt t continuing this method we obtain a system of linear equations about all and j for maintaining the dual constraints tight we get the following equations p i for keeping the node identity property we get the following equations a p t k ua pt t n a p t k a p t ua pt t k ua pt t a p t k a p t ua pt t k ua pt t we continue to solve the system of linear equations for each j p j i p i j j j j a p t k a p t ua pt t k a p t ua pt t k a p t ua pt t k since j j ua pt t we get solving the recursion we get n a p j ua pt t n j ua pt j t t k a pt j k ua pt t a pt k the proof for theorem is as follows proof let p denote the value of the objective function of the primal solution and d denote the value of the objective function of the dual solution initially let p and d in the following we prove three claims the primal solution produced by the algorithm is feasible the dual solution produced by the algorithm is feasible p k by three claims and weak duality of linear programs the theorem follows immediately first we prove the claim as follows at any time t since s n and the algorithm keeps p n k so the primal constraints are satisfied n second we prove the claim as follows by theorem the dual constraints are satisfied obviously dual constraints are satisfied for any node v if bv then uv t if bv v then uv t thus the dual constraints are satisfied third we prove claim as follows if the algorithm increases the variables as t at some time k n n let s compute the primal cost at depth j j t then s t we compute the movement cost of our algorithm by the change of as 
follows p du w dbw t asj j a pt a pt j p w a pt a pt j ln w uw t k j p uw t a pt a pt j p t ua p t k p t uw t kw a pt a pt j p t ua pt t k k ln k ln k let bj denote ua pt j t pt j k p then uw t a pt a pt j k bj hence the total cost over all levels is movement ln k p bj ln k p b j ln k ln ln k ln b ln k ln u pt t k ln k ln ln k ln k ln k ln k ln k where the first inequality holds since y ln y for any y thus we get p k let op t be the cost of the best offline algorithm pmin be the optimal primal solution and dmax be the optimal dual solution then pmin op t since op t is a feasible solution for the d p p primal program based on the weak duality dmax pmin hence op t pmin pmin dmax pmin min ln p k pmin so the competitive ratio of this algorithm is k proofs for claims in section the proof for lemma is as follows p proof since it is required to maintain uv t uw t we take the derivative of both sides f c v and get that dbv duv t dbv dh p duw t dbw f c v ln v uv t by lemma we get d v since d w we get db l tv v uv t k dh dbw dh l tv k p f c v dbw dh p f c v uw t dbw dh ln w uw t l tw k l tw k the proof for lemma is as follows proof by lemma if the increasing or decreasing rate of each w n f c v is the same we get that uv t db l tv v k dh so we get that dbw dh dbw dh p uw t f c v l tw k dbw dh uv t l tv k dbv dh the proof for lemma is as follows proof by lemma in order to keep the amount uv t unchanged we get p l dbw w t uw t l t dh k dh k thus db wdh db db dh f c v l t k wdh t so wdh hence we get the claim l k dh dh p uw t f c v v uv t l t k l tw k the proof for theorem is as follows proof when request pt arrives at time t we move mass upt t from pt through its ancestor nodes to other leaves nodes s pt upt t is decreased and up t is increased for any p s pt since these mass moves out of each subtree ta pt j for each j ua pt j t is decreased by b v t j exp a p ua pt j t f ba pt j l t ln k we need to keep this relation k v during the algorithm ba pt j also decreases for each j on the other hand up t is increased for each p s pt thus for each node v whose tv doesn t contain pt its mass uv t is also increased for each node v whose tv doesn t contain pt it must be a sibling of some node a pt j where j we assume that all siblings v of any node v increase at the same rate in the following we will compute the increasing or decreasing rate of all dual variables in b t j be the decreasing rate of the weighted regarding as for j let a pda s b ba pt j regarding as for each j let w das be the increasing rate of bw for any siblings w n f c a pt j of a pt j regarding as using from top to down method we can get a set of equations about the quantities and first we consider the siblings of a pt those nodes are children of a pt but they are not a pt let w be one of these siblings if bw is raised by by corollary the sum of on any path from a leaf in tw to w must be since as is increasing with rate it forces in order to maintain the dual constraint tight for leaf nodes in tw this considers the dual constraints for these leaf nodes now this increasing mass must be canceled out by decreasing the mass in ta pt since the mass ua pt t in ta pt is not changed thus in order to maintain the node identity property of a pt by lemma we must set such that t a p t k ua pt t k t a p t ua pt t k ur k ua pt t for siblings of node a pt we use the similar argument let w be a sibling of a pt consider a path from a leaf node in tw to the their dual constraint already grows at rate this must be canceled out by increasing bw and if bw is 
raised by by corollary the sum of on any path from a leaf in tw to w must be thus must be set such that again this increasing mass must be canceled out by decreasing the mass in ta pt in order to keep the node identity property of a pt by lemma we must set such that ta p t k t a p t ua pt t k ta p t ua pt t k t a p t ua pt t k ua pt t continuing this method we obtain a system of linear equations about all and j for maintaining the dual constraints tight we get the following equations p for keeping the node identity property we get the following equations t a p t k ur k t a p t ua pt t k t a p t ua pt t k ua pt t t a p t k t a p t h ua pt t k ua pt t we continue to solve the system of linear equations for each j h p p j since t a p t j k t a p t ua pt t k t a p t ua pt t k t a p t j ua pt t k ua pt t we get solving the recursion we get ur k ua pt t t a p t j k l t a pt l t a pt ua pt t k k l t a pt l t a pt ur k ua pt t k k ua pt t ur k ua pt t the proof for theorem is as follows proof let p denote the value of the objective function of the primal solution and d denote the value of the objective function of the dual solution initially let p and d in the following we prove three claims the primal solution produced by the algorithm is feasible the dual solution produced by the algorithm is feasible p ln k by three claims and weak duality of linear programs the theorem follows immediately the proof of claim and are similar to that of claim and in section third we prove claim as follows if the algorithm increases the variables as t at some time let s compute the primal cost at depth j t then s t we compute the movement cost of our algorithm by the change of as follows p du w dbw t asj j f c a pt a pt j p w a pt a pt j ur t k j p up t k j p l tw k p w uw t t k a pt a pt j ta p t ua pt t k p w uw t t k a pt a pt j ta p t ua pt t k ln k p w uw t t k a pt a pt j ta p j t ua pt t k p p w uw t t up t k k a pt a pt j pt ta p j t ua pt t k p w uw t k a pt a pt j ta p k t ua pt t k p w uw t t k a pt a pt j ta p t ua pt t k ln k ln w uw t pt up t k ln k ln k k ln k ln k k ln k k since p uw t a pt a pt j l t a pt ua pt t k where the first inequality holds since l tw k p up t k the reason is that the constraint p up t at time t is not satisfied otherwise the algorithm stop increasing the variable up t since p up t k upt t the algorithm stop increasing the variables in k pt pt addition when k k thus the total cost of all j depth is at most ln k k hence we get p ln k so the competitive ratio of this algorithm is ln k
| 8 |
on the exact solution to a smart grid analysis problem i ntroduction a modern society relies critically on the proper operation of the electric power distribution and transmission system which is supervised and controlled through supervisory control and data acquisition scada systems through remote terminal units rtus scada systems measure data such as transmission line power flows bus power injections and part of the bus voltages and send them to the state estimator to estimate the power network states the bus voltage phase angles and bus voltage magnitudes the estimated states are used for vital power network operations such as optimal power flow opf dispatch and contingency analysis ca see fig for a block diagram of the above functionalities any malfunctioning of these operations can delay proper reactions in the control center and lead to significant social and economical consequences such as the northeast us blackout of the technology and the use of the scada systems have evolved a lot since the when they were introduced the scada systems now are interconnected to office lans and through them they are connected to the internet hence today there are more access points to the scada systems and also more functionalities to tamper with for example the rtus can be subjected to attacks in fig the communicated data can be subjected to false data attacks furthermore the scada master itself can be attacked this paper focuses on the cyber security issue related to false data attacks where the communicated metered measurements are subjected to additive data attacks a false data attack can potentially lead to erroneous state estimates the authors are with the access linnaeus center and the automatic control lab the school of electrical engineering kth royal institute of technology sweden sou hsan kallej this work is supported by the european commission through the viking project the swedish research council vr under grant and grant and the knut and alice wallenberg foundation power network rtus rtus agc optimal power flow ems x sscada masster index network state estimation security operation research optimization methods state estimatorr paper considers a smart grid problem analyzing the vulnerabilities of electric power networks to false data attacks the analysis problem is related to a constrained cardinality minimization problem the main result shows that an relaxation technique provides an exact optimal solution to this cardinality minimization problem the proposed result is based on a polyhedral combinatorics argument it is different from results based on mutual coherence and restricted isometry property the results are illustrated on benchmarks including the ieee and systems sscada masster sep kin cheong sou henrik sandberg and karl henrik johansson human operator control center fig block diagram of power network control center and scada rtus connected to the substations transmit and receive data from the control center using the scada system at the control center a state estimate is computed and then used by energy management systems ems to send out commands to the power network the human figures indicate where a human is needed in the control loop this paper considers the false data attack scenario in by the state estimator which can result in gross errors in opf dispatch and ca in turn these can lead to disasters of significant social and economical consequences false data attack on communicated metered measurements has been considered in the literature was the first to point out that a 
coordinated intentional data attack can be staged without being detected by state estimation bad data detection bdd algorithm which is a standard part of today s system investigate the construction problem for such unobservable data attack especially the sparse ones involving relatively few meters to compromise under various assumptions of the network dc power flow model in particular poses the attack construction problem as a cardinality minimization problem to find the sparsest attack including a given set of target measurements set up similar optimization problems for the sparsest attack including a given measurement seek the sparsest nonzero attack and finds the sparsest attack including exactly two injection measurements the solution information of the above optimization problems can help network operators identify the vulnerabilities in the network and strategically assign protection resources encryption of meter measurements to their best effect on the other hand the unobservable data attack problem has its connection to another vital ems functionality namely observability analysis in particular solving the attack construction problem can also solve an observability analysis problem this is to be explained in section this connection was first reported in and was utilized in to compute the sparsest critical for some integer this is a generalization of critical measurements and critical sets to perform the analysis in a timely manner it is important to solve the data attack construction problem efficiently this effort has been discussed for instance in the efficient solution to the attack construction problem in is the focus of this paper the matching pursuit method employed in and the basis pursuit method relaxation and its weighted variant employed in are common efficient approaches to suboptimally solve the attack construction problem however these methods do not guarantee exact optimal solutions and in some cases they might not be sufficient see for instance for a naive application of basis pursuit and its consequences while provide solution procedures for their respective attack construction problems the problems therein are different from the one in this paper furthermore the considered problem in this paper can not be solved as a special case of in particular in the attack vector contains at least one nonzero entry however this nonzero entry can not be given a priori needs to restrict the number of nonzero injection measurements attacked while there is no such requirement in the problem considered in this paper in a simple heuristics is provided to find suboptimal solutions to the attack construction problem this heuristics however might not be sufficiently accurate is most closely related to the current work the distinctions will be elaborated in section the main conclusion of this paper is that basis pursuit relaxation can indeed solve the data attack construction problem exactly under the assumption on the network metering system that no injection measurements are metered the limitations of this assumption will be discussed in section in fact the main result identifies a class of cardinality minimization problems where basis pursuit can provide exact optimal solutions this class of problems include as a special case the considered data attack construction problem under the assumption above outline section ii describes the state estimation model and introduces the optimization problems considered in this paper section iii describes the main results of this paper the solution 
to the considered optimization problems section iv compares the proposed result to related works section v provides the proof the proposed main results section vi numerically demonstrates the advantages of the proposed results ii s tate e stimation and c yber ecurity a nalysis o ptimization p roblems a power network model and state estimation a power network with n buses and ma transmission lines can be described as a graph with n nodes and ma edges the graph topology can be specified by the directed incidence matrix r in which the direction of the edges can be assigned arbitrarily the physical property of the network is described by a nonsingular diagonal matrix d rma whose nonzero entries are the reciprocals of the reactance of the transmission lines the states of the network include bus voltage phase angles and bus voltage magnitudes the latter of which are typically assumed to be constant equal to one in the per unit system in addition since one arbitrary bus is assigned as the reference with zero voltage phase angle the network states considered n can be captured in a vector the state estimator estimates the states based on the measurements obtained from the network under the dc power flow model the measurement vector denoted as z is related to by p db t z where h qbdb t in can be either a vector of random error or intentional additive data attack b is the truncated incidence matrix with the row corresponding to the reference node removed and p consists of a subset of rows of an identity matrices of appropriate dimension indicating which line power flow measurements are actually taken together p db t is a vector of the power flows on the transmission lines to be measured analogously the matrix q selects the bus power injection measurements that are taken qbdb t is a vector of power injections at the buses to be measured therefore h is the measurement matrix relating the measured power quantities to the network states the number of rows of h is denoted the measurements z and the network information h are jointly used to find an estimate of the network states denoted as assuming that the network is observable it is wellestablished that the state estimate can be obtained using the weighted least squares approach chapter chapter h t w h w h t z where w is a positive definite diagonal weighting matrix typically weighting more on the more accurate measurements the state estimate is subsequently fed to other vital scada functionalities such as opf dispatch and ca therefore the accuracy and reliability of is of paramount concern to detect possible faults in the measurements z the bdd test is commonly performed see in one typical strategy if the norm of the residual residual z h i h h t w h w h t is too big then the bdd alarm will be triggered unobservable data attack and security index the bdd test is in general sufficient to detect the presence of if it contains a single random error however in face of a coordinated malicious data attack on multiple measurements the bdd test can fail in particular considers unobservable attack of the form for an arbitrary rn since as defined in would result in a zero residual in it is unobservable from the bdd perspective this was also experimentally verified in in a realistic scada system testbed to quantify the vulnerability of a network to unobservable attacks introduced the notion of security index for an arbitrarily specified measurement the security index is the optimal objective value of the following cardinality minimization problem minimize n 
subject to h k where k is given indicating that the security index is computed for measurement the symbol k denotes the cardinality of a vector and h k denotes the k th row of the security index is the minimum number of measurements an attacker needs to compromise in order to attack measurement k undetected in particular a small security index for a particular measurement k means that in order to compromise k undetected it is necessary to compromise only a small number of additional measurements this can imply that measurement k is relatively easy to compromise in an unobservable attack as a result the knowledge of the security indices allows the network operator to pinpoint the security vulnerabilities of the network and to better protect the network with limited resource to model the case where certain measurements are protected hence can not be attacked problem becomes minimize n subject to h k h i where the protection index set i m is given h i denotes a submatrix of h with rows indexed by i by convention the constraint h i is ignored when i hence is a special case of measurement set robustness analysis j subject to kj rank h n rank h k n k i ii h k h has full column rank n the following three statements are true a b c problem is feasible if and only if condition i is satisfied problem is feasible if and only if conditions i and ii are satisfied if conditions i and ii are satisfied then and are equivalent see definition in section note that if condition i is not satisfied then the corresponding measurement k should be removed from consideration also since measurement redundancy is a common practice in power networks h can be assumed to have full column rank n therefore conditions i and ii in proposition can be justified in practice finally note that proposition remains true for arbitrary matrix h not necessarily defined by iii p roblem s tatement and m ain r esult a problem statement problem is also motivated from another important state estimation analysis problem namely observability analysis the measurement set described by h in is observable if can be uniquely determined by an important question of observability analysis is as follows minimize in above k is a given index and denotes the complement of j for index set i in the rest of the paper denotes its complement the meaning of is as follows j denotes a subset of measurements from the measurement system described by the condition that rank h n means that the measurement system becomes unobservable if the measurements associated with j are lost that is it becomes impossible to uniquely determine from h the problem in seeks the minimum cardinality j which must include a particular given measurement therefore if there exist a measurement k which leads to an instance of with a very small objective value then the measurement system is not robust against meter failure special cases of have been extensively studied in the power system community for instance the solution label sets of cardinalities one and two are respectively referred to as critical measurements and critical sets containing measurement k their calculations have been documented in for example for the more general cases where the minimum cardinality is p the solution label set in is a critical which contains the specified measurement k solving solves as well the justification is given by the following statement inspired by and proved in appendix proposition let h and k m be given for problems and denote the two conditions as discussed previously this paper proposes an 
efficient solution to the security index attack construction problem in however the proposed result focuses only on a generalization of a special case of in this special case h in does not contain injection measurements h p db t the limitation of the assumption in will be discussed in section after the main result is presented in the appendix it is shown that the special case of with the assumption in is equivalent to p b t minimize n subject to p k b t p i b t instead of considering directly the proposed result pertains to a more general optimization problem associated with a totally unimodular matrix the determinant of every square submatrix is either or in particular the following problem is the main focus of this paper minimize n subject to a x a k x a i x where a is a given totally unimodular matrix and k m and i m are given since b in is an incidence matrix p b t is a totally unimodular matrix therefore is a generalization of however neither nor includes each other as special cases statement of main result theorem let be an optimal basic feasible solution to where a k and i are defined in then is an optimal solution to remark theorem provides a complete procedure for solving via if the standard form lp problem in is feasible then it contains at least one basic feasible solution see the definition in section together with the fact that the objective value is bounded from below by zero theorem implies that problem contains at least one optimal basic feasible solution which can be used to construct an optimal solution to according to theorem conversely if the feasible set of is empty then the feasible set of must also be empty because a feasible solution to can be used to construct a feasible solution to remark to ensure that an optimal basic feasible solution to is found if one exists the simplex method chapter can be used to solve the proof of theorem will be given in section before that the related work are reviewed and the assumption in is discussed iv r elated w ork relaxation problem is a cardinality minimization problem in general no efficient algorithms have been found for solving cardinality minimization problems so heuristic or relaxation based algorithms are often considered the relaxation basis pursuit is a relaxation technique which has received much attention in relaxation instead of the following optimization problem is set up and solved minimize n subject to a x a k x a i x where in the objective function in the vector replaces the cardinality in problem can be rewritten as a linear programming lp problem in standard form pp minimize p j j a a k a i where denotes the cardinality of the index set if is a feasible solution to then x is feasible to hence an optimal solution to if it exists corresponds to a suboptimal solution to the original problem in an important question is under what conditions this suboptimal solution is actually optimal to an answer is provided by our main result based on the special structure in and the fact that matrix a is totally unimodular subject to a rationale of the no injection assumption in consider the case of where i corresponds only to line power flow measurements then with the definition of h in it can be verified that is equivalent to the following minimize n p b t qb t t subject to p k b p i b t this indicates that the considered problem in is a relaxation of the general case in utilizes this observation and obtains satisfactory suboptimal solution to alternatively considers indirectly accounting for the term qb t in the objective 
function of demonstrates that solving the following problem provides satisfactory suboptimal solution to minimize n subject to b t b t b t with appropriately defined and notice that has the same form as in conclusion the no injection assumption in which leads to introduces limitation but it need not be as restrictive as it might appear the proposed result in theorem still leads to a lp based approach to obtain suboptimal solutions to and hence b relationship with minimum cut based results nevertheless the main strength of the current result lies in the fact that it solves problem where the a matrix is totally unimodular includes as a special case where the corresponding constraint matrix is a transposed graph incidence matrix this distinguishes the current work with other ones such as which specialize in solving using minimum cut algorithms one example of a which is totally unimodular but not associated with a graph is the matrix with consecutive ones property if either for each row or for each column the s appear consecutively for a possible application consider a networked control system with one controller and n sensor nodes each node contains a scalar state value constant over a period of m time slots the nodes need to transmit their state values through a shared channel to the controller each node can keep transmitting over an arbitrary period of consecutive time slots at each time slot the measurement transmitted to the controller is the sum of the state values of all transmitting nodes denote z rm as the vector of measurements transmitted over all time slots and rn as the vector of node state values then the measurements and the states are related by z where a is a matrix with consecutive ones in the each column solving the observability problem in with h a can identify the vulnerable measurement slots which should have higher priority in communication relationship with compressed sensing type results problem can be written in a form more common in the literature consider only the case where the null space of at is not empty otherwise rank a m and is trivial with a change of decision variable z ax can be posed as minimize kz m subject to lz z i z k where l has full rank and la and z denotes a of z containing the entries corresponding to the index set can be written as the cardinality minimization problem considered for instance in minimize subject to b with appropriately defined matrix and vector b in this subsection we restrict the discussion to the standard case that is is feasible and is a full rank matrix with more columns than rows as is certain conditions regarding when its optimal solution can be obtained by relaxation are known for example report a sufficient condition based on mutual coherence which is denoted as and defined as t max i j i j the sufficient condition states that if there exists a feasible solution in which is sparse enough then is the unique optimal solution to and its relaxation problem with replacing another sufficient condition is based on the restricted isometry property rip for any integer s the rip constant of matrix is the smallest number satisfying for all vector x such that the sufficient condition states that if for some s has a rip constant then any satisfying b and s is necessarily the unique optimal solution to both and its relaxation it has been shown that certain type of randomly generated matrices satisfy the above conditions with overwhelming probabilities provides a result however the above conditions might not apply to which is the focus 
of this paper for instance consider a in being a submatrix of the transpose of the incidence matrix of the power network from let k and i b are then the corresponding in and b for this implies that therefore the sparsity bound in becomes this is too restrictive to be practical similarly for all s the rip constants are at least one because hence the sufficient condition would not be applicable either nevertheless the failure to apply these sufficient conditions here does not mean that it is impossible to show that relaxation can exactly solve the mutual coherence and conditions characterize when a unique optimal solution exists for both and its relaxation while in this paper uniqueness is required indeed for with and b defined in both and are optimal this can be verified by inspection using the cplex lp solver in matlab to solve the relaxation leads to the first optimal solution it is the main contribution of this paper to show that this is the case in general when is defined by even though the optimal solution might not be unique the reason why the proposed result is applicable is that it is based on a polyhedral combinatorics argument which is different from those of the mutual coherence and rip based results p roof of the m ain r esult definitions the proof requires the following definitions definition two optimization problems are equivalent if there is an correspondence of their instances the corresponding instances either are both infeasible both unbounded or both have optimal solutions in the last case it is possible to construct an optimal solution to one problem from an optimal solution to the other problem and vice versa in addition the two problems have the same optimal objective value definition a polyhedron in rp is a subset of rp described by linear equality and inequality constraints a standard form polyhedron as associated with a standard form lp problem instance is specified by d for some given matrix c and vector definition a basic solution of a polyhedron in rp is a vector satisfying all equality constraints in addition out of all active constraints p of them are linearly independent for a standard form polyhedron with a constraint matrix of full row rank basic solutions can alternatively be defined by the following statement theorem consider a polyhedron d and assume that c and c has full row rank a vector is a basic solution if and only if d and there exists an index set j p with l such that det c j and i if i j definition a basic feasible solution of a polyhedron is a basic solution which is also feasible by convention the terminology a basic feasible solution to a lp problem instance should be understood as a basic feasible solution of the polyhedron which defines the feasible set of the instance b proof two lemmas key to the proof are presented first the first lemma states that problem as set up by relaxation has optimal basic feasible solutions lemma let be an optimal basic feasible solution to then it holds that i i i for all i in addition y j j for all j where j denotes the j th element of proof assume that the feasible set of is nonempty otherwise there is no basic feasible solution cf definition the following two claims are made a k can not be a linear combination of the rows of a i b there exists i i such that either i or the rows of a i are linearly independent in addition in both cases a i and a i define the same constraints claims a and b together imply that problem can be written as a standard form lp problem with a constraint matrix with full row rank 
matrix c below a minimize ft subject to d with a c i i a k k d f where is an identity matrix of dimension and is a vector of all ones to see the claims first note that a is implied by the feasibility of for b if i or a i then set i otherwise there exists i i with the properties that rank a i a i has linearly independent rows and a i sa i for some matrix on the other hand a i s a i for some matrix s because i i hence a i and a i define the same constraints this shows b the next step of the proof is to show that every basic solution of has its entries being either or denote the matrix as the first columns of c and let be any square submatrix of if has two columns or rows which are the same or negative of each other then det otherwise is a possibly row column permuted square submatrix of a and a is assumed to be totally unimodular hence det and is totally unimodular next consider the matrix b defined as a b c d i i a k k denote the number of rows and the number of columns of b as mb and nb respectively let j nb be any set of column indices of b such that mb so that b j is square if b j contains only columns of then det b j since is totally unimodular otherwise by repeatedly applying laplace expansion on the columns of b j which are not columns of it can be shown that det b j is equal to the determinant of a square submatrix of which can only be or hence by cramer s rule the following holds if v is the solution to the following system of linear equations b j v b nb j nb mb and det b j then v j j theorem and together imply that the nonzero entries of all basic solutions to are either or therefore the basic feasible solutions which are also basic solutions to the polyhedron in also satisfy this integrality property finally let be an optimal basic feasible solution then feasibility nonnegativity implies that j j j j j minimize the minimization excludes the possibility that at optimality j j hence it is possible to define and y such that i i i i y j j j j j the second lemma is concerned with a restricted version of with an infinity norm bound as follows minimize x a x lemma optimization problems and are equivalent proof suppose is feasible then it has an optimal solution denoted as let be the row index set such that a j if and only if j then it is claimed that there exists a common optimal solution to both and with the same optimal objective value the argument is as follows the property of implies the feasibility of which is denoted as a variant of with replaced by by corollary problem as a standard form lp problem has at least one basic feasible solution furthermore since the optimal objective value of is bounded from below by zero theorem implies that has an optimal basic feasible tion which is as specified by lemma denote then is feasible to both and since and k also a a a a as the inequality is true because is an optimal solution to hence is optimal to both and with the same objective value conversely suppose is infeasible then is also infeasible this concludes that and are equivalent proof of theorem let be an optimal basic feasible solution to then there exist and y as defined in lemma in particular it can be verified that y is an optimal solution to the following optimization problem minimize x y subject to p y i a x y x y a k x a i x y j j x y p y i subject to a x y x y a k x a i x y j j it can be verified that is equivalent to then lemma states that is also equivalent to consequently y being an optimal solution to implies that p is feasible with optimal objective value being y j a subject 
to a k x a i x where the inequalities above hold because of the property that y j for all j y is also an optimal solution to feasible solution to is since y j j p it holds that a y j hence is an optimal solution to vi n umerical d emonstration as a demonstration instances of the restricted security index problem in are solved with p being an identity matrix and i being empty the incidence matrix b describes the topology of one of the following benchmark systems ieee ieee ieee ieee and polish and polish for each benchmark is solved for all possible values of k choices in the case and choices in the case two solution approaches are tested the first approach is the one proposed it is denoted the approach and includes the following steps set up the lp problem in with a being b t solve using a lp solver cplex lp let be its optimal solution define it is the optimal solution to according to theorem the second solution approach to is standard and it was applied also in this second approach is referred to as approach as is formulated into the following problem p minimize y j y subject to j b t t t b k y j my my j where m is a constant required to be at least b t the maximum column sum of the absolute values of the entries of b because of the binary decision variables in y is a mixed integer linear programming milp problem it can be solved by a standard solver such as cplex the correctness of the approach is a direct consequence that is a reformulation of as a result both the and approaches are guaranteed to correctly solve by theory fig shows the sorted security indices optimal objective values of for the four larger benchmark systems bus bus bus bus bus approach approach bus solve time sec the security indices are computed using the approach as a comparison the security indices are also computed using the approach and they are shown in fig the two figures reaffirm the theory that the proposed approach computes the security indices exactly fig or fig indicates that the measurement systems are relatively insecure as there exist many measurements with very low security indices equal to or bus bus bus bus security index case number fig for computing all security indices for different benchmark systems vii c onclusion ranked measurement index fig security indices using the approach bus bus bus bus security index the cardinality minimization problem is important but in general difficult to solve an example is shown in this paper as the smart grid security index problem in the relaxation has demonstrated promise but to establish the cases where it provides exact solutions is results based on mutual coherence and rip provide sufficient conditions under which a unique optimal solution solves both the cardinality minimization problem and its relaxation however this paper identifies a class of application motivated problems as in which can be shown to be solvable by relaxation even though results based on mutual coherence and rip can not make the assertion in fact the optimal solution to might not be unique the key property that leads to the conclusion of this paper is total unimodularity of the constraint matrix the total unimodularity of matrix a in leads to two important consequences is equivalent to its restricted version in furthermore can be solved exactly by solving the lp problem in thus establishing the conclusion that relaxation exactly solves a ppendix a proof of the equivalence between and ranked measurement index fig security indices using the approach in terms of computation time performances 
it is that the approach is much more than the approach since a milp problem is much more difficult to solve than a lp problem of the same size fig shows the for computing all security indices for each benchmark system using the and approaches it verifies that the proposed approach is more effective in the above illustration all computations are performed on a dualcore windows machine with cpu and of ram note that the constraint h i implies that h since p consists of rows of an identity matrix and d is diagonal and nonsingular for all j m there exists a diagonal and nonsingular matrix dj such that p j d dj p j in particular let dkk be a positive scalar such that p k d dkk p k p k dkk the above implies that for all p k b t if and only if p k dkk b t ddd p k db t ddd in addition for all p i b t if and only if dkk di p i b t p i db t ddd finally for all t p b t p b dkk p db t ddd applying the definition of h in and a change of decision variable to dkk shows that and are equivalent b proof of proposition part a is trivial for the necessary part of b condition i is necessary because if h k then rank h rank h k for all j meaning that is infeasible condition ii is also necessary because if rank h then there does not exist any j such that rank h for the sufficiency part of b assume that conditions i and ii are satisfied then by part a problem is feasible hence it has an optimal solution denoted as define m such that p if and only if h p by definition of rank h also k because h k if rank h k n then is feasible to thus showing that is feasible to show this first consider the case when then k and rank h k rank h n because of condition ii h has full column rank next consider the case when k if rank h k n then there exists such that h k in particular h k also condition ii implies that h k since otherwise h let q k such that h q note also that by definition of h q construct h q h q then h k h p whenever h p but h q while h q this implies that is feasible to with a strictly less objective value than that of contradicting the optimality of therefore the claim that rank h k n is true this implies that is feasible to establishing the sufficiency of part b for part c under conditions i and ii both and are feasible in addition constructed in the proof of the sufficiency part of b satisfies for being an optimal solution to this means that the optimal objective function value of is less than or equal to that of for the converse suppose that j is optimal to then the feasibility of j implies that there exists such that h this also implies that if h k then h k this implies that rank h k n contradicting the feasibility of j therefore there exists a scalar such that h k consequently is feasible to with an objective function value less than or equal to the optimal objective function value of r eferences abur and power system state estimation marcel dekker monticelli state estimation in electric power systems a generalized approach kluwer academic publishers liu reiter and ning false data injection attacks against state estimation in electric power grids in acm conference on computer and communication security new york ny usa pp sandberg teixeira and johansson on security indices for state estimators in power networks in first workshop on secure control systems cpsweek and sandberg stealth attacks and protection schemes for state estimators in power systems in ieee smartgridcomm bobba rogers wang khurana nahrstedt and overbye detecting false data injection attacks on dc state estimation in the first workshop on secure 
control systems cpsweek kosut jia thomas and tong malicious data attacks on the smart grid ieee transactions on smart grid vol pp sou and sandberg and johansson electric power network security analysis via minimum cut relaxation in ieee conference on decision and control december giani bitar mcqueen khargonekar and poolla smart grid data integrity attacks characterizations and countermeasures in ieee smartgridcomm kim and poor strategic protection against data injection attacks on power grids ieee transactions on smart grid vol pp june sou sandberg and johansson computing critical ktuples in power networks ieee transactions on power systems vol no pp mallat and zhang matching pursuit with dictionaries ieee transactions on signal processing vol pp chen donoho and saunders atomic decomposition by basis pursuit siam journal on scientific computing vol teixeira dan sandberg and johansson cyber security study of a scada energy management system stealthy deception attacks on the state estimator in ifac world congress milan italy korres and contaxis identification and updating of minimally dependent sets of measurements in state estimation ieee transactions on power systems vol no pp aug de almeida asada and garcia identifying critical sets in state estimation using gram matrix in powertech ieee bucharest pp ayres and haley bad data groups in power system state estimation ieee transactions on power systems vol no pp clements krumpholz and davis power system state estimation residual analysis an algorithm using network topology power apparatus and systems ieee transactions on vol no pp april london alberto and bretas network observability identification of the measurements redundancy level in power system technology proceedings powercon international conference on vol pp schrijver a course in combinatorial optimization cwi amsterdam netherlands online document available from http and tao decoding by linear programming information theory ieee transactions on vol no pp tsitsiklis and bertsimas introduction to linear optimization athena scientific hendrickx johansson jungers sandberg and sou an exact solution to the power networks security index problem and its generalized min cut formulation in preparation online available http stoer and wagner a simple algorithm acm vol pp july schrijver theory of linear and integer programming wiley hespanha naghshtabrizi and xu a survey of recent results in networked control systems proceedings of the ieee vol no pp bemporad heemels and johansson networked control systems springer donoho and elad optimally sparse representation in general nonorthogonal dictionaries via minimization proceedings of the national academy of sciences vol pp wakin and boyd enhancing sparsity by reweighted minimization journal of fourier analysis and applications vol pp bruckstein donoho and elad from sparse solutions of systems of equations to sparse modeling of signals and images siam review vol gribonval and nielsen sparse representations in unions of bases information theory ieee transactions on vol no pp the restricted isometry property and its implications for compressed sensing comptes rendus mathematique vol no pp a wood and wollenberg power generation operation and control wiley sons cplex http zimmerman and thomas matpower operations planning and analysis tools for power systems research and education ieee transacations on power systems vol no pp
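To make the diagonal-rescaling step in the appendix proof above concrete, the following is a minimal numpy check (illustrative sizes and values, not from the paper's code) of the linear-algebra fact used there: if P consists of rows of an identity matrix and D is diagonal and nonsingular, then there is a diagonal nonsingular D' with P D = D' P, where D' keeps exactly the diagonal entries of D selected by P.

```python
# Minimal sketch, not from the paper: P D == D' P for a selection matrix P
# (rows of an identity matrix) and a diagonal, nonsingular D.
import numpy as np

n = 6
rows = [1, 3, 4]                                   # which identity rows make up P
P = np.eye(n)[rows]                                # P: rows of the identity matrix
D = np.diag([2.0, -1.5, 3.0, 0.5, 4.0, -2.0])      # diagonal and nonsingular

D_prime = np.diag(np.diag(D)[rows])                # D': the selected diagonal entries
assert np.allclose(P @ D, D_prime @ P)
print("P D == D' P holds for this selection matrix and diagonal D")
```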
| 5 |
accelerating learning in constructive predictive frameworks with the successor representation mar craig marlos patrick here we propose using the successor representation sr to accelerate learning in a constructive knowledge system based on general value functions gvfs in settings like robotics for unstructured and dynamic environments it is infeasible to model all meaningful aspects of a system and its environment by hand due to both complexity and size instead robots must be capable of learning and adapting to changes in their environment and task incrementally constructing models from their own experience gvfs taken from the field of reinforcement learning rl are a way of modeling the world as predictive questions one approach to such models proposes a massive network of interconnected and interdependent gvfs which are incrementally added over time it is reasonable to expect that new incrementally added predictions can be learned more swiftly if the learning process leverages knowledge gained from past experience the sr provides such a means of separating the dynamics of the world from the prediction targets and thus capturing regularities that can be reused across multiple gvfs as a primary contribution of this work we show that using predictions can improve sample efficiency and learning speed in a continual learning setting where new predictions are incrementally added and learned over time we analyze our approach in a and then demonstrate its potential on data from a physical robot arm i introduction a long standing goal in the pursuit of artificial general intelligence is that of knowledge modeling and explaining the world and the agent s interaction with it directly from the agent s own experience this is particularly important in fields such as continual learning and developmental robotics where we expect agents to be capable of learning dynamically and incrementally to interact and succeed in complex environments one proposed approach for representing such world models is a collection of general value functions gvfs which models the world as a set of predictive questions each defined by a policy of interest a target signal of interest under that policy and a timescale discounting schedule for accumulating the signal of interest for example a gvf on a mobile robot could pose the question how much current will my wheels consume over the next second if i drive straight forward gvf questions are typically answered using temporaldifference td methods from the field of reinforcement learning rl a learned gvf approximates the expected future value of a signal of interest directly representing the relationship between the environment policy timescale and target signal as the output of a single predictive unit university of alberta canada sherstan machado pilarski nevertheless despite the success rl algorithms have achieved recently methods for answering multiple predictive questions from a single stream of experience critical in a robotic setting are known to exhibit sample inefficiency in our setting of interest where multitudes of gvfs are learned in an incremental sample by sample way this problem is multiplied ultimately the faster an agent can learn to approximate a new gvf the better in this paper we show how one can accelerate learning in a constructive knowledge system based on gvfs by sharing the environment dynamics across the different predictors this is done with the successor representation sr which allows us to learn the world dynamics under a policy independently of any 
signal being predicted we empirically demonstrate the effectiveness of our approach on both a tabular representation and on a robot arm which uses function approximation we evaluate our algorithm in the continual learning setting where it is not possible to specify all gvfs a priori but rather gvfs are added incrementally during the course of learning as a key result we show that using a learned sr enables an agent to learn newly added gvfs faster than when learning the same gvfs in the standard fashion without the use of the ii background we consider an agent interacting with the environment sequentially we use the standard notation in the reinforcement learning rl literature modeling the problem as a markov decision process starting from state s at each timestep the agent chooses an action at a according to the policy distribution s a and transitions to state s according to the probability at transition p at for each transition st the agent receives a reward rt from the reward function r st at in this paper we focus on the prediction problem in rl in which the agent s goal is to predict the value of a signal from its current state the cumulative sum of future rewards note that throughout this paper we use upper case letters to indicate random variables a general value functions gvfs the most common prediction made in rl is about the expected return the return is defined to be the sum of future discounted rewardspunder policy starting from state t t formally gt rt with being the discount factor and t being the final timestep where t denotes a continuing task the function encoding the prediction about the return is known as the value function s gt s gvfs extend the notion of predictions to different signals in the environment this is done by replacing the reward signal rt by any other target signal which we refer to as the cumulant ct and by allowing a discounting function st instead of using a fixed discounting factor the general value of state s under policy is defined as x s where is the average cumulant from state s is the average from state s and is the probability of transitioning from state s to under policy this can also be written in matrix form where denotes multiplication such an equation when solved gives us i where i is the identity matrix v and is a probability matrix such that ij e sj si b the successor representation sr the successor representation was initially proposed as a representation capable of capturing state similarity in terms of time it is formally defined for a fixed as i hx s t st s in words the sr encodes the expected number of times the agent will visit a particular state when the sum is discounted by over it can be in matrix form as x t i importantly the sr can be easily computed incrementally by standard rl algorithms such as td learning since its primary modification is to replace the reward signal by a state visitation counter nevertheless despite its simplicity the sr holds important properties we leverage in this paper the sr in the limit for a constant see eq corresponds to the first factor of the solution in eq thus the sr can be seen as encoding the dynamics of the markov chain induced by the policy and by the environment s transition probability if the agent has access to the sr it can accurately predict the discounted accumulated value of any signal from any state by simply learning the expected immediate value of that signal in each state on the other hand if the agent does not use the sr the agent must also deal with the problem of credit 
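A minimal numerical sketch of the factorisation just described (the transition matrix and cumulants are made-up illustrations, not from the paper): the value function is the SR applied to the one-step average cumulant, V = (I − γP)⁻¹ c̄ = Ψ c̄, so once Ψ is known a new prediction target only requires its one-step expectation.

```python
# Minimal sketch: on a tiny 3-state chain, summing the discounted series
# sum_t gamma^t P^t c_bar gives the same value as the SR matrix Psi = (I - gamma*P)^-1
# applied to the one-step average cumulant c_bar.
import numpy as np

gamma = 0.9
P = np.array([[0.0, 1.0, 0.0],          # policy-induced transition matrix
              [0.0, 0.5, 0.5],
              [0.2, 0.0, 0.8]])
c_bar = np.array([1.0, 0.0, 2.0])       # expected one-step cumulant per state

Psi = np.linalg.inv(np.eye(3) - gamma * P)   # SR in matrix form

V_series = np.zeros(3)                       # value by summing the discounted series
M = np.eye(3)
for _ in range(500):
    V_series += M @ c_bar
    M = gamma * (M @ P)

assert np.allclose(Psi @ c_bar, V_series)
print(Psi @ c_bar)
```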
assignment having to look at returns to control for delayed consequences note that dayan describes the sr as predicting future state visitation from time t onward this is in rl as we typically describe the return as predicting the signal from t onward importantly the dynamics encoded by the sr for all signals learned under the same policy function this factorization of the solution property we use in our work described in the are the same and discount is the main next section iii methods as aforementioned we are interested in the problem of knowledge acquisition in the continual learning setting where knowledge is encoded as predictive questions gvfs in this setting it is not possible to specify all gvfs ahead of time instead gvfs must be added incrementally by some as yet unknown mechanism the standard approach would be to learn each newly added prediction from scratch in this section we discuss how we can use the sr to accelerate learning by taking advantage of the factorization shown in eq our method leverages the fact that the sr is independent of the target signal being predicted learning the sr separately and it when learning to predict new signals in the previous section for clarity we discussed the main concepts in the tabular case in real world applications where the state space is too large assuming states can be uniquely identified is not often feasible instead we generally represent states as a set of features s rd where d because both and are a function of feature vector s they can easily be represented using function approximation and learned using td algorithms in order to present a more general version of our algorithm we introduce it here using the function approximation notation the first step in our algorithm is to compute the average cumulant which we do with td error st if we use linear function approximation to estimate then s s the td error for is given as note that is a vector of length n t st st this generalization of the sr to the function approximation case is known as successor features if we use function approximation to estimate then s m s where m using the usual often used with td we derive a td stochastic gradient descent update as mt mt st t mt st t where is the gradient with respect to m and is the outer product based on this derivation as well as eq and we obtain algorithm note that the last two lines of algorithm which update the sr for the state s are only required for the episodic case in methods the effect of m on the prediction target is ignored when computing the gradient algorithm table i signal primitives gvf prediction with the sr input feature representation policy discount function and output matrix m and vectors as predictors of ci initialize w and m arbitrarily while s is not terminal do observe state s take action a selected according to s and observe a next state s and the cumulants ci s s m s m s m m s for each cumulant ci do ci s wi wi wi s end for end while s m s m m s this algorithm allows us to predict the cumulant ci in state s using the current estimate of the matrix m and the weights wi we can then obtain the final prediction by simply computing s w m s w s m this algorithm accelerates learning because generally learning to estimate is faster than learning the gvf directly this is exactly what our algorithm does when predicting a new signal it starts with its current estimate for the sr at the end the multiplication is simply a weighted average of the predictions across all states weighted by the likelihood they will be visited we 
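A minimal sketch of the update scheme described above, assuming linear function approximation; the class and parameter names are illustrative placeholders rather than the authors' implementation, and the extra end-of-episode SR update mentioned in the text is omitted for brevity.

```python
# Minimal sketch of SR-based GVF prediction with linear function approximation:
# a shared successor-feature matrix M is learned by TD, and each cumulant only
# needs a one-step regression weight vector w_i; the GVF prediction is w_i.(M phi).
import numpy as np

class SRPredictor:
    def __init__(self, n_features, alpha_sr=0.1, alpha_w=0.1):
        self.M = np.zeros((n_features, n_features))   # successor features
        self.ws = []                                   # one weight vector per cumulant
        self.alpha_sr, self.alpha_w = alpha_sr, alpha_w

    def add_cumulant(self):
        """Incrementally add a new GVF; it reuses the already-learned M."""
        self.ws.append(np.zeros(self.M.shape[0]))
        return len(self.ws) - 1

    def update(self, phi, phi_next, gamma_next, cumulants):
        # SR TD update: M <- M + a * (phi + gamma' M phi' - M phi) outer phi
        sr_err = phi + gamma_next * (self.M @ phi_next) - self.M @ phi
        self.M += self.alpha_sr * np.outer(sr_err, phi)
        # one-step cumulant models: w_i <- w_i + a * (c_i - w_i . phi) * phi
        for i, c in cumulants.items():
            self.ws[i] += self.alpha_w * (c - self.ws[i] @ phi) * phi

    def predict(self, phi, i):
        # GVF prediction: v_i(s) ~ w_i . (M phi(s))
        return self.ws[i] @ (self.M @ phi)
```

The point of the factorisation is visible in add_cumulant: a newly added GVF starts from the already-learned M and only has to fit its one-step cumulant weights, which is where the claimed speed-up comes from.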
provide empirical evidence supporting this claim in the next sections iv evaluation in dayan s grid world we first evaluated our algorithm in a tabular grid world its simplicity allowed us to analyze our method more thoroughly since we were not bounded by the speed and complexity of physical robots the grid world we used was inspired by dayan s see figure four actions are available in this environment up down left right taking an action into a wall blue results in no change in position transitions are deterministic a move in any direction moves the agent to the next cell in the given direction except when moving into a wall for each episode the agent spawns at location s and the episode terminates when the agent reaches the goal we generated fifty different signals for the agent to predict they were generated randomly from a collection of primitives enumerated in table i they are composed of two different primitives one for each axis like so sigx x of f setx biasx sigy y of f sety biasy the bias and offset were drawn from and respectively offset and bias were not applied to either the unit or shortest path primitives further the shortest path primitive was not combined with a second signal but was primitive fixed value square wave sin wave random binary random float unit shortest path parameters value period invert t rue f alse period fixed random binary string generated over the length of an axis fixed random floats generated over the length of an axis fixed value of transition cost goal reward used on its own gaussian noise with a standard deviation of was applied on top of each signal the shortest path signal is inspired by a common reward function used in rl where each transition has a cost a negative reward meant to push the agent to completing a task in a timely manner and reaching the goal produces a positive reward signal our agent selects actions using action selection where at each timestep with probability it uses the action specified by a policy see figure and otherwise chooses randomly from all four actions in our experiments a tabular representation is used with each grid cell uniquely represented by a encoding in this set of experiments we can compute the groundtruth predictors for the sr and the signal predictors this is done by taking the average return observed from each state the sr reference was averaged over episodes and the signal predictor references were averaged over episodes each episode started at the start state and followed the policy already described we first evaluated the predictive performance of our sr learning algorithm with respect to the for a variety of values we report the average over trials of episodes we initialize the sr weights to the squared euclidean distance was calculated between the predicted sr and the reference sr for each timestep these values were summed over the run and the average was taken across the runs these averages are shown in figure using the results in figure we evaluated the performance of the two signal prediction approaches by sweeping across for various for each experimental run learning of a new signal was enabled incrementally every episodes this produced runs with a total length of episodes where the first pair of gvfs added the direct and predictors were trained for episodes and the last added gvfs were trained for episodes further for each run the order in which the signals were added was randomized thirty runs were performed the weights of the predictors and of the sr were initialized to notice that the sr was being 
learned at the same time as the direct and predictions for each run a cumulative mse for each signal i was calculated according to eq this equation computes the total squared error between the predictor s estimate v and the reference predictor s estimate v for each episode e the error of the current and previous episodes is averaged then for each signal the maximum error for a given a b c fig a dayan s grid world arrows indicate the policy from the start state the policy is to go up black squares indicate the sr prediction given from the starting state s for the darker the square the higher the expected visitation notice the graying around the central path caused by the action selection b mse of the sr as a function of for different values of lowest error is indicated by the markers c a comparison of the nmse of the direct dashed lines and solid lines predictions as a function of fixed and discount factor summed across all signals lowest cumulative error is indicated by up arrows direct predictions and down arrows predictions note that although difficult to see confidence intervals of are included in both b and either in the direct or predictions is found and used to normalize the errors in the signal across that particular value of see eq in this way we attempt to treat the error of each signal equally if this is not done the errors of large magnitude signals dominate the results these normalized values are then summed across the signals and the averages across all runs are plotted in figure m sei e t xx vi t vi t e e t m sei m sei direct m sei sr n m sei the advantage of the method is clear as increases this is to be expected since for both methods are making predictions in the experiment of figure the sr performs better in the vast majority of the signals as shown in table ii for all not listed the method was better on all signals analysis of these cases where the direct method did better reveal that some of the target signals have very small magnitudes suggesting the approach may be more susceptible to ratio further analysis remains to be done finally we analyzed how the prediction error of our systems evolve with time this is demonstrated in figure where we selected the best for and plotted the performance over time across different runs in this case the order of the signals remained fixed so that sensible averages could be plotted for each signal signal performance was normalized as before and summed across table ii signal performance for of figure direct better better fig all predictors learn from scratch with new predictors added in every episodes as the sr error red right axis goes low the predictors green are able to learn faster than their direct blue counterparts shading indicates a confidence interval all active signals as expected we clearly see that the srbased predictions green start with much higher error than the direct blue but as the error of the sr red drops low the newly added predictors are able to learn quicker with less peak and overall error than the direct predictor in the continual learning setting we never have the opportunity to tune for optimal as we did in our evaluation practically fixed are used for many robotics settings in rl but in order to ensure stable learning small are chosen as we saw in figure the advantage of using the predictions is enhanced with smaller ideally however we would imagine that a fully developed system would use some method of adapting such as adadelta evaluation on a robot arm tabular settings like dayan s grid world are useful 
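The error measures used in the evaluation above were garbled in extraction; a plausible reconstruction, consistent with the surrounding description (cumulative squared error against the reference predictor, averaged over the episodes seen so far, then normalised per signal by the worse of the two methods), is

\[
\mathrm{MSE}_i(e) \;=\; \frac{1}{e}\sum_{e'=1}^{e}\sum_{t}\bigl(\hat v_i(t)-v^{*}_i(t)\bigr)^2,
\qquad
\mathrm{NMSE}_i(e) \;=\; \frac{\mathrm{MSE}_i(e)}{\max\bigl(\max_{e}\mathrm{MSE}^{\mathrm{direct}}_i(e),\;\max_{e}\mathrm{MSE}^{\mathrm{SR}}_i(e)\bigr)}.
\]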
for enabling analysis and providing insight into the behavior of our method however our goal is to accelerate learning fig the user controls the robot arm using a joystick to trace the inside of the wire maze in a direction circuit path shown in blue on a real robot where states are not fully observed and can not be represented exactly instead we must use function approximation here we demonstrate our approach using a robot arm and learning sensorimotor predictions with respect to a policy in our task a user controls a robot arm via joystick to trace a circuit through the inside of the wire maze see figure with a rod held in the robot s gripper the user performed this task for approximately minutes completing around circuits in this experiment we used six different prediction targets the current position and speed of both the shoulder rotation and elbow flexion joints a new predictor was activated every timesteps note that the robot reports sensor updates at for this demonstration a discount factor of was used four signals were used as input to our function approximator the current position and a decaying trace of the position for the shoulder and elbow joints the decaying trace for joint j was calculated as trj trj t posj these inputs were normalized over the joint ranges observed in the experiment and passed into a tilecoding with tilings of width and a total memory size of additionally a bias unit was added resulting in a binary feature vector of length with a maximum of active features on each timestep hashing collisions can reduce this number we use a decaying for all the predictors where the starts at and decays linearly to zero over the entire dataset at each timestep this is further divided by the number of active features in st finally for each predictor this is offset such that the starts at when it is first activated and decays at the same rate as all the other predictors to compare the prediction error we compute a running mse for each signal according to eq where at each timestep t the sum is taken over all previous timesteps unlike the previous tabular domain we do not have the ideal estimator to compare against and instead compare the predictions v against the actual return in order to treat each signal equally we further normalize these errors according to eq note that the nmse allows us to compare the predictions of a single signal between the two methods but does not tell us how accurate the predictions are nor does fig a minute run tracing the maze circuit a new predictor is added every timesteps nmse errors are summed across all predictors it allow comparison between signals t x vi t gi t t m sei max m sei direct m sei sr m sei t n m sei figures and show a single run approximately minutes in length a single ordering of the predictors was used figure shows the error across all predictors while figure separates out each predictor here we see a clear advantage to using the predictions for most of the signals unlike the previous tabular results there is little difference on the performance of the first predictor shoulder current even while the sr is being learned to investigate we ran experiments where each signal was learned from the beginning of the run we observed that performance was rarely worse and sometimes even better when using the srbased method this suggests the approach is more robust than expected but further experimentation is needed vi further advantages when scaling while this paper analyzed single policies and discount functions this is not the setting in 
which the gvf framework is proposed to be used rather it is imagined that massive numbers of gvfs over many policies and timescales will be used represent complex models of the world in this setting we note that using predictions can offer additional benefits allowing the robot to do more with less consider for a single policy a collection of srs learned for f discount functions and h predictors we can then represent f h predictions using f h predictors a first advantage is that far fewer gvfs need to be updated on each timestep saving computational costs as a second benefit there is potential to reduce the number of weights used by the system for example consider learning in a tabular setting with states using linear estimators for f h predictions the number of weights needed is f f it can be shown that for a fixed f and s the total number of weights used by the direct prediction approach is greater when h ff new predictor we demonstrated this behaviour in both a tabular grid world and on a robot arm these results suggest an effective method for improving the learning rate and sample efficiency for robots learning in the real world there are several clear opportunities for further research on this topic the first is to provide greater understanding into why for a given fixed some few signals are better predicted directly rather than through the further the work in using the sr with function approximation is preliminary and more insight can yet be gained in this setting another opportunity for research is to explore using srbased predictions with discount functions finally we suggest that predictions with deep feature learning and an incrementally constructed architecture would be a very powerful tool to support continual or developmental learning in robotic domains with widespread applications r eferences fig the same results as figure but with the nmse for the individual predictors each is normalized from to vii related work the idea of the sr was originally introduced as a function approximation method however it has recently been applied to other settings it has been used for instance in transfer learning problems allowing agents to generalize better across similar but different tasks and to define intrinsic rewards in option discovery algorithms gvfs were originally proposed as a method for building an agent s overall knowledge in a modular way to date they have primarily been used with fixed policies the unreal agent is a powerful demonstration of the usefulness of multiple predictions auxiliary tasks which can be viewed as gvfs are shown to accelerate and improve the robustness of learning finally the idea closest to this work is the concept of universal value functions uvfas uvfas are as gvfs a generalization of value functions however instead of generalizing them to multiple predictors and discount factors they generalize value functions over goals in a parametrized way we believe our result and the idea of uvfas are complementary and could in fact be eventually combined in a future work viii conclusions in this paper we showed how the successor representation sr although originally introduced for another purpose can be used to accelerate learning in a continual learning setting in which a robot incrementally constructs models of its world as a collection of predictions known as general value functions gvfs the sr enables a given prediction to be modularized into two components one representing the dynamics of the environment the sr and the other representing the target signal 
signal prediction this allows a robot to reuse its existing knowledge when adding a new prediction target speeding up learning of the b ring continual learning in reinforcement environments dissertation the university of texas at austin oudeyer kaplan and hafner intrinsic motivation systems for autonomous mental development ieee transactions on evolutionary computation vol no pp sutton modayil delp degris pilarski a white and precup horde a scalable architecture for learning knowledge from unsupervised sensorimotor interaction in proceedings of the international joint conference on autonomous agents and multiagent systems aamas pp sutton learning to predict by the methods of temporal differences machine learning vol pp sutton and barto reinforcement learning an introduction mit press mnih kavukcuoglu silver a rusu veness bellemare graves riedmiller fidjeland ostrovski petersen beattie sadik antonoglou king kumaran wierstra legg and hassabis control through deep reinforcement learning nature vol no pp silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton chen lillicrap hui sifre van den driessche graepel and hassabis mastering the game of go without human knowledge nature vol pp dayan improving generalization for temporal difference learning the successor representation neural computation vol no pp barreto dabney munos j hunt schaul silver and van hasselt successor features for transfer in reinforcement learning in advances in neural information processing systems nips pp zeiler adadelta an adaptive learning rate method corr vol modayil a white and sutton nexting in a reinforcement learning robot adaptive behavior vol no pp kulkarni saeedi gautam and gershman deep successor reinforcement learning corr vol machado rosenbaum guo liu tesauro and campbell eigenoption discovery through the deep successor representation in proceedings of the international conference on learning representations iclr jaderberg mnih czarnecki schaul j leibo silver and kavukcuoglu reinforcement learning with unsupervised auxiliary tasks in proceedings of the international conference on learning representations iclr schaul horgan gregor and silver universal value function approximators in proceedings of the international conference on machine learning icml pp
| 2 |
foundations of declarative data analysis using limit datalog programs nov mark kaminski bernardo cuenca grau egor kostylev boris motik and ian horrocks department of computer science university of oxford uk abstract motivated by applications in declarative data analysis we study datalog z extension of positive datalog with arithmetic functions over integers this language is known to be undecidable so we propose two fragments in limit datalog z predicates are axiomatised to keep numeric values allowing us to show that fact entailment is co ne xp t in combined and co in data complexity moreover an additional stability requirement causes the complexity to drop to e xp t ime and pt ime respectively finally we show that stable datalog z can express many useful data analysis tasks and so our results provide a sound foundation for the development of advanced information systems introduction analysing complex datasets is currently a hot topic in information systems the term data analysis covers a broad range of techniques that often involve tasks such as data aggregation property verification or query answering such tasks are currently often solved imperatively using java or scala by specifying how to manipulate the data and this is undesirable because the objective of the analysis is often obscured by evaluation concerns it has recently been argued that data analysis should be declarative alvaro et markl seo et shkapsky et users should describe what the desired output is rather than how to compute it for example instead of computing shortest paths in a graph by a concrete algorithm one should i describe what a path length is and ii select only paths of minimum length such a specification is independent of evaluation details allowing analysts to focus on the task at hand an evaluation strategy can be chosen later and general parallel incremental evaluation algorithms can be reused for free an essential ingredient of declarative data analysis is an efficient language that can capture the relevant tasks and datalog is a prime candidate since it supports recursion apart from recursion however data analysis usually also requires integer arithmetic to capture quantitative aspects of data the length of a shortest path research on combining the two dates back to the mumick et kemp and stuckey beeri et van gelder consens and mendelzon ganguly et ross and sagiv and is currently experiencing a revival faber et mazuran et this extensive body of work however focuses primarily on integrating recursion and arithmetic with aggregate functions in a coherent semantic framework where technical difficulties arise due to nonmonotonicity of aggregates surprisingly little is known about the computational properties of integrating recursion with arithmetic apart from that a straightforward combination is undecidable dantsin et undecidability also carries over to the above formalisms and practical datalogbased systems such as boom alvaro et deals shkapsky et myria wang et socialite seo et overlog loo et dyna eisner and filardo and yedalog chin et to develop a sound foundation for declarative data analysis we study datalog z datalog with integer arithmetic and comparisons our main contribution is a new limit datalog z fragment that like the existing data analysis languages is powerful and flexible enough to naturally capture many important analysis tasks however unlike datalog z and the existing languages reasoning with limit programs is decidable and it becomes tractable in data complexity under an additional stability 
restriction in limit datalog z all intensional predicates with a numeric argument are limit predicates instead of keeping all numeric values for a given tuple of objects such predicates keep only the minimal min or only the maximal max bounds of numeric values entailed for the tuple for example if we encode a weighted directed graph using a ternary predicate edge then rules and where sp is a min limit predicate compute the cost of a shortest path from a given source node to every other node sp sp x m edge x y n sp y m n if these rules and a dataset entail a fact sp v k then the cost of a shortest path from to v is at most k hence sp v k holds for each k k since the cost of a shortest path is also at most k rule intuitively says that if x is reachable from with cost at most m and hx yi is an edge of cost n then v is reachable from with cost at most m this is different from datalog z where there is no implicit semantic nection between sp v k and sp v k and such semantic connections allow us to prove decidability of limit datalog z we provide a direct semantics for limit predicates based on herbrand interpretations but we also show that this semantics can be axiomatised in standard datalog z our formalism can thus be seen as a fragment of datalog z from which it inherits properties such as monotonicity and existence of a least fixpoint model dantsin et our contributions are as follows first we introduce limit datalog z programs and argue that they can naturally capture many relevant data analysis tasks we prove that fact entailment in limit datalog z is undecidable but after restricting the use of multiplication it becomes co ne xp t and co in combined and data complexity respectively to achieve tractability in data complexity which is very important for robust behaviour on large datasets we additionally introduce a stability restriction and show that this does not prevent expressing the relevant analysis tasks the proofs of all results are given in the appendix of this paper preliminaries in this section we recapitulate the definitions of datalog with integers which we call datalog z syntax a vocabulary consists of predicates objects object variables and numeric variables each predicate has an integer arity n and each position i n is of either object or numeric sort an object term is an object or an object variable a numeric term is an integer a numeric variable or of the form or where and are numeric terms and and are the standard arithmetic functions a constant is an object or an integer the magnitude of an integer is its absolute value a standard atom is of the form b tn where b is a predicate of arity n and each ti is a term whose type matches the sort of position i of b a comparison atom is of the form or where and are the standard comparison predicates vand vand are numeric terms a rule r is of the form i j where i are standard atoms are comparison atoms and each variable in r occurs v in some atom h r is the head v of r sb r i is the standard body of r cb r j is the comparison body of r and b r sb r cb r is the body of a ground instance of r is obtained from r by substituting all variables by constants a datalog z program p is a finite set of rules predicate b is intensional idb in p if b occurs in p in the head of a rule whose body is not empty otherwise b is extensional edb in a term atom rule or program is ground if it contains no variables a fact is a ground standard atom program p is a dataset if h r is a fact and b r for each r we often say that p contains a fact and write p 
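As a minimal illustration of limit semantics (not from the paper; the edge weights are made up, and the garbled base fact is assumed to give the source cost 0), the two shortest-path rules above can be evaluated by keeping, for the min predicate sp, only the smallest derived bound per node and applying the rules to fixpoint:

```python
# Minimal sketch: naive fixpoint of the shortest-path limit program.
# Since sp is a min limit predicate, a pseudo-interpretation only stores the
# best (smallest) bound for each node; sp(v, k) then stands for "reachable
# from the source at cost at most k".
import math

edges = {('s0', 'a'): 3, ('s0', 'b'): 7, ('a', 'b'): 2, ('b', 'c'): 1}

sp = {'s0': 0}                        # base fact: source reachable at cost <= 0
changed = True
while changed:                        # immediate-consequence operator to fixpoint
    changed = False
    for (x, y), n in edges.items():
        if x in sp:                   # body: sp(x, m), edge(x, y, n)
            bound = sp[x] + n         # head: sp(y, m + n)
            if bound < sp.get(y, math.inf):   # min predicate: keep the smallest bound
                sp[y] = bound
                changed = True

print(sp)   # {'s0': 0, 'a': 3, 'b': 5, 'c': 6}
```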
which actually means we write a tuple of terms as t and we often treat conjunctions and tuples as sets and write say sb r and ti semantics a herbrand interpretation i is a not necessarily finite set of facts such i satisfies a ground atom written i if i is a standard atom and evaluating the arithmetic functions in produces a fact in i or ii is a comparison atom and evaluating the arithmetic functions and comparisons produces true the notion of satisfaction is extended to conjunctions of ground atoms rules and programs as in logic where each rule is universally quantified if i p then i is a model of program p and p entails a fact written p if i holds whenever i complexity in this paper we study the computational properties of checking p combined complexity assumes that both p and are part of the input in contrast data complexity assumes that p is given as p d for p a program and d a dataset and that only d and are part of the input while p is fixed unless otherwise stated all numbers in the input are coded in binary and the size kpk of p is the size of its representation checking p is undecidable even if the only arithmetic function in p is dantsin et presburger arithmetic is logic with constants and functions and equality and the comparison predicates and interpreted over all integers z the complexity of checking sentence validity whether the sentence is true in all models of presburger arithmetic is known when the number of quantifier alternations the number of variables in each quantifier block are fixed berman haase limit programs towards introducing a decidable fragment of datalog z for data analysis we first note that the undecidability proof of plain datalog z outlined by dantsin et al uses atoms with at least two numeric terms thus to motivate introducing our fragment we first prove that undecidability holds even if atoms contain at most one numeric term the proof uses a reduction from the halting problem for deterministic turing machines to ensure that each standard atom in p has at most one numeric term combinations of a time point and a tape position are encoded using a single integer theorem for p a datalog z program and a fact checking p is undecidable even if p contains no or and each standard atom in p has at most one numeric term we next introduce limit datalog z where limit predicates keep bounds on numeric values this language can be seen as either a semantic or a syntactic restriction of datalog z definition in limit datalog z a predicate is either an object predicate with no numeric positions or a numeric predicate where only the last position is numeric a numeric predicate is either an ordinary numeric predicate or a limit predicate and the latter is either a min or a max predicate atoms with object predicates are object atoms and analogously for other types of a datalog z rule r is a limit datalog z rule if i b r or ii each atom in sb r is an object ordinary numeric or limit atom and h r is an object or a limit atom a limit datalog z program p is a program containing only limit rules and p is homogeneous if it does not contain both min and max predicates in the rest of this paper we make three simplifying assumptions first numeric atoms occurring in a rule body are but comparison atoms and the head can contain arithmetic functions second each numeric variable in a rule occurs in at most one standard body atom third distinct rules in a program use different variables the third assumption is clearly because all variables are universally quantified so their names are 
immaterial moreover the first two assumptions are as well since for each rule there exists a logically equivalent rule that satisfies these assumptions in particular we can replace an atom such as a t with conjunction a t m i i m m where m is a fresh variable and i is a fresh predicate axiomatised to hold on all integers as follows i i m i m i m i m also we can replace atoms m m with conjunction m m m where is a fresh variable intuitively a limit fact b a k says that the value of b for a tuple of objects a is at least k if b is max or at most k if b is min for example a fact sp v k in our shortest path example from section says that node v is reachable from via a path with cost at most to capture this intended meaning we require interpretations i to be closed for limit is whenever i contains a limit fact it also contains all facts implied by according to the predicate type in our example this captures the observation that the existence of a path from to v of cost at most k implies the existence of such a path of cost at most k for each k definition an interpretation i is if for each limit fact b a k i where b is a min resp max predicate b a k i holds for each integer k with k k resp k k an interpretation i is a model of a limit program p if i p and i is the notion of entailment is modified to take into account only models the semantics of limit predicates in a limit datalog z program p can be axiomatised explicitly by extending p with the following rules where z is a fresh predicate thus limit datalog z can be seen as a syntactic fragment of datalog z z z m z m z m z m b x m z n m n b x n for each min predicate b in p b x m z n n m b x n for each max predicate b in p each limit program can be reduced to a homogeneous program however for the sake of generality in our technical results we do not require programs to be homogeneous proposition for each limit program p and fact a homogeneous program p and fact can be computed in linear time such that p if and only if p intuitively program p in proposition is obtained by replacing all min or all max predicates in p by fresh max resp min predicates and negating their numeric arguments in section we have shown that limit datalog z can compute the cost of shortest paths in a graph we next present further examples of data analysis tasks that our formalism can handle in all examples we assume that all objects in the input are arranged in an arbitrary linear order using facts first next next an we use this order to simulate aggregation by means of recursion example consider a social network where agents are connected by the follows relation agent as introduces tweets a message and each agent ai retweets the message if at least kai agents that ai follows tweet the message where kai is a positive threshold uniquely associated with ai our goal is to determine which agents tweet the message eventually to achieve this using limit datalog z we encode the network structure in a dataset dtw containing facts follows ai aj if ai follows aj and ordinary numeric facts th ai kai if ai s threshold is kai program ptw containing rules encodes message propagation where nt is a max predicate tw as follows x y first y nt x y follows x y first y tw y nt x y nt x y m next y y nt x y m nt x y m next y y follows x y tw y nt x y m th x m nt x y n m n tw x specifically ptw dtw tw ai iff ai tweets the message intuitively nt ai aj m is true if out of agents aj according to the order at least m agents that ai follows tweet the message rules and initialise nt for the first agent 
in the order nt is a max predicate so if the first agent tweets the message rule overrides rule rules and recurse over the order to compute nt as stated above example limit datalog z can also solve the problem of counting paths between pairs of nodes in a directed acyclic graph we encode the graph in the obvious way as a dataset dcp that uses object predicates node and edge program pcp consisting of rules where np and np are max predicates then counts the paths node x np x x node x node y first z np x y z edge x z np z y m first z np x y z m np x y z m next z z np x y z m np x y z m next z z edge x z np z y n np x y z m n np x y z m np x y m specifically pcp dcp np ai aj k iff at least k paths exist from node ai to node aj intuitively np ai aj ak m is true if m is at least the sum of the number of paths from each ak according to the order to aj for which there exists an edge from ai to rule says that each node has one path to itself rule initialises aggregation by saying that for the first node z there are zero paths from x to y and rule overrides this if there exists an edge from x to z finally rule propagates the sum for x to the next z in the order and rule overrides this if there is an edge from x to z by adding the number of paths from z and z to y example assume that in the graph from example each node ai is associated with a bandwidth bai limiting the number of paths going through ai to at most bai to count the paths compliant with the bandwidth requirements we extend dcp to dataset dbcp that additionally contains an ordinary numeric fact bw ai bai for each node ai and we define pbcp by replacing rule in pcp with the following rule np x y z m bw z n m n np x y m then pbcp dbcp np ai aj k iff there exist at least k paths from node ai to node aj where the bandwidth requirement is satisfied for all nodes on each such path fixpoint characterisation of entailment programs are often grounded to eliminate variables and thus simplify the presentation in limit datalog z however numeric variables range over integers so a grounding can be infinite thus we first specialise the notion of a grounding definition a rule r is if each variable in r is a numeric variable that occurs in r in a limit body atom a limit program p is if all of its rules are the of p contains for each r p each rule obtained from r by replacing each variable not occurring in r in a numeric argument of a limit atom with a constant of obviously p if and only if p for p the semigrounding of we next characterise entailment of limit programs by which compactly represent interpretations if a interpretation i contains b b k where b is a min predicate then either the limit value k exists such that b b i and b b k i for k or b b k i holds for all k k and dually for b a max predicate thus to characterise the value of b on a tuple of objects b in i we just need the limit value or information that no such value exists definition a j is a set of facts over integers extended with a special symbol such that k k holds for all limit facts b b k and b b k in interpretations correspond naturally and to so we can recast the notions of satisfaction and model using unlike for interpretations the number of facts in a of a limit program p can be bounded by definition a interpretation i corresponds to a j if i contains exactly all object and ordinary numeric facts of j and for each limit predicate b each tuple of objects b and each integer i b b k i for all k if and only if b b j and ii b b i and b b k i for all k resp k and b is a min resp max 
predicate if and only if b b j let j and j be corresponding to interpretations i and i then j satisfies a ground atom written j if i j is a of a program p written j p if i p finally j v j holds if i i example let i be the interpretation consisting of a a b a k for k and b b k for k z where a is an ordinary numeric predicate b is a max predicate and a and b are objects then a a b a b b is the corresponding to i we next introduce the immediate consequence operator tp of a limit program p on we assume for simplicity that p is to apply a rule r p to a j while correctly handling limit atoms operator tp converts r into a linear integer constraint c r j that captures all ground instances of r applicable to the interpretation i corresponding to j if c r j has no solution r is not applicable to j otherwise h r is added to j if it is not a limit atom and if h r is a min max atom b b m then the minimal maximal solution for m in c r j is computed and j is updated such that the limit value of b on b is at least at most is the application of r to j keeps only the best limit value definition for p a limit program r p and j a c r j is the conjunction of comparison atoms containing i cb r ii if an object or ordinary numeric atom sb r exists with j or a limit atom b b s sb r exists with b b j for each and iii s resp s for each min resp max atom b b s sb r with b b j and rule r is applicable to j if c r j has an integer solution assume r is applicable to j if h r is an object or ordinary numeric atom let hd r j h r if h r b b s is a min resp max atom the optimum value opt r j is the smallest resp largest value of s in all solutions to c r j or if no such bound on the value of s in the solutions to c r j exists moreover hd r j b b opt r j operator tp j maps j to the smallest v pseudointerpretation satisfying hd r j for each r p applicable to j finally and tnp tp p for n example let r be a x x b x with a and b max predicates then c r x does not have a solution and therefore rule r is not applicable to the empty moreover for j a conjunction c r j x x has two x and x therefore rule r is applicable to j finally b is a max predicate and so opt r j max and hd r j b consequently t r j b lemma for each limit program p operator tp is monotonic moreover j p if and only if tp j v j for each monotonicity ensures existence of the closure p of the least such that tnp v p for each n the following theorem characterises entailment and provides a bound on the number of facts in the closure theorem for p a limit program and a fact p if and only if p also and j p implies v j for each p the proofs for the first and the third claim of theorem use the monotonicity of tp analogously to plain datalog the second claim holds since for each n each pair of distinct facts in tnp must be derived by distinct rules in decidability of entailment we now start our investigation of the computational properties of limit datalog z theorem bounds the cardinality of the closure of a program but it does not bound the magnitude of the integers occurring in limit facts in fact integers can be arbitrarily large moreover due to multiplication checking rule applicability requires solving nonlinear inequalities over integers which is undecidable theorem for p a limit program and a fact checking p and checking applicability of a rule of p to a are both undecidable the proof of theorem uses a straightforward reduction from hilbert s tenth problem checking rule applicability is undecidable due to products of variables in inequalities however for linear 
inequalities that prohibit multiplying variables the problem can be solved in np and in polynomial time if we bound the number of variables thus to ensure decidability we next restrict limit programs so that their contain only linear numeric terms all our examples satisfy this restriction definition a limit rule r p is if each numeric n term in r is of the form si mi where i each mi is a distinct numeric variable occurring in a limit body atom of r ii term contains no variable occurring in a limit body atom of r and iii each si with i is a term constructed using multiplication integers and variables not occurring in limit body atoms of a program contains only in the rest of this section we show that entailment for limitlinear programs is decidable and provide tight complexity bounds our upper bounds are obtained via a reduction to the validity of presburger formulas of a certain shape lemma for p a program wnand a fact there exists a presburger sentence that is valid if and only if p each is a conjunction of possibly negated atoms moreover and each k are bounded polynomially by kpk number n is bounded polynomially by and exponentially by krk finally the magnitude of each integer in is bounded by the maximal magnitude of an integer in p and the reduction in lemma is based on three main ideas first for each limit atom b b s in a program p we use a boolean variable def bb to indicate that an atom of the form b b exists in a of p a boolean variable fin bb to indicate whether the value of is finite and an integer variable val bb to capture if it is finite second each rule of p is encoded as a universally quantified presburger formula by replacing each standard atom with its encoding finally entailment of from p is encoded as a sentence stating that in every either some rule in p is not satisfied or holds this requires universal quantifiers to quantify over all models and existential quantifiers to negate the universally quantified program lemma bounds the magnitude of integers in models of presburger formulas from lemma these bounds follow from recent deep results on sets and their connection to presburger arithmetic chistikov and haase note that each limit program can be normalised in polynomial time to a program we thank christoph haase for providing a proof of this lemma wn lemma let be a presburger sentence where each is a conjunction of possibly negated atoms of size at most k mentioning at most variables a is the maximal magnitude of an integer in and m then is valid if and only if is valid over models where each integer variable assumes a value whose magnitude is bounded by log ak m lemmas and provide us with bounds on the size of for entailment theorem for p a program d a dataset and a fact p d if and only if a pseudomodel j of p d exists where j and the magnitude of each integer in j is bounded polynomially by the largest magnitude of an integer in p d exponentially by and by krk by theorem the following nondeterministic algorithm decides p compute the p of p guess a j that satisfies the bounds given in theorem if tp j v j so j p and j return true step requires exponential polynomial in data time and it does not increase the maximal size of a rule hence step is nondeterministic exponential polynomial in data and step requires exponential polynomial in data time to solve a system of linear inequalities theorem proves that these bounds are both correct and tight theorem for p a program and a fact deciding p is co ne xp t in combined and co in data complexity the upper bounds in theorem 
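A minimal sketch (assuming the z3-solver Python package, which the paper does not mention) of how checking applicability of a limit-linear rule and computing opt(r, J) reduce to linear integer constraints, shown on the ground shortest-path rule sp(y, m + n) ← sp(x, m) ∧ edge(x, y, n) with a hypothetical pseudo-interpretation containing the min limit fact sp(a, 5) and the edge fact edge(a, b, 2):

```python
# Minimal sketch: rule applicability and opt(r, J) as an integer linear problem.
# sp is a min predicate with limit value 5, so the body atom sp(a, m) contributes
# the constraint m >= 5; the head atom is a min atom, so opt(r, J) is the smallest
# value of its numeric term m + 2 over the solutions.
from z3 import Int, Optimize, sat

m = Int('m')
opt = Optimize()
opt.add(m >= 5)                  # body: sp(a, m) with limit value 5 (min predicate)
opt.minimize(m + 2)              # head term m + n with ground edge cost n = 2
if opt.check() == sat:
    print("rule applicable, opt(r, J) =", opt.model().eval(m + 2))   # 7, i.e. derive sp(b, 7)
else:
    print("rule not applicable")
```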
follow from theorem in data complexity is shown by a reduction from the square tiling problem and co ne xp t in combined complexity is shown by a similar reduction from the succinct version of square tiling co tractability of entailment stability tractability in data complexity is important on large datasets so we next present an additional stability condition that brings the complexity of entailment down to e xp t ime in combined and pt ime in data complexity as in plain datalog cyclic dependencies in limit programs the fixpoint of a plain datalog program can be computed in pt ime in data complexity however for p a program a computation of p may not terminate since repeated application of tp can produce larger and larger numbers thus we need a way to identify when the numeric argument of a limit fact a a is grows or decreases without a bound moreover to obtain a procedure tractable in data complexity divergence should be detected after polynomially many steps example illustrates that this can be achieved by analysing cyclic dependencies example let pc contain facts a and b and rules a m b m and b m a m where a and b are max predicates applying the first rule copies the value of a into b and applying the second rule increases the value of a thus both a and b diverge in pc the existence of a cyclic dependency between a and b however does not necessarily lead to divergence let program be obtained from pc by adding a max fact c and replacing the first rule with a m c n m n b m while a cyclic dependency between a and b still exists the increase in the values of a and b is bounded by the value of c which is independent of a or b thus neither a nor b diverge in in the rest of this section we extend and to by defining k k and k for each integer we formalise cyclic dependencies as follows definition for each limit predicate b and each tuple b of n objects let vbb be a node unique for b and b the value propagation graph of a limitlinear program p and a j is the directed weighted graph gjp v e defined as follows for each limit fact b b j we have vbb for each rule r p applicable to j with the head of the form a a s where vaa v and each body atom b b m of r where vbb v and variable m occurs in term s we have hvbb vaa i e such r is said to produce the edge hvbb vaa i in for each r p and each edge e hvbb vaa i e produced by r j if opt r j otherwise opt r j if b and a are max r j if b is max a is min j r j if b and a are min r j if b is min a is max opt r j if where is such that b b j the weight of each edge e e is then given by e max j r p produces e a cycle in gjp is a cycle for which the sum of the weights of the contributing edges is greater than intuitively gjp describes how for each limit predicate b and objects b such that b b j operator tp propagates to other facts the presence of a node vbb in v indicates that b b j holds for some z this can be uniquely identified given vbb and j an edge e hvbb vaa i e indicates that at least one rule r p is applicable to j where h r a a s b b m sb r and m occurs in s moreover applying r to j produces a fact a a where satisfies e if both a and b are max predicates and analogously for the other types of a and b in other words edge e indicates that the application of tp to j will propagate the value of vbb to vaa while increasing it by at least e thus presence of a cycle in gjp indicates that repeated rule applications might increment the values of all nodes on the cycle stable programs as example shows the presence of a cycle in gjp does not imply the divergence 
of all atoms corresponding to the nodes in the cycle this is because the weight of such a cycle may decrease after certain rule applications and so it is no longer positive this motivates the stability condition where edge weights in gjp may only grow but never decrease with rule application hence once the weight of a cycle becomes positive it will remain positive and thus guarantee the divergence of all atoms corresponding to its nodes intuitively p is stable if whenever a rule r p is applicable to some j rule r is also applicable to each j with larger limit values and applying r to such j further increases the value of the head definition defines stability as a condition on gjp please note that for all j and j with j v j and gjp v e and gjp v e the corresponding value propagation graphs we have e e definition a program p is stable if for all j and j with j v j gjp v e gjp v e and each e e e e and e hvbb vaa i and b b j imply e a program is stable if its is stable example program pc from example is stable while is not for j a c and j a c we have j v j but hva vb i and hva vb i for each program p and each integer n we have tnp v p and stability ensures that edge weights only grow after rule application thus recursive application of the rules producing edges involved in a cycle leads to divergence as shown by the following lemma lemma for each stable program p each j with j v p and each node vaa on a cycle in gjp we have a a p algorithm uses this observation to deterministically compute the fixpoint of the algorithm iteratively applies tp however after each step it computes the corresponding value propagation graph line and for each a a where node vaa occurs on a cycle line it replaces with line by lemma this is sound moreover since the algorithm repeatedly applies tp it necessarily derives each fact from p eventually finally lemma shows that the algorithm terminates in time polynomial in the number of rules in a program intuitively the proof of the lemma shows that without introducing a new edge or a new positive weight cycle in the value propagation graph repeated application of tp necessarily converges in o steps moreover the number of edges in gjp is at most quadratic and so a new edge or a new positive weight cycle can be introduced at most o many times lemma when applied to a stable program p algorithm terminates after at most iterations of the loop in lines lemmas and imply the following theorem theorem for p a stable program d a dataset and a fact algorithm decides p d in time polynomial in kp dk and exponential in krk since the running time is exponential in the maximal size of a rule and does not increase rule sizes algorithm entailment for stable programs input stable program p fact output true if p j repeat j j gjp v e for each vaa v in a cycle in gjp do replace a a in j with a a j tp j until j j return true if j and false otherwise algorithm combined with a preprocessing step provides an exponential time decision procedure for stable programs this upper bound is tight since entailment in plain datalog is already e xp t in combined and pt in data complexity the first condition of definition ensures that each variable occurring in a numeric term contributes to the value of the term for example it disallows terms such as x and x x since a rule with such a term in the head may violate the second condition moreover the second condition of definition ensures that if the value of a numeric variable x occurring in the head increases the type of the body atom introducing x x increases 
if it occurs in a max body atom and decreases otherwise then so does the value of the numeric term in the head this is essential for the first condition of stability cf definition finally the third condition of definition ensures that comparisons can not be invalidated by increasing the values of the variables involved which is required for both conditions of stability type consistency is a purely syntactic condition that can be checked by looking at one rule and one atom at a time hence checking type consistency is feasible in l og s pace proposition each program is stable theorem for p a stable program and a fact checking p is e xp t in combined and pt imecomplete in data complexity proposition checking whether a program is can be accomplished in l og s pace programs unfortunately the class of stable programs is not recognisable which can again be shown by a reduction from hilbert s tenth problem proposition checking stability of a program p is undecidable we next provide a sufficient condition for stability that captures programs such as those in examples and intuitively definition syntactically prevents certain harmful interactions in the second rule of program from example numeric variable m occurs in a max atom and on the lefthand side of a comparison atom m n thus if the rule is applicable for some value of m it is not necessarily applicable for each m which breaks stability definition a rule r is typeconsistent if pn each numeric term t in r is of the form ki mi where is an integer and each ki i n is a nonzero integer called the coefficient of variable mi in t if h r a a s is a limit atom then each variable occurring in s with a positive resp negative coefficient also occurs in a unique limit body atom or r that is of the same resp different type min max as h r and for each comparison or in r each variable occurring in with a positive resp negative coefficient also occurs in a unique min resp max body atom and each variable occurring in with a positive resp negative coefficient also occurs in a unique max resp min body atom of a program is if all of its rules are moreover a program p is if the program obtained by first p and then simplifying all numeric terms as much as possible is conclusion and future work we have introduced several fragments of datalog with integer arithmetic thus obtaining a sound theoretical foundation for declarative data analysis we see many challenges for future work first our formalism should be extended with aggregate functions while certain forms of aggregation can be simulated by iterating over the object domain as in our examples in section such a solution may be too cumbersome for practical use and it relies on the existence of a linear order over the object domain which is a strong theoretical assumption explicit support for aggregation would allow us to formulate tasks such as the ones in section more intuitively and without relying on the ordering assumption second it is unclear whether integer constraint solving is strictly needed in step of algorithm it may be possible to exploit stability of p to compute tp j more efficiently third we shall implement our algorithm and apply it to practical data analysis problems fourth it would be interesting to establish connections between our results and existing work on artefact systems damaggio et koutsos and vianu which faces similar undecidability issues in a different formal setting acknowledgments we thank christoph haase for explaining to us his results on presburger arithmetic and sets as well as for 
providing a proof for lemma our work has also benefited from discussions with michael benedikt this research was supported by the royal society and the epsrc projects dbonto and
references
alvaro et peter alvaro tyson condie neil conway khaled elmeleegy joseph hellerstein and russell sears boom analytics exploring declarative programming for the cloud in eurosys acm
beeri et catriel beeri shamim naqvi oded shmueli and shalom tsur set constructors in a logic database language log
berman leonard berman the complexity of logical theories theor comput
byrd et richard byrd alan goldman and miriam heller recognizing unbounded integer programs oper
chin et brian chin daniel von dincklage vuk ercegovac peter hawkins mark miller franz josef och christopher olston and fernando pereira yedalog exploring knowledge at scale in snapl
chistikov and haase dmitry chistikov and christoph haase the taming of the set in icalp
consens and mendelzon mariano consens and alberto mendelzon low complexity aggregation in graphlog and datalog theor comput
damaggio et elio damaggio alin deutsch and victor vianu artifact systems with data dependencies and arithmetic acm trans database
dantsin et evgeny dantsin thomas eiter georg gottlob and andrei voronkov complexity and expressive power of logic programming acm comput
eisner and filardo jason eisner and nathaniel wesley filardo dyna extending datalog for modern ai in datalog
faber et wolfgang faber gerald pfeifer and nicola leone semantics and complexity of recursive aggregates in answer set programming artif
ganguly et sumit ganguly sergio greco and carlo zaniolo extrema predicates in deductive databases comput syst
erich subclasses of presburger arithmetic and the hierarchy theor comput
haase christoph haase subclasses of presburger arithmetic and the weak exp hierarchy in
hougardy stefan hougardy the algorithm on graphs with negative cycles inf process
kannan ravi kannan minkowski s convex body theorem and integer programming math oper
kemp and stuckey david kemp and peter stuckey semantics of logic programs with aggregates in islp
koutsos and vianu adrien koutsos and victor vianu views of business artifacts comput system
loo et boon thau loo tyson condie minos garofalakis david gay joseph hellerstein petros maniatis raghu ramakrishnan timothy roscoe and ion stoica declarative networking commun acm
markl volker markl breaking the chains on declarative data analysis and data independence in the big data era pvldb
mazuran et mirjana mazuran edoardo serra and carlo zaniolo extending the power of datalog recursion vldb
mumick et inderpal singh mumick hamid pirahesh and raghu ramakrishnan the magic of duplicates and aggregates in vldb pages
papadimitriou christos papadimitriou on the complexity of integer programming acm
ross and sagiv kenneth ross and yehoshua sagiv monotonic aggregation in deductive databases comput system
uwe complexity of presburger arithmetic with fixed quantifier dimension theory comput
seo et jiwon seo stephen guo and monica lam socialite an efficient graph query language based on datalog ieee trans knowl data
shkapsky et alexander shkapsky mohan yang matteo interlandi hsuan chiu tyson condie and carlo zaniolo big data analytics with datalog queries on spark in sigmod acm
van gelder allen van gelder the semantics of aggregation in pods
von zur gathen and sieveking joachim von zur gathen and malte sieveking a bound on solutions of linear integer equalities and inequalities proc ams
wang et jingjing wang magdalena balazinska and daniel halperin
asynchronous and recursive datalog evaluation in engines pvldb a proofs for section theorem for p a datalog z program and a fact checking p is undecidable even if p contains no or and each standard atom in p has at most one numeric term proof we prove the claim by presenting a reduction of the halting problem for deterministic turing machines on the empty tape let m be an arbitrary deterministic turing machine with finite alphabet containing the blank symbol the finite set of states s containing the initial state s and the halting state h and transition function s s l r we assume that m works on a tape that is infinite to the right that it starts with the empty tape and the head positioned on the leftmost cell and that it never moves the head off the left edge of the tape we encode each time point i using an integer and we index tape positions using integers thus at time i each position j is necessarily empty so we can encode a combination of a time point i and tape position j with j using a single integer j we use this idea to encode the state of the execution of m using the following facts num k is true for each positive number k time k is true if k and so k encodes a time point i tape a j says that symbol a occupies position j of the tape at time i and it will be defined for each j pos j says that the head points to position j of the tape at time i state q says that the machine is in state q at time i and halts is a propositional variable saying that the machine has halted we next give a datalog z program pm that simulates the behaviour of m on the empty tape we represent each alphabet symbol a using an object constant a and we represent each state q s using an object constant q furthermore we abbreviate s t t u as s t u finally we abbreviate conjunction s t t s as s t and disjunction s t t s as s t strictly speaking disjunctions are not allowed in rule bodies however each rule with a disjunction in the body of the form s t corresponds to rules s t and t s so we use the former form for the sake of clarity with these considerations in mind program pm contains rules num num x num x time time x time x x tape pos state s state h x halts time x tape v y pos z x y x x x z x x y z tape v x y time x num u x x x u u x x x x tape u moreover for each alphabet symbol a and all states q q s such that q a q d where d l r is a direction pm contains rules time x state q x tape a y pos y x y x x tape x y time x state q x tape a y pos y x y x x state q x x time x state q x tape a y pos y num u x y x x x y u pos u if d l time x state q x tape a y pos y num u x y x x x y u pos u if d r rules initialise num so that it holds of all positive integers and rules initialise time so that it holds for each integer k rules initialise the state of the m at time i rule derives halts if at any point the turing machine enters the halting state the remaining rules encode the evolution of the state of m and they are based on the following idea if variable x encodes a time point i using value then variable y encodes a position j for time point i if x y x x holds moreover for such y position j at time point i is encoded as j j and can be obtained as and the encodings of positions j and j can be obtained as x y and x y respectively since our goal is to prove undecidability by just using we simulate subtraction by looking for a value u such that x y u with these observations in mind one can see that rule copies the unaffected part of the tape from time point i to time point i moreover rule pads the tape by filling each 
location j with j with the blank symbol since division is not supported in our language we express this condition as j finally rule updates the tape at the position of the head rule updates the state and rules and move the head left and right respectively consequently we have pm halts if and only if m halts on the empty tape proposition for each limit program p and fact a homogeneous program p and fact can be computed in linear time such that p if and only if p proof let p be an arbitrary limit program without loss of generality we construct a program p containing only max predicates for each min predicate a let be a fresh max predicate uniquely associated with a we construct p from p by modifying each rule r p as follows if h r a t s where a is a min predicate replace the head of r with t for each body atom a t n sb r where a is a min predicate and n is a variable replace the atom with t m where m is a fresh variable and replace all other occurrences of n in the rule with for each body atom a t k sb r where a is a min predicate and k is an integer replace the atom with t finally if a a k is a min fact let a a otherwise let now consider an arbitrary interpretation i and let i be the interpretation obtained from i by replacing each min fact a a k with a it is straightforward to see that i p if and only if i p and that i if and only if i thus p if and only if p b proofs for section we use the standard notion of partial mappings of variables to constants for a formula and a substitution is the formula obtained by replacing each free variable x in on which is defined with x proposition for each rule r each j and each mapping of the variables of r to integers is an integer solution to c r j if and only if j proof assume that is an integer solution to c r j we consider each atom and show that j holds if is a comparison atom the claim is straightforward due to c r j if is an object atom or an ordinary numeric atom is ground and we have j and j otherwise c r j would hold and so could not be a solution to c r j if is a max atom b b s since c r j either b b j for some integer z and s c r j or b b j in the former case since is a solution to c r j we have and since b is a max predicate j b b holds in the latter case j b b holds due to b b v b b if is a min atom the proof is analogous to the previous case the proof of the direction is analogous and we omit it for the sake of brevity definition given a interpretation i and a program p let v v ip i i is a ground instance of a rule in p such that i i and is a fact such that v s n let let inp ip p for n and let ip ip lemma for each program p operator ip is monotonic moreover for i an interpretation i p if and only if ip i i and i p implies p i proof operator ip is the standard immediate consequence operator of datalog but applied to the program p obtained by extending p with the rules from section encoding the semantics of limit predicates thus all claims of this lemma hold in the usual way dantsin et lemma for each interpretation i and the corresponding j and for each limit program p interpretation ip i corresponds to the tp j proof it suffices to show that for each fact the following claims hold if is an object fact or ordinary numeric fact then ip i if and only if tp j if is a limit fact of the form a a k where k is an integer then ip i if and only if v tp j and if is a limit fact of the form a a then a a k k z ip i if and only if tp j claim consider an arbitrary object fact of the form a a the proof for ordinary numeric facts is analogous assume ip i 
then a rule r p and a grounding of r exist such that i the head of r must be since p is but then j holds as well so proposition ensures that is a solution to c r j moreover hd r j and thus we have tp j assume tp j then there exist a rule r p and an integer solution to c r j proposition then ensures j and so i holds as well thus we have ip i claim consider an arbitrary max fact of the form a a k the proof for a min fact is analogous assume ip i then a rule r a a s p and a grounding of r exist such that i and a a but then j holds as well so proposition ensures that is a solution to c r j moreover opt r j and therefore we have v hd r j a a opt r j v tp j assume v tp j then there exist a rule r a a s p and an integer solution to c r j such that v hd r j a a where opt r j proposition then ensures j and so i holds as well thus ip i holds for each fact with v a a so we have ip i claim consider an arbitrary max fact of the form a a the proof for a min fact is analogous in the following let s a a k k z assume s ip i program p contains only finitely many rules so the infinitely many facts of s in ip i are produced by a rule r a a s p and an infinite sequence of groundings of r such that for each i we have i and but then j so proposition ensures that satisfies c r j for each i therefore opt r j and tp j holds assume a a tp j then a rule r a a s p exists such that opt r j so an infinite sequence of solutions to c r j exists such that for each i proposition ensures j for each i and so i as well thus for each k z some i exists such that k and therefore we have a a k v a a consequently s ip i holds lemma for each limit program p operator tp is monotonic moreover j p if and only if tp j v j for each j proof immediate from lemmas and theorem for p a limit program and a fact p if and only if p also and j p implies v j for each p proof by inductively applying lemma for each n the interpretation inp clearly corresponds to the tnp thus p and tp also correspond on all object and ordinary numeric facts now consider an arbitrary max predicate a and a tuple of n objects a and for m k a a k p consider the following cases n m then for each n and each k z we have a a k ip which implies a a k tnp and a a tnp finally p is the least v fixpoint of tp so a a k tp and a a tp holds as well there exists max m then there exists n such that a a inp and a a im p for each and m is the least v fixpoint of for each and m finally t but then a a tnp and a a tm p p holds operator tp so a a p m z then for each k z there exists n such that a a k inp and so tnp a a k holds but then a a tnp holds as well analogous reasoning holds for min predicates so p corresponds to tp but then the first and the third claim of this theorem follow straightforwardly from lemma moreover contain at most one fact per combination of a limit predicate and a tuple of objects of corresponding arity and program p is so each rule in p produces at most one fact in p which implies the second claim of this theorem c proofs for section theorem for p a limit program and a fact checking p and checking applicability of a rule of p to a are both undecidable proof we present a reduction from hilbert s tenth problem which is to determine whether given a polynomial p xn over variables xn equation p xn has integer solutions it is well known that the problem remains undecidable even if the solutions must be nonnegative integers so we use that variant in this proof for each such polynomial p let pp be the program containing rules for a a unary min predicate and b a nullary 
object predicate it is obvious that pp b if and only if p xn has a nonnegative integer solution a vn a xi p xn p xn b moreover rule is applicable to j a if and only if p xn has a nonnegative integer solution although presburger arithmetic does not have propositional variables these can clearly be axiomatised using numeric variables hence in the rest of this section we use propositional variables in presburger formulas for the sake of clarity definition for each object predicate a each n ordinary numeric predicate b each n limit predicate c each of objects a and each integer k let def aa def bak def ca and fin ca be distinct propositional variables and let val ca a distinct integer variable moreover let be resp if c is a max resp min predicate v for p a program pres p pres r is the presburger formula where pres r for y all numeric variables in r and is obtained by replacing each atom in r with its encoding pres defined as follows pres if is a comparison atom pres def aa if is an object atom of the form a a pres def bak if is an ordinary numeric atom of the form b a k and pres def ca ca s val ca if is a limit atom of the form c a s let j be a and let be an assignment of boolean and integer variables then j corresponds to if all of the following conditions hold for all a b c and a as specified above for each integer k z def aa true if and only if a a j def bak true if and only if b a k j def ca true if and only if c a j or there exists z such that c a j fin ca true and val ca k if and only if c a k j note that k in definition ranges over all integers which excludes val ca is an equal to some integer k and j is a and thus can not contain both c a and c a k thus c a j implies fin ca false also note that each assignment corresponds to precisely one j however each j corresponds to infinitely many assignments since definition does not restrict the value of variables other than def aa def bak def ca fin ca and val ca moreover two assignments corresponding to the same may differ on the value of val ca if fin ca is set to false in both assignments and they can differ on the values of fin ca and val ca if def ca is set to false in both assignments lemma let j be a and let be a variable assignment such that j corresponds to then j if and only if pres for each ground atom and j r if and only if pres r for each rule proof claim we consider all possible forms of is a comparison atom then the truth of is independent from j so the claim is immediate a a is an object fact then pres def aa and def aa true if and only if def aa j so the claim holds b a k is an ordinary numeric fact the proof is analogous to the case of object facts c a k is a limit fact if j then either c a j or an integer exists such that c a j and k either way def ca true holds moreover fin ca false holds in the former and val ca holds in the latter case thus pres r clearly holds the converse direction is analogous so we omit it for the sake of brevity claim let r be an arbitrary rule and let i be the interpretation corresponding to j by definition j r if and only if i r and the latter is equivalent to i for each ground instance of r by the semantics of universal quantification in logic but then the latter claim is equivalent to j for each ground instance of now note that by construction we have pres pres for each atom and each grounding and thus pres pres r finally groundings of r can be equivalently seen as variable assignments to universally quantified numeric variables in pres r so claim follows immediately from claim wn lemma for p a 
program and a fact there exists a presburger sentence that is valid if and only if p each is a conjunction of possibly negated atoms moreover and each k are bounded polynomially by kpk number n is bounded polynomially by and exponentially by krk finally the magnitude of each integer in is bounded by the maximal magnitude of an integer in p and proof lemma immediately implies that p if and only if the sentence pres p is valid where x contains all variables def aa def bak def ca fin ca and val ca occurring in pres p or pres clearly is polynomially bounded by kpk and the magnitude of each integer in is bounded by the maximum magnitude of an integer in p and let be the sentence obtained from by converting each conjunct of pres p into form where is in cnf formulae and are equivalent and is of the form vn v i pres where n and for each rule ri p integer i is exponentially bounded by kri k and k and are linearly bounded by kri by moving all quantifiers to the front of the formula and pushing negations inwards we finally obtain formula wn w i sn where y yi and each is the form of formula is of the required form is bounded pn polynomially by kpk number i is bounded polynomially by n and exponentially by krk and k is bounded linearly by kpk wn lemma let be a presburger sentence where each is a conjunction of possibly negated atoms of size at most k mentioning at most variables a is the maximal magnitude of an integer in and m then is valid if and only if is valid over models where each integer variable assumes a value whose magnitude is bounded by log ak m proof let each can be seen as a system of linear inequalities si ai x ci such that k and where the maximal magnitude of all numbers in a and ci is bounded by ak by proposition of chistikov and haase adapted from s the work by von zur gathen and sieveking the set of solutions to si can be represented by a set l where z z and the magnitude of all integers in and wn s is bounded by log ak consequently disjunction corresponds to a set l bj pj where o log bj w pj and the magnitude ak s of each integer in bj and pj is still bounded by n formula then corresponds to the projection of l bj pj on the variables in x which is a set of s the form l where each zm is a projection of bj on x and each zm is a projection of pj on x now wn theorem by chistikov and haase s implies that the satisfying assignments to the formula can be represented as a set j l cj qj where the magnitude of each integer in each cj and qj is bounded by b log ak m since has a satisfying assignment if and only if it has a satisfying assignment involving only numbers from some cj it follows that is satisfiable if and only if it is satisfiable over models where the absolute value of every integer variable is bounded by b this implies the claim of this lemma since is valid if and only if is unsatisfiable theorem for p a program d a dataset and a fact p d if and only if a j of p d exists where j and the magnitude of each integer in j is bounded polynomially by the largest magnitude of an integer in p d exponentially by and by krk proof the direction is trivial for the direction assume that p d holds and let be obtained wn from d by removing each fact that does not unify with an atom in p or clearly we have p let be the presburger sentence from lemma for p and sentence is not valid and it satisfies the following conditions number m is polynomial in kp k which in turn is bounded by krk moreover contains only facts that unify with atoms in p and so m can be bounded further namely linearly in the 
product cs for c and s kpk number n is linear in the product of c and the size and hence the number of variables in each are linear in let a be the maximal magnitude of an integer in p and thus in as well by lemma an assignment exists such s o s that and the magnitude of each integer variable is bounded by b s log s ao s o cs clearly b is polynomial in a exponential in c and in s as required moreover clearly pres p and pres now let j be the corresponding to by lemma we have j p and j by construction the magnitude of each integer in j is bounded by b furthermore let j be the restriction of j to the facts that unify with the head of at least one rule in p clearly we still have j p and j finally holds by our construction which implies our claim lemma for each program p j and dataset d there exists a polynomial p such that j can be computed in nondeterministic polynomial time in kpk kdk kjk and in deterministic p krk polynomial time in kpk kdk kjk proof let s hd r j rule r p d is applicable to j program p is and therefore p d is semiground as well thus each rule of p d can contribute at most one fact to s so we have by definition j is the smallest v such that j s so we can compute j as the set containing each object and ordinary numeric fact in s each fact a a s for a a limit predicate each fact a a s such that a is a min resp max predicate and a a k s implies k and k resp k to complete the proof of this lemma we next argue that set s can be computed within the required time bounds consider an arbitrary rule r p d and let j be the subset of j containing all facts that unify with a body atom in r note that krk rule r is applicable to j if and only if conjunction c r j has an integer solution by construction kc r j k is linear in krk kj k the number of variables in c r j and r is the same r j is linear in krk and the magnitude of each integer in c r j is exponentially bounded in krk kjk but then checking whether c r j has an integer solution is in np krk kjk and in pt ime kjkp krk for some polynomial p as we argue next we first consider the former claim let a be the maximal magnitude of an integer in c r j conjunction c r j contains only the numbers from r and j whose magnitude is at most and respectively thus we have a moreover the results by papadimitriou show that there exists a polynomial such that the magnitude of an integer in a solution to c r j can be bounded by b krk and so there exists a polynomial such that b the binary representation of b thus requires at most krk kjk bits and so we can guess it in polynomial time we next consider the latter claim by theorem of kannan checking satisfiability of c r j over z is fixedparameter tractable in the number n of variables in is there exists a polynomial such that a solution to c r j can be computed in time o krk kjk n since n krk clearly holds there exists a polynomial such that the satisfiability of c r j can be checked in time that is thus krk now assume that r is applicable to j then hd r j h r if h r is an object atom so we assume that h r a a s is a limit atom and argue that opt r j can be computed within the required time bounds using the following two steps depending on whether a is a min or a max predicate we check whether there is a value for s in all solutions to c r j is we check whether the integer linear program s subject to c r j is bounded byrd et al showed that this amounts to checking boundedness of the corresponding linear relaxation which in turn can be reduced to checking linear feasibility and can be solved in 
deterministic polynomial time in krk kjk if the above problem is bounded we compute its optimal solution which can be reduced to polynomially many in krk kjk feasibility checks as shown by papadimitriou corollary with binary search each such feasibility check is in np krk kjk and in pt ime kjkp krk thus hd r j can be computed in nondeterministic polynomial time in krk kjk and in deterministic polynomial time in kjkp krk which implies our claim lemma deciding p is co in data complexity for p a program and a fact proof an instance t of the square tiling problem is given by an integer n coded in unary a set t tm of m tiles and two compatibility relations h t t and v t t the problem is to determine whether there exists a tiling n t of an n n square such that i j i j i h holds for all i n and j n and i j i j i v holds for all i n and j n which is known to be thus to prove the claim of this lemma we reduce the complement of the problem by presenting a fixed program ptiling and a dataset dt that depends on t and showing that t has no solution if and only if ptiling dt nosolution our encoding uses object edb predicates succ incompatibleh and incompatiblev ordinary numeric edb predicates shift tileno numtiles and maxtiling nullary object idb predicate nosolution unary min idb predicate i and unary max idb predicate tiling program ptiling contains rules where s t abbreviates s t t s i tiling tiling n numtiles nt shift x y s tileno u t i i n nt s t s s succ x shift x y s tileno u t i i n nt s t s incompatibleh u tiling n tiling n numtiles nt shift x y s tileno u t i i n nt s t s s succ y y shift x y s tileno u t i i n nt s t s incompatiblev u tiling n tiling n maxtiling m m n nosolution dataset dt contains facts where gn are fresh objects and ti for i m are distinct objects sponding to the tiles in t since n is coded in unary although numbers m n and m j are exponential in n they can be computed in polynomial time and represented using polynomially many bits numtiles m tileno ti i incompatibleh ti tj incompatiblev ti tj for each i m for each i j m such that ti tj for each i j m such that ti tj for each i n for each i j n maxtiling m n succ gi shift gi gj m j our reduction uses the following idea facts associate each tile ti with an integer i where i m hence in the rest p of this discussion we do not distinguish a tile from its number this allows us to represent each tiling using a number j n i j m j thus given a number n that encodes a tiling number t with t m corresponds to the tile assigned to position i j if n m m j t m j for some integers and where m j thus if numeric variable n is assigned such an encoding of a tiling and numeric variable s is assigned the factor m j corresponding to a position i j then conjunction tileno u t i i n nt s t s s is true if and only if u is assigned the tile object corresponding to position i j in the tiling encoded by to complete the construction we represent each position i j by a pair of objects gi gj each of which is associated with the corresponding factor m j using facts facts provide an ordering on gi which allows us to identify adjacent positions finally fact records the maximal number that encodes a tiling as we outlined earlier program ptiling then simply checks through all tilings rule ensures that the tiling encoded as is checked moreover for each n such that tiling n holds rules and derive tiling n if either the horizontal or the vertical compatibility requirement is violated for the tiling encoded by finally rule detects that no solution exists if 
tiling m n is derived lemma deciding p is co ne xp t for p a program and a fact proof we present a reduction from the succinct square tiling problem an instance t of the problem is given by an integer n coded in unary a set t containing m tiles and horizontal and vertical compatibility relations h and v respectively as in the proof of lemma however the objective is to tile a square of positions which is known to be ne xp t imecomplete thus to prove the claim of this lemma we reduce the complement of the problem by presenting a program pt and showing that t has no solution if and only if pt nosolution the main idea behind our reduction is similar to lemma program pt contains rules that associate each tile with a number using an ordinary numeric predicate tileno and encode the horizontal and vertical incompatibility relations using the object predicates incompatibleh and incompatiblev tileno ti i incompatibleh ti tj incompatiblev ti tj for each i m for each i j m such that ti tj for each i j m such that ti tj the main difference to lemma is that in order to obtain a polynomial encoding we can not represent a position i j in the grid explicitly using a pair of objects instead we encode each position using a pair i j where i and j are n of objects and if we read and as representing numbers and respectively then each i and j can be seen as a binary number in by a slight abuse of notation we often identify a tuple over and with the number it encodes and use tuples in arithmetic expressions while positions can be encoded using n bits we will also need to ensure distance between positions which requires n bits in the rest of this proof and stand for tuples and respectively whose length is often implicit from the context where these tuples occur similarly x y and are tuples of distinct variables whose length will also be clear from the context to axiomatise an ordering on numbers with n bits program pt contains rules where b is a unary object predicate succ is a object predicate and succ is a object predicate rules ensure pt succ i j where i and j encode numbers with n bits such that j i in particular rule encodes binary incrementation where holds for each position k and each of zeros and ones x rules and ensure an analogous property for succ but for numbers with n bits b b vk b xi succ x x for each k n where k and n k vk for each k n where k and n k b xi succ x x analogously to the proof of lemma we encoded tilings using numbers in m to compute the maximum number encoding a tiling program pt contains rules where maxtiling is a unary min predicate and auxt is a min predicate auxiliary rules multiply m with itself as many times as there are grid positions n so we have pt auxt i j m for each position i j consequently rule ensures that for all s we have pt maxtiling s if and only if s m auxt m auxt x y n succ x x auxt x y m n auxt y n succ y y auxt y m n auxt n maxtiling n unlike in the proof of lemma we can not include shift factors explicitly into pt since this would make the encoding exponential moreover we could precompute shift factors using rules similar to but then we would need to use values from limit predicates in multiplication which would not produce a program therefore we check tilings using a different approach as in the proof of lemma our construction ensures that for all s we have pt tiling s if and only if each tiling n with n s does not satisfy the compatibility relations given a tiling encoded by n and a position i j let j k n sn i j n m program pt contains rules where shiftedtiling 
is a max predicate of arity and i is a unary min predicate these rules ensure that for each i j and tiling n such that pt tiling n we have pt shiftedtiling i j sn i j to understand how this is achieved we order the grid positions as follows now consider an arbitrary position i j and its successor in the ordering the encoding of a tiling using an integer n ensures sn i j m sn t holds where t m is the number of the tile that n assigns to position i j thus rule ensures that position satisfies the mentioned property rule handles adjacent positions of the form i j and i j and rule handles adjacent positions of the form j and j i tiling n shiftedtiling n shiftedtiling x y n succ x x i i m m n m m shiftedtiling y m shiftedtiling y n succ y i i m m n m m shiftedtiling m note that for all n and n with n n and each position i j we have sn i j i j thus since shiftedtiling is a max predicate the limit value for s in shiftedtiling i j s will always correspond to the limit value for n in tiling n n checking horizontal compatibility is now easy but checking vertical compatibility requires dividing sn i j by m which would make the reduction exponential hence pt checks compatibility using rules where conflict is a max predicate of arity these rules ensure that for each i j d u and tiling n such that pt tiling n and the position that precedes i j by distance d in the ordering can not be labelled in n with tile u we have pt conflict i j d u sn i j to this end assume that x y is labelled with tile now if u h and x the predecessor of x exists then rule says that the position preceding x y by the position to the left can not be labelled with u moreover if u v and y the predecessor of y exists then rule says that the position preceding x y by the position above can not be labelled with u moreover rule propagates such constraints from position i j to i j while reducing the distance by one and rule does so for positions j and j shiftedtiling x y m succ x incompatibleh u tileno i m m conflict x y u m shiftedtiling x y m succ y incompatiblev u tileno i m m conflict x y u m shiftedtiling x y m succ x conflict y u succ z i m m m conflict x y z u m shiftedtiling y m succ y conflict u succ z i m m m conflict y z u m program pt also contains rules where invalid is a max predicate these rules ensure that for each i j and each tiling n such that pt tiling n and there exists a position that comes after i j in the position order such that n does not satisfy the compatibility relations between and its horizontal or vertical successor we have pt invalid i j sn i j rule determines invalidity at position x y for conflicts with zero distance and rules and propagate this information to preceding positions analogously to rules and conflict x y u m tileno u t i m m t invalid x y m shiftedtiling x y m succ x invalid y i m m m invalid x y m shiftedtiling y m succ y invalid i m m m invalid y m finally program pt contains rules where tiling is a unary max predicate and nosolution is a nullary predicate rule ensures that tiling encoded by is checked based on our discussion from the previous paragraph for each invalid tiling n such that pt tiling n we have pt invalid n moreover n n so if pt invalid n holds then rule ensures that tiling encoded by n is considered atom m n is needed in the rule since no numeric variable is allowed to occur in more than one standard body atom if we exhaust all available tilings rule determines that no solution exists just as in the proof of lemma tiling invalid m tiling n m n tiling n tiling n maxtiling m m 
n nosolution based on our discussion of the consequences of pt we conclude that instance t of the succinct tiling problem does not have a solution if and only if pt nosolution proposition for j a and a fact j if and only if v j proof consider an arbitrary j and the corresponding interpretation i if j then i so there exists a fact j such that such that v which implies v j moreover if v j then there exists a fact j such that v and since i is we have i which implies j theorem for p a program and a fact deciding p is co ne xp t in combined and co npcomplete in data complexity proof lemmas and prove hardness moreover the following nondeterministic algorithm decides p d in time polynomial in kdk and exponential in kpk kdk compute the p of guess a j over the signature of p d such that the number of facts in j and the absolute values of all integers in j are bounded as in theorem check that j is a of p d if not return false return false if j and true otherwise correctness of the algorithm follows from theorem so we next argue about its complexity the mentioned data complexity holds by the following observations in step kp k and the time required to compute p are all polynomial in kdk and constant in since is polynomial in kdk and constant in and krk is constant in kdk and the magnitude of the integers in j is exponentially bounded in kdk by theorem thus the number of bits needed to represent each integer in j is polynomial in kdk furthermore we have and is polynomial in kdk and constant in thus j can be guessed in step in nondeterministic polynomial time in kdk by lemma checking that j is a of p d amounts to checking tp j j by lemma tp j can be computed in deterministic polynomial time in kjk kdk and hence in kdk as kjk is polynomial in kdk hence step requires deterministic polynomial time in kdk by proposition step amounts to checking v j which can be done in time polynomial in kjk and and hence polynomial in kdk as well finally the mentioned combined complexity holds by the following observations in step kp k and time required to compute p are all exponential in kpk kdk and constant in since is exponential in kpk kdk and constant in and krk is linear in kpk and constant in kdk the magnitude of the integers in j is doubly exponentially bounded in kpk kdk by theorem thus the number of bits needed to represent each integer in j is exponential in kpk kdk furthermore we have and is exponential in kpk kdk and constant in thus j can be guessed in step in nondeterministic exponential time in kpk kdk by lemma checking that j is a of p amounts to checking tp j j by lemma polynomial p exists such that tp j can be computed in deterministic polynomial time in kp k kdk kjkp krk which in turn is bounded by o kdk o p krk hence step requires deterministic exponential time in kpk kdk by proposition step amounts to checking v j which can be done in time polynomial in kjk and and hence in time exponential in kpk kdk d proofs for section for arbitrary value propagation graph gjp v e a path in gjp is a nonempty sequence vn of nodes from v such that hvi i e holds for each i n such starts in and ends in vn we define n moreover by a slight abuse of notation we sometimes write x or vi where we identify with the set of its nodes a path is simple if all of its nodes are distinct a path is a cycle if vn definition given a limit linear program p a j value propagation graph gjp v e and a path vn in gjp the weight of is defined as x hvi i lemma let p be a and stable program let j be a of p let gjp v e and let vaa vbb v be 
nodes such that vab is reachable from vba by a path then for each k z such that j b b k j a a k if a and b are both max predicates j a a if a is a min predicate and b is a max predicate j a a k if a and b are both min predicates and j a a if a is a max predicate and b is a min predicate proof we consider the case when a and b are both max predicates the remaining cases are analogous we proceed by induction on the length of the base case is empty is immediate for the inductive step assume that vaa where is a path starting at vbb and ending in node vcc then there exists an edge e hvcc vaa i e and e is produced by a rule r c c n a a s p such that n is a variable occurring in s and is a grounding of r such that j c c n and j e hvcc vaa i we next consider the case when c is a max predicate the case when c is a min predicate is analogous let be such that c c j we have the following possibilities if opt r j and are both integers they are not we have e opt r j if opt r j then e by definition if then e by definition and the fact that p is stable and moreover opt r j by definition now for an arbitrary k z such that j b b k we consider the following two cases e the inductive hypothesis holds for so j c c k and thus k consequently we have e opt r j opt r j k and so k opt r j holds moreover j p implies tp j j by lemma thus proposition and the definition of tp imply j a a k e clearly moreover j p implies tp j j by lemma thus opt r j proposition and the definition of tp imply j a a lemma for each stable program p each j with j v p and each node vaa on a cycle in gjp we have a a p j proof let gjp v e let j p and let gp v e now assume for the sake of a contradiction that there j exist a cycle in gp and a node vaa such that and j a a rule applicability is monotonic v so is still a cycle in gjp and since p is stable we have we consider the case when a is a max predicate the remaining case is analogous now vaa v v implies that a a k j for some k moreover j a a implies k but then lemma implies j a a k moreover implies that k is either or it is an integer larger than k either way this contradicts our assumption that a a k j lemma when applied to a stable program p algorithm terminates after at most iterations of the loop in lines proof for j a a an n limit predicate and a an of objects such such that a a j let val j aa if or a is a max predicate and z and val j aa if a is a min predicate and z moreover let r j aa be the set containing each rule r p that is applicable to j and where h r is of the form a a s by monotonicity of datalog z we have r j aa r j aa for each j and j such that j v j moreover for each edge e hvbb vaa i e generated by a rule r r j aa definition ensures that the following property holds val j bb e val t r j aa to prove this lemma we first show the following auxiliary claim claim for each n determining the j tnp and the value propagation graph gjp v e each determining the j and the value propagation graph gjp v e each set p of nodes x v and each node vaa v of such that e e val j bb holds for each node vbb v that occurs in gjp in a cycle vaa x val j aa val tp j aa holds for each simple path in gjp that ends in vaa and satisfies x for each node vbb x there exists a path in gjp that starts in vaa and ends in vbb one of the following holds i val j cc val tp j aa for some node vcc x and path in gjp starting in vcc and ending in vaa ii r j cc r j cc for some node vcc proof for arbitrary n we prove the claim by induction on for the base case consider an arbitrary set x v and vertex vaa v that 
satisfy properties of we distinguish two cases there exists an edge e hvbb vaa i e such that val j bb e val tp j aa now either vbb x or vbb vaa holds if that were not the case path vba vaa would be a simple path in gjp such that which would contradict property we next show that vbb vaa is impossible for the sake of a contradiction assume that vbb vaa holds and thus we have val j aa e val tp j aa by property this implies val j aa e val j aa and hence e consequently path is a cycle in gjp and so by property we have val j aa which in turn contradicts property consequently we have vbb x but then since by assumption val j bb e val tp j aa part i of the claim holds for vcc vbb for each edge hvbb vaa i e we have val j bb e val tp j aa then for each rule r p that generates an edge e hvbb vaa i property ensures val t r j aa val tp j aa since val tp j aa max h r a s val t r j aa a rule r p exists that satisfies val tp j aa val t r j aa but does not generate an edge in e ending in vaa clearly h r is of the form a a s and r is applicable to j so r r j aa holds moreover r is hence if s were to contain a variable this variable would occur in a limit body atom of r and so r would generate an edge in e consequently s is ground finally if r were applicable to j then a a s v j and so val j aa val a a s aa val tp j aa which contradicts property consequently we have r j aa and so part ii of the claim holds for vcc vaa for the inductive step we assume that holds for each set x v and each node vaa v and we consider an arbitrary set x v and vertex vaa v that satisfy properties of by property there exists a rule r p such that val j aa val tp j aa val t r j aa now if r does not generate an edge in e then in exactly the same way as in the base case we conclude that part ii of claim holds for vcc vaa consequently in the rest of this proof we assume that r generates at least one edge in let j and let gjp v e then e e e p by property and val t r j aa val j aa val t r j aa so there exists an edge e hvbb vaa i e such that val j bb val j bb furthermore since val j aa val t r j aa if vba were equal to vaa then path vaa vaa would be a cycle containing vaa which contradicts property hence we have vbb vaa and so path vbb vaa is simple now if vbb x holds then since r generates e and and val t r j aa val tp j aa by property we have val j bb e val tp j aa is part i of the claim holds for vcc vbb therefore in the rest of this proof we assume vbb x we now distinguish two cases j vbb is reachable from vaa in gp we next show that the set x vaa and node vbb satisfy properties and of the inductive hypothesis for for property note that since vbb is the direct predecessor of vaa in gjp each simple path in gjp that ends in vbb and does not involve vaa can be extended to the simple path vaa that ends in vaa thus we have max is a simple path in gjp ending in vbb and x vaa max is a simple path in gjp ending in vaa and x property for x vaa and ensures max is a simple path in gjp ending in vaa and x which in turn implies max is a simple path in gjp ending in vbb and x vaa property holds for x and vaa moreover there exists a path from vbb to vaa via the edge e so the property also holds for the set x vaa and node vbb property vbb x vaa and property val j bb val j bb have already been established for x vaa vbb and moreover properties and do not depend on x vaa and thus we can apply the inductive hypothesis and conclude that one of the following holds i val j cc val j bb holds for some node vcc x vaa and path in gjp that starts in vcc and ends 
in vbb ii r j cc r j cc holds for some node vcc if ii is true then case ii of claim holds since r j cc r j cc thus we next assume that case i holds and we show that then part i of claim holds for vcc x and vaa we first show that vcc vaa for contradiction assume vcc vaa then val j aa val j bb moreover since r generates e by property and property we have val j bb e val t r j aa val tp j aa val j aa consequently val j aa e val j aa moreover val j aa val j aa holds since tp is monotonic and holds since p is stable by these observations we have val j aa e val j aa that is e but then vaa is a cycle in gjp and so we have val j aa which contradicts property thus we have vcc x then from val j cc val j bb and val j bb e val t r j aa val tp j aa we conclude val j cc e val tp j aa as in the case for vcc vaa since e vaa part i of claim holds for vcc x and vaa vbb is not reachable from vaa in gjp then by property vbb is not reachable in gjp from any node in x otherwise vbb would also be reachable in gjp from vaa via some node in x thus no simple path in gjp ending in vbb involves vaa or a node in is each such path can be extended to a simple path ending in vaa now property ensures max is a path in gjp ending in vaa and x which implies max is a path in gjp ending in vbb thus property of the inductive hypothesis for holds for the set and node vbb moreover property holds vacuously for properties and have already been established for vbb and properties and hold by assumption thus we can apply the inductive hypothesis for to and vbb and so one of the following holds i val j cc val j bb for some node vcc and path in gjp that starts in vcc and ends in vbb ii r j cc r j cc for some node vcc in gjp clearly i is trivially false so ii holds but then case ii of claim holds since r j cc r j cc tn tn note that for each n and each simple path in gp p is bounded by the number of nodes in gp p which is in turn bounded by m therefore claim for m and x ensures that for each n such that tp p one of the following holds val cc for some node vcc that occurs in gp p in a positive weight cycle so the value of the fact correp sponding to vcc is set to in the next iteration of the main loop of the algorithm gp p tn contains at least one edge that does not occur in gp p or tn r tnp cc r cc for some node vcc in gp p p tn for each n the size of the set r tnp cc for each node vcc and the number of nodes in gp p are both bounded by tn m and the number of edges in gp p is bounded by thus the number of iterations of the main loop is bounded by m m where the first factor is given by claim the second factor comes from the first case above the third factor comes from second case and the fourth factor comes from the third case hence algorithm reaches a fixpoint after at most iterations of the main loop theorem for p a stable program d a dataset and a fact algorithm decides p d in time polynomial in kp dk and exponential in krk proof partial correctness follows by lemma while termination follows by lemma moreover the number of iterations of the main loop of algorithm is polynomially bounded in and hence kjk in each such iteration is bounded by kp consequently lines and of algorithm require time that is exponential in krk and polynomial in kp dk by lemma moreover lines and the check in line require time polynomial in kjk and hence in kp dk finally we argue that the check for cycles in line is feasible in time polynomial in kp dk let be the graph obtained from gjp by negating all weights then a path from to in gjp corresponds to the path from 
to in thus detecting whether a node occurs in gjp in at least one cycle reduces to detecting whether the node occurs in on a negative cycle on a cycle with a negative sum of weights which can be solved in polynomial time using for example a variant of the algorithm hougardy theorem for p a stable program and a fact checking p is e xp t in combined and pt in data complexity proof the e xp t ime lower bound in combined complexity and the pt ime lower bound in data complexity are inherited from plain datalog dantsin et the pt ime upper bound in data is immediate by theorem for the e xp t ime upper bound in combined complexity note that for p the of p over constants in p we have that kp k is exponentially bounded in kpk whereas krk krk hence by theorem running algorithm on p gives us an decision procedure for p proposition checking stability of a program p is undecidable proof we present a reduction from hilbert s tenth problem which is to determine whether given a polynomial p xn over variables xn equation p xn has integer solutions for each such polynomial p we can assume p qn k without loss of generality that p is of the form j cj xi j i for cj z and kj i now let pp be the program containing the following rule where b is a unary max predicate and an are distinct unary ordinary numeric predicates an xn p xn p xn b m m b m note that rule is since variables xn do not occurs in a limit atom in the rule we show that pp is stable if and only if p xn has no integer solutions assume that p xn has no integer solutions then for each grounding of at least one of the first two comparison atoms in the rule is not satisfied and so pp is trivially stable since for each j the value propagation graph gjp does not contain any edges assume that substitution exists such that p xn holds and let and be the following pseudointerpretations with the corresponding value propagation graphs xn b xn b then we clearly have v and e hvb vb i however e and e consequently program pp is not stable lemma for each j and each rule r if r is applicable to j and it contains a limit body atom b b n sb r such that b b j and variable n occurs in h r then opt r j proof consider an arbitrary j and rule r applicable to j that contains a limit body atom b b n sb r with b b j and n occurring in h r we consider the case when h r a a s for a a max predicate and variable n occurs in s with a negative coefficient the cases when a is a min predicate n occurs in s with a positive coefficient are analogous then term s has the form n t for some negative integer and term t not containing moreover r is so b is a min predicate since r is applicable to j conjunction c r j has a solution we next show that opt r j holds for which it suffices to argue that for each k z conjunction c r j has a solution such that let and let be the grounding of c r j such that n n and m m for each variable m since b b j we have j b b n moreover r is satisfies all comparison atoms in the body of r b is a min predicate and n n so also satisfies all comparison atoms in the body of hence is a solution to c r j then the following calculation implies the claim of this lemma n n lemma for each rule r with h r a t s each limit body atom b n sb r such that n occurs in s each of r such that n dom and all and such that v e e and is applicable to we have where e i proof consider arbitrary r b n and e as stated in the lemma we consider the case when a is a max and b is a min predicate the remaining cases are analogous let be the body of r and let be a of moreover e e due to v each 
solution to c is a solution so we next assume the claim is trivial if e holds by definition we then have opt and opt to c as well so therefore but then by lemma there exist z such that b and b since b is a min predicate e e opt r rule r is opt r and we have moreover by definition we have so variable n occurs negatively in s thus is of the form n where is a ground product evaluating to a negative integer and does not mention moreover opt so there exists a grounding of such that e and opt let be the substitution such that n and m m for m clearly satisfies all object and numeric atoms in then we have the following e opt e furthermore we have already established which implies the following e e e e e but then and clearly imply as required proposition each program is stable proof for p a program and a of p condition of definition follows by lemma and condition of definition follows by lemma proposition checking whether a program is can be accomplished in l og s pace proof let p be a program we can check whether p is by considering each rule r p independently note that the first type consistency condition is satisfied for every rule where all numeric terms are simplified as much as possible thus no of r with constants from p where all numeric terms are simplified as much as possible can violate the first condition of definition thus it suffices to check whether a of r with constants from p can violate the second or the third condition in both cases it suffices to consider at most one atom at a time a limit head atom a a s for the second condition or a comparison atom or for the third condition we consider at most one numeric term s at a time s for the third condition where s is of the n form ti mi and ti for i are terms constructed from integers variables not occurring in limit atoms and multiplication moreover for each such s we consider each variable m occurring in by assumption m occurs in s so we have mi m for some i for the second condition of definition we need to check that if the limit body atom b s mi introducing mi has the same different type as the head atom then term ti can only be grounded to positive negative integers or zero for the third condition we need to check that if s and the limit body atom b s mi introducing mi is min max then term ti can only be grounded to positive negative integers or and dually for the case s hence in either case it suffices to check whether term ti can be so that it evaluates to a positive integer a negative integer or zero we next discuss how this can be checked in logarithmic space let ti tki where each tji is an integer or a variable not occurring in a limit atom and assume without loss of generality that we want to check whether ti can be grounded to a positive integer this is the case if and only if one of the following holds all tji are integers whose product is positive the product of all integers in ti is positive and p contains a positive integer the product of all integers in ti is positive p contains a negative integer and the total number of variable occurrences in ti is even the product of all integers in ti is negative p contains a negative integer and the total number of variable occurrences in ti is odd or the product of all integers in ti is negative p contains both positive and negative integers and some variable tji has an odd number of occurrences in ti each of these conditions can be verified using a constant number of pointers into p and binary variables this clearly requires logarithmic space and it implies our claim
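As a concrete illustration of the logarithmic-space check described above, the sketch below encodes the five enumerated conditions for the "positive integer" case in Python. The function name and the input encoding (the integer factors of t_i, the total number of variable occurrences, whether some single variable occurs an odd number of times, and flags recording whether the program mentions a positive or a negative integer) are our own illustrative choices, not notation from the proof.

```python
def can_ground_to_positive(int_factors, var_occurrences,
                           some_var_odd_occurrences,
                           program_has_pos_int, program_has_neg_int):
    """Decide whether the product term t_i (integers and variables not
    occurring in limit atoms) can be grounded with integers mentioned in
    the program so that it evaluates to a positive integer, mirroring the
    five cases enumerated in the proof above."""
    prod = 1
    for c in int_factors:
        prod *= c
    if var_occurrences == 0:
        # all factors are integers: the product itself must be positive
        return prod > 0
    if prod > 0 and program_has_pos_int:
        return True          # ground every variable to the positive integer
    if prod > 0 and program_has_neg_int and var_occurrences % 2 == 0:
        return True          # an even number of negative groundings
    if prod < 0 and program_has_neg_int and var_occurrences % 2 == 1:
        return True          # an odd number of negative groundings
    if (prod < 0 and program_has_pos_int and program_has_neg_int
            and some_var_odd_occurrences):
        return True          # flip the sign via one odd-occurrence variable
    return False
```

The remaining cases (grounding to a negative integer or to zero) admit a similar case analysis, so each condition of the definition can be verified with a constant number of passes over the rule.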
| 2 |
may the necessity of scheduling in ori shmuel asaf cohen omer gurewitz department of communication system engineering university of the negev email shmuelor department of communication system engineering university of the negev email coasaf department of communication system engineering university of the negev email gurewitz and forward cf is a promising relaying scheme which instead of decoding single messages or information at the relay decodes linear combinations of the simultaneously transmitted messages the current literature includes several coding schemes and results on the degrees of freedom in cf yet for systems with a fixed number of transmitters and receivers it is unclear however how cf behaves at the limit of a large number of transmitters in this paper we investigate the performance of cf in that regime specifically we show that as the number of transmitters grows cf becomes degenerated in the sense that a relay prefers to decode only one strongest user instead of any other linear combination of the transmitted codewords treating the other users as noise moreover the tends to zero as well this makes scheduling necessary in order to maintain the superior abilities cf provides indeed under scheduling we show that linear combinations are chosen and the does not decay even without state information at the transmitters and without interference alignment i ntroduction compute and forward cf is a coding scheme which enables receivers to decode linear combinations of transmitted messages exploiting the broadcast nature of wireless relay networks cf utilizes the shared medium and the fact that a receiver which received multiple transmissions simultaneously can treat them as a superposition of signals and decode linear combinations of the received messages specifically together with the use of lattice coding the obtained signal after decoding can be considered as a linear combination of the transmitted messages this is due to an important characteristic of lattice codes every linear combination of codewords is a codeword itself however since the wireless channel suffers from fading the received signals are attenuated by real and not integers attenuations factors hence the received linear combination is noisy the receiver a relay then seeks a set of integer coefficients denoted by a vector a to be as close as to the true channel coefficients this problem was elegantly associated with diophantine approximation theory in and was compared to a similar problem that of finding a vector for the true channel one can define different criteria for the goodness of the approximation for example the minimum distance between the vectors elements coefficients vector between the receiver and the transmitters in addition the vector must be an integer valued vector due to the fact that it should represent the coefficients of an integer linear combination of codewords based on this theory if one wishes to find an integer vector a that is close in terms of to a real vector h then one must increase in order to have a small approximation error between them the increase in the norm value leads to a significant penalty in the achievable rate at the receiver and thus results in a tradeoff between the goodness of the approximation and the maximization of the rate the cf scheme was extended in many directions such as mimo cf linear receivers integer forcing integration with interference alignment scheduling and more all the mentioned works considered a general setting where the number of transmitters is a 
parameter for the system and all transmitters are active at all times that is the receiver is able to decode a linear combination of signals from a large number of transmitters as long as the transmitters comply with the achievable rates at the receiver and still promise to some extent an acceptable performance however in this work we show that the number of simultaneous transmitters is of great importance when the number of relays is fixed in fact this number can not be considered solely as a parameter but as a restriction since when it grows the receiver will prefer to decode only the strongest user over all possible linear combinations this will make the cf scheme degenerated in the sense that the relay chooses a vector a which is actually a unit vector a line in the identity matrix thus treating all other signals as noise in other words the linear combination chosen is trivial furthermore we show that as the number of transmitters grows the scheme s sumrate goes to zero as well thus one is forced to use users scheduling to maintain the superior abilities cf provide we conclude this paper with an optimistic view that user scheduling can improve the cf gain we believe that this can be done by suitable matching of linear combinations coding possibilities using simple round robin scheduling and results for cf in fixed size systems we lower bound the we thus show that even for a simple scheduling policy the system does not decay to zero the paper is organized as follows in section ii the system in cf each relay decodes a linear combination um of the original messages and forward it to the destination with enough linear combinations the destination is able to recover the desired original messages from all sources the main results in cf are the following fig compute and forward system model l transmitters communicate through a shared medium to m relays model is described in section iii we derive an analytical expression for the probability of choosing a unit vector by the relay as the number of users grows section iv depicts the behaviour of the for this model and in section v we present the advantage of using scheduling along with a simple scheduling algorithm ii s ystem model and k nown results consider a network where l transmitters are communicating to a single destination d via m relays the model is illustrated in figure all relays form a layer between the transmitters and the destination such that each transmitter can communicate with all the relays each transmitter draws a message with equal probability over a prime size finite field wl fkp l l where fp denotes the finite field with a set of p elements this message is then forwarded to the transmitter s encoder el fkp rn which maps messages over the finite field to codewords xl el wl each codeword is subject to a power constraint kxl np the message rate of each transmitter is defined as the length of the message measured in bits normalized by the number of channel uses that is r nk log p which is for each transmitter each transmitter then broadcasts it s codeword to the channel hence each relay m m observes a noisy linear combination of the transmitted signals through the channel ym l x hml xl zm m m where hml n are the real channel coefficients and z is an gaussian noise z n let hm hml t denote the vector of channel coefficients at relay we assume that each relay knows its own channel vector after receiving the noisy linear combination each relay selects a scale coefficient r an integer coefficients vector am aml t zl and attempts to 
l decode the lattice point aml xl from ym note that messages with different length can be allowed with zero padding to attain a message which will result in different rates for the transmitters theorem theorem for awgn networks with channel coefficient vectors hm rl and coefficients vector am zl the following computation rate region is achievable p r hm am max log p h a m m m where x max log x theorem theorem the computation rate given in theorem is uniquely maximized by choosing to be the mmse coefficient m se p htm am p khm which results in a computation rate region of p htm am r hm am kam p khm note that the above theorems are for real channels and the rate expressions for the complex channel are twice the above theorems and since the relay can decide which linear combination to decode the coefficients vector a an optimal choice will be one that maximizes the achievable rate that is p htm am aopt arg max log ka k m m p khm am remark the coefficients vector the coefficients vector a plays a significant role in the cf scheme it dictates which linear combination of the transmitted codewords the relay wishes to decode that is each element signifies the fact that the relay is interested in it s corresponding codeword if starting from a certain number of simultaneously transmitting users the coefficients vector the relay chooses is always or with high probability a unit vector this means that essentially we treat all other users as noise and loose the promised gain of cf the following lemma bounds the search domain for the maximization problem in lemma lemma for a given channel vector h the computation rate r hm am in theorem is zero if the coefficient vector a satisfies kam p khm the problem of finding the optimal a can be done by exhaustive search for small values of however as l grows the problem becomes prohibitively complex quickly in fact it becomes a special case of the lattice reduction problem which has been proved to be this can be seen if we write the maximization problem of as an equivalent minimization problem t aopt m arg min f am am gm am am where gm p khm i p hm htm gm can be regarded as the gram matrix of a certain lattice and am will be the shortest basis vector and the one which minimize f this problem is also known as the shortest lattice vector problem slv which has known approximation algorithms due to its hardness the most notable of them is the lll algorithm which has an exponential approximation factor which grows with the size of the dimension however for special lattices efficient algorithms exist in a polynomial complexity algorithm was introduced for the special case of finding the best coefficient vector in cf iii p robability of a u nit v ector in this section we examine the coefficient vector at a single relay hence we omit the index m in the expressions a l b l c l d l fig example for the magnitude of the elements of g for different dimensions different values of l for the graphs depict a single realization for each l and were interpolated for ease of visualization a the matrix g examining the matrix g one can notice that as l the number of transmitters grows the diagonal elements grow very fast relatively to the elements specifically each diagonal element is a random variable which is a minus a multiplication of two gaussian whereas the elements are only a multiplication of two gaussian of course as l grows the former has much higher expectation value compared to later examples of g are presented in figure for different dimensions it is clear that even for 
moderate number of transmitters the differences in values between the diagonal and elements are significant consider now the quadric form we wish to minimize any choice of a that is not a unit vector will add more than one element from the diagonal of g to it when l is large the elements have little effect on the function value compared to the diagonal elements therefore intuitively one would prefer to have as little as possible elements from the diagonal although the elements can reduce the function value this will happen if we choose a to be a unit vector in the reminder of this section we make this argument formal minimization of the quadratic form f note that the right term consists of all possible pairs i j such that i j a total of l elements we wish to understand when will a relay prefer a unit vector over any other vector a specifically since a is a function of the random channel h we will compute the probability of having a unit vector as the minimizer of f for a given a or alternatively the probability that a certain nontrivial a will minimize f compared to a unit vector we thus wish to find the probability p r f a f ei p r p at h p where ei is a unit vector of size l with at the entry and zero elsewhere and a is any integer valued vector that is not a unit vector note that refers to any integer vector a including the vectors in the search domain such that kak p p lemma note also that the right and left hand sides of the inequality in equation are dependent hence direct computation of this probability is not trivial still this probability can be evaluated exactly noting that the angle between a and h is what mainly affects it the details are in the theorem below the minimization function f a at ga can be written as the optimality of ei a certain vector a at ga l x p p l x x l x x hi aj hj ai p at h p hi hj ai aj theorem under the cf scheme the probability that a nontrivial vector a will be the coefficient vector aopt which maximize the achievable rate r h aopt minimize f aopt comparing with a unit vector ei is upper bounded by p r f a f ei a where ix a b is the cdf of the beta distribution with eters a and b and a kak note that a for any a which is not a unit vector pr in the context of this work the main consequence of theorem is the following corollary as the number of simultaneously transmitting users grows the probability that a a will be the maximizer for the achievable rate goes to zero specifically p r f a f ei l the proofs will be given after the following discussion discussion and simulation results corollary clarifies that for every p as the number of users grows the probability of having a vector a as the maximizer of the achievable rate is going to note that the assumption of l which arises naturally form this paper s regime along with the fact that kak grantees that l is positive figure depicts the probability in it s upper bound given in equation and simulation results from the analytic results as well as the simulations on the rate of decay one can deduce that even for relatively small values of simultaneously transmitting users l the relay will prefer to choose a unit vector also one can observe from the results and from the analytic bound that as the norm of a grows the rate of decay increases this faster decay reflects the increased penalty of approximating a real vector using an integer valued vector proofs the proof of theorem is based on the lemma below t a h lemma the distribution of kak which is the squared cosine of the angle between an integer vector a and 
a standard normal vector h both of dimension l is beta proof let q be an orthogonal rotation matrix such that qa where is to the basis vector that is kak define qh note that is a standard normal vector since e e qh and qiqt qqt i we have at h at h at qt qh t t t t kak khk a a h h a q qa ht qt qh qa t qh qa t qh t t qa qa qh qh t t ka k k kh k k kh k considering the above we have the equality of cos this expression can be represented as ww pl where w h is a and v h i is a independent in w this ratio has a beta a b distribution with a and b note that a and b correspond to the degrees of freedom of w and v simulation beta dist bound min dist bound where a is any integer vector that is not a unit vector and l log kak l fig the upper bounds given in solid lines dashed lines and simulation results dotted lines for not having a unit vector as the minimizer of f compared to various values of as a function of simultaneously transmitting users proof of theorem according to equation we have p r f a f ei p r p at h p pr at h p a p r at h at h pr t a h pr b a where a follows since we removed negative terms and b follows from lemma with a kak the bound on the probability given in theorem consists of a complicated analytic function a hence corollary includes a simplified bound which avoids the use of a yet keeps the nature of the result in theorem the proof of corollary is based on the following lemma t a h lemma the cdf of kak can be lower bounded by the cdf of the minimum of uniform random variables in proof we start by assuming that l is even where the case of odd l will be dealt with later from the at h has the same distribution as that is for any at h pr pr pr pr a l b a is true since a larger will yield lower probability b is due to the observation that khk can be represented as pl w hi are independent w where w and v exponential note that v is essentially a sum of independent pairs this ratio is distributed as the minimum of uniform random variables lemma this is since the ratio can be interpreted as the proportion of the waiting time for the first arrival to the arrival of a poisson process in case l is an odd can increase the term in the h proof by replacing it with resulting in a distribution l which is similar to the minimum of uniform random variables in the same manner proof of corollary p r f a f ei a at h pr l b kak l where a and b follows from lemmas and respectively l log kak and the following lemma shows a simple property of the optimal coefficients vector which shows that if the relay is interested in only one transmitter a unit vector as the optimal coefficients vector it will be the transmitter with the strongest channel lemma for any channel vector h of size l with m arg maxi the optimal coefficients vector aopt which maximize the rate r h a has to satisfy m arg maxi as well proof suppose that there exist h for which for all i and that the optimal coefficients vector aopt satisfies for all i considering the rate expression r h a we will show that by rearranging aopt a b be a vector which is identical higher rate can be attain let a to aopt except the two first entries which are switched aopt and aopt the values of the vectors are the same thus we have kb ak kaopt k and the only term affecting the rate is the scalar multiplication between aopt and we first note that the signs of hi and its corresponding optimal coefficient aopt has to be equal or different for all the i the case which there exist i such that sign hi sign aopt i and j such that sign hj sign aopt j could not be 
possible this is due to the fact that the optimal coefficients vector has to maximize the scalar multiplication therefore considering this property aopt aopt this means b contradicting aopt that the rate can be improved by choosing a optimality specifically we ll get that as long as the maximal value in any a is not in the same place as the maximal value in h we can always improve the rate the optimality of ei all possible vectors a corollary refers to the probability that a unit vector will minimize f for a fixed a next we wish to explore this probability for any possible a for the purpose of clarity gives an upper bound on the probability that a unit vector will not minimize f compered to a certain possible integer coefficients vectors with a certain where the probability of having an optimal vector which is not a unit vector will be the union of all probabilities for each a vector which satisfies p let us define p ei as the probability that a relay picked a unit vector as the coefficient vector and p ei as the probability which any other vector was chosen in a polynomial time algorithm for finding the optimal coefficients vector a was given the complexity result derives from the fact that the cardinality of the set of all a vectors denoted p as which are considered is upper bounded by d p e that is any vector which does not exist in this set has zero probability to be the one which maximize the rate we shell note this set here as a thus we wish to compute p ei p f a f ei a l where a a z a a ei note that the cardinality of a grows with the dimension of h with l and can be easily upper bounded as follows d p p e p e p theorem under the cf scheme the probability which any other coefficients vector a will be chosen to maximize the achievable rate r h aopt compared with a unit vector ei as the number of simultaneously transmitting users grows is zero that is lim p ei proof we have lim p ei lim lim x p f a f ei a p f a f ei a lim a in is a set of points which the average of any consecutive points is mapped to a different coefficients vector lim sum rate a lim p khk l x lim p l c h lim p l i d lim p b lim trans users fig the sum rate as give in for the case of relays as a function of the number of simultaneously transmitting users for different values of lim l where a is true since the term inside the sum is maximized with b is due to in c we multiplied and divide with l and eliminate the limit term which is multiplied by since it goes to zero d follows from the strong law of large numbers were the normalized sum converge with probability one to the expected value of which is one and lastly we define l log this result implies that the probability of having any non unit vector as the rate maximizer is decreasing exponentially to zero as the number of users grows iv c ompute and f orward s um ate in order that relay m will be able to decode a linear combination with coefficients vector am all messages rates which are involved in the linear combination must be within the computation rate region all the messages for which the corresponding entry in the coefficient vector is non zero that is rl min r hm am aml hence the sum rate of the system is defined as the sum of messages rates l x min r hm am m aml following the results from previous subsections we would like to show that as the number of users grows the system s decreases to zero as well that is without scheduling users not only each individual rate is negligible this is true for the as well this will strengthen the necessity to schedule 
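Before turning to the numerical results, the quantities involved in such a schedule can be made concrete. The Python/NumPy sketch below computes the computation rate of Theorem 2, R(h, a) = 0.5 * log2+( (||a||^2 - P*(h'a)^2 / (1 + P*||h||^2))^-1 ), restricts the search for the best integer coefficient vector to the ball ||a||^2 <= 1 + P*||h||^2 given by the lemma, and evaluates the per-slot sum rate under a round-robin schedule of k simultaneously active users. The brute-force search and the helper names are our own illustrative choices (the literature cited above gives a dedicated polynomial-time algorithm for this search); the sketch is only meant to reproduce the scheduling behaviour on toy instances.

```python
import itertools
import numpy as np

def computation_rate(h, a, P):
    """Computation rate of Theorem 2 (real channel):
    R(h, a) = 0.5 * log2+( (||a||^2 - P*(h.a)^2 / (1 + P*||h||^2))^-1 )."""
    denom = a @ a - P * (h @ a) ** 2 / (1.0 + P * (h @ h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))

def best_coefficient_vector(h, P):
    """Exhaustive search over integer vectors inside the norm bound
    ||a||^2 <= 1 + P*||h||^2 (only practical for a handful of users)."""
    bound = 1.0 + P * (h @ h)
    amax = int(np.floor(np.sqrt(bound)))
    best_a, best_r = None, -1.0
    for cand in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        a = np.array(cand)
        if not a.any() or a @ a > bound:
            continue
        r = computation_rate(h, a, P)
        if r > best_r:
            best_a, best_r = a, r
    return best_a, best_r

def slot_sum_rate(H, P):
    """Sum rate of one slot: each relay (row of H) picks its best integer
    combination, and every scheduled user gets the minimum rate over the
    relays whose combination involves it (zero if no relay involves it)."""
    picks = [best_coefficient_vector(h, P) for h in H]
    total = 0.0
    for l in range(H.shape[1]):
        rates = [r for (a, r) in picks if a[l] != 0]
        total += min(rates) if rates else 0.0
    return total

# Illustrative round-robin schedule: L users, M relays, k = M users per slot.
# rng = np.random.default_rng(0)
# L, M, k, P = 12, 3, 3, 10.0
# H_full = rng.standard_normal((M, L))
# slots = [list(range(s, min(s + k, L))) for s in range(0, L, k)]
# avg_sum_rate = np.mean([slot_sum_rate(H_full[:, s], P) for s in slots])
```

Letting k grow toward L in this toy setup reproduces the degeneration discussed above: the selected coefficient vectors collapse to unit vectors and the slot sum rate shrinks.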
users in cf theorem as l grows the sum rate of cf is tends to zero that is l x lim min r hm am m aml proof the proof outline is as follows the sum rate expression is divided into two parts which describe two scenarios the first is for the case where a relay chooses a unit vector as the coefficients vector and the second is for the case where any other vector is chosen the probabilities for that are p ei and p ei respectively then we show that each part goes to zero by upper bounding the corresponding expressions the complete proof is given in appendix simulations for the for different values of p can be found in figure it is obvious that for large l the sumrate decreases hence for a fixed number of relays there is no use in scheduling a large number of users as cf degenerates to choosing unit vectors and treating other users as noise however the simulations suggests a peak at a small number of transmitters we explore this in the next section s cheduling in c ompute and f orward theorem and suggest that a restriction on the number of simultaneously transmitting users should be made that is in order to apply the cf scheme for systems with a large number of sources scheduling a smaller number of users should take place the most simple scheduling scheme is to schedule users in a round robin rr manner where in each transmission only k users may transmit simultaneously the value of k can be optimized yet as a thump rule one can schedule m users similar to the number of relays is each transmission to obtain a which is not going to zero figure depicts such a scenario in fact even higher sum rates can be obtained if the number of scheduled users is higher than the number of relays the number for which the maximal in figure is achieved still it is clearly seen that it is not zero for m scheduled sources compared to the zero when all l users transmit and the relay use cf in fact one can use existing results for the cf for the case of equal number of sources and relays and describe the in each transmission under such a schedule sum rate sum rate no scheduling scheduling upper best eq lower trans users power dbd fig simulation results for the average per transmission here the number of relays is and scheduling was performed in a round robin manner where in every phase sources were scheduled among the transmitting users fig simulation results for the average compared with the upper and lower bounds given in and respectively as a function of the transmission power for m and m according to the sum rate for m sources and m relays is upper bounded by any other linear combination of the transmitted signals thus cf becomes degenerate and it is more preferable to apply scheduling for much smaller group size we show that even with a simple scheduling policy the does not goes to zero and would like as future work to proceed and explore scheduling policies which exploit the decoding procedure of cf m x min r hm am m aml log p log log p for m and p a very coarse lower bound can be attained if the relays are forced to choose their coefficients vectors such that each relay i chooses ai ei that is an interference channel where each relay i considers the interferences from all other sources j j i as noise even with this one has m x min m aml m x proof the probabilities p ei and p ei define a partition on the channel vectors a relays sees specifically we define he h rl arg min f a ei r hm am min m aml p khm p khm l he h r arg min f a ei which is not zero simulation results for the bounds and the optimal cf coefficient 
vectors are presented in figure from the aforesaid one can conclude that scheduling m users for transmission is worthwhile with respect to the alternative of permitting all users transmit simultaneously of course the scheduling policy has great impact on the performance which can be increased if for example one schedules groups whose channel vectors are more suitable for cf that is with probability p ei a relay sees a channel vector h he and with probability p ei a relay sees a channel vector h he we note he and he as a channel vectors which belongs to he and he respectively under the above definitions the sum rate can be written as follows l x min r hm am m aml l x vi c onclusion and future work this work gives evidence for the necessity of user scheduling under the cf scheme for large number of simultaneously transmitting users we proved with probability which goes to one that in this regime the optimal choice of decoding at the relays is to decode the user with best channel instead of m x p p log hj a ppendix a p roof for t heorem p ei min r hem ei m eml p ei min r hem am m aml we treat the two terms above separately where the second term represents the sum rate for the case which the optimal coefficients vectors may be any integer vector excluding the unit vector ei and the first term is for the case that the optimal coefficients vector is ei we will show that both terms goes to zero while starting with the second term l x returning to the expectation in we have z e e khm k p ei m aml l x a z min m aml r hem am e lim p rl for all using the markov and jensen s inequalities we have e p rl e rl e e p ei l log p khm k l p ei p e khem which means in words all possible squared norm values which belong to all vectors he we define then p p henorm as the probability to belong to henorm that is z x dx p henorm x x p x x dx p ei p ei z dx x dx l p ei where satisfies x dx p and a is due to the fact that p p ei since it may happen that two vectors he and he would have the same squared norm value applying the expectation s upper bound in we have l p ei p e khem l pl p ei p ei s a l pl lp lp ei p ei p ei b l p l p l where a is due the bound log b following directly from theorem and l log considering the above as l grows the second term of is going to zero that is lim therefore we are interested in analyzing the expectation of the squared norm values belonging to all channel vectors he remember that without any constraints the channel vector h is a gaussian random vector which it s squared norm follows the distribution we shell note as x a single squared norm value can belong to a several different gaussian random vectors hence we define henorm as the set of squared norm values which belongs to he formally henorm he p z dx p ei l max r hem am m p ei l max m p khem log kam p kam khem hem t am p ei l max p khem m max p ei l p khem m e e define rl p ei l log p khm k we would like e p to show that rl that is x henorm p ei min r hem am x l x p ei min r hem am m aml p l l lim e for all thus we are left with the first term in lim l x p ei min r hm ei m eil l x min r hm ei m eil l x p khm lim min log m eil p khm m x b p khm lim p khm m x p khm log lim p khm i a where in a we set the unit vector ei in the rate expression r hm am the upper bound b is for the best case scenario for which each relay has different unit vector ei finally it is clear that as l grows for each realization of hm the argument of the log is going to r eferences nazer and gastpar harnessing interference through structured codes ieee 
transactions on information theory vol no pp niesen and whiting the degrees of freedom of ieee transactions on information theory vol no pp zhan nazer gastpar and erez mimo in ieee international symposium on information theory ieee pp zhan nazer erez and gastpar linear receivers ieee transactions on information theory vol no pp sakzad harshan and viterbo mimo linear receivers based on lattice reduction wireless communications ieee transactions on vol no pp he feng ionita and nazer collision scheduling for cellular networks in information theory isit ieee international symposium on ieee pp wei and chen network coding design over channels ieee transactions on wireless communications vol no pp hong and caire strategies for cooperative distributed antenna systems information theory ieee transactions on vol no pp sahraei and gastpar finding the best equation in communication control and computing allerton annual allerton conference on ieee pp dadush peikert and vempala enumerative lattice algorithms in any norm via coverings in foundations of computer science focs ieee annual symposium on ieee pp alekhnovich khot kindler and vishnoi hardness of approximating the closest vector problem with in annual ieee symposium on foundations of computer science focs ieee pp lenstra lenstra and factoring polynomials with rational coefficients mathematische annalen vol no pp gama and nguyen finding short lattice vectors within mordell s inequality in proceedings of the fortieth annual acm symposium on theory of computing acm pp conway and j sloane sphere packings lattices and groups springer science business media vol jagannathan borst whiting and modiano efficient scheduling of systems in modeling and optimization in mobile ad hoc and wireless networks international symposium on ieee pp
| 7 |
image with fast upscaling technique longguang wang zaiping lin xinpu deng wei an image misr aims to fuse information in lr image sequence to compose a hr one which is applied extensively in many areas recently different with single image sisr transitions between multiple frames introduce additional information attaching more significance to fusion operator to alleviate the of misr for approaches the inevitable projection of reconstruction errors from lr space to hr space is commonly tackled by an interpolation operator however crude interpolation may not fit the natural image and generate annoying blurring artifacts especially after fusion operator in this paper we propose an fast upscaling technique to replace the interpolation operator design upscaling filters in lr space for periodic respectively and shuffle the filter results to derive the final reconstruction errors in hr space the proposed fast upscaling technique not only reduce the computational complexity of the upscaling operation by utilizing shuffling operation to avoid complex operation in hr space but also realize superior performance with fewer blurring artifacts extensive experimental results demonstrate the effectiveness and efficiency of the proposed technique whilst combining the proposed technique with bilateral total variation btv regularization the misr approach outperforms methods index upscaling technique bilateral total variation shuffling operation i introduction ue to the limited technical and manufacturing level the resolution of image may not be satisfied in video surveillance medical imaging aerospace and many other fields where high resolution hr images are commonly required and desired for distinct image details with device ccd and cmos image sensors developing rapidly in recent decades the increasing demand in image resolution still can not be satisfied leading to attempts to steer clear of the sensor issues but utilize computational imaging to improve the spatial resolution namely sr serving as a typical inverse problem sr aims to recover missing image details during image degradation process which is underdetermined requiring additional information to alleviate the for single image sisr lack of observation information leads to attempts to exploit additional information to learn how natural images are and many approaches have been proposed for image misr transitions between multiple observations provide d wang is with the college of electronic science and engineering national university of defense technology changsha china cn lin deng and an are also with the college of electronic science and engineering national university of defense technology changsha china ing available information therefore approach is mainly concentrated on to derive the high resolution hr image through maintaining global consistency which is intuitive and natural concerning misr approaches extensive works have been put forward focusing on the design of regularization to realize favorable results tikhonov regularization sr method as a representative method introduces smoothness constraints to suppress the noise but results in the loss of detailed information in edge regions to realize edge preserving total variation tv operator is introduced as a regularization term however leads to the deterioration of smoothness in local flat region motivated by bilateral filter farsiu et al proposed the bilateral total variation btv operator measured by norm which integrates tv with bilateral filter and realizes superior performance and robustness 
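Because bilateral total variation is the regularizer adopted later in this paper, it helps to recall its usual form from Farsiu et al.: BTV(X) is the sum, over shifts (l, m) with -P <= l <= P, 0 <= m <= P and l + m >= 0, of alpha^(|l|+|m|) * ||X - S_x^l S_y^m X||_1, where S_x^l and S_y^m shift the image by l and m pixels. The NumPy sketch below implements this term and its sign-based subgradient under that standard definition; the circular shift via np.roll and the default window size P and decay alpha are our own illustrative choices, not parameters fixed by this paper.

```python
import numpy as np

def btv(X, P=2, alpha=0.7):
    """Bilateral total variation: weighted l1 differences between the image
    and its shifted copies inside a small window."""
    val = 0.0
    for l in range(-P, P + 1):
        for m in range(0, P + 1):
            # l + m >= 0 as in the usual BTV definition (avoids mirrored duplicates)
            if l + m < 0 or (l == 0 and m == 0):
                continue
            shifted = np.roll(np.roll(X, m, axis=0), l, axis=1)
            val += alpha ** (abs(l) + abs(m)) * np.abs(X - shifted).sum()
    return val

def btv_subgradient(X, P=2, alpha=0.7):
    """Sign-based subgradient of the BTV term, the quantity combined with the
    reconstruction-error gradient in a steepest-descent SR iteration."""
    g = np.zeros_like(X, dtype=float)
    for l in range(-P, P + 1):
        for m in range(0, P + 1):
            if l + m < 0 or (l == 0 and m == 0):
                continue
            shifted = np.roll(np.roll(X, m, axis=0), l, axis=1)
            s = np.sign(X - shifted)
            # the transpose of a circular shift is the shift in the opposite direction
            s_back = np.roll(np.roll(s, -m, axis=0), -l, axis=1)
            g += alpha ** (abs(l) + abs(m)) * (s - s_back)
    return g
```

In a steepest-descent update, this subgradient is scaled by the regularization weight and combined with the projected reconstruction errors to modify the current high-resolution estimate.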
due to the performance and simplicity of btv further improvement has attracted extensive investigation li et al proposed the locally adaptive bilateral total variation labtv operator measured by neighborhood homogeneous measurement realizing locally adaptive regularization among these approaches to maintain global consistency with multiple observations reconstruction errors are commonly integrated in the cost function to penalize the discrepancy between reconstructed hr image and lr observations within the iterative sr process inevitable projection of reconstruction error from lr space to hr space is usually tackled by an interpolation operator for simplicity however this crude operation may introduce additional errors and lead to deteriorated convergence and performance especially after fusion operation of misr in this paper we propose an fast upscaling technique to replace the interpolation operation in the sr framework firstly we unfold the degradation model to analyze underlying contributions of periodic to reconstruction error in lr space and design upscaling filters correspondingly secondly the filter results utilizing designed upscaling filters are shuffled to derive the reconstruction errors in hr space finally the reconstruction errors are cooperated with regularization term to modify the hr image iteratively until convergence extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed upscaling technique besides combining the proposed technique with btv regularization the misr approach realizes performance the rest of the paper is organized as follows section ii mainly formulates the problem of image section iii presents the proposed upscaling technique in detail section iv performs extensive experiments compared with other approaches and the conclusions are drawn in section original image geometric wrap fig blurring downsample add noise sketch of degradation model ii image sr problem formulation degradation model as the inverse process of image degradation sr reconstruction is tightly dependent on the degradation model with many degrading factors existing like atmospheric turbulence optical blurring relative motion and sampling process the degradation model of lr images can be formulated as where x represent lr image and hr image respectively serve as decimation matrix blurring operator and geometric warp matrix of respectively and is the additional gaussian noise in note that although complex motions may be common in real sequences which can not be represented by a simple parametric form and many works tend to address this problem global translational displacements between multiple frames serving as a fundamental issue is still the focus of this paper generally assuming all lr images are generated under the same condition we can derive the following model where and represent same decimation matrix and blurring operator respectively in all lr images the degradation model is further illustrated in fig sr process in bayesian framework sr reconstruction is equivalent to the estimation of hr image with given lr images where maximum a posteriori map estimator is extensively utilized as to solve the probabilistic maximization problem equivalent minimization of reconstruction errors can be derived as as insufficient information given in lr image sequence reconstructing the original hr image is an underdetermined problem to solve the problem regularization is commonly introduced as priori knowledge to obtain a stable solution and can be rewritten 
as where is the regularization term of hr image serves as a regularization parameter weighting the reconstruction errors against the regularization cost assuming decimation matrix blurring operator and geometric warp matrix are already known the minimization problem can be solved utilizing steepest descent approach as where are estimators of hr image in and iteration respectively is the learning rate representing the pace to approach the optimal during iterations in this paper the derivation of displacements and blur kernel is not under consideration we assume the blur kernel is already known and utilize optical flow method to estimate the underlying displacements iii image sr with fast upscaling technique in this section we first present the proposed fast upscaling technique introduce our motivation and its formulation in detail before theoretical analysis on computational complexity and convergence then we integrate the proposed upscaling technique with btv regularization to construct the overall misr framework upscaling technique motivation as we can see from projecting reconstruction errors from lr space into hr space is required in the inference where interpolation operator commonly plays a main role as the upscaling operator and then deblurring operator and inverse translation operator are performed in hr space lacking in theoretical basis crude interpolation may introduce additional errors leading to blurring artifacts therefore it serves as a fundamental operator and requires small stepsize and enough iterations to alleviate the deterioration which is and adds computational complexity in shi et al an efficient convolutional neural network is proposed where an array of upscaling filters are utilized to corporate with a shuffling operator to upscale the final lr feature maps in to hr output which is located at the very end of the network as demonstrates increasing the resolution of lr image before image enhancement increase the computational complexity besides the commonly used interpolation methods do not bring additional information to solve the reconstruction problem we are inspired to unfold the degradation model to analyze underlying contributions of periodic to reconstruction error in lr space and design similar array of upscaling filters in this paper we propose an upscaling technique to perform fast and efficient upscaling operation serving as a direct bridge between reconstruction errors in lr space and hr space formulation further analyze the degradation model shown in from the perspective of image assuming the blurring operator is limited in a region is odd for the symmetry of blurring kernel in hr space and the upscaling factor is determined original d decimal geometric wrap after translation blurring after blurring downsample after decimation fig degradation process with respect to different as namely the decimation operator is limited in a region in hr space concerning translational displacements between lr images only displacements are considered for displacements do not bring additional information without loss of generality only positive displacements are taken into consideration so that the geometric wrap operator can be limited in a region to illustrate the interaction between the concatenation of operators here we set to be with to be and derive the degradation model from the perspective of image as shown in fig with the degradation process unfolded as shown in fig it can been seen that different ranges of influence in lr space correspond with different which 
inspires us to structure upscaling filters concerning periodic utilizing the differences between influence ranges as parameters including displacements blurring kernel and upscaling factor are all determined the overall degradation process and the underlying contributions of periodic to lr space can be derived remembering the projection of reconstruction errors from lr space to hr space in we structure upscaling filters utilizing the underlying contributions of periodic to realize upscaling of reconstruction errors in lr space within the probabilistic framework the upscaling operator can be equivalent to an optimal estimation problem as where are reconstruction error of pixel in hr space and pixel in lr space respectively serves as the influence range in lr space of pixel in hr space and is the number of pixels in hr space further assuming serves as the contribution of pixel in hr space to pixel in lr space minimization problem equivalent to can be derived utilizing greedy strategy where is the number of pixels within as the influence ranges of different can be limited in a lr region in the case of fig here we utilize norm for simplicity and the solution can be computed as considering influence range and corresponding contributions are both dependent namely hr pixels with same share identical influence range and contribution distribution we intuitively separate the upscaling operator with respect to different and rewrite for identical in a convolution form due to the global consistent process where represents reconstruction error map for in hr space represents reconstruction error map in lr space namely and serves as the contribution distribution concerning regarding the norm of as a normalization constant we integrate it into to derive normalized contribution distribution as filter masks in this way the upscaling operator can be implemented by convolution operator which realizes favorable efficiency as reconstruction errors with respect to ranged in hr space derived separately a shuffling operator is introduced to rearrange the elements in separate error maps to a complete error map in hr space as shown in fig utilizing the proposed upscaling technique we evade the interpolation operator which may introduce additional errors design filter masks according to contribution distribution concerning ranged and process the error map in lr space separately finally a shuffling operator is implemented to derive the final error map in hr space as all processes in lr p p shuffling p p reconstruction error map h reconstruction error map e reconstruction error map h fig upscaling technique space are namely all processing results can be mapped directly to corresponding hr space without intermediate operations our upscaling technique can realize superior efficiency and effectiveness which is demonstrated in the following analysis and section iv theoretical analysis in this section theoretical analysis with respect to computational complexity and convergence are carried out respectively we attempt to illustrate the superiority of the proposed upscaling technique theoretically computational complexity for conventional upscaling technique the reconstruction errors in lr space are commonly projected into hr space by an interpolation operator first and then processed by deblurring operator and inverse translation operator in this way deblurring operator and inverse translation operator are both performed in hr space which adds computational complexity although the complexities of upscaling technique and the 
proposed upscaling technique are both of order where is the number of pixels in hr space the computation amounts differ greatly assuming are limited in and region of hr space respectively for upscaling technique bicubic interpolation is commonly utilized performing as weighted sum of neighboring pixels in lr space afterwards deblurring operator and inverse translation operator are performed as weighted sum of neighboring and pixels respectively in hr space for our proposed upscaling technique the upscaling operator is performed as weighted sum of neighboring pixels in lr space which remarkably scent and fixed stepsize are commonly utilized for upscaling technique which requires small stepsize and enough iterations to approach the optimal as upscaling technique introduces additional errors the deviation of descent direction makes the convergence process greatly time consuming while methods typically tend to converge in fewer iterations the computation of hessian matrix in each iteration is required leading to expensive computational cost as we analyze our upscaling technique theoretically it can be regarded as a variation and simplification of method which can realize superior convergence remembering the minimization problem in as we unfold the degradation model it can be rewritten as where represent vectorized hr image and lr image respectively serves as a dictionary arranged in lexicographic order which consists of atoms as analyzed before different correspond to different influence ranges and contribution distributions we utilize this characteristic to construct overcomplete dictionary as shown in fig for newton method the inference of can be written as as dictionary is hard to manipulate newton method can not be directly utilized in general considering is commonly a operator and the computation of second derivation is relatively difficult besides the regularization parameter is usually small we simplify and separate term out as reduces the computational complexity especially with er upscaling factor unfold the symmetric matrix and we can derive convergence for misr approaches steepest as we can see from fig the atom is highly sparse and as the inverse operation of is hard to manipulate regional namely equals to zero except we only take diagonal elements into consideration namely where represents the neighborhood of corresponding regard as a diagonal matrix by ignoring other hr pixel in hr space taking this into consideration entries and rewrite as elements in can be rearranged through placing relative atoms closer and then we can derive an mate diagonal matrix namely most entries in rewrite in an way and we can equal to zeros except diagonal ones and some other ones derive x yk yk x dictionary akt x yk yk x fig procedure of dictionary if we push atom backwards into the corresponding lr image can rewritten in a convolution form where represents the contribution distribution map corresponding to atom now we can see performs identical to concerning reconstruction error in lr space illustrating the proposed upscaling technique performs as a variation and simplification of newton method which can realize superior convergence as is a atom performs as a fication constant no less than considering is commonly small the magnification effect on regularization term can be ignored and we can derive technique technique mse lr interpolation x l l hr x strategy however the descent direction utilizing our upscaling technique is relatively more accurate therefore the convergence can be faster and more 
stable to further demonstrate the superior convergence we utilize tikhonov regularization without loss of generality and compare the convergence process of with the upscaling technique the comparison of convergence process is shown in fig btv regularization lr yn lr error map hr error map dhfk x yk g considering our upscaling technique performs as an approximate newton method where the simplifications may introduce additional errors therefore we also applied a similar learning rate in as for stable convergence upscaling technique with btv regularization to construct the overall misr framework due to the performance and simplicity of btv it has become one of the most commonly applied regularization in sr process therefore we utilize btv in our misr framework to corporate with the proposed upscaling technique the overall framework is illustrated in fig and further summarized in algorithm fig overall misr framework algorithm misr utilizing upscaling technique input lr images blurring kernel upscaling factor initialize select target image for example utilize bicubic interpolation to derive initial hr image estimate translational displacements between target image and other lr images loop until or compute error map in lr space respectively perform upscaling technique to derive error map in hr space compute btv regularization and its tion update to derive according to output reconstructed hr image iv experimental results iteration fig comparison of convergence process as we can see form fig our technique converges within around iterations while technique requires more than iterations to converge demonstrating superior convergence of our upscaling technique misr framework in this section we integrate the proposed in this section extensive experimental results are presented to demonstrate the effectiveness and efficiency of the proposed upscaling technique we first perform experiments to demonstrate the effectiveness of our upscaling technique through equipping it to various misr methods and then the proposed misr framework is compared with other algorithms as described in the degradation model the degraded lr images are generated from an hr image through parallel translations blurring downsampling and addition of noises in the experiments the translational displacements are randomly set with vertical and horizontal shifts randomly sampled from a uniform distribution in the blurring a tikhonov b tv c btv a b c d labtv d fig comparison of methods utilizing technique with baselines on image baby a tikhonov b tv c btv a b c d labtv d fig comparison of methods utilizing technique with baselines on image butterfly operator is realized utilizing a gaussian kernel with standard deviation after geometric wrapping and blurring operation the images are then downsampled by a factor finally gaussian noise with standard deviation is added in our scenario we use lr images to reconstruct an hr image for misr approaches select the first one as target image without loss of generality and suppose the blurring kernel is already given as human vision is more sensitive to brightness changes all the sr methods are mented only in brightness channel y with color channels uv upscaled by bicubic interpolation for color images all the experiments are coded in matlab and running on a workstation with septuple core ghz cpus and gb memory for quantitative analysis and comparison of reconstruction performance ratio psnr and mean structure similarity ssim are utilized as metrics which are defined as senting the dynamics of a 
pixel value and are generally set to be and respectively where are mean value of image and respectively are standard variance of and respectively are two stabilizing constants with a evaluation of the proposed upscaling technique a original e btv to validate the effectiveness and efficiency of the pro b bicubic c kim d yang f labtv g miscsr j proposed fig reconstruction results for image bridge by ranged methods posed upscaling technique we first select four representative misr method consisting of tikhonov tv btv and labtv method as baseline methods apply our technique to replace technique in the sr pipelines note as for example and conduct experiments on dataset to compare the performance correspondingly visual comparison is exhibited in fig and with quantitative results are presented in table i as we can see from fig and compared with corresponding baseline methods methods equipped with our technique generate sharper edges and fine details effectively alleviate the blurring effects with fewer artifacts and realize superior visual quality which demonstrates the effectiveness of our technique from the quantitative results shown in table i we can further see that the proposed technique remarkably improves the reconstruction performance with respect to psnr and ssim in all the images of whilst accelerating the misr process the psnr values have been improved by around db in average while ssim values also increased by around concerning computational complexity the running time of equipped methods is shortened by around in average with practicability greatly enhanced table i comparison of psnr ssim and runnding time the mean perfromance of experiments is presented with the performance improvement equipped with the proposed technique shown in brackets red and bold metric tikhonov tv btv labtv psnr baby ssim time psnr bird ssim time psnr butterfly ssim time psnr head ssim time psnr woman ssim time psnr average ssim time b comparison with methods to further demonstrate the effectiveness and efficiency of the proposed misr framework seven methods are selected to compare with our work as bicubic interpolation serves as the simplest sr approach it is selected as a baseline method serving as most cited method in the field of misr for the performance and simplicity farsiu s btv method is selected besides with its variation li s labtv method as the popularity of approaches increases we also select kato s sparse coding method for misr denote as miscsr as one method in this field in addition kim s and yang s methods considered as sisr methods are also introduced in the comparison for fair comparisons the source codes of kim s and yang s methods released in the authors homepages are a original e btv directly implemented in our experiments as no available b bicubic c kim d yang f labtv g miscsr j proposed fig reconstruction results for image commic by ranged methods a original e btv b bicubic c kim f labtv g miscsr d yang j proposed fig reconstruction results for image foreman by ranged methods codes for other methods we implement them according to the instructions in and the performance may differ from the original note that kato s miscsr method is only utilized for comparison with upscaling factor for instructions in only presented implementation details and parameter settings under this condition extensive experiments are conducted on dataset and the reconstruction results are exhibited in figs with quantitative results presented in table iii in our scenario for bicubic method and sisr methods only the 
target lr image first lr image is utilized for reconstruction and for misr methods same registration procedure is adopted note that the blurring kernel is posed to be given for all sr approaches for its derivation is not the focus of this paper other detailed parameter settings for our misr framework are summarized in table ii table ii parameter settings for the proposed upscaling technique parameters values tolerance threshold maximum iteration regularization parameter in step size in from the perspective of visual quality for sisr methods kim s method serving as the superior approach has already recovered the major structures of the scene however it a original e btv tends to oversmooth fine details for misr methods b bicubic c kim d yang f labtv g miscsr j proposed fig reconstruction results for image girl by ranged methods blurring artifacts in btv and labtv methods are commonly noticeable especially within edge and texture regions although the sparse representation alleviates the blurring effect for miscsr some ragged edges are still visible by comparison the proposed misr approach produces sharper and clearer images with fine details and fewer artifacts from the quantitative results exhibited in table iii we can further see that our approach outperforms other methods in all the images of with respect to reconstruction performance and efficiency compared with kim s method serving as the superior sisr method the psnr value of our approach is improved by db in average with processing efficiency times faster compared with miscsr method known as the misr method our approach improves the psnr value by db and runs nearly times faster comparing our approach with btv method the superiority of the proposed upscaling technique can be further extensively validated by the higher psnr values with shorter running time leave out the bicubic method we can see our approach performs as the most effective and efficient one among comparing methods with practical applications table iii comparison of psnr and running time for range methods for mimr methods the mean perfromance of experiments is presented with standard derivation shown in brackets the best results are shown in red bold baboon barbara bridge coastguard comic face flowers foreman lenna man monarch pepper ppt zebra average bicubic psnr time kim psnr time psnr yang time btv psnr to further demonstrate the effectiveness of the proposed misr approach experiments are conducted on dataset concerning ranged upscaling factors and noise intensities with results presented in table iv and table as shown in table iv with upscaling factor increased the effects of multiple observations are gradually erased due to the growing leading to the performances of misr approaches deteriorating severely and even inferior to sisr approaches in several conditions although our approach also undergoes same deterioration time labtv psnr time miscsr psnr time proposed psnr time it still performs superiorly with highest psnr values in most conditions from table v we can further see that our approach performs strong robustness and tolerance to noises as all the sr methods are sensitive to noises and deteriorate with noise intensity increased the proposed approach still outperforms in average even under the condition of noise intensity the proposed approach performs db and db compared with kim s method and labtv method respectively a original e btv b bicubic c kim d yang f labtv g miscsr j proposed fig reconstruction results for image lenna by ranged methods table iv 
magnification and performance in terms of psnr on dataset for mimr methods the mean perfromance of experiments is presented with standard derivation shown in brackets the best results are shown in red bold baby bird butterfly head woman average baby bird butterfly head woman average baby bird butterfly head woman average upscaling factor bicubic kim yang btv labtv miscsr proposed table v noise intensity and performance in terms of psnr on dataset for mimr methods the mean perfromance of experiments is presented with standard derivation shown in brackets the best results are shown in red bold baby bird butterfly head woman average baby bird butterfly head woman average baby bird butterfly head woman average noise intensity bicubic kim yang btv labtv miscsr proposed conclusions in this paper we propose an fast upscaling technique to replace the interpolation operator for misr approaches as we unfold the degradation model from the perspective of image we find the influence ranges and underlying contributions of periodic vary periodically which inspires us to design upscaling filters for periodic respectively and utilize a shuffling operator to realize effective fusion operation equipped with our upscaling technique remarkable improvements are realized with respect to reconstruction performance and efficiency for methods besides the cooperation of our technique and btv regularization outperforms other methods demonstrated by extensive experiments references chandran fookes lin sridharan investigation into optical flow for surveillance applications aprs workshop on digital image computing vol no pp zhang zhang shen and li a reconstruction algorithm for surveillance images signal processing vol no pp greenspan oz kiryati peled in mri in proc ieee isbi pp shi caballero ledig zhuang bai bhatia marvao dawes oregan and rueckert cardiac image with global correspondence using patchmatch in proc int conf medical image computing and computer assisted intervention miccai pp trinh luong dibos rocchisani pham and nguyen novel method for and denoising of medical images ieee trans image vol no pp apr tatem lewis atkinson nixon target identification from remotely sensed images using a hopfield neural network ieee trans geosci rem vol no pp apr thornton atkinson and holland mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using international journal of remote sensing vol no pp makantasis karantzalos doulamis doulamis deep supervised learning for hyperspectral data classification through convolutional neural networks in proc ieee igarss jul pp tatem lewis atkinson nixon land cover pattern prediction using a hopfield neural network remote sens vol no pp goto fukuoka nagashima hirano and sakurai system for in proc int conf pattern recognition pp zhang gao tao and li dictionary for single image in proc cvpr providence ri jun pp yang and yang fast direct by simple functions in proc iccv pp timofte de and gool anchored neighborhood regression for fast in proc iccv pp dong loy he and tang learning a deep convolutional network for image in proc eccv pp yang wang zhang and wang neighbor embedding for image super resolution with sparse tensor ieee trans image vol no pp jul gu et al convolutional sparse coding for image in proc iccv pp nguyen milanfar golub a computationally efficient superresolution image reconstruction algorithm ieee trans image vol no pp zhang lam wu wong application of tikhonov regularization to reconstruction of brain mri image lecture notes in computer 
science vol pp ng shen lam zhang a total variation regularization based reconstruction algorithm for digital video eurasip adv signal process vol no pp babacan molina katsaggelos parameter estimation in tv image restoration using variational distribution approximation ieee trans image vol no pp apr yuan zhang shen multiframe employing a spatially weighted total variation model ieee syst video vol no pp shen zhang huang li a map approach for joint motion estimation segmentation and super resolution ieee trans image vol no pp molina mateos katsaggelos and vega bayesian multichannel image restoration using compound random fields ieee trans image vol no pp humblot superresolution using hidden markov model and bayesian detection estimation framework eurasip appl signal vol article id pp farsiu robinson elad milanfar fast and robust superresolution ieee trans image vol no pp purkait and chanda super resolution image reconstruction through bregman iteration using morphologic regularization ieee trans image vol no pp shi caballero huszar et al single image and video using an efficient convolutional neural network in proc ieee cvpr pp protter elad takeda and milanfar generalizing the to reconstruction ieee trans image vol no pp mar takeda milanfar protter and elad superresolution without explicit subpixel motion estimation ieee trans image vol no pp liu sun on bayesian adaptive video super resolution ieee trans image vol no pp li hu gao ning a image method signal processing vol no kato hino murata image super resolution based on sparse coding neural networks vol pp yang wright huang image as sparse representation of raw image patches ieee trans image vol no pp kim and kwon using sparse regression and natural image prior ieee trans pattern anal mach vol no pp
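The comparisons reported above are all in terms of PSNR and SSIM. As a point of reference only, the following is a minimal sketch of how these two metrics can be computed for 8-bit grayscale images, assuming the conventional stabilizing constants C1 = (0.01*255)^2 and C2 = (0.03*255)^2 and a single global SSIM window; this is an illustration, not the evaluation code used to produce the tables above.

```python
# Illustrative sketch (not the authors' evaluation code): PSNR and a single-window SSIM
# for 8-bit grayscale images, using the conventional stabilizing constants.
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a reconstruction."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM built from means, variances, covariance and two constants."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2  # stabilizing constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64))
    noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0, 255)
    print(psnr(ref, noisy), ssim_global(ref, noisy))
```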
| 1 |
jan the graph of modules over commutative rings ii habibollah and shokoufeh habibi abstract let m be a module over a commutative ring in this paper we continue our study of graph ag m which was introduced in the zariski of modules over commutative rings comm ag m is a undirected graph in which a nonzero submodule n of m is a vertex if and only if there exists a nonzero proper submodule k of m such that n k where n k the product of n and k is defined by n m k m m and two distinct vertices n and k are adjacent if and only if n k we prove that if ag m is a tree then either ag m is a star graph or a path of order and in the latter case m f s where f is a simple module and s is a module with a unique submodule moreover we prove that if m is a cyclic module with at least three minimal prime submodules then gr ag m and for every cyclic module m cl ag m in m introduction throughout this paper r is a commutative ring with a identity and m is a unital by n m resp n m we mean that n is a submodule resp proper submodule of m define n r m or simply n m r rm n for any n m we denote m by annr m or simply ann m m is said to be faithful if ann m let n k m then the product of n and k denoted by n k is defined by n m k m m see there are many papers on assigning graphs to rings or modules see for example the graph ag r was introduced and studied in ag r is a graph whose vertices are ideals of r with nonzero annihilators and in which two vertices i and j are adjacent if and only if ij later it was modified and further studied by many authors see in we generalized the above idea to submodules of m and defined the undirected graph ag m called the graph with vertices v ag m n m there exists k m with n k in this graph distinct vertices n l v ag m are adjacent if and only if n l let ag m be the subgraph of ag m with vertices v ag m n m with n m ann m there exists a submodule k m with k m ann m and n k note that m is a vertex of ag m if and only if there exists a date april mathematics subject classification primary secondary key words and phrases graph cyclic module minimal prime submodule chromatic and clique number habibollah and shokoufeh habibi nonzero proper submodule n of m with n m ann m if and only if every nonzero submodule of m is a vertex of ag m in this work we continue our studying in and we generalize some results related to graph obtained in for graph a prime submodule of m is a submodule p m such that whenever re p for some r r and e m we have r p m or e p the prime radical radm n or simply rad n is defined to be the intersection of all prime submodules of m containing n and in case n is not contained in any prime submodule radm n is defined to be m the notations z r n il r and m in m will denote the set of all the set of all nilpotent elements of r and the set of all minimal prime submodules of m respectively also zr m or simply z m the set of zero divisors on m is the set r rm for some m m a clique of a graph is a complete subgraph and the supremum of the sizes of cliques in g denoted by cl g is called the clique number of let g denote the chromatic number of the graph g that is the minimal number of colors needed to color the vertices of g so that no two adjacent vertices have the same color obviously g cl g in section of this paper we prove that if ag m is a tree then either ag m is a star graph or is the path and in this case m f s where f is a simple module and s is a module with a unique submodule see theorem next we study the bipartite graphs of modules over artinian rings see theorem 
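For readability, the product of submodules and the adjacency rule that define AG(M) can be restated in display form; the block below is a reconstruction of the verbal definitions given above, using the standard colon notation, and introduces nothing new.

```latex
% Colon submodule and product of submodules, as defined verbally above:
\[
  (N :_R M) \;=\; \{\, r \in R \;:\; rM \subseteq N \,\}, \qquad
  N \cdot K \;=\; (N :_R M)\,(K :_R M)\,M .
\]
% Vertices and adjacency of the graph AG(M):
\[
  N \in V\!\bigl(AG(M)\bigr) \;\Longleftrightarrow\;
  \exists\, 0 \neq K \lneq M \ \text{with}\ N \cdot K = 0, \qquad
  N \ \text{adjacent to}\ K \;\Longleftrightarrow\; N \cdot K = 0 .
\]
```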
moreover we give some relations between the existence of cycles in the graph of a cyclic module and the number of its minimal prime submodules see theorem and corollary let us introduce some graphical notions and denotations that are used in what follows a graph g is an ordered triple v g e g consisting of a nonempty set of vertices v g a set e g of edges and an incident function that associates an unordered pair of distinct vertices with each edge the edge e joins x and y if e x y and we say x and y are adjacent a path in graph g is a finite sequence of vertices xn where and xi are adjacent for each i n and we denote xi for existing an edge between and xi a graph h is a subgraph of g if v h v g e h e g and is the restriction of to e h a bipartite graph is a graph whose vertices can be divided into two disjoint sets u and v such that every edge connects a vertex in u to one in v that is u and v are each independent sets and complete bipartite graph on n and m vertices denoted by kn m where v and u are of size n and m respectively and e g connects every vertex in v with all vertices in u note that a graph m is called a star graph and the vertex in the singleton partition is called the center of the graph for some u v g we denote by n u the set of all vertices of g u adjacent to at least one vertex of u for every vertex v v g the size of n v is denoted by d v if all the vertices of g have the same degree k then g is called or simply regular an independent set is a subset of the vertices of a graph such that no vertices are adjacent we denote by pn and cn a path and a cycle of order n respectively let g and be two graphs a graph homomorphism from g to is a mapping v g v such that for every edge u v of g u v is an edge of a retract of g is a subgraph h of g such that there exists a homomorphism g h such the graph of modules that x x for every vertex x of the homomorphism is called the retract graph homomorphism see the graph ii an ideal i r is said to be nil if i consist of nilpotent elements i is said to be nilpotent if i n for some natural number proposition suppose that e is an idempotent element of we have the following statements a r where er and e b m where em and e m c for every submodule n of m n such that is an submodule is an and n r m d for submodules n and k of m n k such that n and k e prime submodules of m are p and q where p and q are prime submodules of and respectively proof this is clear we need the following lemmas lemma see proposition let rn be ideals of then the following statements are equivalent a r r rn b as an abelian group r is the direct sum of rn c there exist pairwise orthogonal central idempotents en with en and ri rei i lemma see theorem let i be a nil ideal in r and u r be such that u i is an idempotent in then there exists an idempotent e in ur such that e u lemma see lemma let n be a minimal submodule of m and let ann m be a nil ideal then we have n or n em for some idempotent e proposition let m be an artinian ring and let m be a finitely generated module then every nonzero proper submodule n of m is a vertex in ag m proof let n be a submodule of m so there exists a maximal submodule k of m such that n hence we have m k m m n m since m is an artinian ring k m is a minimal prime ideal containing ann m thus k m ass m it follows that k m m for some m m therefore n rm as desired lemma let m where em m and e e is an idempotent element of if ag m is a graph then one of the following statements holds a both and are prime habibollah and shokoufeh habibi b one mi is 
a prime module for i and the other one is a module with a unique submodule moreover ag m has no cycle if and only if either m f s or m f d where f is a simple module s is a module with a unique submodule and d is a prime module proof if none of and is a prime module then there exist r ri re and r e mi mi with ri mi and ri annri mi for i so and form a triangle in ag m a contradiction thus without loss of generality one can assume that is a prime module we prove that ag has at most one vertex on the contrary suppose that n k is an edge of ag therefore n and k form a triangle a contradiction if ag has no vertex then is a prime module and so part a occurs if ag has exactly one vertex then by theorem and proposition we obtain part b now suppose that ag m has no cycle if none of and is a simple module then choose submodules ni in mi for some i so and form a cycle a contradiction the converse is trivial theorem if ag m is a tree then either ag m is a star graph or ag m moreover ag m if and only if m f s where f is a simple module and s is a module with a unique submodule proof if m is a vertex of ag m then there exists only one vertex n such that ann m n m and since ag m is an empty subgraph hence ag m is a star graph therefore we may assume that m is not a vertex of ag m suppose that ag m is not a star graph then ag m has at least four vertices obviously there are two adjacent vertices n and k of ag m such that n k and k n let v n k ni and v k n kj since ag m is a tree we have v n v k by theorem diam ag m so every edge of ag m is of the form n k n ni or k kj for some i and j now consider the following claims claim either n or k pick p and q since ag m is a tree np kq is a vertex of ag m if np kq nu for some u then knu a contradiction if np kq kv for some v then n kv a contradiction if np kq n or np kq k then n or k respectively and the claim is proved here without loss of generality we suppose that n clearly n m m k and k m m n claim our claim is to show that n is a minimal submodule of m and k to see that first we show that for every m n rm n assume that m n and rm n if rm k then k n a contradiction thus rm k and the induced subgraph of ag m on n k and rm is a contradiction so rm n this implies that n is a minimal submodule of m now if k then we obtain the induced subgraph on n k and n m m k m m is a contradiction thus k as desired the graph of modules claim for every i and every j ni kj n let i and j since ni kj is a vertex and n ni kj k ni kj either ni kj n or ni kj if ni kj k then k a contradiction hence ni kj n and the claim is proved claim we complete the claim by showing that m has exactly two minimal submodules n and let l be a submodule properly contained in since n l n k either l n or l ni for some i so by the claim n l k a contradiction hence k is a minimal submodule of m suppose that is another minimal submodule of m since n and k both are minimal submodules we deduce that n a contradiction so the claim is proved now by the claims and k and k is a minimal submodule of m then by lemma k em for some idempotent e now we have m em m by lemma we deduce that either m f s and ag m or r f d and ag m is a star graph conversely we assume that m f then ag m has exactly four vertices s f n and f n thus ag m with the vertices s f n and f n theorem let r be an artinian ring and ag m is a bipartite graph then either ag m is a star graph or ag m moreover ag m if and only if m f s where f is a simple module and s is a module with a unique submodule proof first suppose that r is not a local ring hence 
by theorem r rn where ri is an artinian local ring for i by lemma and proposition since ag m is a bipartite graph we have n and hence if is a prime module then it is easy to see that is a vector space over and so is a semisimple hence by lemma and theorem we deduce that either ag m is isomorphic to or now we assume that r is an artinian local ring let m be the unique maximal ideal of r and k be a natural number such that mk m and m clearly m is adjacent to every other vertex of ag m and so ag m is a star graph proposition assume that ann m is a nil ideal of a if ag m is a finite bipartite graph then either ag m is a star graph or ag m b if ag m is a regular graph of finite degree then ag m is a complete graph proof a if m is a vertex of ag m then ag m has only one vertex n such that ann m n m and since ag m is an empty subgraph ag m is a star graph thus we may assume that m is not a vertex of ag m and hence by theorem m is not a prime module therefore theorem follows that m is an artinian ring if m m is a local ring then there exists a natural number k such that mk m and m clearly m is adjacent to every other vertex of ag m and so ag m is a star graph otherwise by theorem and lemma there exist pairwise orthogonal central idempotents modulo ann m by lemma it is easy to see habibollah and shokoufeh habibi that m em e m where e is an idempotent element of r and lemma implies that ag m is a star graph or ag m b if m is a vertex of ag m since ag m is a regular graph then ag m is a complete graph hence we may assume that m is not a vertex of ag m so m is not a prime module and hence rm such that m m r ann m it is easy to see that rm m r if the set of of rm m r is infinite then m r rm has infinite degree a contradiction thus rm and m r have finite length since rm m r m has finite length so that m is an artinian ring as in the proof of part a m if has one submodule n then deg deg n and this contradicts the regularity of ag m hence is a simple module similarly is a simple module so ag m now suppose that m m is an artinian local ring now as we have seen in part a there exists a natural number k such that m is adjacent to all other vertices and we deduce that ag m is a complete graph let s be a multiplicatively closed subset of a subset s of m is said to be if se s for every s s and e s an subset s is said to be saturated if the following condition is satisfied whenever ae s for a r and e m then a s and e s we need the following result due to lu theorem see theorem let m rm be a cyclic module let s be an subset of m relative to a multiplicatively closed subset s of r and n a submodule of m maximal in m s if s is saturated then ideal n m is maximal in r s so that n is prime in m theorem if m is a cyclic module ann m is a nil ideal and in m then ag m contains a cycle proof if ag m is a tree then by theorem either ag m is a star graph or f s where f is a simple module and s has a unique submodule the latter case is impossible because in f s suppose that ag m is a star graph and n is the center of star clearly one can assume that n is a minimal submodule of m if n then by lemma there exists an idempotent e r such that n em so that m em e m now by proposition and lemma we conclude that in m a contradiction hence n thus one may assume that n rm and rm suppose that and are two distinct minimal prime submodules of m since rm we have rm m ann m pi m i so rm m m rm pi i hence m pi i choose z m m and set z z m and s if s then n m n s is not empty then has a maximal element say n hence by theorem n is a prime 
submodule of m since n we have n a contradiction because z n m so s therefore there exist positive integer k and m such that z k now consider the submodules m and z k m it is clear that m and m z k m if m z k m then z m a contradiction thus m and z k m form a triangle in ag m a contradiction hence ag m contains a cycle the graph of modules theorem suppose that m is a cyclic module radm and ann m is a nil ideal if in m then either ag m contains a cycle or ag m proof a similar argument to the proof of theorem shows that either ag m contains a cycle or m f s where f is a simple module and s is a module with a unique submodule the latter case implies that ag m note that radf where f is a simple module and d is a prime module we recall that n m is said to be a semiprime submodule of m if for every ideal i of r and every submodule k of m i k n implies that ik n further m is called a semiprime module if m is a semiprime submodule every intersection of prime submodules is a semiprime submodule see theorem let s be a maltiplicatively closed subset of r containing no zerodivisors on finitely generated module m then cl ag ms cl ag m moreover ag ms is a retract of ag m if m is a semiprime module in particular cl ag ms cl ag m whenever m is a semiprime module proof consider a vertex map v ag m v ag ms n ns clearly ns ks implies n k and n k if and only if ns ks thus is surjective and hence cl ag ms cl ag m in what follows we assume that m a semiprime module if n k and n k then we show that ns ks without loss of generality we can assume that m is not a vertex of ag m and on the contrary suppose that ns ks then ns ks n k s and so n a contradiction this shows that the map is a graph homomorphism now for any vertex ns of ag ms we can choice the fixed vertex n of ag m then is a retract graph homomorphism which clearly implies that cl ag ms cl ag m under the assumption corollary if m is a finitely generated semiprime module then cl ag t m cl ag m where t r z m since the chromatic number g of a graph g is the least positive integer r such that there exists a retract homomorphism g kr the following corollaries follow directly from the proof of theorem corollary let s be a maltiplicatively closed subset of r containing no on finitely generated module m then ag ms ag m moreover if m is a semiprime module then ag ms ag m corollary if m is a finitely generated semiprime module then ag t m ag m where t r z m eben matlis in proposition proved that if pn is a finite set of distinct minimal prime ideals of r and s r pi then rpn rs in this result was generalized to finitely generated multiplication modules in theorem we use this generalization for a cyclic module theorem see theorem let pn be a finite set of distinct minimal prime submodules of finitely generated multiplication module m and s r pi m then mpn ms where pi pi m for i habibollah and shokoufeh habibi theorem let m be a cyclic module and pn be a finite set of distinct minimal prime submodules of m then there exists a clique of size proof let m be a cyclic module and s r pi where pi pi m for i then since m is a multiplication module by theorem there exists an isomorphism mpn ms let m rm ei and ei ni where m m i n and is in the position of ei consider the principal submodules ni ni ni in the module ms by lemma and proposition the product of submodules rpi and rpj are zero i j since is an isomorphism there exists tij s such that tij ri nj for every i j i j n where ni ri m for some ri let t tij we show that tnn is a clique of size n in ag m for every i j i j n 
rtni rtnj rtnj m rtni rtnj m tri m tri rtnj since tni s ni ni we deduce that tni are distinct submodules of m corollary for every cyclic module m cl ag m in m and if in m then gr ag m theorem let m be a cyclic module and radm then ag m cl ag m in m proof if in m then by corollary there is nothing to prove thus suppose that in m pn for some positive integer let pi pi m and s r pi by theorem we have mpn ms clearly cl ag ms now we show that ag ms by corollary pi rpi is the only prime submodule of m and since radm every mpi is a simple rpi define the map c v ag ms n by c nn min ni since each mpi is a simple module c is a proper vertex coloring of ag ms thus ag ms n and so ag ms cl ag ms since radm it is easy to see that s z m now by theorem and corollary we obtain the desired theorem for every module m cl ag m if and only if ag m proof for the first assertion we use the same technique in theorem let cl ag m on the contrary assume that ag m is not bipartite so ag m contains an odd cycle suppose that c be a shortest odd cycle in ag m for some natural number clearly k since c is a shortest odd cycle in ag m is a vertex now consider the vertices and if then this implies that is an odd cycle a contradiction thus if then we have again a contradiction hence it is easy to check and form a triangle in ag m a contradiction the converse is clear the radical of i defined as the intersection of all prime ideals containing i the next theorem we recall that if m is a finitely denoted by i before stating p generated module then q m rad q m where q m see and proposition also we know that if m is a finitely generated module then the graph of modules for every prime ideal p of r with p ann m there exists a prime submodule p of m such that p m p see theorem theorem assume that m is a finitely generated module ann m is a nil ideal and in m if ag m is a graph then ag m is a star graph proof suppose first that p is the unique minimal prime submodule of m since m is not a vertex of ag m hence z m so there exist elements r r and m m such that rm it is easy to see that rm and rm are vertices of ag m because rm rm since ag m is rm or rm is a minimal submodule of m without loss of generality we can assume that rm is a minimal submodule of m so that rm if rm is a minimal submodule of m then there exists m such that rm we claim that rm is the unique minimal submodule of m on the contrary suppose that k is another minimal submodule of m so either k k or k if k k then by lemma k em for some idempotent element e r and hence m em e m this implies that in m a contradiction if k then we have k k m m rm m m rm k a contradiction so rm is the unique minimal submodule of m let n rm v ag m a k rm k b a and c rm we prove that ag m is a bipartite graph with parts and we may assume that is an independent set because ag m is we claim that one end of every edge of ag m is adjacent to rm and another end contains rm to prove this suppose that n k is an edge of ag m and rm n rm since n rm rm by the minimality of rm either n rm or rm n the latter case follows that k rm if n rm then k rm and hence rm so our plain is proved this gives that is an independent set and n c since every vertex of a contains rm and ag m is all vertices in a are just adjacent to rm and so by theorem n c b since one end of every edge is adjacent to rm and another end contains rm we also deduce that every vertex of c contains rm and so every vertex of a contains rm note that if rm p then one end of each edge of ag m is contained in rm and since rm is a minimal submodule 
of m ag m is a star graph with center rm p now suppose that p rm we claim that p a since rm p it suffices to show that rm p to see this let r p m we prove that rm clearly rrm rm if rm then we are done thus rrm rm and so m rsm for some s we have p m rs by theorem we have n il r p m note that ann m rad m p m therefore rs is unit a contradiction as required since n c b if b then c and so ag m is a star graph with center rm it remains to show that b suppose that k b and consider the vertex k p of ag m since every vertex of a contains rm yields k p b pick k p since ag m is one can find an element such that is a minimal submodule of m and since rm is the unique minimal submodule of m we have rm thus rm k p a contradiction so b and we are done hence ag m is a star graph whose center is rm as desired habibollah and shokoufeh habibi corollary assume that m is a finitely generated module ann m is a nil ideal and in m if ag m is a bipartite graph then ag m is a star graph references aalipour akbari nikandish nikmehr and shaveisi on the coloring of the graph of a commutative ring discrete mathematics minimal prime ideals and cycles in graphs rocky mountain j math vol no aalipour akbari behboodi nikandish nikmehr and shaveisi the classication of the graphs of commutative rings algebra colloquium anderson and livingston the graph of a commutative ring algebra springer nj anderson fuller rings and categories of modules new and farshadifar product and dual product of submodules far east j math sci and habibi the zariski of modules over commutative rings comm algebra the graph of modules over commutative rings arxiv submitted atiyah and macdonald introduction to commutative algebra beck coloring of commutative rings j behboodi and rakeei the graph of commutative rings algebra appl vol no lam a first course in rings springer verlag new york lu prime submodules of modules comment math univ pauli no of submodules in modules ii mathematica japonica spectra of modules comm algebra unions of prime submodules houston journal of no modules with noetherian spectrum comm algebra matlis the minimal prime spectrum of a reduced ring illinois j math reinard graph theory grad texts in math springer nj samei reduced multiplication modules math sci tavallaee and varmazyar of submodules in modules iust international journal of engineering science department of pure mathematics faculty of mathematical sciences university of guilan o box rasht iran ansari department of pure mathematics faculty of mathematical sciences university of guilan o box rasht iran
| 0 |
feb inference in additively separable models with a set of conditioning variables damian kozbur university of department of economics email abstract this paper studies nonparametric series estimation and inference for the effect of a single variable of interest x on an outcome y in the presence of potentially conditioning variables z the context is an additively separable model e z x z the model is highdimensional in the sense that the series of approximating functions for z can have more terms than the sample size thereby allowing z to have potentially very many measured characteristics the model is required to be approximately sparse z can be approximated using only a small subset of series terms whose identities are unknown this paper proposes an estimation and inference method for x called double selection which is a generalization of selection standard rates of convergence and asymptotic normality for the estimator are shown to hold uniformly over a large class of sparse data generating processes a simulation study illustrates finite sample estimation properties of the proposed estimator and coverage properties of the corresponding confidence intervals finally an empirical application estimating convergence in gdp in a crosssection demonstrates the practical implementation of the proposed method key words additive nonparametric models sparse regression inference under imperfect model selection jel codes introduction nonparametric estimation in econometrics and statistics is useful in applications where theory does not provide functional forms for relations between relevant observed variables in many problems primary quantities of interest can be computed from the conditional expectation function of an outcome variable y given a regressor of interest x e x date first version september this version is of february correspondence department of economics university of i thank christian hansen tim conley matt taddy azeem shaikh dan nguyen dan zhou emily oster martin schonger eric floyd kelly reeve and seminar participants at university of western ontario university of pennsylvania rutgers university monash university and the center for law and economics at eth zurich for helpful comments i gratefully acknowledge financial support from the eth postdoctoral fellowship damian kozbur in this case nonparametric estimation is a flexible means for estimating unknown from data under minimal assumptions in most econometric models however it is also important to take into account conditioning information given through variables z failing to properly control for such variables z will lead to incorrect estimates of the effects of x on y when such conditioning information is important to the problem it is necessary to replace the simple objective of learning the conditional expectation function with the new objective of learning a family of conditional expectation functions e z z x indexed by z this paper studies series and inference of z in a particular case characterized by the following two main features z is additively separable in x and z meaning that z x x z for some functions and the conditioning variables z are observable and additively separable models are convenient in many economic problems because any ceteris paribus effect of changing x to is completely described by in addition a major statistical advantage in restricting to additively separable models is that the individual components can be estimated at faster rates than a joint estimation of the family z therefore imposing additive 
separability in contexts where such an assuption is justified is very helpful the motivation for studying a framework for z is to allow researchers substantial flexibility in modeling conditioning information when the primary object of interest is this framework allows analysis of particularly rich or big datasets with a large number of conditioning in this paper of z is formally defined by the total number of terms in a series expansion of z this will allow many possibilities on the types of variables z and functions covered for example z can be itself while is approximately linear in the sense that z zl l o with l n and j denoting the jth component of the vector and the asymptotic o valid for l more generally z itself can also have moderate estimation of nonparametric regression problems involves least squares estimation performed on a series expansion of the regressor variables series estimation is described more fully in section on faster rates for separable models exist for both kernel methods marginal integration and methods and series based estimators for a general review of these issues see for example the textbook additional discussion on the literature on additively separable models is provided later in the introduction many cases larger set of covariates can lend additional credibility to conditional exogeneity assumptions see the discussion in additively separable dimension but any sufficiently expressive series expansion of must have many terms as a simple consequence of the curse of dimensionality a basic mechanical outline for the estimation and inference strategy presented in this paper proceeds in the following steps consider approximating dictionaries equivalently series expansions with k terms given by pk x x pkk x linear combinations of pk x are used for approximating x in addition consider approximating dictionaries with l terms q l z z qll z for approximating z possibly l reduce the number of series terms for in a way which continues to allow robust inference this requires multiple model selection steps proceed with traditional series estimation and inference techniques on the reduced dictionaries strategies of this form are commonly referred to as selection inference strategies the primary targets of inference considered in this paper are functionals g a g specifically let a leading examples of such functionals include the average derivative a g e g x or the difference of a g g g for two distinct of interest the main contribution of this paper is the construction of confidence sets that cover to some confidence level moreover the construction is valid uniformly over a large class of data generating processes which allow z to be highdimensional current estimation techniques provide researchers with useful tools for dimension reduction and dealing with datasets where the number of parameters exceeds the sample most techniques require additional structure to be imposed on the problem at hand in order to ensure good performance one common structure for which reliable techniques exist is sparsity sparsity means that the number of nonzero parameters is small relative to the sample size in this setting common techniques include techniques like lasso and other techniques include the dantzig selector scad and forward stepwise regression the literature on nonparametric estimation of additively separable models is well developed as mentioned above additively separable models are useful since they models which are extremely flexible and thus overparameterized are likely to 
overfit the data leading to poor inference and out of sample performance therefore when many covariates are present regularization is necessary lasso is a shrinkage procedure which estimates regression coefficients by minimizing a quadratic loss function plus an penalty for the size of the coefficient the nature of the penalty gives lasso favorable property that many parameter values are set identically to zero and thus lasso can also be used as a model selection technique fits an ordinary least squares regression on variables with estimated lasso coefficients for theoretical and simulation results about the performance of these two methods see among many more damian kozbur impose an intuitive restriction on the class of models considered and as a result provide higher quality estimates early study of additively separable models was initiated in and who describe backfitting techniques propose marginal integration methods in the kernel context and consider estimation of derivatives in components of additive models develop local partitioned regression which can be applied more generally than the additive model in terms of estimation series estimators are particularly easy to use for estimating additively separable models since series terms can be allocated to respective model components general large sample properties of series estimators have been derived by and many other references relative to kernel estimation series estimators are simpler to implement but often require stronger support conditions many additional references for both kernel and series based estimation can be found in the reference text finally consider estimation of additively separable models in a setting where there are additive components the authors propose and analyze a series estimation approach with a penalty to penalize different additive components this paper therefore studies a very similar setting to the one in but constructs a valid procedure for forming confidence intervals rather than focusing on estimation error the main challenge in statistical inference or construction of confidence intervals after model selection is in attaining robustness to model selection errors when coefficients are small relative to the sample size ie statistically indistinguishable from zero model selection mistakes are such model selection mistakes can lead to distorted statistical inference in much the same way that pretesting procedures lead to distorted inference this intuition is formally developed in and nevertheless given the practical value of dimension reduction and the increasing prevalence of datasets studying robust selection inference techniques and inference techniques is an active area of current research offering solutions to this problem is the focus of a number of recent papers see for example and this paper proposes a procedure called double selection for the additively separable model the proposed procedure is a generalization of the approach in named gives robust statistical inference for the slope parameter of a treatment variable x with control variables z in the context of a partially linear model e z z the selection method selects elements of z in two steps step selects the terms in an expansion of z that are most useful for predicting x step selects terms in an expansion of z most useful for predicting y a consequence of the particular construction using two selection steps is that terms excluded by model selection mistakes twice necessarily have a negligible effect on subsequent statistical double 
selection replaces step of under some restrictive conditions for example conditions which constrain nonzero coefficients to have large magnitudes perfect model selection can be attained citations are ordered by date of first appearance on arxiv authors have addressed the task of assessing uncertainties or estimation error of model parameter estimates in a wide variety of models with high dimensional regressors see for example and use of two model selection steps is motivated partially by the intuition that two necessary conditions for omitted variables bias to occur an omitted variable exists which is correlated with the treatment x and correlated with the outcome y each selection step addresses one additively separable selection with selecting variables useful for predicting any test function x for a sufficiently general class of functions this paper suggests a simple choice for which is based on the linear span of pk x this choice is called span theoretical and simulation results show that the suggested choice has favorable statistical properties uniformly under certain sequences of data generating processes working with a generalization of selection which dissociates the first stage selection from the final estimation is useful for several reasons one reason is that the direct extension of is not invariant to the choice of dictionary pk x and leads natural to the consideration of more general in addition applying the direct generalization of selection may lead to poorer statistical performance than using a larger more robust a simulation study later in this paper explores these properties next as a theoretical advantage in some cases a larger gives estimates and inference which are valid under weaker rate conditions on k n etc finally working dissociating the first stage helps in terms of organizing the arguments in the proofs in particular various bounds developed in the proof depend on a notion of density of within linspan pk this paper proves convergence rates and asymptotic normality for postnonparametric double selection estimates of x and respectively the proofs in the paper proceed by using the techniques in newey s analysis of series estimators see and ideas in belloni chernozhukov and hansen s analysis of selection see along with careful tracking of a notion of density of the set within the linear span of pk x the estimation rates for obtained in this paper match those of next a simulation study demonstrates finite sample performance of the proposed procedure finally an empirical example estimating the relationship of initial gdp to gdp growth in a of countries illustrates the use of double selection series estimation with a reduced dictionary this section establishes notation reviews series estimation and describes series estimation on a reduced dictionary the exposition begins with basic assumptions on the observed data assumption data the observed data dn is given by n iid copies of random variables x y z x y z indexed by i n so that dn yi xi zi here yi are outcome variables xi are explanatory variables of interest and zi are conditioning variables in addition y r and x rr for some integer r and z is a general measure space of the two concerns in their paper they prove that under the regularity right conditions the two described model selection steps can be used to obtain asymptotically normal estimates of and in turn to construct correctly sized confidence intervals choices are possible and the analysis in the paper covers a general class of choices for damian kozbur 
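Before stating the remaining assumptions, the two-step selection idea sketched in the introduction can be illustrated for the partially linear benchmark model in which the treatment enters linearly. The sketch below is only a stand-in: it uses scikit-learn's LassoCV in place of the feasible penalty level and penalty loadings developed later in the paper and in Appendix A, so the penalty choice, the helper function post_double_selection, and all variable names are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the paper's procedure): two selection steps plus a final OLS
# for a partially linear model y = x*alpha0 + h0(z) + eps, with lasso replaced by LassoCV.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def post_double_selection(y, x, Q):
    """y: (n,) outcome; x: (n,) treatment; Q: (n, L) dictionary of controls q_1(z),...,q_L(z)."""
    # Step 1: select dictionary terms that predict the treatment x.
    fs = LassoCV(cv=5).fit(Q, x)
    I_fs = np.flatnonzero(fs.coef_)
    # Step 2: select dictionary terms that predict the outcome y.
    rf = LassoCV(cv=5).fit(Q, y)
    I_rf = np.flatnonzero(rf.coef_)
    # Step 3: ordinary least squares of y on x and the union of selected controls.
    keep = np.union1d(I_fs, I_rf)
    design = np.column_stack([x, Q[:, keep]])
    ols = LinearRegression().fit(design, y)
    return ols.coef_[0], keep  # estimate of the treatment coefficient and selected set

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, L = 200, 300
    Q = rng.normal(size=(n, L))
    x = Q[:, 0] + 0.5 * Q[:, 1] + rng.normal(size=n)
    y = 1.0 * x + 2.0 * Q[:, 0] - Q[:, 2] + rng.normal(size=n)
    print(post_double_selection(y, x, Q))
```

The point of the union in Step 3 is the one made above: a control erroneously dropped in one selection step still enters the final regression unless it is also dropped in the other, which is what limits the damage from individual model selection mistakes.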
assumption additive separability there is a random variable and functions and such that the following additive holds y x z e z traditional series estimation of is carried out by performing least squares regression on series expansions in x and z define a dictionary of approximating functions by pk x q l z where pk x x pkk x and q l z z qll z are each series of k and l functions such that their linear combinations can approximate x and z construct the matrices p pk pk xn q q l q l zn y yn and let k l p q p q p q y be the least squares estimate from y on p q let k l g be the components of k l corresponding to pk then gb x is defined by gb x pk x k l g when l n quality statistical estimation is only feasible provided dimension reduction or regularization is performed a dictionary reduction selects new approximating terms pk x q l z reduction x z comprised of a subset of the series terms in pk x q l x in this paper because the primary objects of interest center around x it will be the convention to always take x pk x the estimate of x is then defined analogously to the traditional series estimate let y where xn p zn and as before let g be the components of corresponding to then gb is defined by gb x x g finally consider a a g r and as before set a one sensible estimate for is given by a b g in order to use for inference on an approximate expression for the varib is necessary as is standard the expression for the variance will ance var k b p x b g let be approximated using the delta method let a m idn be the projection matrix onto the space orthogonal to the assumption simply rewrites the equation stated in the introduction in terms of a residual to ensure uniqueness of a further normalization is required a common normalization in the series context is which is sufficient for most common assumptions on to one dimensional functionals is for simplicity additively separable b y estimate vb using the following span of finally let e sandwich form b b a b vb a b b b mdiag e the following sections describe a dictionary reduction technique along with regularity conditions which imply that b n vb the practical value of the results is that they formally justify approximate gaussian inference for an immediate corollary of the gaussian limit is that for any significance level with the of the standard guassian distribution it holds that p vb vb dictionary reduction by double selection the previous section described estimation using a generic dictionary reduction this section discusses one class of possibilities for constructing such reductions it is important to note that the coverage probabilities of the above confidence sets depend critically on how the dictionary reduction is performed in particular naive methods will fail to produce correct inference formal results expanding on this point can be found for instance in heuristically the reason resulting confidence intervals have poor coverage properties is due to model selection mistakes to address this problem this section proposes a procedure for selecting z the new procedure is a generalization of the methods in who work in the context of the partially linear model e z x z the methods described below rely heavily on model selection therefore a brief description of lasso is now provided the following description of lasso which uses an overall penalty level as well as penalty loadings follows who are motivated by allowing for heteroskedasticity for any random variable v with observations vn the lasso estimate v on q l z with penalty parameter and 
loadings lj is defined as a solution l lasso arg min n x vi q l zi b l x bj the corresponding selected set iv l is defined as iv l j l lasso j finally the corresponding estimator is defined by n x l arg min vi q l zi b b bj for j v l the required inverse does not exist a may be used damian kozbur lasso is chosen over other model selection possibilities for several reasons foremost lasso is a simple computationally efficient estimation procedure which produces sparse estimates because of its ability to set coefficients identically equal to zero in particular l will generally be much smaller than n if a suitable penalty level is chosen the second reason is for the sake of continuity with the previous literature lasso was used in the third reason is for concreteness there are indeed many alternative estimation or model selection procedures which select a sparse set of terms which in principle can replace the lasso it is possible to instead consider general model selection techniques in the course of developing the subsequent theory however framing the discussion using lasso allows explicit calculation of bounds and explicit description of tuning parameters this is also helpful in terms of practical implementation of the procedures proposed below the quality of lasso estimation is controlled by and lj as the number of different lasso estimations increases ie with increasingly many different variables v the penalty parameter must be increased to ensure quality estimation uniformly over all different the penalty parameter must also be increased with increasing however higher typically leads to more shrinkage bias in lasso estimation therefore given lj is usually chosen to be large enough to ensure quality performance and no larger see for details for the sake of completeness the selection procedure of is now reproduced for a partially linear model specified by e z x z algorithm selection for the partially linear model reproduced from first stage model selection step perform lasso regression x on q l z with penalty and loadings lfs j let ifs be the set of selected terms reduced form model selection step perform lasso regression y on q l z with penalty and loadings lrf j let irf be the set of selected terms selection estimation set ipd ifs irf and let z qjl z estimate with b based on least squares of y on x z appendix a contains details about one possible method for choosing as well as lfs j lrf j arguments in show that the choices of tuning parameters given in appendix a are sufficient to guarantee a centered gaussian sampling distribution of b for the simplest generalization of selection is to expand the first stage selection step into k steps more precisely for k k perform lasso regression of pkk x on q l z and set ifs k as the selected terms then define ifs this ifs k and continue to the reduced form and estimation steps approach has a few disadvantages first the selected variables can depend on the particular dictionary pk x ideally the first stage model selection should be approximately invariant to the choice of pk x standard errors are used for inference previous draft of this paper took this approach deriving theoretical results for this approach requires stronger sparsity assumptions than required here additively separable instead consider a general class of test functions concrete classes for test functions are provided below in the first stage in double selection a lasso step of x on q l z is performed for each algorithm double selection first stage model selection step for each 
perform lasso regression x on q l z with penalty and loadings j let l be the selected terms let l be the union set of selected terms reduced form model selection step perform lasso regression y on q l z with penalty and loadings lrf j let irf be the set of selected terms selection estimation set irf estimate using based on the reduced dictionary x z pk x qjl z the following are several concrete feasible options for the first option is named the span option this option is suggested for practical use and is the main option in the simulation study as well as in the empirical example that follow span x linspan pk x var x the theory in the subsequent section is general enough to consider other options for which might possibly be preferred in different contexts three additional examples are as follows graded x x x x pkk x m m multiple x pkk x x pkk x simple x pkk x appendix a again contains full implementation details for the span option this includes one possible method for choosing j as well as lfs j lrf j which yield favourable model selection properties discussion of the most important details is given in the text below the analysis in the next section gives conditions under which attains a centered gaussian limiting distribution choosing optimally is an important problem which is similar to the problem of dictionary the span option span is used in the simulation study as well as in the empirical example since it performed well in initial simulations note that the definition of the set span depends on a population quantity var x which may be unknown to the researcher note however that the identities of the covariates selected in the procedure described in the appendix are invariant to rescaling of the side variable the invariance question of which option for is optimal is likely application dependent in order to k maintain focused this question is not considered in detail in this paper but might be of interest for future work damian kozbur is a consequence of the method for choosing penalty loadings therefore replacing the condition x with var x is possible the option simple is the direct extension of post double selection as given in the set multiple corresponds to using multiple dictionaries indexed m in the notation above for example multiple could include the union of orthogonal polynomials and trigonometric polynomials all in the first stage selection the graded is appropriate when dictionaries are not nested with respect to these include in order to set up a practical choice of penalty levels the set proposed above is considered as a span where x x pkk x x linspan pk x var x the reason then for decomposing span in this way is allow the use of different penalty levels on each of the three sets in particular is the penalty for a single heteroskedastic lasso as described in is a penalty which adjusts for the presence of k different lasso regressions with k the main proposed estimator sets this is less conservative than the penalty level would be following for a continuum of lasso as a result any corresponding lasso performance bounds do not hold uniformly over rather the implied bounds hold only uniformly over any k element subsets of the model selection assumption below see assumption indicates that these bounds are sufficient for the present purpose in the simulation study a more conservative higher choice for is also considered in terms of inferential quality there is no noticeable difference between the two choices of penalties in the data generating processes considered in the 
simulation study as discussed above penalty levels accounting for a set of different lassos estimated simultaneously must be higher to ensure quality estimation this leads to higher shrinkage bias the above decomposition therefore addresses both concerns about quality estimation and shrinkage bias by allowing smaller penalty levels to be used on subsets of span because the decomposition is into a fixed finite number of terms ie into terms such an estimation strategy presents no additional theoretical difficulties another practical difficulty with this approach is computational it is infeasible to estimate a lasso regression for every indexed by a continuum therefore some approximation must be made the reference gives suggestions for estimating a continuum of lasso regressions using a grid this may be computationally expensive if k is even moderately large an alternative heuristic approach is motivated by the observation that qjl is selected into only when there is such that j l in the context of estimating only the identity of selected terms is dictionaries pk x may not contain a term p kk x x in this case x x can be appended to in addition after rescaling is possible and so the sets have nonempty intersection this causes no additional problems the normalization that x k ensures that is indexed by a compact set and so can chosen as described in to account for a continuum of lassos additively separable important not their coefficients for the implementation in this paper a strategy for approximating is adopted where for each j l a lasso regression is run using exactly one test function the choice of is made based on being likely to select qjl relative to other specifically for each j is set to the linear combination of pkk with highest marginal correlation to qjl then the s approximation to the first stage model selection step proceeds by using x in place of this is also detailed in the appendix the formal theory in the subsequent sections proceeds by working with a notion of density of within a broader space of approximating functions for g x aside from added generality working in this manner is helpful since it adds structure to the proofs and it isolates exactly how the density of interacts with the final estimation quality for formal theory in this section additional formal conditions are given which guarantee convergence and asymptotic normality of the double selection there are undoubtedly many aspects of the estimation strategy that can be analyzed these include important choices of tuning parameters and the following definition helps characterize smoothness properties of target function and approximating functions pk let g be a function on x define the sobolev norm where the inner maximum ranges over assumption regularity for pk for each k there is a nonsingular matrix bk such that the smallest eigenvalue of the matrix pk e bk pk x bk pk x is bounded uniformly away from zero in in addition there is a sequence of constants k satisfying kbk pk x k and k k as n assumption approximation of there is an integer d a real number and a sequence of vectors k which depend on k such that pk k o k as k assumptions and would be identical to assumptions and from if there was no conditioning variable z present these assumptions require that the dictionary pk has certain regularity and can approximate at a rate the quantity k is dictionary specific and can be explicitly calculated in certain cases for instance gives that o k is possible for note that values of can be derived for particular d pk and 
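The heuristic approximation described above, in which a single lasso regression is run for each conditioning term using the test function most likely to select it, can be sketched as follows. One reading of "the linear combination of the x-dictionary with highest marginal correlation to q_j" is the least-squares projection of q_j onto the span of P; that reading is an assumption of this sketch, as are the normalization and the penalty level.

```python
# Minimal sketch of the heuristic first-stage approximation for the span
# option: for each column q_j of Q, run one lasso of its best x-combination
# on Q and collect the selected terms.
import numpy as np
from sklearn.linear_model import Lasso

def first_stage_span_heuristic(P, Q, alpha=0.1):
    selected = set()
    for j in range(Q.shape[1]):
        # Projection of q_j onto span(P): the combination of P most correlated with q_j.
        coef, *_ = np.linalg.lstsq(P, Q[:, j], rcond=None)
        psi = P @ coef
        if np.std(psi) > 0:
            psi = (psi - psi.mean()) / psi.std()   # normalize to unit variance
        fs = Lasso(alpha=alpha).fit(Q, psi)
        selected.update(np.flatnonzero(fs.coef_).tolist())
    return sorted(selected)
```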
classes of functions containing also gives explicit calculation of for the leading cases when pk x are power series and regression splines the next assumption quantifies the density of within linspan pk in order to do so define the following let g inf kg sup x x damian kozbur assumption density of each satisfies var x there is a constant such that sup g o k pk var g x there is nothing special about the constant in var x it is mainly a tool for helping describe the density of in addition as mentioned above the set selected by lasso as described in the appendix is invariant to rescaling of the side variable as a result imposing restrictions on var x is without loss of generality the density assumption is satisfied with if the span is used since in that case g is bounded uniformly in on the other hand the density assumption may only be satisfied with or higher for the basic simple x pkk x option the next assumptions concern sparse approximation properties of q l z two definitions are necessary before stating the assumption first a vector x is called if j xj next let denote the linear projection operator more precisely for a square integrable random variable v v is defined by v z q l z l for l such that e v q l z l is minimized for functions of x such that x is square integrable write l x l assumption sparsity there is a sequence and a constant such that the following hold there is a sequence of vectors l that are with support such that z q l z l o for all there are vectors l that are all with common support such that sup sup z q l z l o assuming a uniform bound for the sparse approximation error for is potentially stronger than necessary at the moment of the writing of the manuscript the author sees pnno theoretical obstacle in terms of working under the weaker assumption zi l zi l op in addition the rate is imposed in order to maintain a parallel exposition relative to the o k term other rates for instance can also replace and this is done in and other the same comment holds for the sparse approximation conditions for several references in the prior econometrics literature work with sparse approximation of the conditional expectation rather than the linear projection in this context working with the conditional expectation places a higher burden on the approximating dictionary q l in particular if the conditional expectation of x given z can be approximated using terms from q l then the conditional expectation of x may potentially require o terms to approximate once interactions are taken into account this potentially requires the dictionary q l to contain a prohibitively large amount of interaction terms for this reason the conditions in this paper are cast in terms of linear is only more general if l grows faster than every polynomial of author sees no theoretical obstructions in terms of applying the same arguments for lasso bounds in without the conditional expectation assumption the key ingredient in that additively separable the next assumption imposes limitations on the dependence between x and z for example in the case that x x is an element of pk x this assumption states that the residual variation after a linear regression of x on z is more generally the assumption requires that population residual variation after projecting pkk x away from z is uniformly k one consequence of assumption is that constants can not be freely added to both x and z this therefore requires the user to enforce a normalization condition like or e x the simulation study and empirical illustration below both 
enforce assumption identifiability for each k and for bk as in assumption the matrix e bk pk x pk z bk pk x pk z has eigenvalues bounded uniformly away from zero in k in addition kbk pk x k the next condition restricts the sample gram matrix of the second dictionary a standard condition for nonparametric estimation is that for a dictionary p the gram matrix p p eventually has eigenvalues bounded away from zero uniformly in n with high probability if k n then the matrix p q p q will be rank deficient however in the setting to assure good performance of lasso it is sufficient to only control certain moduli of continuity of the empirical gram matrix there are multiple formalizations of moduli of continuity that are useful in different settings see for explicit examples this paper focuses on a simple condition that seems appropriate for econometric applications in particular the assumption that only small submatrices of q have eigenvalues will be sufficient for the results that follow in the sparse setting it is convenient to define the following sparse eigenvalues of a positive matrix m m m m m max m m min max in this paper favorable behavior of sparse eigenvalues is taken as a high level condition and the following is imposed assumption sparse eigenvalues there is a sequence n such that and such that the sparse eigenvalues obey q o and q o with probability o the assumption requires only that sufficiently small submatrices of the large p p empirical gram matrix q are this condition seems reasonable and will be sufficient for the results that follow informally it states that no small subset of covariates in q l suffer a multicollinearity problem they could be shown to hold under more primitive conditions by adapting arguments found in which build upon results in and see also pn l l argument is that expression q zi l zi q zi l stays suitably small note this expression is a sum of mean zero independent random variables in the present context the sparse eigenvalue definition k k refers to the number of nonzero components of a vector damian kozbur assumption model selection performance there are constants and and bounds k o pn q l zi l l o k log l o pn l b q zi l l o log l which hold with probability o the standard lasso and estimation rates when there is only one outcome considered are log l for the sum of the squared prediction errors and o for the number of selected covariates therefore k is a uniform measure of the loss of estimation quality stemming from the fact that lasso estimation is performed on all rather than just on a single outcome similarly k measures the number of unique j selected in all first stage lasso estimations the choice to present assumptions is for generality so that other model selection techniques can also be applied however verification of the high level bounds are available under additional regularity for lasso estimation one reference on performance bounds for a continuum of lasso estimation steps is in that paper the authors provide formal conditions specifically assumption and prove that statement of assumption holds the bounds in that reference correspond to taking an important note is that the conditions in are slightly more stringent since the authors assume that l and l can be taken to approximate the conditional expectation of x and y given z rather than just the linear projection when finite but grows only polynomially with n and l n is possible under further regularity conditions the main theoretical difficulty in verifying assumption using primitive 
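The sparse eigenvalues used in the assumption above are the extreme values of the quadratic form d'Md over unit vectors d with at most m nonzero entries. For intuition, they can be computed exactly by brute force on small problems, as in the sketch below; the enumeration over supports is exponential in m, so this is purely illustrative and not how such quantities would be handled in practice.

```python
# Minimal sketch: exact sparse (restricted) eigenvalues of a positive
# semidefinite matrix M over supports of size m, by enumeration.
import numpy as np
from itertools import combinations

def sparse_eigenvalues(M, m):
    p = M.shape[0]
    phi_min, phi_max = np.inf, -np.inf
    for support in combinations(range(p), m):
        sub = M[np.ix_(support, support)]
        eigs = np.linalg.eigvalsh(sub)        # ascending order
        phi_min = min(phi_min, eigs[0])
        phi_max = max(phi_max, eigs[-1])
    return phi_min, phi_max

# Example: sparse eigenvalues of an empirical Gram matrix Q'Q/n.
rng = np.random.default_rng(0)
Q = rng.standard_normal((200, 12))
print(sparse_eigenvalues(Q.T @ Q / 200, m=3))
```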
conditions is in showing that the size of the set stays suitably small prove certain performance bounds for a continuum of lasso estimates under the assumption that dim is fixed and state that their argument would hold for certain sequences dim also proves that the size of the supports of the lasso estimates l stay bounded uniformly by a constant multiple of which does not depend on n or they do not however prove that the size of the union l remains similarly bounded therefore their results do not imply a the existence of a finite value of the later bound is required for the analysis of the above proposed estimator for a finite approximation to span like simple there is no difficulty calculating bounds on the total number of distinct selected terms this is because under regularity conditions standard in the literature each l satisfies l o where the implied constants in the o terms can be bounded uniformly over in particular when is finite it is possible to take k this paper does not derive a bound for span as this would likely lie outside the scope of this project a valid alternative for which verifiable bounds on the union of selected covariates is possible is to report estimates using simple on the event that span t n b span otherwise for some increasing threshold function t of additively separable when g linspan pk var g x coincides with so that is as dense as possible then assumption can be weakened in the following assumption alternative model selection performance suppose that g linspan pk var g x let be any nonrandom fixed finite subset of at most k elements there are constants and and bounds k o pn q l zi l s l o k log l sup o pn l b q zi l l o log l which hold with probability o assumption is weaker than assumption however assumption can be more easily verified with primitive conditions by using finite sets statements can be attained under standard conditions with provided a penalty adjusting for k different lasso estimations is used on the other hand using a conservative penalty as in for the continuum of lasso estimations like in span would result in there is currently no proof that statement with and statement with can hold simultaneously under conditions standard in the econometrics literature it is interesting to note that the requirements to satisfy assumption are essentially pointwise bounds on the predictive performance of a set of lasso estimations along with a uniform bound on the identity of selected covariates by contrast prove uniform bounds on lasso estimations along with pointwise bounds on the identity of selected covariates in practice verification of the condition in assumption could be potentially very useful this would allow the researcher to use a penalty level which is smaller by a factor of k and would ultimately allow more robustness without increasing variability of the final estimator for the choice of penalty parameters given in appendix a for the span option conditions of assumption can be verified under further regularity conditions like those given in or to yield furthermore condition b k mentioned on the previous of assumption can be verified if an option like page is used most importantly assumption serves a plausible model selection condition which is sufficient for proving the results that follow the next assumption describes moment conditions needed pn by applying certain laws of large numbers for instance for the quantities qjl zi assumption moment conditions the following moment conditions hold e qjl z bk pk x pk z is bounded away from zero 
uniformly in k l e z is bounded uniformly in l e qjl z is bounded away from zero uniformly in l e z is bounded uniformly in the first statement of the assumption may also be seen as a stricter identifiability condition condition on the residual variation pk x pk z it rules out situations where for instance x qjl z note that e bk pk x pk z is given by the identifiability assumption no direct assumption damian kozbur is needed about the corresponding third moment e bk pk x pk z since instead a reference to the bound k is used the final assumption before the statement of theorem are rate conditions assumption rate conditions the following rate conditions hold k o log kl o k k k o k k k k log l k o k log l k k k o log l k k o the first statement ensures that the sparse eigenvalues remain in the with high probability over sets whose size is larger that the selected covariates the second statement is used in conjunction with the above moment conditions to allow the use of moderate deviation bounds following the third and fourth conditions are assumption on the sparse approximation error for q l z the final two assumptions restrict the size of and k and quantities depending on and relative to these assumptions can be unraveled for certain choices of dictionaries for example as was noted above and by for k can be taken to be o k using the simple option gives and then the conditions can be reduced to o k o k log l o the first result is a preliminary result which gives bounds on convergence rates for the estimator gb they are used in the course of the proof of theorem below the main inferential result of this paper the proposition is a direct analogue of the rates given in theorem of which considers estimation of a conditional expectation without model selection over a conditioning set the rates obtained in proposition match the rates in to state it let be the distribution function of the random variable x in addition let k bk pk x theorem under assumptions or and the double selection estimate gb for the function satisfies the following bounds z b g x x x op k k g op k k k the next formal results concern inference for a recall that is estimated by a b g and inference is conducted via the estimator vb as described in earlier sections assumption moments for asymptotic normality e z is bounded for some var z is bounded away from zero note that the conditions in require only that e z is bounded the strengthened condition is needed for consistent variance estimation in order to construct a bound on the quantity the following assumptions on the functional a are imposed they are regularity assumptions that imply that a attains a certain degree of smoothness for example they imply that a is differentiable additively separable assumption differentiability for a the real valued functional a g r is either linear or the following conditions hold k k there is a linear function d g that is linear in g and such that for some constants c and all with it holds that g a d g c and g d g the function d is related to the functional derivative of a the following assumption imposes further regularity on the continuity of the derivative for shorthand let d g d g the next rate conditions is used to ensure that estimates are undersmoothed the rate condition ensures that the estimation bias which is heuristically captured by k converges to zero faster than the estimation standard error assumption undersmoothing rate condition k o the next rate condition is used in order to bound quantities appearing in the proof of theorem as 
was demonstrated in the case of assumption the rate conditions can be unraveled for certain choices of k pk and assumption rate conditions for asymptotic normality k k k k k k log l o k log l k k o log l k k o k k k k o k k o the final two conditions divide the cases considered into two classes the first class covered by assumption are functionals which fail to be differentiable and therefore can not be estimated at the parametric rate the second class covered by assumption does attain the rate one example with the functional of interest is evaluation of g at a point a g g in this case a fails to be estimated that the parametric rate in general circumstances r a second example is the weighted average derivative a g w x x for a weight function w which satisfies regularity conditions the assumption holds if w is differentiable vanishes outside a compact set and the density of x is bounded away from zero wherever w is positive in this case a g e x g x for x x x by a change of variables provided that x is continuously distributed with non vanishing density these are one possible set of sufficient conditions under which the weighted average derivative does achieve assumption regularity for a in absence of differentiability there is a constant such that g there is dependent on k such that for x p x k it holds that e x and d assumption conditions for there is x such that e x finite and nonzero and such that d g e x g x and d pkk e x pkk x for every there is such that e x p x k nally the matrix e x var z is finite and nonzero theorem now establishes the validity of standard inference procedure after model selection as well as validity of the plug in variance estimator damian kozbur theorem under assumptions or the double selection estimate for the function satisfies op k in addition d v n and d vb n under assumptions or and d n and p v simulation study the results stated in the previous section suggest that double selection series estimation should exhibit good inferential properties for additively separable conditional expectation models when the sample size n is large the following simulation study is conducted in order to illustrate the implementation and study the performance of the outlined procedure the simulation study is divided into two parts the first part compares several alternative estimators to double selection the second part compares several double selection estimates using different choices for this part demonstrates finite sample benefits from using the span option relative to the direct generalization of selection estimation ie using the simple option the following process generates the data in each simulation y x z x sin sin z z l l j zj n corr n x stair z v fn stair z v fn l j v n stair tanh tanh b the study performs simulations for n two settings for the parameter l are considered l and l finally the sparsity level is set to within each data generating process simulation replications are performed additively separable the data generating process is quite complicated it is designed in order to create correlations between the covariates z and various transformations of x this allows the data generating process to highlight many different statistical problems which can arise using double selection and alternative estimation techniques all in one simulation study despite the complicated formulas for the joint distribution of x and z their realizations appear natural scatter plots of one sample of n showing the respective bivariate distributions between and x are provided in 
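For the weighted average derivative functional mentioned above, a(g) is the integral of w(x) times the derivative of g, and it can be evaluated for a fitted series estimate by numerical quadrature. The sketch below is illustrative only: the fitted function, the compactly supported weight, and the integration grid are stand-ins rather than the paper's choices.

```python
# Minimal sketch: weighted average derivative a(g_hat) = integral of
# w(x) * g_hat'(x) dx for a fitted series estimate, via the trapezoid rule.
import numpy as np

def weighted_average_derivative(g_hat, w, lo, hi, num=2001):
    x = np.linspace(lo, hi, num)
    dg = np.gradient(g_hat(x), x)                      # numerical derivative of g_hat
    integrand = w(x) * dg
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

# Example: g_hat a cubic fit, w a smooth weight vanishing outside [-1, 1].
beta = np.array([0.0, 1.0, -0.5, 0.2])                # illustrative series coefficients
g_hat = lambda t: np.polyval(beta[::-1], t)            # 0.2 t^3 - 0.5 t^2 + t
w = lambda t: np.where(np.abs(t) <= 1, (1 - t**2) ** 2, 0.0)
print(weighted_average_derivative(g_hat, w, -1.0, 1.0))
```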
figure figure provides a picture of the graph of the simulations evaluate estimation of and of defined by e x in order to avoid further complications for each replication the expectation and thus true are calculated against the empirical distribution of x within that simulation the first part of the simulation study considers the performances of five for and each estimator is a reduced series estimator based on initial dictionaries consisting of a cubic spline expansion pk x for x and a linear expansion q l z z for z oracle estimator is infeasible and sets z this estimator serves as a benchmark for comparison to estimates in which the correct support is known span double estimator selects z using double selection with given by the span option as described in this paper naive estimator selects z in one model selection step by performing lasso of y on q l z ols estimator uses z z in other words this estimator does not reduce the dictionary this estimation strategy is only calculated provided l targeted undersmoothing estimator implements an alternative inferential procedure for dense functionals of parameters tu this procedure was proposed in and is described further below possibility is to calculate against the population expectation of x under the assumption that the researcher knows the population distribution of x this causes no further complication if the distribution of x is unknown and estimated this must however be taken into account there are likely other sensible estimators beyond the considered in the simulation section as pointed out by an anonymous reviewer such estimators may include propensity score matching on a continuous variable though such an approach may work well the context here is not exactly the same as usually seen in propensity score matching in particular the assumptions here do not require unconfoundedness conditions in addition propensity score techniques are most commonly applied to discrete treatement variables there is some work on propensity score matching with a continuous treatment for example see who require the estimation of the conditional density of treatment in the setting estimating the conditional density of x given z would likely introduce complications beyond the scope of this paper damian kozbur detailed implementation descriptions are provided in appendix a for each of the above estimators the choice of pk x is made using a rule first an initial dictionary reduction q initial z is selected for oracle q initial z for the span double and naive estimators q initial z is based on lasso of y on q l z as implemented in appendix a for ols q initial z z next bic is used to choose a expansion pk x comparison of estimators is standard in the selection econometrics literature the oracle estimator should be seen as a benchmark which is known to provide good estimates if the true set was known the naive estimator is expected to perform poorly since it is not a uniformly valid estimator and susceptible to arising from model selection mistakes ols is expected to perform poorly due to potential problems related to overfitting estimator is a procedure called targeted undersmoothing which looks to correct distortions in inference from model selection mistakes targeted undersmoothing appends covariates which significantly affect the value of the functional a b g to an initially selected model see it is appropriate for functionals of highdimensional models which depend on a growing number of parameters dense functionals and is therefore a potentially sensible 
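The rule of thumb described above for choosing the x-expansion, namely fixing an initial reduced dictionary in z and then selecting the spline dimension by BIC, is easy to express in code. In the sketch below `make_spline_basis` is a hypothetical basis constructor (for instance the cubic-spline dictionary sketched with the appendix material further down), and the BIC formula used is the standard Gaussian one; both are assumptions of the illustration.

```python
# Minimal sketch: choose the x-dictionary dimension K by BIC, holding fixed
# an initial reduced z-dictionary Q_init obtained from a preliminary lasso.
import numpy as np

def choose_K_by_bic(y, x, Q_init, make_spline_basis, K_grid):
    n = len(y)
    best_K, best_bic = None, np.inf
    for K in K_grid:
        P = make_spline_basis(x, K)
        design = np.column_stack([P, Q_init])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = np.sum((y - design @ coef) ** 2)
        bic = n * np.log(rss / n) + design.shape[1] * np.log(n)
        if bic < best_bic:
            best_K, best_bic = K, bic
    return best_K
```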
procedure for inference for this estimator is detailed further in appendix a the simulation results report several quantities which measure the performance b bias of each estimator the results report standard deviation of the estimates of the estimates for confidence interval length for estimates for rejection frequencies under the null for at the level mean number of series terms k used mean number of series terms selected from the original l and integrated squared error for the simulation results are reported in figure for l and figure for l the figures display the above mentioned simulation results for each n with n changing over the horizontal note also that across some of the estimators some of the reported quantities will be identical for example the point estimates for tu are identical to the naive point estimates the selected k is identical for the naive estimates as well as the double selection estimates in all of the simulations the double selection estimates behave similarly to the oracle estimates the ols estimates have wide confidence intervals relative to the double selection estimation but have similar coverage properties the final estimator targeted undersmoothing tu is conservative in terms of coverage with substantially larger intervals in every case on the other hand the naive estimator has poor coverage properties for the naive estimator after failing to control for the correct covariates the increase in k leads to an increasing bias this highlights the fact that simply producing undersmoothed estimates of by increasing k may not be adequate for reducing bias and making quality statistical inference possible in the setting that since s the magnitude of coefficients l and the joint distribution between relevant covariates are all fixed in the simulations as n therefore for sufficiently large n all relevant covariates would be identified with high probability and all of the selection estimators would perform similarly this simulation study therefore is identifying differences in finite sample performance additively separable figure simulation results this figure presents simulation results for the estimation of and in the cases n with and l according to the data generating process described in the text estimates are presented for the five estimators oracle double pnd span naive ols and targeted undersmoothing tu as described in the text the first plot shows standard deviation of the respective estimates for the second plot shows bias of the estimates for the third plot shows confidence interval length for estimates for the fourth plot shows rejection frequencies under the null for for a level test the fifth plot shows the mean number of series terms k used the sixth plot shows the mean number of series terms from l selected the seventh plot shows root mean integrated squared error for figures are based on simulation replications n is always indexed by the horizontal axis damian kozbur figure simulation results this figure presents simulation results for the estimation of and in the cases n with and l according to the data generating process described in the text estimates are presented for the four estimators oracle double pnd span naive and targeted undersmoothing tu as described in the text the first plot shows standard deviation of the respective estimates for the second plot shows bias of the estimates for the third plot shows confidence interval length for estimates for the fourth plot shows rejection frequencies under the null for for a level test the fifth plot 
shows the mean number of series terms k used the sixth plot shows the mean number of series terms from l selected the seventh plot shows root mean integrated squared error for in each plot the horizontal axis denotes sample size figures are based on simulation replications n is always indexed by the horizontal axis additively separable the second part of the simulation study compares four double selection estimators which use different specifications for span double estimator is identical to the span double estimator in the first part of the simulation conservative span double estimator uses pk and as in the span option but in the decomposition span the penalty applied to is more conservative explicitly aimed at achieve lasso performance bounds which hold uniformly over all of simple double estimator uses pk as in the span but uses simple alternative spline basis simple double estimator uses a different basis for selection a qr decomposition is applied to p in order to obtain orthonormal columns next simple is used on the new orthogonalized data importantly the new p spans the same linear space in rn as in the previous estimators the estimates for the second part of the simulation are presented in figures note that all estimators are identical with regards to k hence only one curve is visible in the corresponding plots in addition the conservative span and span estimators have very similar performance in terms of standard deviation bias interval length rejection frequency and integrated squared error the two estimators are practically indistinguishable except in terms of the number of elements of q l they select they do not give numerically identical estimates or confidence intervals however their differences are too small to be seen in figures there are noticeable differences in the performance of the estimators the span option is able to identify the highest number of relevant covariates followed by the conservative span option the simple option and the alternative spline basis simple option the span conservative span and simple double selection procedures exhibit favorable finite sample properties for this data generating process in particular for those estimators the calculated rejection frequencies move towards as n increases by contrast the alternative spline basis simple double selection procedure has very poor finite sample performance it is unlikely that the projection of the new orthogonalized basis onto q l has a good sparse representation this causes increased model selection mistakes in the first stage unlike in the partially linear model these mistakes can accumulate to cause more severe bias since the number of first stage selection steps is growing with note that the alternative spline basis estimator has similar performance to the naive estimator in the first part of the simulation study the span and the conservative span options offer an opportunity to potentially add additional robustness these options select more variables than the simple option there is no evidence from this simulation study that using the span option conditioning variables to the extent that rejection frequencies become severely distorted or variability increases to an undesirable level damian kozbur figure simulation results this figure presents simulation results for the estimation of and in the cases n with and l according to the data generating process described in the text estimates are presented for four double selection pnd estimators simple span conservative span and alternative spline simple 
as described in the text the first plot shows standard deviation of the respective estimates for the second plot shows bias of the estimates for the third plot shows confidence interval length for estimates for the fourth plot shows rejection frequencies under the null for for a level test the fifth plot shows the mean number of series terms k used the sixth plot shows the mean number of series terms from l selected the seventh plot shows root mean integrated squared error for in each plot the horizontal axis denotes sample size figures are based on simulation replications n is always indexed by the horizontal axis additively separable figure simulation results this figure presents simulation results for the estimation of and in the cases n with and l according to the data generating process described in the text estimates are presented for the four double selection pnd estimators simple span conservative span and alternative spline simple as described in the text the first plot shows standard deviation of the respective estimates for the second plot shows bias of the estimates for the third plot shows confidence interval length for estimates for the fourth plot shows rejection frequencies under the null for for a level test the fifth plot shows the mean number of series terms k used the sixth plot shows the mean number of series terms from l selected the seventh plot shows root mean integrated squared error for in each plot the horizontal axis denotes sample size figures are based on simulation replications n is always indexed by the horizontal axis damian kozbur figure simulation study this figure depicts the function used in the simulation study figure simulation study joint covariate distribution this figure depicts the joint distribution between x and the first covariates as described in the above text the plots are generated by one sample of size n additively separable figure gdp growth results empirical example gdp growth this section applies double selection to an international economic growth example the data comes from the barro and lee dataset which contains a panel of countries for the period of to this example was also considered in who apply lasso techniques in the context of a highdimensional linear model for the purpose of locating important variables which are predictive of gdp growth rates this considers growth in gdp per capita as a dependent variable y for the period the growth rate in gdp over a period from to is commonly defined as log studying the factors that influence growth in gdp is a problem of central importance in economics a difficulty with studying this problem empirically on a level is that the number of observations is limited by the total number of countries at the same time the number of potential factors which influence gdp growth can be large this leads naturally to the need to regularize econometric estimation on any data on a of countries this example specifically studies the relation between initial gdp level and subsequent gdp growth in the presence of a large number of other determinants of gdp growth the interest in studying this particular question is in testing the fundamental macroeconomic theory of convergence convergence predicts that countries with high initial gdp will show lower levels of gdp growth and conversely countries with low initial gdp will show higher levels of gdp growth there are many references for assumptions which imply such convergence see and references therein this analysis considers a model with p covariates which 
allows for a total of n complete observations since p is comparably large relative to n dimension reduction in this setting is necessary the goal here is to select a subset of these covariates and briefly compare the resulting to predictions made in the growth literature see and contain complete definitions and discussion of each of these variables the estimated model is given by the specification damian kozbur yi log gdpi log gdpi zi where log gdpi denotes the sample mean the observed covariates enter linearly so that the expansion zi is assumed the estimation is performed using cubic splines as detailed in appendix is normalized so that estimates of several average derivatives of the effect of initial gdp on gdp growth are constructed using postnonparametric double selection and are presented in table in addition a scatter plot of the primary variables of interest as well as an estimate of are shown in figure a nonlinear specification for allows testing of several hypotheses related to the convergence of gdp these include the hypothesis that conditional convergence can depend on initial gdp this is related to the idea of a poverty trap where countries with smaller initial gdp exhibit less convergence ie the relationship between initial gdp and gdp growth may be locally flat see the reference text for additional background and details conditional convergence could also imply that at the high end of the initial gdp distribution gdp growth is locally flat the existence of conditional convergence based on initial gdp can be tested by using a nonlinear specification for in order to study the overall convergence the data is divided into quartiles an average derivative is then estimated within each quartile in addition an overall average derivative is estimated over the support of all initial gdp observations the respective average derivatives are then compared estimates based on double selection are presented in table the estimate for the overall weighted average derivative is std err p the estimate is negative and statistically significant this result is consistent with convergence theory in addition the average derivative is calculated for various smaller ranges of initial gdp the empirical distribution of initial gdp is divided into quartiles estimates for the weighted average derivatives are calculated within each quartile the estimated average derivatives are std err p for std err p for std err p for std err p for the test of the hypothesis that the average derivative in is equal to the average derivative over rejects the null at the level p t stat the test of the hypothesis that the average derivative in is equal to the average derivative over fails to reject the null at the level p t stat the overall average derivative estimate is negative and statistically significant these estimates also agree with and thus support the previous findings reported in which relied on reasoning for covariate selection in addition the analysis supports the claim that conditional convergence is nonlinear in initial gdp being flatter for countries with lower initial gdp are calculated against a alternative for the null that the average derivative is additively separable table estimation results for gdp example estimates average derivative average derivative additional selected variables life expectancy average schooling years in female population over age infant mortality rate female gross enrollment ratio for secondary education male gross enrollment ratio for secondary education total fertility rate 
population proportion under additional hypothesis tests deriv deriv deriv deriv note double selection estimates with b basis k conclusion this paper considers the problem of selecting a conditioning set in the context of nonparametric regression convergence rates and inference results are provided for series estimators of a primary component of interest in additively separable models with conditioning information the finite sample performance of several double selection estimators are evaluated in a simulation study overall the proposed span option has good estimation and inferential properties in the data generating processes considered damian kozbur appendix a implementation details lasso implementation details lasso implementation given penalty in every case penalty loadings j are chosen as described in with one small modification the procedure suggested in requires an initial penalty loadings which are constructed using initial estimates yi followed by an iterative of regression residuals their suggestion is to use i procedure here instead are taken as the linear regression residuals after i regressing the outcome v on the most marginally correlated qjl ie the which have the highest v d qjl z such modification was also used in penalty level choice for single outcome in every case when a single outcome variable is considered in isolation this includes the reduced form selection step and the selection step corresponding to lasso is implemented with penalty as described in for ease of reference note that suggest given by fn where classo are tuning parameters in every instance in this paper classo and are used penalty level choice for simple in this case k lasso regressions are run simultaneously in this case for all is given by fn where classo and are used penalty level choice and implementation for span when the span option is used span is decomposed span each component has a corresponding penalty level applied to all within that component on the first component fn where classo and on the second component fn where classo and on the third component fn where classo and the following procedure is used for approximating in the case that a component of contains a continuum of test functions for each j l a lasso regression which is more likely to select qjl z than other specifically for each j is set to the linear combination of pkk with highest marginal correlation to qjl then the approximation to the first stage s model selection step proceeds by using x in place of penalty level choice for when the conservative span option is used is decomposed each component again has a corresponding penalty level applied to all within that component on the first component fn where classo and on the second component fn where classo and on the third component fn where classo and in order to approximate the variables selected on the continuum of lasso estimates indexed by the identical procedure with the span option above is used note that the only difference between the conservative span option and the span option is in additively separable pk implementation details in every simulation and in the empirical example pk is constructed using a cubic expansion for fixed k the approximating dictionary is chosen according to the following procedure knots points are chosen according to the following rule set tmax and tmin let tk for constants set k for k k the constants serve to insert more knot points where the density of x is higher the choices for are determined uniquely by the condition that and that the 
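The lasso implementation details above describe data-driven penalty loadings built from an initial set of residuals and then refined iteratively. A minimal sketch of that idea follows; the penalty level, the number of initial regressors, and the number of iterations are illustrative placeholders rather than the tuning constants of the paper, and scikit-learn's Lasso with rescaled columns stands in for a lasso with coordinate-specific penalty loadings.

```python
# Minimal sketch of iteratively refined penalty loadings for a
# heteroskedasticity-robust lasso of an outcome v on the dictionary Q.
import numpy as np
from sklearn.linear_model import Lasso

def heteroskedastic_lasso(v, Q, alpha=0.1, n_init=5, n_iter=3):
    n, L = Q.shape
    # Initial residuals: regress v on the most marginally correlated columns of Q.
    corr = np.abs([np.corrcoef(Q[:, j], v)[0, 1] for j in range(L)])
    top = np.argsort(corr)[-n_init:]
    coef0, *_ = np.linalg.lstsq(Q[:, top], v, rcond=None)
    resid = v - Q[:, top] @ coef0
    beta = np.zeros(L)
    for _ in range(n_iter):
        # Loadings l_j = sqrt( (1/n) sum_i q_ij^2 * resid_i^2 ).
        loadings = np.maximum(np.sqrt(np.mean(Q**2 * resid[:, None]**2, axis=0)), 1e-8)
        Qs = Q / loadings                      # rescaling makes a common penalty act
        fit = Lasso(alpha=alpha, fit_intercept=False).fit(Qs, v)   # like weighted loadings
        beta = fit.coef_ / loadings
        resid = v - Q @ beta
    return beta
```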
endpoints satisfy tmin and tmax next the formulation used here is given by the recursive formulation set x set for k outside of k in addition for spline order o x x tk bk bk o x tk set k x k x x x the dictionary is completed by adding the additional terms k x x k x pk k x b is chosen according to the following procedure first an initial set of terms k q initial z q l z is selected in each case q initial z contains the terms irf that is the terms selected in a lasso regression y on q l z next an initial value b c is chosen to minimize bic using pk x q initial z in the simulation k b is constrained to be finally in order to ensure undersmoothing k b study k b b is set to k b n targeted undersmoothing implementation details the following procedure is used to estimate the targeted undersmoothing tu specifically c k i be tu see confidence intervals for for each i p let ci the corresponding confidence interval for using k terms and the components of q l corresponding to i then the full tu confidence interval is defined by the convex cb hull of ci k irf j in this implementation a truncated tu confidence incb terval is calculated instead ci this is done be the simulation k irf j run time reduces to the order of a day from the order of a month and therefore helps facilitate easier replicability changing the code to calculate the full tu confidence intervals is trivial this also highlights that computing speed is another advantage of the double procedure relative to tu in certain settings in terms of approximation error the full tu estimator was implemented for the case n p for replications the full tu confidence intervals as well as the truncated tu confidence intervals each made false rejections in addition the average interval length for the full tu intervals was while the average interval length for the truncated tu intervals was therefore the truncated and full tu confidence intervals show very similar performance in this instance damian kozbur appendix b proofs preliminary setup and additional notation throughout the course of the proof as much reference as possible is made to results in this is done in order to maximize clarity and to present a better picture of the overall argument in many cases appealing directly to arguments in is possible because many of the bounds required for deriving asymptotic normality for series estimators depend only on properties of gb pk and less direct appeal to bounds in the original selection argument is possible since those arguments do not track k and do not have notions of quantities stemming from like however the main idea of decomposing pk into components in the span of and orthogonal to q l remains as a theme throughout the proofs for any function let x denote the vector xn similarly let z zn in addition define the following quantities let m be the n k matrix m pk z z pkk z let w p m b p p let let e w w let w w let m be partitioned m mk let w be partitioned m wk for any let q l l let ry q l l for any let x l let uy y l let f v let x be the function such that z f m let ma f m let wa x ma assume without loss of generality that bk idk the identity matrix of order the reason this is without loss of generality is that dictionary pk is used only in the estimation while is used for first stage model selection in addition assume without loss of generality that idk throughout the exposition there is a common naming convention for various regression coefficients quantities of the form i always denotes the sample regression coefficients from regressing the variable 
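The construction of the cubic-spline dictionary described above places knots so that more of them fall where the density of x is higher, which is closely mimicked by placing knots at empirical quantiles. The sketch below is a simplified stand-in: it uses the truncated-power basis rather than the recursive B-spline formulation in the text (the two span the same space of cubic splines), and the knot rule is the plain quantile rule rather than the paper's exact constants.

```python
# Minimal sketch of a cubic-spline dictionary in x with quantile-based knots.
import numpy as np

def cubic_spline_dictionary(x, K):
    """Return an (n, K) dictionary: 1, x, x^2, x^3, (x - t_1)_+^3, ..."""
    n_knots = max(K - 4, 0)
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1]) if n_knots else np.array([])
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - t, 0, None) ** 3 for t in knots]
    return np.column_stack(cols[:K])

# Example: a K = 8 dictionary for 500 draws of x.
x = np.random.default_rng(1).standard_normal(500)
P = cubic_spline_dictionary(x, 8)
print(P.shape)   # (500, 8)
```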
v on the components specified by i this implies that the quantities l l are equivalent since the specified components being regressed on are the same in addition x are equivalent next quantities of the form l and l without a hat accent are population quantities and are defined in the text above additively separable preliminary lemmas lemma under the assumptions of theorem wk op log kl op log l krm op kk w op k kmmk op k k log l k z op k k log l k l op k k k log l k l op k k k log l k op k log kl op k log l wa op log kl op kk krm a kmma op k k log l k l op k k k log l k op k log kl krm w kf op k k k proof statement by lemma of wk are sufficient for j e z e qjl z wik two conditions which together qjl zi wki op log kl are that o k and the rate condition log kl o k n note that e qjl z wik is bounded away from zero by assumption in addition by s inequality e z e z k damian kozbur this implies that the first condition holds the second condition is given in the assumptions statement follows similarly as statement statement this statement follows directly from the fact that e z e z bounded along with dim rm e k and krmk o allowing the use of the chebyshev inequality p statement w k i i wi o k by the facts that o and kwi k statement first note that the following two hold for any z m l l for any g linspan pk and any corresponding expansion g rg with r g z max l l krg krg z to show the first of the above two statements for each note that z z m z z m l p x m l m l m l m l l m l z m l x x mp l x x z m l l this establishes the first claim now turn to the second claim note that using the density assumption there are and a vector such that g rg for some remainder rg sufficiently small then g z x z rg z next looking at each in the above expansion ie each and combining the above expression gives additively separable g z m l m l rg z g applying s inequality and the fact that m is a projection and hence gives the bound max l l krg krg z these can then be applied directly to kmmk under assumption note that for mk the corresponding and rmk satisfy o k and krmk o k then we have the bound g z op k k log l k under assumption note that for each mk taking and rmk are feasible by assumption the result follows statement z km l l kmq l l l km l l l l y z z the first two terms above y z z are op k k log l k by the same reasoning as statement in addition n o by assumption this gives z op k k log l k damian kozbur statement l l l l op l l k op kpmk l op kmk mmk l op k mmk rmk op o op k o op op k k log l k op k k k log l k statement proven analogously to statement statement max max k wk max k wk max wk op k log kl statement proven analogously to statement statements proven analogously to statements statement n x rm i f x kw rmk k x k k k krm k additively separable by the density assumption krmk k this then implies that n x k k rm i f lemma kw pw kf mmkf mw kf z k kw z k kw k k z z proof statement kw pw kf x k x k k k x k x x kw pw kf damian kozbur statement mmkf x k mmk k x x mmk k mmkf statement mw kf pw kf krm w l w pw kf krm w l w w kf krm w l w kf krm w kf k l w kf w kf while then the first term in the last line is bounded above as krm the second term has k l w kf x l k x l k l x x therefore mw kf statement z z k max k additively separable statement kw z kw z w z kw z w z kw w l w z kw z k z w k max z l wk k statement x pe kw pw x k k e x k kw pekf k statement pekf krm e l w pekf krm e l w krm e l krm k l then the first term in the last line is bounded above as krm turning to the second term k l x l e x l k therefore k 
damian kozbur statements the argument is identical to the argument for statements adjusting appropriately for the fact that ma is rather than the following corollaries follow directly from assumed rate conditions and the above bounds these are used in the proof of theorems and corollary under the assumptions of theorem op k k op k k corollary under the assumptions of theorem k k op op proof of theorem lemma b op k k op b op k k o proof the argument in theorem of gives the bound b op k k next using the decomposition p m w write m w m m w w w idn m m b f kw by triangle inquality bounds for each of the three above terms are established above along b f op the last statement holds with the assumed rate conditions give by applying an expansion of the matrix inversion function around idk b idk idk b idk idk b idk b the sum given above is with probability absolutely convergent relative to the b frobenius norm in addition by the bound k k kf we have b b b idk idk kf kidk kidk op k k n o note that since has minimal eigenvalues bounded from below by assumption it b and are invertible with probability approaching the reference follows that b and later uses the fact that this works on the event event has probability this fact is used several times however its use is only implicitly in reference to arguments in b p op k lemma proof b p b kp b kw kw op k op by arguments in bounds for follows from the previous lemmas and from the assumed rate conditions additively separable b p m x p k op k lemma proof b p m x p k b p m x p x p mp op x p k z p k op k b p m by assumption on x p k and idempotency of mp mp p mmp b p z op lemma b has eigenvalues bounded below and above with probability approaching proof then b p z op kp z op k m w z op k m w z op z kw z op op op lemma k op k k b p m b x b p me proof note that g k i b p k n p z triangle inequality in conjuction with the bounds described in the previous three lemmas give the result the final statement of theorem follows from the bound on k using the arguments in proof of theorem recall that f v let pk x k and decompose the quantity f a b g a by f a b g a f a b g a d b g d d d b g d d lemma f d d o k proof this follows from arguments given in the proof of theorem in note that the statement does not contain any reference to random quantities lemma f a b g a g d b g d g op proof bounds on g given by theorem imply that f a b g a d b g d g op k k k op this is again identical to the reasoning given in theorem in since that references uses only a bound on g to prove the analogous result damian kozbur the last step is to show that f d b g d n lemma f d b g d n proof note that d b g can be expanded k b p my d b g d p x g d pk x b p m x z e d pk x b x z e d pk x b p m x z e b p x b p z b p me in addition d d pk x k k gives b p x k f d b g d f b p m z e f the above equation gives a decomposition of the right hand side into two terms which are next bounded separately before proceeding note that the followb op b op kf ing bounds kf o kf kf a op kf a o all hold by arguments in consider the first term b p x k f nf p p m g p b p x p k kf b kf a p n max xi xi b kf a n max xi xi b kf a op op nk op b p m z to handle this term first bound next consider f b p m z e f b kp m z e kf b kp m z e kf b k m w m z e kf b kf b kf op additively separable next consider the last remaining term for which a central limit result will be shown nf p m z e nf w m m z e n f w ma m z e nf w me m z e z nf w e nf w pe m z e z nf w op bound in the equation array above holds by the fact that note that the 
w nm m h z nw z nf a a the term nf a w satisfies the conditions central limit theorem by arguments given in the previous three lemmas prove that f a b g a n the next set of arguments bound vb for as in the statement of assumption b and u b af define the event ag g define u b g p af in addition define i wi wi an infeasible sample analogue of lemma b op ka kb u op op op proof statement in the case that a g is linear in g then a b a therefore consider the case that a g is not linear in using arguments a identical to those in with probability and b c k ka g statement this follows from arguments in statement this follows from arguments in statement an immediate implication of statement is that u b kb op op lemma zi b h zi op proof first note that max zi b h zi max zi q l zi l i i max l zi h q l zi l i damian kozbur the first term has the bound maxi zi q xi op by assumption next max l zi h q l zi l s max l zi h l s i i l max kq zi k h l i then k h l l l l next i k k k k x gb x x gb x k x gb x x gb x n x max n zi x gb x k j op k op op k k k putting these together it follows from the assumed rate conditions that max zi b h zi op i next let i xi gb xi and i zi b h zi then above lemma states i op in addition i g op let wi u and u wi u lemma vb f u op proof b b vb f u n x ci w u u w n x b n x u wi u n x b b additively separable both terms on the right hand side will be bounded consider the first term expanding b gives n x n x b n x n x n x n x pn note that op by arguments is in the five terms above are then bounded in order of their appearence by n n x x max op op n x max n x n x max max n x n x op op op op max op op max n x n x n x op op the second term is bounded by n x ci w wi b u b w u max n x ci w wi b u b w n x ci w wi max b max k w b max max kb op op op op k k op where the last bounds come from the rate condition in assumption and op by op these results give the conclusion that d vb f vb f n calculations which give the rates of convergence in each of the cases of assumption or of assumption as well as the proof of the second statement of theorem use the same arguments as in this concludes the proof damian kozbur references aghion and howitt the economics of growth mit press donald andrews and whang additive interactive regression models circumvention of the curse of dimensionality econometric theory donald andrews asymptotic normality of series estimators for nonparametric and semiparametric regression models econometrica bai and ng forecasting economic time series using targeted predictors journal of econometrics bai and ng boosting diffusion indices journal of applied econometrics barro and lee data set for a panel of countries nber http robert barro and lee losers and winners in economic growth working paper national bureau of economic research april belloni chen chernozhukov and hansen sparse models and methods for optimal instruments with an application to eminent domain econometrica arxiv belloni and chernozhukov least squares after model selection in sparse models bernoulli arxiv belloni chernozhukov and hansen program evaluation and causal inference with data econometrica belloni chernozhukov and hansen lasso methods for gaussian instrumental variables models arxiv http belloni chernozhukov and hansen inference for sparse econometric models advances in economics and econometrics world congress of econometric society august alexandre belloni and victor chernozhukov high dimensional sparse econometric models an introduction pages springer berlin heidelberg berlin heidelberg alexandre 
belloni victor chernozhukov denis chetverikov and kengo kato some new asymptotic theory for least squares series pointwise and uniform results journal of econometrics high dimensional problems in econometrics alexandre belloni victor chernozhukov and christian hansen inference on treatment effects after selection amongst controls with an application to abortion on crime review of economic studies alexandre belloni victor chernozhukov christian hansen and damian kozbur inference in panel models with an application to gun control journal of business economic statistics bickel ritov and a tsybakov simultaneous analysis of lasso and dantzig selector annals of statistics and van de geer statistics for data methods theory and applications springer andreas buja trevor hastie and robert tibshirani linear smoothers and additive models ann bunea tsybakov and wegkamp sparsity oracle inequalities for the lasso electronic journal of statistics bunea a tsybakov and wegkamp aggregation and sparsity via penalized least squares in proceedings of annual conference on learning theory colt lugosi and simon eds pages bunea a tsybakov and wegkamp aggregation for gaussian regression the annals of statistics and tao the dantzig selector statistical estimation when p is much larger than ann chen economic growth robert barro and xavier pp journal of economic dynamics and control may chen o linton and nonparametric estimation of additive separable regression models in wolfgang and michael schimek editors statistical theory and computational aspects of smoothing pages heidelberg hd additively separable norbert christopeit and stefan hoderlein local partitioned regression econometrica dennis cox approximation of least squares regression on nested subspaces ann brian eastwood and ronald gallant adaptive rules for seminonparametric estimators that achieve asymptotic normality econometric theory ildiko frank and jerome friedman a statistical view of some chemometrics regression tools technometrics hansen kozbur and misra targeted undersmoothing arxiv june trevor hastie and robert tibshirani generalized additive models rejoinder statist trevor hastie robert tibshirani and jerome friedman elements of statistical learning data mining inference and prediction springer new york ny jian huang joel horowitz and fengrong wei variable selection in nonparametric additive models ann jian huang joel horowitz and fengrong wei variable selection in nonparametric additive models ann guido imbens and keisuke hirano the propensity score with continuous treatments adel javanmard and andrea montanari confidence intervals and hypothesis testing for highdimensional regression journal of machine learning research jing shao and qiying wang large deviations for independent random variables ann keith knight shrinkage estimation for nearly singular designs econometric theory koltchinskii sparsity in penalized empirical risk minimization ann inst poincar probab hannes leeb and benedikt can one estimate the unconditional distribution of estimators econometric theory qi li and jeffrey scott racine nonparametric econometrics theory and practice princeton university press princeton nj lounici convergence rate and sign concentration property of lasso and dantzig estimators electron j lounici pontil a tsybakov and van de geer taking advantage of sparsity in learning meinshausen and yu recovery of sparse representations for data annals of statistics whitney newey convergence rates and asymptotic normality for series estimators journal of econometrics 
benedikt confidence sets based on sparse estimators are necessarily large ser a mathieu rosenbaum and alexandre tsybakov sparse recovery under matrix uncertainty the annals of statistics rudelson and zhou reconstruction from anisotropic random measurements ieee transactions on information theory june mark rudelson and roman vershynin on sparse reconstruction from fourier and gaussian measurements communications on pure and applied mathematics eric and stefan sperlich estimation of derivatives for additive separable models statistics charles j stone additive regression and other nonparametric models the annals of statistics tibshirani regression shrinkage and selection via the lasso roy statist soc ser b van de geer generalized linear models and the lasso annals of statistics sara van de geer peter bhlmann yaacov ritov and ruben dezeure on asymptotically optimal confidence regions and tests for models ann damian kozbur wainwright sharp thresholds for noisy and recovery of sparsity using quadratic programming lasso ieee transactions on information theory may lijian yang stefan sperlich and wolfgang hrdle derivative estimation and testing in generalized additive models journal of statistical planning and inference zhang and huang the sparsity and bias of the lasso selection in linear regression ann zhang and stephanie zhang confidence intervals for low dimensional parameters in high dimensional linear models journal of the royal statistical society series b statistical methodology zhou restricted eigenvalue conditions on subgaussian matrices
Asymptotics for high dimensional regression M-estimates: fixed design results

Lihua Lei, Peter Bickel, and Noureddine El Karoui
Department of Statistics, University of California, Berkeley

Abstract. We investigate the asymptotic distributions of coordinates of regression M-estimates in the moderate $p/n$ regime, where the number of covariates $p$ grows proportionally with the sample size $n$. Under appropriate regularity conditions, we establish the coordinate-wise asymptotic normality of regression M-estimates assuming a fixed design matrix. Our proof is based on the second-order Poincaré inequality (Chatterjee) and leave-one-out analysis (El Karoui et al.). Some relevant examples are indicated to show that our regularity conditions are satisfied by a broad class of design matrices. We also show a counterexample, namely an ANOVA-type design, to emphasize that the technical assumptions are not just artifacts of the proof. Finally, numerical experiments confirm and complement our theoretical results.

Support from NSF grants is gratefully acknowledged. Keywords: robust regression, high-dimensional statistics, second-order Poincaré inequality, leave-one-out analysis.

Introduction. High dimensional statistics has a long history (Huber; Wachter), with considerable renewed interest over the last two decades. In many applications the researcher collects data which can be represented as a matrix, called a design matrix and denoted by $X \in \mathbb{R}^{n\times p}$, as well as a response vector $y \in \mathbb{R}^n$, and aims to study the connection between $X$ and $y$. The linear model is among the most popular models as a starting point of data analysis in various fields. A linear model assumes that
$$y = X\beta_0 + \epsilon,$$
where $\beta_0 \in \mathbb{R}^p$ is the coefficient vector, which measures the marginal contribution of each predictor, and $\epsilon$ is a random vector which captures the unobserved errors. The aim of this article is to provide valid inferential results for features of $\beta_0$. For example, a researcher might be interested in testing whether a given predictor has a negligible effect on the response, or equivalently whether $\beta_{0,j} = 0$ for some $j$. Similarly, linear contrasts of $\beta_0$, such as $\beta_{0,1} - \beta_{0,2}$, might be of interest in the case of the group comparison problem, in which the first two predictors represent the same feature but are collected from two different groups. An M-estimator, defined as
$$\hat\beta = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{n}\sum_{i=1}^{n} \rho\left(y_i - x_i^T\beta\right),$$
where $\rho$ denotes a loss function, is among the most popular estimators used in practice (Relles; Huber). In particular, if $\rho(x) = x^2/2$, $\hat\beta$ is the famous least-squares estimator (LSE). We intend to explore the distribution of $\hat\beta$, based on which we can achieve the inferential goals mentioned above.

The most common approach is asymptotic analysis, which assumes that the scale of the problem grows to infinity and uses the limiting result as an approximation. In regression problems, the scale parameters of the problem are the sample size $n$ and the number of predictors $p$. The classical approach is to fix $p$ and let $n$ grow to infinity. It has been shown (Relles; Yohai; Huber) that $\hat\beta$ is consistent in $L_2$ norm and asymptotically normal in this regime; the asymptotic variance can then be approximated by the bootstrap (Bickel and Freedman). Later on, these studies were extended to the regime in which both $n$ and $p$ grow to infinity but $p/n$ converges to $0$ (Yohai and Maronna; Portnoy; Mammen). The consistency in $L_2$ norm, the asymptotic normality and the validity of the bootstrap still hold in this regime. Based on these results, we can construct a confidence interval for $\beta_{0,j}$ simply as $\hat\beta_j \pm z_{1-\alpha/2}\sqrt{\widehat{\mathrm{Var}}(\hat\beta_j)}$, with $\widehat{\mathrm{Var}}(\hat\beta_j)$ calculated by the bootstrap; similarly we can calculate p-values for hypothesis testing procedures. We ask whether the inferential results developed under the $p/n \to 0$ assumptions, and the software built on top of them,
can be relied on for moderate and highdimensional analysis concretely if in a study n and p can the software built upon the assumption that be relied on when results in random matrix theory pastur already offer an answer in the negative side for many questions in multivariate statistics the case of regression is more subtle for instance for standard degrees of freedom adjustments effectively take care of many problems but this nice property does not extend to more general regression once these questions are raised it becomes very natural to analyze the behavior and performance of statistical methods in the regime where is fixed indeed it will help us to keep track of the inherent statistical difficulty of the problem when assessing the variability of our estimates in other words we assume in the current paper that while let n grows to infinity due to identifiability issues it is impossible to make inference on if p n without further structural or distributional assumptions we discuss this point in details in section thus we consider the regime where we call it the moderate regime this regime is also the natural regime in random matrix theory pastur wachter johnstone bai silverstein it has been shown that the asymptotic results derived in this regime sometimes provide an extremely accurate approximations to finite sample distributions of estimators at least in certain cases johnstone where n and p are both small qualitatively different behavior of moderate regime first is no longer consistent in terms of norm and the risk tends to a quantity determined by the loss function and the error distribution through a complicated system of equations el karoui et el karoui bean et this prohibits the use of standard techniques to assess the behavior of the estimator it also leads to qualitatively different behaviors for the residuals in moderate dimensions in contrast to the case they can not be relied on to give accurate information about the distribution of the errors however this seemingly negative result does not exclude the possibility of inference since is still consistent in terms of norms for any and in particular in norm thus we can at least hope to perform inference on each coordinate second classical optimality results do not hold in this regime in the regime the maximum likelihood estimator is shown to be optimal huber bickel doksum in other words if the error distribution is known then the associated with the loss log is asymptotically efficient provided the design is of appropriate type where is the density of entries of however in the moderate regime it has been shown that the optimal loss is no longer the but an other function with a complicated but explicit form bean et at least for certain designs the suboptimality of maximum likelihood estimators suggests that classical techniques fail to provide valid intuition in the moderate regime third the joint asymptotic normality of as a random vector may be violated for a fixed design matrix x this has been proved for by huber in his pioneering work for general this negative result is a simple consequence of the results of el karoui et al they exhibit an anova design see below where even marginal fluctuations are not gaussian by contrast for random design they show that is jointly asymptotically normal when the design matrix is elliptical with general covariance by using the stochastic representation for as well as elementary properties of vectors uniformly distributed on the uniform sphere in rp see section of el karoui et al or the 
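The qualitative differences just described are easy to reproduce numerically. The sketch below is an illustration of ours, not code from the paper: it computes the Huber M-estimate by iteratively reweighted least squares (the tuning constant k = 1.345 is the conventional default mentioned again in the numerical section) and shows that, at a fixed ratio p/n, the estimation error does not shrink as n grows, in line with the loss of L2 consistency discussed above. The helper name `huber_irls` is ours and is reused by later sketches.

```python
import numpy as np

def huber_irls(X, y, k=1.345, iters=30):
    """Huber M-estimate argmin_b (1/n) sum_i rho_k(y_i - x_i'b) via iteratively
    reweighted least squares; rho_k is quadratic on [-k, k] and linear outside."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                 # least-squares start
    for _ in range(iters):
        r = y - X @ beta
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))   # psi_k(r) / r for the Huber loss
        XtW = X.T * w                                           # X'W with W = diag(w)
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta

# At a fixed ratio kappa = p/n, ||beta_hat - beta_0|| stabilizes instead of vanishing.
rng = np.random.default_rng(0)
kappa = 0.3
for n in (200, 400, 800):
    p = int(kappa * n)
    X = rng.standard_normal((n, p))                             # one draw, then treated as fixed
    beta0 = np.zeros(p)
    errs = [np.linalg.norm(huber_irls(X, X @ beta0 + rng.standard_normal(n)) - beta0)
            for _ in range(20)]
    print(n, p, round(float(np.mean(errs)), 3))
```

The printed error stays of the same order across the three sample sizes, which is the moderate-regime behavior the text describes; at fixed p the same quantity would decay like n^{-1/2}.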
supplementary material of bean et al for details this does not contradict huber s negative result in that it takes the randomness from both x and into account while huber s result only takes the randomness from into account later el karoui shows that each coordinate of is asymptotically normal for a broader class of random designs this is also an elementary consequence of the analysis in el karoui however to the best of our knowledge beyond the anova situation mentioned above there are no distributional results for fixed design matrices this is the topic of this article last but not least bootstrap inference fails in this regime this has been shown by bickel and freedman for and residual bootstrap in their influential work recently el karoui and purdom studied the results to general and showed that all commonly used bootstrapping schemes including residual bootstrap and jackknife fail to provide a consistent variance estimator and hence valid inferential statements these latter results even apply to the marginal distributions of the coordinates of moreover there is no simple design independent modification to achieve consistency el karoui purdom our contributions in summary the behavior of the estimators we consider in this paper is completely different in the moderate regime from its counterpart in the regime as discussed in the next section moving one step further in the moderate regime is interesting from both the practical and theoretical perspectives the main contribution of this article is to establish asymptotic normality of for certain fixed design matrices x in this regime under technical assumptions the following theorem informally states our main result theorem informal version of theorem in section under appropriate conditions on the design matrix x the distribution of and the loss function as while n n o max dtv q var where dtv is the total variation distance and l denotes the law it is worth mentioning that the above result can be extended to finite dimensional linear contrasts of for instance one might be interested in making inference on in the problems involving the group comparison the above result can be extended to give the asymptotic normality of besides the main result we have several other contributions first we use a new approach to establish asymptotic normality our main technique is based on the secondorder inequality sopi developed by chatterjee to derive among many other results the fluctuation behavior of linear spectral statistics of random matrices in contrast to classical approaches such as the central limit theorem the inequality is capable of dealing with nonlinear and potentially implicit functions of independent random variables moreover we use different expansions for and residuals based on double ideas introduced in el karoui et al in contrast to the classical expansions see aforementioned paper and an informal interpretation of the results of chatterjee is that if the hessian of the nonlinear function of random variables under consideration is sufficiently small this function acts almost linearly and hence a standard central limit theorem holds second to the best of our knowledge this is the first inferential result for fixed non design in the moderate regime fixed designs arise naturally from an experimental design or a conditional inference perspective that is inference is ideally carried out without assuming randomness in predictors see section for more details we clarify the regularity conditions for asymptotic normality of explicitly which are 
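The informal theorem stated above can be probed directly by simulation for any given fixed design: draw many error vectors, refit, and compare the standardized first coordinate with a standard normal. A minimal sketch of ours follows (the design and error choices are illustrative only, and it assumes the `huber_irls` helper from the previous sketch is in scope):

```python
import numpy as np
from scipy import stats
# assumes huber_irls from the earlier sketch is in scope

rng = np.random.default_rng(1)
n, p, reps = 200, 50, 500
X = rng.standard_normal((n, p))                       # one fixed design, kappa = 0.25
draws = np.array([huber_irls(X, rng.standard_normal(n))[0] for _ in range(reps)])
z = (draws - draws.mean()) / draws.std()              # standardized first coordinate
print(stats.kstest(z, "norm"))                        # small statistic: close to N(0,1)
```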
checkable for lse and also checkable for general if the error distribution is known we also prove that these conditions are satisfied with by a broad class of designs the design described in section exhibits a situation where the distribution of is not going to be asymptotically normal as such the results of theorem below are somewhat surprising for complete inference we need both the asymptotic normality and the asymptotic bias and variance under suitable symmetry conditions on the loss function and the error distribution it can be shown that is unbiased see section for details and thus it is left to derive the asymptotic variance as discussed at the end of section classical approaches bootstrap fail in this regime for classical results continue to hold and we discuss it in section for the sake of completeness however for there is no result we briefly touch upon the variance estimation in section the derivation for general situations is beyond the scope of this paper and left to the future research outline of paper the rest of the paper is organized as follows in section we clarify details which are mentioned in the current section in section we state the main result theorem formally and explain the technical assumptions then we show several examples of random designs which satisfy the assumptions with high probability in section we introduce our main technical tool inequality chatterjee and apply it on as the first step to prove theorem since the rest of the proof of theorem is complicated and lengthy we illustrate the main ideas in appendix a the rigorous proof is left to appendix b in section we provide reminders about the theory of estimation for the sake of completeness by taking advantage of its explicit form in section we display the numerical results the proof of other results are stated in appendix c and more numerical experiments are presented in appendix more details on background moderate regime a more informative type of asymptotics in section we mentioned that the ratio measures the difficulty of statistical inference the moderate regime provides an approximation of finite sample properties with the difficulties fixed at the same level as the original problem intuitively this regime should capture more variation in finite sample problems and provide a more accurate approximation we will illustrate this via simulation consider a study involving participants and variables we can either use the asymptotics in which p is fixed to be n grows to infinity or is fixed to be and n grows to infinity to perform approximate inference current software rely on lowdimensional asymptotics for inferential tasks but there is no evidence that they yield more accurate inferential statements than the ones we would have obtained using moderate dimensional asymptotics in fact numerical evidence johnstone el karoui et bean et show that the reverse is true we exhibit a further numerical simulation showing that consider a case that n has entries and x is one realization of a matrix generated with gaussian mean variance entries for and different error distributions we use the ks statistics to quantify the distance between the finite sample distribution and two types of asymptotic approximation of the distribution of specifically we use the huber loss function k with default parameter k huber k k x k k k specifically we generate three design matrices x x and x x for small sample case with a sample size n and a dimension p x for asymptotics p fixed with a sample size n and a dimension p and x for 
asymptotics fixed with a sample size n and a dimension p each of them is generated as one realization of an standard gaussian design and then treated as fixed across k repetitions for each design matrix vectors of appropriate length are generated with entries the entry has either a standard normal distribution or a or a standard cauchy distribution then we use as the response or equivalently assume and obtain the repeating this procedure for k times results in k replications in three cases then we extract the first coordinate of each k k estimator denoted by k then the kolmogorovsmirnov statistics can be obtained by r r n n max x x max x x x x r r where is the empirical distribution of k we can then compare the accuracy of two asymptotic regimes by comparing and the smaller the value of ksi the better the approximation figure displays the results for these error distributions we see that for gaussian errors and even errors the approximation is uniformly more accurate than the widely used approximation for cauchy errors the approximation performs better than the moderatedimensional one when is small but worsens when the ratio is large especially when is close to moreover when grows the two approximations have qualitatively different behaviors the approximation becomes less and less accurate while the approximation does not suffer much deterioration when grows the qualitative and quantitative differences of these two approximations reveal the practical importance of exploring the asymptotic regime see also johnstone random vs fixed design as discussed in section assuming a fixed design or a random design could lead to qualitatively different inferential results in the random design setting x is considered as being generated from a super population for example the rows of x can be regarded as an sample from a distribution known or partially known to the researcher in situations where one uses techniques such as stone pairs bootstrap in regression efron statistics distance between the small sample and large sample distribution normal t cauchy kappa asym regime p fixed fixed figure axpproximation accuracy of asymptotics and asymptotics each column represents an error distribution the represents the ratio of the dimension and the sample size and the represents the statistic the red solid line corresponds to approximation and the blue dashed line corresponds to approximation efron or sample splitting wasserman roeder the researcher effectively assumes exchangeability of the data xti yi naturally this is only compatible with an assumption of random design given the extremely widespread use of these techniques in contemporary machine learning and statistics one could argue that the random design setting is the one under which most of modern statistics is carried out especially for prediction problems furthermore working under a random design assumption forces the researcher to take into account two sources of randomness as opposed to only one in the fixed design case hence working under a random design assumption should yield conservative confidence intervals for in other words in settings where the researcher collects data without control over the values of the predictors the random design assumption is arguably the more natural one of the two however it has now been understood for almost a decade that common random design assumptions in xi zi where zi j s are with mean and variance and a few moments and well behaved suffer from considerable geometric limitations which have substantial impacts on 
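The Kolmogorov-Smirnov comparison described above can be reproduced along the following lines. This is a schematic re-implementation of ours, with smaller Monte Carlo sizes than in the paper and Gaussian errors only; the two reference samples play the role of the fixed-p and fixed-kappa approximations, and the `huber_irls` helper from the earlier sketch is assumed to be in scope.

```python
import numpy as np
from scipy import stats
# assumes huber_irls from the earlier sketch is in scope

def first_coord_sample(n, p, reps, rng):
    X = rng.standard_normal((n, p))                   # one fixed Gaussian design
    draws = np.array([huber_irls(X, rng.standard_normal(n))[0] for _ in range(reps)])
    return draws / draws.std()                        # standardize for a scale-free comparison

rng = np.random.default_rng(2)
target  = first_coord_sample(50, 10, 400, rng)        # the small-sample case: n = 50, p = 10
fixed_p = first_coord_sample(400, 10, 400, rng)       # p fixed, n large
fixed_k = first_coord_sample(400, 80, 400, rng)       # p/n fixed at 0.2, n large
print("KS vs fixed-p approximation    :", stats.ks_2samp(target, fixed_p).statistic)
print("KS vs fixed-kappa approximation:", stats.ks_2samp(target, fixed_k).statistic)
```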
the performance of the estimators considered in this paper el karoui et as such confidence statements derived from that kind of analysis can be relied on only after performing a few graphical tests on the data see el karoui these geometric limitations are simple consequences of the concentration of measure phenomenon ledoux on the other hand in the fixed design setting x is considered a fixed matrix in this case the inference only takes the randomness of into consideration this perspective is popular in several situations the first one is the experimental design the goal is to study the effect of a set of factors which can be controlled by the experimenter on the response in contrast to the observational study the experimenter can design the experimental condition ahead of time based on the inference target for instance a oneway anova design encodes the covariates into binary variables see section for details and it is fixed prior to the experiment other examples include anova designs factorial designs designs etc scheffe another situation which is concerned with fixed design is the survey sampling where the inference is carried out conditioning on the data cochran generally in order to avoid unrealistic assumptions making inference conditioning on the design matrix x is necessary suppose the linear model is true and identifiable see section for details then all information of is contained in the conditional distribution l and hence the information in the marginal distribution l x is redundant the conditional inference framework is more robust to the data generating procedure due to the irrelevance of l x also results based on fixed design assumptions may be preferable from a theoretical point of view in the sense that they could potentially be used to establish corresponding results for certain classes of random designs specifically given a marginal distribution l x one only has to prove that x satisfies the assumptions for fixed design with high probability in conclusion fixed and random design assumptions play complementary roles in settings we focus on the least understood of the two the fixed design case in this paper modeling and identification of parameters the problem of identifiability is especially important in the fixed design case define in the population as n arg min yi xti n one may ask whether regardless of in the fixed design case we provide an affirmative answer in the following proposition by assuming that has a symmetric distribution around and is even d proposition suppose x has a full column rank and for all i further assume is an even convex function such that for any i and then regardless of the choice of the proof is left to appendix it is worth mentioning that proposition only requires the marginals of to be symmetric but does not impose any constraint on the dependence structure of further if is strongly convex then for all x x x as a consequence the condition is satisfied provided that is with positive probability if is asymmetric we may still be able to identify if are random variables in contrast to the last case we should incorporate an intercept term as a shift towards the centroid of more precisely we define and as n arg min yi xti n proposition suppose x is of full column rank and are such that as a function of has a unique minimizer then is uniquely defined with and the proof is left to appendix for example let z then the minimizer of a is a median of and is unique if has a positive density it is worth pointing out that incorporating an intercept term is 
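The identification issue discussed here is easy to visualize in a small simulation of our own making: with skewed, non-centered errors and predictors whose mean is not zero, omitting the intercept shifts the slope estimates, while adding an intercept column lets it absorb the error's "Huber location" so that the slopes are recovered, as the proposition suggests. The sketch assumes the `huber_irls` helper from the earlier sketch.

```python
import numpy as np
# assumes huber_irls from the earlier sketch is in scope

rng = np.random.default_rng(3)
n = 5000
beta0 = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
X = rng.standard_normal((n, 5)) + 1.0                   # predictors with non-zero mean
y = X @ beta0 + rng.exponential(1.0, size=n)            # skewed, non-centered errors

slopes_no_int = huber_irls(X, y)
slopes_int = huber_irls(np.column_stack([np.ones(n), X]), y)[1:]   # drop the fitted intercept
print("error without intercept:", np.round(slopes_no_int - beta0, 3))   # visibly shifted
print("error with intercept   :", np.round(slopes_int - beta0, 3))      # close to zero
```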
essential for identifying for instance in the case no longer equals to if proposition entails that the intercept term guarantees although the intercept term itself depends on the choice of unless more conditions are imposed if s are neither symmetric nor then can not be identified by the previous criteria because depends on nonetheless from a modeling perspective it is popular and reasonable to assume that s are symmetric or in many situations therefore proposition and proposition justify the use of in those cases and derived from different loss functions can be compared because they are estimating the same parameter main results notation and assumptions let xti denote the row of x and xj denote the column of x throughout the paper we will denote by xij r the i j entry of x by x j the design matrix x after removing the column and by xti j the vector xti after removing entry the associated with the loss function is defined as arg min n n yk xtk arg min xtk n n we define to be the first derivative of we will write simply when no confusion can arise when the original design matrix x does not contain an intercept term we can simply replace x by x and augment into a p vector t t although being a special case we will discuss the question of intercept in section due to its important role in practice equivariance and reduction to the null case is invariant to the choice of provided that notice that our target quantity var is identifiable as discussed in section we can assume without loss of generality in this case we assume in particular that the design matrix x has full column rank then yk and n xtk arg min n similarly we define the version as n j arg min xtk j n based on these notations we define the full residuals rk as rk xtk k n and the residual as rk j xtk j j k n j three n n diagonal matrices are defined as d diag rk diag rk d j diag rk j we say a random variable z is if for any r e in addition we use jn p to represent the indices of parameters which are of interest intuitively more entries in jn would require more stringent conditions for the asymptotic normality finally we adopt landau s notation o o op op in addition we say an bn if bn o an and similarly we say an bn if bn op an to simplify the logarithm factors we use the symbol polylog n to denote any factor that can be upper bounded by log n for some similarly we use polylog n to denote any factor that can be lower bounded by log n for some technical assumptions and main result before stating the assumptions we need to define several quantities of interest let t t x x x x n n be the largest resp smallest eigenvalue of the matrix canonical basis vector and xt x n let ei rn be the t t d j x j x j ei i i d j x j x j j rn j t finally let i xj xj max max max i qj cov based on the quantities defined above we state our technical assumptions on the design matrix x followed by the main result a detailed explanation of the assumptions follows and there exists positive numbers polylog n o polylog n such that for any x r x x d p x p dx x ui wi where wn n and ui are smooth functions with and for some o polylog n moreover assume mini var polylog n o polylog n and xjt qj xj tr qj polylog n polylog n o polylog n theorem under assumptions as for some while n e j j n o max dtv q var where dtv p q supa a q a is the total variation distance we provide several examples where our assumptions hold in section we also provide an example where the asymptotic normality does not hold in section this shows that our assumptions are not just artifacts of the proof 
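Several of the quantities just introduced are directly computable for a given design and a given fit, which makes the design-dependent assumptions at least partially checkable in practice. The sketch below is ours; it uses the non-smooth Huber psi, whose derivative is a step function, purely for illustration, whereas the assumptions call for a smooth strongly convex loss. It computes the full residuals, the matrix D = diag(psi'(r_k)), and the extreme eigenvalues of X'X/n and of the Hessian X'DX/n, and assumes the `huber_irls` helper from the earlier sketch.

```python
import numpy as np
# assumes huber_irls from the earlier sketch is in scope

def huber_psi_prime(r, k=1.345):
    return (np.abs(r) <= k).astype(float)               # derivative of the Huber psi

rng = np.random.default_rng(4)
n, p = 500, 100
X = rng.standard_normal((n, p))
eps = rng.standard_normal(n)
beta_hat = huber_irls(X, eps)                           # null case: beta_0 = 0

r = eps - X @ beta_hat                                  # full residuals r_k
D = np.diag(huber_psi_prime(r))                         # D = diag(psi'(r_k))

gram_evals = np.linalg.eigvalsh(X.T @ X / n)
hess_evals = np.linalg.eigvalsh(X.T @ D @ X / n)        # Hessian of the empirical loss at beta_hat
print("lambda_min, lambda_max of X'X/n:", gram_evals[0], gram_evals[-1])
print("lambda_min of X'DX/n           :", hess_evals[0])
```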
technique we developed but that there are probably many situations where asymptotic normality will not hold even discussion of assumptions now we discuss assumptions assumption implies the boundedness of the and the derivatives of the upper bounds are satisfied by most loss functions including the loss the smoothed loss the smoothed huber loss etc the lower bound implies the strong convexity of and is required for technical reasons it can be removed by considering first a and taking appropriate limits as in el karoui in addition in this paper we consider the smooth loss functions and the results can be extended to case via approximation assumption was proposed in chatterjee when deriving the inequality discussed in section it means that the results apply to nongaussian distributions such as the uniform distribution on by taking ui the cumulative distribution function of standard normal distribution through the gaussian concentration ledoux we see that implies that are thus controls the tail behavior of the boundedness of and are required only for the direct application of chatterjee s results in fact a look at his proof suggests that one can obtain a similar result to his inequality involving moment bounds on wi and wi this would be a way to weaken our assumptions to permit to have the distributions expected in robustness studies since we are considering strongly convex it is not completely unnatural to restrict our attention to errors furthermore efficiency and not only robustness questions are one of the main reasons to consider these estimators in the context the potential gains in efficiency obtained by considering regression bean et apply in the context which further justify our interest in this theoretical setup assumption is completely checkable since it only depends on x it controls the singularity of the design matrix under and it can be shown that the objective function is strongly convex the smallest eigenvalue of the hessian matrix everywhere lower bounded by polylog n assumption is controlling the left tail of quadratic forms it is fundamentally connected to aspects of the concentration of measure phenomenon ledoux this condition is proposed and emphasized under the random design setting by el karoui et al essentially it means that for a matrix qj which does not depend on xj the quadratic form xjt qj xj should have the same order as tr qj assumption is proposed by el karoui under the random design settings it is motivated by analysis note that is the maximum of linear contrasts of xj whose coefficients do not depend on xj it is easily checked for design matrix x which is a realization of a random matrix with entries for instance remark in certain applications it is reasonable to make the following additional assumption is an even function and s have symmetric distributions although assumption is not necessary to theorem it can simplify the result under d assumption when x is full rank we have if denotes equality in distribution n arg min n xti arg min xti n n n d arg min xti n this implies that is an unbiased estimator provided it has a mean which is the case here unbiasedness is useful in practice since then theorem reads n o max dtv q var for inference we only need to estimate the asymptotic variance an important remark concerning theorem when jn is a subset of p the coefficients in jnc become nuisance parameters heuristically in order for identifying one only needs the subspaces span xjn and span xjnc to be distinguished and xjn has a full column rank here xjn denotes the 
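The quadratic-form condition described above is also easy to probe by simulation for a candidate design distribution. The following sketch is our illustration for a Gaussian column; Q is an arbitrary fixed positive semi-definite matrix standing in for Q_j, and the point is that x'Qx concentrates around tr Q, including in its left tail.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 500, 10000
A = rng.standard_normal((n, n))
Q = A @ A.T / n                                        # a fixed PSD matrix, independent of x
x = rng.standard_normal((n, reps))                     # independent Gaussian columns
quad = np.sum(x * (Q @ x), axis=0)                     # x'Qx for each column
print("tr Q               :", np.trace(Q))
print("mean of x'Qx       :", quad.mean())
print("1% quantile of x'Qx:", np.quantile(quad, 0.01)) # left tail stays of order tr Q
```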
of x with columns in jn formally let t x i xjnc xjtnc xjnc xjtnc xjn n jn where denotes the generalized inverse of a and then characterizes the behavior of xjn after removing the effect of xjnc in particular we can modify the assumption by o polylog n and polylog n then we are able to derive a stronger result in the case where p than theorem as follows corollary under assumptions and as for some e j j n o max dtv q var it can be shown that and and hence the assumption is weaker than it is worth pointing out that the assumption even holds when xjcn does not have full column rank in which case is still identifiable and is still although and are not see appendix for details examples throughout this subsection except subsubsection we consider the case where x is a realization of a random matrix denoted by z to be distinguished from x we will verify that the assumptions are satisfied with high probability under different regularity conditions on the distribution of z this is a standard way to justify the conditions for fixed design portnoy in the literature on regression mestimates random design with independent entries first we consider a random matrix z with entries proposition suppose z has entries with var zij for some o polylog n and polylog n then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p in practice the assumption of identical distribution might be invalid in fact the assumptions and the first part of o polylog n are still satisfied with high probability if we only assume the independence between entries and boundedness of certain moments to control we rely on litvak et al which assumes symmetry of each entry we obtain the following result based on it proposition suppose z has independent entries with d zij var zij for some o polylog n and polylog n then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p under the conditions of proposition we can add an intercept term into the design matrix adding an intercept allows us to remove the assumption for zij s in fact suppose zij is symmetric with respect to which is potentially for all i then according to section we can replace zij by zij and proposition can be then applied proposition suppose z and has independent entries with d var and arbitrary then when x is a for some o polylog n polylog n realization of z assumptions and for x are satisfied with high probability over z for jn p dependent gaussian design to show that our assumptions handle a variety of situations we now assume that the observations namely the rows of z are random vectors with a covariance matrix in particular we show that the gaussian design zi n satisfies the assumptions with high probability proposition suppose zi n with o polylog n and polylog n then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p this result extends to the design muirhead chapter zij is one realization of a random variable z with multivariate gaussian distribution vec z znt n and is the kronecker product it turns out that assumptions are satisfied if both and are proposition suppose z is with vec z n and o polylog n polylog n then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p in order to incorporate an intercept term we need slightly more stringent condition on instead of assumption we prove that assumption see subsubsection holds with high probability proposition suppose z contains an intercept 
term z and satisfies the conditions of proposition further assume that maxi i mini i o polylog n then when x is a realization of z assumptions and for x are satisfied with high probability over z for jn p when i the condition is satisfied another example is the exchangeable case where are all equal for i j in this case is an eigenvector of and hence it is also an eigenvector of thus is a multiple of and the condition is satisfied elliptical design furthermore we can move from structure to generalized elliptical models where zi zi where zij i n j p are independent random variables zij having for instance mean and variance the elliptical family is quite flexible in modeling data it represents a type of data formed by a common driven factor and independent individual effects it is widely used in multivariate statistics anderson tyler and various fields including finance cizek et and biology posekany et in the context of statistics this class of model was used to refute universality claims in random matrix theory el karoui in robust regression el karoui et al used elliptical models to show that the limit of depends on the distribution of and hence the geometry of the predictors as such studies limited to design were shown to be of very limited statistical interest see also the deep classical inadmissibility results baranchik klebanov however as we will show in the next proposition the common factors do not distort the shape of the asymptotic distribution a similar phenomenon happens in the random design case see el karoui et al bean et al proposition suppose z is generated from an elliptical model zij zij where are independent random variables taking values in a b for some a b and zij are independent random variables satisfying the conditions of proposition or proposition further assume that i n and zij i n j p are independent then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p thanks to the fact that is bounded away from and the proof of proposition is straightforward as shown in appendix however by a more refined argument and assuming identical distributions we can relax this condition proposition under the conditions of proposition except the boundedness of and assume are samples generated from some distribution f independent of n with p t t for some fixed and f q for any q where f is the quantile function of f and is continuous then when x is a realization of z assumptions for x are satisfied with high probability over z for jn p a counterexample consider a anova situation in other words let the design matrix have exactly entry per row whose value is let ki be integers in p and let xi j j ki furthermore let us constrain nj i ki j to be such that nj taking for instance ki i mod p is an easy way to produce such a matrix the associated statistical model is just yi it is easy to see that x x arg min yi arg min i ki i ki this is of course a standard location problem in the setting we consider nj remains finite as n so is a function of finitely many random variables and will in general not be normally distributed for concreteness one can take x in which case is a median of yi i ki the cdf of is known exactly by elementary order statistics computations see david and nagaraja and is not that of a gaussian random variable in general in fact the anova design considered here violates the assumption since minj nj o further we can show that the assumption is also violated at least in the case see section for details comments and discussions asymptotic 
normality in high dimensions in the regime the asymptotic distribution is easily defined as the limit of l in terms of weak topology van der vaart however in regimes where the dimension p grows the notion of asymptotic distribution is more delicate a conceptual question arises from the fact that the dimension of the estimator changes with n and thus there is no distribution which can serve as the limit of l where l denotes the law one remedy is proposed by mallows under this framework a triangular array wn j j pn with ewn j ewn j is called jointly asymptotically pn normal if for any deterministic sequence an r with kan pn x an j wn j n when the zero mean and unit variance are not satisfied it is easy to modify the definition by normalizing random variables definition joint asymptotic normality wn wn rpn is jointly asymptotically normal if and only if for any sequence an an rpn atn wn ewn n l p atn cov wn an the above definition of asymptotic normality is strong and appealing but was shown not to hold for in the moderate regime huber in fact huber shows that ls is jointly asymtotically normal only if max x x t x x t i i i when provided x is full rank max x x t x x t i i i p tr x x t x x t n n in other words in moderate regime the asymptotic normality can not hold for all linear contrasts even in the case of in applications however it is usually not necessary to consider all linear contrasts but instead a small subset of them all coordinates or low dimensional linear contrasts such as we can naturally modify definition and adapt to our needs by imposing constraints on an a popular concept which we use in section informally is called asymptotic normality and defined by restricting an to be the canonical basis vectors which have only one element an equivalent definition is stated as follows definition asymptotic normal wn wn rpn is asymptotically normal if and only if for any sequence jn jn pn wn jn ewn jn p l n var wn jn a more convenient way to define the asymptotic normality is to introduce a metric d kolmogorov distance and total variation distance which induces the weak convergence topology then wn is asymptotically normal if and only if wn j ewn j n o max d l p j var wn j discussion about inference and technical assumptions variance and bias estimation to complete the inference we need to compute the bias and variance as discussed in remark the is unbiased if the loss function and the error distribution are symmetric for the variance it is easy to get a conservative estimate via resampling methods such as jackknife as a consequence of s inequality see el karoui and el karoui and purdom for details moreover by the variance decomposition formula h i h i h i var e var var e e var the unconditional variance when x is a random design matrix is a conservative estimate the unconditional variance can be calculated by solving a system see el karoui and donoho and montanari however estimating the exact variance is known to be hard el karoui and purdom show that the existing resampling schemes including jacknife residual bootstrap are either too conservative or too when is large the challenge as mentioned in el karoui el karoui and purdom is due to the fact that the residuals ri do not mimic the behavior of and that the resampling methods effectively modifies the geometry of the dataset from the point of view of the statistics of interest we believe that variance estimation in moderate regime should rely on different methodologies from the ones used in estimation technical assumptions on the other 
hand we assume that is strongly convex one remedy would be adding a ridge regularized term as in el karoui and the new problem is amenable to analysis with the method we used in this article however the regularization term introduces a bias which is as hard to be derived as the variance for unregularized mestimators the strong convexity is also assumed by other works el karoui donoho montanari however we believe that this assumption is unnecessary and can be removed at least for design matrices another possibility for errors that have more than moments is to just add a small quadratic term to the loss function with a small finally we recall that in many situations is actually more efficient than see numerical work in bean et al in moderate dimensions this is for instance the case for errors if is greater than or so as such working with strongly convex loss functions is as problematic for regression as it would be in the setting to explore traditional robustness questions we will need to weaken the requirements of assumption this requires substantial work and an extension of the main results of chatterjee because the technical part of the paper is already long we leave this interesting statistical question to future works proof sketch since the proof of theorem is somewhat technical we illustrate the main idea in this section first notice that the is an implicit function of independent random variables which is determined by n xi xi n the hessian matrix of the loss function in is x t dx ip under the notation introduced in section the assumption then implies that the loss function is strongly convex in which case is unique then can be seen as a function of s a powerful central limit theorem for this type of statistics is the inequality sopi developed in chatterjee and used there to central limit theorems for linear spectral statistics of large random matrices we recall one of the main results for the convenience of the reader proposition sopi chatterjee let w wn un wn where wi n and take any g c rn and let g and g denote the partial derivative gradient and hessian of let e n x g w w g w and u g w if u has finite fourth moment then u eu n dtv l p var u var u from it is not hard to compute the gradient and hessian of with respect to recalling the definitions in equation on we have lemma suppose c rn then etj x t dx x t d gt diag etj x t dx x t g where ej is the cononical basis vectors in rp and g i x x t dx x t recalling the definitions of ki s in assumption on we can bound and as follows lemma let defined as in proposition by setting w and g w let mj eketj x t dx x t d then mj mj as a consequence of the inequality we can bound the total variation distance between and a normal distribution by mj and var more precisely we prove the following lemma lemma under assumptions n op max dtv q j var maxj n minj var polylog n lemma is the key to prove theorem to obtain the asymptotic normality it is left to establish an upper bound for mj and a lower bound for var in fact we can prove that lemma under assumptions polylog n min var max mj o j j n n polylog n then lemma and lemma together imply that e polylog n j j max dtv l q n o o j var appendix a provides a roadmap of the proof of lemma under a special case where the design matrix x is one realization of a random matrix with entries it also serves as an outline of the rigorous proof in appendix b comment on the inequality pn notice that when g is a linear function such that g z ai zi then the inequality esseen implies that pn w ew dk l p n n var w 
where dk f g sup x g x x on the other hand the inequality implies that pn w ew w ew ai n dtv l p n pn dk l p var w var w ai this is slightly worse than the bound and requires stronger conditions on the distributions of variates but provides bounds for tv metric instead of kolmogorov metric this comparison shows that inequality can be regarded as a generalization of the bound for transformations of independent random variables estimator the estimator is a special case of an with x because the estimator can then be written explicitly the analysis of its properties is extremely simple and it has been understood for several decades see arguments in huber lemma and huber proposition in this case the hat matrix h x x t x x t captures all the problems associated with dimensionality in the problem in particular proving the asymptotic normality simply requires an application of the theorem it is however somewhat helpful to compare the conditions required for asymptotic normality in this simple case and the ones we required in the more general setup of theorem we do so briefly in this section asymptotic normality of lse under the linear model when x is full rank ls x t x x t thus each coordinate of ls is a linear contrast of with zero mean instead of assumption which requires to be we only need to assume maxi under which the bound for data esseen implies that t t t t n kej x x x kej x x x dk q kej x t x x t ketj x t x x t var j this motivates us to define a matrix specific quantity sj x such that sj x ketj x t x x t ketj x t x x t then the bound implies that sj x determines the asymptotic normality of ls theorem if e maxi then ls j j max dk q n a max sj x var j where a is an absolute constant and dk is the kolmogorov distance defined as dk f g sup x g x x it turns out that sj x plays in the setting the role of in assumption since it has been known that a condition like sj x is necessary for asymptotic normality of estimators huber proposition this shows in particular that our assumption or a variant is also needed in the general case see appendix for details discussion naturally checking the conditions for asymptotic normality is much easier in the leastsquares case than in the general case under consideration in this paper in particular asymptotic normality conditions can be checked for a broader class of random design matrices see appendix for details kx k for orthogonal design matrices x t x cid for some c sj x kxjj hence the condition sj x o is true if and only if no entry dominates the j th row of x the counterexample we gave in section still provides a counterexample the reason now is different namely the sum of finitely many independent random variables is evidently in general in fact in this case sj x is bounded away from nj inferential questions are also extremely simple in this context and essentially again dimensionindependent for the reasons highlighted above theorem naturally reads d q n t t ej x x ej estimating is still simple under minimal conditions provided n p see bickel and freedman theorem or standard computations concerning the normalized residual using variance computations for the latter may require up to moments for s then we can replace in by with n x rk where rk yk xtk and construct confidence intervals for based on if n p does not tend to the normalized residual sum of squares is evidently not consistent even in the case of gaussian errors so this requirement may not be dispensed of numerical results as seen in the previous sections and related papers there are five 
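For least squares the design quantity S_j(X) above is cheap to compute, so the condition for coordinate-wise normality can be checked directly. In the sketch below (ours), S_j is taken to be the ratio of the sup-norm to the Euclidean norm of a = X(X'X)^{-1}e_j, which is the ratio appearing in a standard Berry-Esseen bound for the linear contrast e_j' beta_hat_LS; the paper's exact normalization may differ slightly. The ratio is small for an iid Gaussian design but stays bounded away from zero for the one-way ANOVA design used as a counterexample.

```python
import numpy as np

def S_j(X, j):
    """||a||_inf / ||a||_2 with a = X (X'X)^{-1} e_j (Berry-Esseen-type design quantity)."""
    e_j = np.zeros(X.shape[1]); e_j[j] = 1.0
    a = X @ np.linalg.solve(X.T @ X, e_j)
    return np.abs(a).max() / np.linalg.norm(a)

rng = np.random.default_rng(6)
n, p = 400, 100

X_gauss = rng.standard_normal((n, p))
X_anova = np.zeros((n, p))
X_anova[np.arange(n), np.arange(n) % p] = 1.0          # one-way ANOVA, n/p = 4 replicates per cell

print("Gaussian design:", S_j(X_gauss, 0))             # small: coordinate-wise normality plausible
print("ANOVA design   :", S_j(X_anova, 0))             # = 1/sqrt(n/p) = 0.5, bounded away from 0
```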
important factors that affect the distribution of the design matrix x the error distribution l the sample size n the ratio and the loss function the aim of this section is to assess the quality of the agreement between the asymptotic theoretical results of theorem and the empirical properties of we also perform a few simulations where some of the assumptions of theorem are violated to get an intuitive sense of whether those assumptions appear necessary or whether they are simply technical artifacts associated with the method of proof we developed as such the numerical experiments we report on in this section can be seen as a complement to theorem rather than only a simple check of its practical relevance the design matrices we consider are one realization of random design matrices of the following three types design xij f elliptical design xij where n and f in addition is independent of partial hadamard design a matrix formed by a random set of p columns of a hadamard matrix a n n matrix whose columns are orthogonal with entries restricted to here we consider two candidates for f in design and elliptical design standard normal distribution n and with two degrees of freedom denoted for the error distribution we assume that has entries with one of the above two distributions namely n and the violates our assumption to evaluate the finite sample performance we consider the sample sizes n and in this section we will consider a huber loss with k huber k x kx k k is the default in r and yields relative efficiency for gaussian errors in problems we also carried out the numerical work for x see appendix d for details asymptotic normality of a single coordinate first we simulate the finite sample distribution of the first coordinate of for each combination of sample size n and type of design elliptical and hadamard entry distribution f normal and and error distribution l normal and we run simulations with each consisting of the following steps step generate one design matrix x step generate the error vectors step regress each y on the design matrix x and end up with random samples of denoted by b step estimate the standard deviation of by the sample standard error sd h i k b k sd b for each step construct a confidence interval i k sd k step calculate the empirical coverage by the proportion of confidence intervals which cover the true finally we display the boxplots of the empirical coverages of for each case in figure it is worth mentioning that our theories cover two cases design with normal entries and normal errors orange bars in the first row and the first column see proposition elliptical design with normal factors and normal errors orange bars in the second row and the first column see proposition we first discuss the case in this case there are only two samples per parameter nonetheless we observe that the coverage is quite close to even with a sample size as small as in both cases that are covered by our theories for other cases it is interesting to see that the coverage is valid and most stable in the partial hadamard design case and is not sensitive to the distribution of multiplicative factor in elliptical design case even when the error has a distribution for designs the coverage is still valid and stable when the entry is normal by contrast when the entry has a distribution the coverage has a large variation in small samples the average coverage is still close to in the normal design case but is slightly lower than in the design case in summary the finite sample distribution of is 
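The single-coordinate coverage experiment laid out in the steps above can be assembled as follows. This is a reduced re-implementation of ours, with fewer repetitions, Gaussian errors only, and one arbitrary choice for the elliptical multiplicative factors (their exact distribution is not recoverable from the text above); the partial Hadamard design is taken as a random subset of columns of a Hadamard matrix, and the `huber_irls` helper from the earlier sketch is assumed to be in scope.

```python
import numpy as np
from scipy.linalg import hadamard
# assumes huber_irls from the earlier sketch is in scope

def make_design(kind, n, p, rng):
    if kind == "iid":
        return rng.standard_normal((n, p))
    if kind == "elliptical":
        lam = rng.uniform(0.5, 1.5, size=n)              # illustrative multiplicative factors
        return lam[:, None] * rng.standard_normal((n, p))
    if kind == "hadamard":                               # n must be a power of two
        H = hadamard(n).astype(float)
        return H[:, rng.choice(n, size=p, replace=False)]
    raise ValueError(kind)

def coverage_first_coord(kind, n=128, p=64, reps=300, seed=7):
    rng = np.random.default_rng(seed)
    X = make_design(kind, n, p, rng)                     # fixed across repetitions
    draws = np.array([huber_irls(X, rng.standard_normal(n))[0] for _ in range(reps)])
    sd = draws.std()                                     # sample sd over repetitions (step 4)
    return np.mean(np.abs(draws) <= 1.96 * sd)           # beta_{0,1} = 0; nominal 95% intervals

for kind in ("iid", "elliptical", "hadamard"):
    print(kind, coverage_first_coord(kind))
```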
more sensitive to the entry distribution than the error distribution this indicates that the assumptions on the design matrix are not just artifacts of the proof but are quite essential the same conclusion can be drawn from the case where except that the variation becomes larger in most cases when the sample size is small however it is worth pointing out that even in this case where there is samples per parameter the sample distribution of is well approximated by a normal distribution with a moderate sample size n this is in contrast to the classical rule of thumb which suggests that samples are needed per parameter asymptotic normality for multiple marginals since our theory holds for general jn it is worth checking the approximation for multiple coordinates in finite samples for illustration we consider coordinates namely simultaneously and calculate the minimum empirical coverage to avoid the finite sample dependence between coordinates involved in the simulation we estimate the empirical coverage independently for each coordinate specifically we run simulations with each consisting of the following steps step generate one design matrix x step generate the error vectors step regress each y on the design matrix x and end up with random samples of for each j by using the j to response vector y b j for step estimate the standard deviation of by the sample standard error sd j h i k k b j k sd b j for step construct a confidence interval ij sd j each j and k coverage of normal coverage of t normal ellip coverage ellip coverage iid iid t hadamard hadamard sample size entry dist normal t sample size hadamard entry dist normal t hadamard figure empirical coverage of with left and right using loss the corresponds to the sample size ranging from to the corresponds to the empirical coverage each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design step calculate the empirical coverage by the proportion of confidence intervals which cover the true denoted by cj for each j step report the minimum coverage cj if the assumptions are satisfied cj should also be close to as a result of theorem thus cj is a measure for the approximation accuracy for multiple marginals figure displays the boxplots of this quantity under the same scenarios as the last subsection in two cases that our theories cover the minimum coverage is increasingly closer to the true level similar to the last subsection the approximation is accurate in the partial hadamard design case and is insensitive to the distribution of multiplicative factors in the elliptical design case however the approximation is very inaccurate in the design case again this shows the evidence that our technical assumptions are not artifacts of the proof on the other hand the figure suggests using a conservative variance estimator the jackknife estimator or corrections on the confidence level in order to make simultaneous inference on multiple coordinates here we investigate the validity of bonferroni correction by modifying the step and step the confidence interval after bonferroni correction is obtained by h i k k b j k sd bj ij sd j where and is the quantile of a standard normal distribution the k proportion of k such that ij for all j should be at least if the marginals are all close to a normal distribution we modify the confidence intervals in step by k and calculate the 
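A compact version of the Bonferroni experiment reads as follows. This sketch is ours and is simplified: unlike the paper's procedure, which estimates each coordinate's coverage from independent simulation runs, it uses one set of replications for all ten coordinates; it again assumes the `huber_irls` helper from the earlier sketch.

```python
import numpy as np
from scipy.stats import norm
# assumes huber_irls from the earlier sketch is in scope

rng = np.random.default_rng(8)
n, p, reps, d, alpha = 200, 100, 400, 10, 0.05
X = rng.standard_normal((n, p))                          # one fixed design
draws = np.array([huber_irls(X, rng.standard_normal(n))[:d] for _ in range(reps)])
sd = draws.std(axis=0)                                   # per-coordinate sd over repetitions

z_marg = norm.ppf(1 - alpha / 2)                         # uncorrected 95% intervals
z_bonf = norm.ppf(1 - alpha / (2 * d))                   # Bonferroni-corrected intervals
cover_marg = np.mean(np.all(np.abs(draws) <= z_marg * sd, axis=1))
cover_bonf = np.mean(np.all(np.abs(draws) <= z_bonf * sd, axis=1))
print("simultaneous coverage, uncorrected:", cover_marg)  # typically below 0.95
print("simultaneous coverage, Bonferroni :", cover_bonf)  # at least 0.95 if marginals are near-normal
```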
proportion of k such that ij for all j in step figure displays the boxplots of this coverage it is clear that the bonferroni correction gives the valid coverage except when n and the error has a distribution min coverage of normal min coverage of t normal t ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal t sample size hadamard entry dist normal t hadamard figure mininum empirical coverage of with left and right using loss the corresponds to the sample size ranging from to the corresponds to the minimum empirical coverage each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design conclusion we have proved asymptotic normality for regression in the asymptotic regime for fixed design matrices under appropriate technical assumptions our design assumptions are satisfied with high probability for a broad class of random designs the main novel ingredient of the proof is the use of the inequality numerical experiments confirm and complement our theoretical results bonf coverage of normal bonf coverage of t normal iid iid hadamard hadamard ellip ellip coverage coverage t sample size entry dist normal t sample size hadamard entry dist normal t hadamard figure empirical coverage of after bonferroni correction with left and right using loss the corresponds to the sample size ranging from to the corresponds to the empirical uniform coverage after bonferroni correction each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design references anderson an introduction to multivariate statistical analysis wiley new york bai silverstein spectral analysis of large dimensional random matrices vol springer bai yin y limit of the smallest eigenvalue of a large dimensional sample covariance matrix the annals of probability baranchik a inadmissibility of maximum likelihood estimators in some multiple regression problems with three or more independent variables the annals of statistics bean bickel el karoui lim yu b penalized robust regression in technical report department of statistics uc berkeley bean bickel el karoui yu b optimal in highdimensional regression proceedings of the national academy of sciences bickel doksum a mathematical statistics basic ideas and selected topics volume i vol crc press bickel freedman a some asymptotic theory for the bootstrap the annals of statistics bickel freedman a bootstrapping regression models with many parameters festschrift for erich lehmann chatterjee fluctuations of eigenvalues and second order inequalities probability theory and related fields chernoff a note on an inequality involving the normal distribution the annals of probability cizek weron statistical tools for finance and insurance springer science business media cochran sampling techniques john wiley sons david nagaraja order statistics wiley online library donoho montanari a high dimensional robust asymptotic variance via approximate message passing probability theory and related fields durrett probability theory and examples cambridge university press efron efron b the jackknife the bootstrap and other resampling plans vol siam el karoui concentration of measure and spectra of random matrices applications to correlation 
matrices elliptical distributions and beyond the annals of applied probability el karoui effects in the markowitz problem and other quadratic programs with linear constraints risk underestimation the annals of statistics el karoui asymptotic behavior of unregularized and robust regression estimators rigorous results arxiv preprint el karoui on the impact of predictor geometry on the performance on highdimensional generalized robust regression estimators technical report department of statistics uc berkeley el karoui bean bickel lim yu b on robust regression with predictors technical report department of statistics uc berkeley el karoui bean bickel lim yu b on robust regression with predictors proceedings of the national academy of sciences el karoui purdom can we trust the bootstrap in technical report department of statistics uc berkeley esseen fourier analysis of distribution functions a mathematical study of the law acta mathematica geman a limit theorem for the norm of random matrices the annals of probability hanson wright a bound on tail probabilities for quadratic forms in independent random variables the annals of mathematical statistics horn johnson matrix analysis cambridge university press huber j robust estimation of a location parameter the annals of mathematical statistics huber j the wald lecture robust statistics a review the annals of mathematical statistics huber j robust regression asymptotics conjectures and monte carlo the annals of statistics huber j robust statistics john wiley sons new york huber j robust statistics springer johnstone on the distribution of the largest eigenvalue in principal components analysis annals of statistics klebanov inadmissibility of robust estimators with respect to norm lecture series latala some estimates of norms of random matrices proceedings of the american mathematical society ledoux the concentration of measure phenomenon no american mathematical soc litvak pajor rudelson smallest singular value of random matrices and geometry of random polytopes advances in mathematics mallows a note on asymptotic joint normality the annals of mathematical statistics mammen asymptotics with increasing dimension for robust regression with applications to the bootstrap the annals of statistics pastur a distribution of eigenvalues for some sets of random matrices mathematics of the muirhead j aspects of multivariate statistical theory vol john wiley sons portnoy asymptotic behavior of of p regression parameters when is large consistency the annals of statistics portnoy asymptotic behavior of m estimators of p regression parameters when is large ii normal approximation the annals of statistics portnoy on the central limit theorem in rp when p probability theory and related fields portnoy a central limit theorem applicable to robust regression estimators journal of multivariate analysis posekany felsenstein sykacek biological assessment of robust noise models in microarray data analysis bioinformatics relles a robust regression by modified tech dtic document rosenthal on the subspaces ofl p spanned by sequences of independent random variables israel journal of mathematics rudelson vershynin smallest singular value of a random rectangular matrix communications on pure and applied mathematics rudelson vershynin theory of random matrices extreme singular values arxiv preprint rudelson vershynin inequality and concentration electron commun probab scheffe the analysis of variance vol john wiley sons silverstein the smallest eigenvalue of a large dimensional 
wishart matrix the annals of probability stone choice and assessment of statistical predictions journal of the royal statistical society series b methodological tyler a of multivariate scatter the annals of statistics van der vaart asymptotic statistics cambridge university press vershynin introduction to the analysis of random matrices arxiv preprint wachter probability plotting points for principal components in ninth interface symposium computer science and statistics pp wachter the strong limits of random matrix spectra for sample matrices of independent elements the annals of probability wasserman roeder high dimensional variable selection annals of statistics yohai j robust m estimates for the general linear model universidad nacional de la plata departamento de matematica yohai maronna a asymptotic behavior of for the linear model the annals of statistics appendix a proof sketch of lemma in this appendix we provide a roadmap for proving lemma by considering a special case where x is one realization of a random matrix z with entries random matrix theory geman silverstein bai yin implies that op op and op thus the assumption is satisfied with high probability thus the lemma in holds with high probability it remains to prove the following lemma to obtain theorem lemma let z be a random matrix with entries and x be one realization of z then under assumptions and polylog n min var max mj op n n polylog n where mj is defined in in and the randomness in op and op comes from upper bound of mj first by proposition op in the rest of the proof the symbol e and var denotes the expectation and the variance conditional on z let d z then mj eketj t t let i t t j j j j then by block matrix inversion formula see proposition which we state as proposition in appendix t t t t t t i t i t i t this implies that e i i since z t i we have t z t dz t n i z t dz n and we obtain a bound for as i d i similarly t t ekzjt d i d z j z j dz j z j d ekzjt d i mj the vector in the numerator is a linear contrast of zj and zj has subgaussian entries for any fixed matrix a denote ak by its column then atk zj is kak see section of vershynin for a detailed discussion and hence by definition of p zj t therefore by a simple union bound we conclude that p kat zj max kak t k let t log n p kat zj max kak k p log n o n this entails that t ka zj op max kak polylog n op kakop polylog n k with high probability in mj the coefficient matrix i hj d depends on zj through d and hence we can not use directly however the dependence can be removed by replacing d by d j since ri j does not depend on zj since z has entries no column is highly influential in other words the estimator will not change drastically after removing column this would suggest ri ri j it is proved by el karoui that polylog n sup ri j op n i j it can be rigorously proved that kzjt d i kzjt d j i hj op polylog n n t t where hj i d j z j z j d j z j z j d j see appendix for details since d j i hj is independent of zj and kd j i hj kop kd j kop o polylog n it follows from and that kzjt d j i hj op polylog n n in summary mj op polylog n n lower bound of var approximating var by var bj it is shown by el karoui that nj bj n el karoui considers a ridge regularized m estimator which is different from our setting however this argument still holds in our case and proved in appendix b where n x nj zij ri j n t t t z d j d j z j x j d j x j z j d j zj n j it has been shown by el karoui that max bj op j polylog n n thus var var bj and a more refined calculation in appendix shows 
that polylog n var var bj op it is left to show that var bj n polylog n bounding var bj via var nj by definition of bj var bj polylog n n var nj polylog n as will be shown in appendix var op polylog n n as a result and var nj nj var var nj as in the previous paper el karoui we rewrite as t t t zj d j i d j z j x j d j x j z j d j d j zj n the middle matrix is idempotent and hence positive thus then we obtain that t z d j zj op polylog n n j var nj var nj polylog n and it is left to show that var nj polylog n bounding var nj via tr qj recall the definition of nj and that of qj see section in we have var nj t z qj zj n j notice that zj is independent of ri j and hence the conditional distribution of zj given qj remains the same as the marginal distribution of zj since zj has entries the inequality hanson wright rudelson vershynin see proposition shown in proposition implies that any quadratic form of zj denoted by zjt qj zj is concentrated on its mean zjt qj zj ezj zjt qj zj tr qj as a consequence it is left to show that tr qj n polylog n lower bound of tr qj by definition of qj tr qj n x var ri j to lower bounded the variance of ri j recall that for any random variable w var w e w w where w is an independent copy of w suppose g r r is a function such that x c for all x then implies that var g w e g w g w e w w var w in other words entails that var w is a lower bound for var g w provided that the derivative of g is bounded away from as an application we see that var ri j var ri j and hence tr qj n x var ri j by the variance decomposition formula var ri j e var ri j i var e ri j i e var ri j i where i includes all but entry of given i ri j is a function of using we have var ri j i inf j var i inf j var this implies that var ri j e var ri j i e inf j min var i summing var ri j over i n we obtain that tr qj n x x var ri j e i j inf min var i it will be shown in appendix that under assumptions x j n e inf polylog n i this proves and as a result min var j n polylog n b proof of theorem notation to be we summarize our notations in this subsection the model we considered here is y where x be the design matrix and is a random vector with independent entries notice that the target quantity is shift invariant we can assume without var loss of generality provided that x has full column rank see section for details let xti denote the row of x and xj denote the column of x throughout the paper we will denote by xij r the i j entry of x by x i r the design matrix x after removing the row by x j the design matrix x after removing the column by x i j r the design matrix after removing both row and column and by xi j the vector xi after removing entry the associated with the loss function is defined as n xtk arg min p n similarly we define the version as n j xtk j arg min p n based on these notation we define the full residual rk as rk xtk k n the residual as rk j xtk j j k n j jn diag rk j diag rk j t t g j i x j x j d j x j x j d j four diagonal matrices are defined as d diag rk d j diag rk j further we define g and g j as g i x x t dx x t d let jn denote the indices of coefficients of interest we say a if and only if a min max regarding the technical assumptions we need the following quantities t t x x x x n n be the largest resp smallest eigenvalue of the matrix canonical basis vector and j rn j t xt x n let ei rn be the i gt j ei finally let xj i xj max max max i qj cov we adopt landau s notation o o op op in addition we say an bn if bn o an and similarly we say an bn if bn op an to simplify the 
logarithm factors we use the symbol polylog n to denote any factor that can be upper bounded to denote any factor that can be by log n for some similarly we use polylog n lower bounded by log n for some finally we restate all the technical assumptions and there exists polylog n o polylog n such that for any x r d p x x p dx x x ui wi where wn n and ui are smooth functions with and for some o polylog n moreover assume mini var polylog n o polylog n and xjt qj xj tr qj polylog n polylog n o polylog n deterministic approximation results in appendix a we use several approximations under random designs ri ri j to prove them we follow the strategy of el karoui which establishes the deterministic results and then apply the concentration inequalities to obtain high probability bounds note that is the solution of n f xi xti n we need the following key lemma to bound by kf f which can be calculated explicily lemma el karoui proposition for any and kf f proof by the mean value theorem there exists xti xti such that xti xti xti then n xi xti n n x xi xti n kf f based on lemma we can derive the deterministic results informally stated in appendix a such results are shown by el karoui for and here we derive a refined version for unpenalized throughout this subsection we only assume assumption this implies the following lemma lemma under assumption for any x and y p p p x x y x y to state the result we define the following quantities n t max max kxi max kxj e i n n n xi n n xi n the following proposition summarizes all deterministic results which we need in the proof proposition under assumption i the norm of m estimator is bounded by u ii define bj as nj bj n where n x nj xij ri j n then t t t x d j d j x j x j d j x j x j d j xj n j max e n iii the difference between and bj is bounded by max bj t n iv the difference between the full and the residual is bounded by t max max ri j e e i n proof i by lemma kf kf f since is a zero of f by definition n n n f xi xi xi n n n this implies that kf u ii first we prove that since all diagonal entries of d j is lower bounded by we conclude that x t d j x n note that is the schur s complement horn johnson chapter of x t d j x n we have etj x t d j x n ej which implies as for nj we have nj xjt k xjt n n the the second term is bounded by by definition see for the first term the assumption that x implies that z x z x y y dy x x x y dy k here we use the fact that sign y sign y recall the definition of we obtain that sp sp n n p r i j ri j n n n since j is the minimizer of the loss function n pn xti j j it holds that n ri j n n putting together the pieces we conclude that p by definition of bj n iii the proof of this result is almost the same as el karoui we state it here for the sake of completeness let rp with j bj t t j j bj x j d j x j x j d j xj where the subscript j denotes the entry and the subscript j denotes the subvector formed by all but entry furthermore define with j t t j x j d j x j x j d j xj then we can rewrite as j j j j bj j by definition of j we have f j j and hence f j f j f j j n i h xi j xti xti j j n by mean value theorem there exists j xti xti j j such that xti xti j j j xti j j xti j xti j j xti j j xij bj i h t t j bj xti j x j d j x j x j d j xj xij let di j j ri j and plug the above result into we obtain that n i h t t xi j ri j di j bj xti j x j d j x j x j d j xj xij n f j n h i t t bj ri j xi j xti j x j d j x j x j d j xj xij n n t t di j xi j xti j x j d j x j x j d j xj xij n i t t t t x j d j x j x j d j x j x j d j xj x j d j xj bj n n 
di j xi j xti bj n n x di j xi j xti bj n bj now we calculate f j the entry of f note that n f j xij xti n n n n n h i t t xij ri j bj xij ri j di j xti j x j d j x j x j d j xj xij n n h i t t xij ri j bj ri j xij xti j x j d j x j x j d j xj xij n n n t bj di j xij xi n n x t t t nj bj d j x j x j d j xj x d j x j x j ri j xij n j n n n t bj di j xij xi n n t nj bj bj di j xij xi n n n t bj di j xij xi n where the second last line uses the definition of bj putting the results together we obtain that n t f bj di j xi xi n this entails that kf max j i now we derive a bound for maxi j where di j is defined in by lemma j j ri j j ri j j j xti by definition of and i t t j j xti xti j x j d j x j x j d j xj xij t t i x j x j d j x j x j d j xj i xj i where the last inequality is derived by definition of see since i t t is the column of matrix i d j x j x j d j x j x j its norm is upper bounded by the operator norm of this matrix notice that t t t t i d j x j x j d j x j x j d j d j i d j x j x j d j x j x j d j the middle matrix in rhs of the displayed atom is an orthogonal projection matrix and hence t t kop kd j kop ki d j x j x j d j x j x j kop kd j therefore max i max ki i j t t d j x j x j d j x j x j kop and thus r max j i as for we have x t d j x n xjt dj xj t j n t x j d j x j n j xjt d j x j j n recall the definition of in we have t x j d j x j t t t j j xjt d j x j x j d j x j x j d j xj n n and xjt d j x j t t j xjt d j x j x j d j x j x j d j xj n n as a result t t d j x j x j d j xjt d j i d j x j x j d j xj n xj kd j n t t i d j x j x j d j x j x j d j kd j xj n op kxj t n where t is defined in therefore we have s putting and part ii together we obtain that s r t kf s r e t n c t n by lemma kf f kf t n since bj is the entry of we have bj t n iv similar to part iii this result has been shown by el karoui here we state a refined version for the sake of completeness let be defined as in then ri j xti j j xti xti j j kxi xti j j note that kxi nt by part iii we have t kxi n k on the other hand similar to by r t t xi j j n therefore ri j n t e summary of approximation results under our technical assumptions we can derive the rate for approximations via proposition this justifies all approximations in appendix theorem under the assumptions i t o polylog n ii max polylog n iii max polylog n n iv max bj polylog n n v max max ri j i polylog n n proof i notice that xj xej where ej is the canonical basis vector in rp we have kxj xt x etj ej n n similarly consider the x t instead of x we conclude that kxi xx t n n recall the definition of t in we conclude that p t o polylog n ii since ui wi with the gaussian concentration property ledoux chapter implies that is and hence o for any finite k by lemma and hence for any finite k o by part i of proposition using the convexity of and hence e u eu pn recall that u xi u u n x xti i n x kxi n n n kxi kxi x x others since has a zero mean we have e for any i k k or k k and i as a consequence n x kxi e eu n x t xi kxi kxi e e n x x kxi e kxi e e n a for any i using the convexity of hence we have e max i by inequality e q p max i recall that kxi nt and thus t t max i t max o polylog n i n eu on the other hand let then o n polylog n and hence by definition of in r t t xx o polylog n n n n in summary o polylog n iii by theorem there exists ax x such that x ax by assumption and lemma we have x ax where is defined in lemma as a result i o recall the definition of e in and the convexity of we have n ee o o polylog n n under assumption by inequality q e e e ee 
o polylog n under assumptions and o polylog n putting all the pieces together we obtain that polylog n max n iv similarly by holder s inequality e e e ee o polylog n and under assumptions and t o polylog n therefore max bj polylog n n v it follows from the previous part that e e o polylog n under assumptions and the multiplicative factors are also o polylog n t o polylog n o polylog n therefore max max ri j i polylog n n controlling gradient and hessian proof of lemma recall that is the solution of the following equation n xi xti n taking derivative of we have t x d i x t t x t dx x t this establishes to establishes note that can be rewritten as x t dx x t fix k n note that xti i i k xti x t dx x t recall that g i x x t dx x t d we have eti gek where ei is the canonical basis of rn as a result diag gek taking derivative of we have xt x t x t dx xt t x t dx x t i x x t dx x t d t x t dx x t diag gek g where g i x x t dx x t d is defined in in then for each j p and k n etj x t dx x t diag gek g etk gt diag etj x t dx x t g where we use the fact that at diag b bt diag a for any vectors a b this implies that gt diag etj x t dx x t g proof of lemma throughout the proof we are using the simple fact that based on it we found that etj x t dx x t d etj x t dx x t d q etj x t dx x t dx x t dx ej q etj x t dx ej thus for any m recall that mj e etj x t dx x t d e etj x t dx x t d etj x t dx x t d mj m etj x t dx x t d we should emphasize that we can not use the naive bound that e etj x t dx x t d m etj x t dx x t d m e etj x t dx x t d polylog n m ol n m since it fails to guarantee the convergence of tv distance we will address this issue after deriving lemma by contrast as proved below polylog n etj x t dx x t d op mj op p n thus produces a slightly tighter bound etj x t dx x t d olm polylog n n it turns out that the above bound suffices to prove the convergence although implies the possibility to sharpen the bound from to using refined analysis we do not explore this to avoid extra conditions and notation bound for first we derive a bound for by definition j e t by lemma and with m e e etj x t dx x t d mj on the other hand it follows from that etj x t dx x t d etj x t dx x t d putting the above two bounds together we have mj bound for as a of we obtain that bound for finally we derive a bound for by lemma involves the operator norm of a symmetric matrix with form gt m g where m is a diagonal matrix then by the triangle inequality gt m g op km kop gt g op km kop kgkop note that d i d x x t dx x t d is a projection matrix which is idempotent this implies that d d op write g as d d gd d then we have kgkop d d gd d op r op op returning to we obtain that e gt diag etj x t dx x t g op kgkop e etj x t dx x t etj x t dx x t etj x t dx x t d assumption implies that ri p hence ri therefore etj x t dx x t d etj x t dx x t d by with m mj proof of lemma by theorem for any j then using the inequality proposition e c c c j j n o max dt v q var j var j j n n polylog n var n var polylog n it follows from that o polylog n and the above bound can be simplified as e j j max dt v l q n o polylog n n var var remark if we use the naive bound by repeating the above derivation we in which case obtain a worse bound for j o polylog n and o polylog n n n max dt v e j j q n o var polylog n n var however we can only prove that var without the numerator which will be shown to be o polylog n in the next subsection the convergence can not be proved upper bound of mj as mentioned in appendix a we should approximate d by d j to 
remove the functional dependence on xj to achieve this we introduce two terms mj mj e ketj x t dx x t d j mj and mj defined as e ketj x t d j x x t d j we will first prove that both mj and mj are negligible and then derive an upper bound for mj controlling mj by lemma kd d j max ri j rj i and by theorem q polylog n erj o n then we can bound mj via the fact that and algebra as follows mj e ketj x t dx x t d d j e ketj x t dx x t d d j r r e ketj x t dx x t d d j e etj x t dx x t d d j x x t dx ej by lemma q p ri ri j ri j rj thus d d j i r j this entails that mj q e etj x t dx x t dx x t dx ej q e etj x t dx ej q polylog n p e o n bound of mj first we prove a useful lemma lemma for any symmetric matrix n with kn kop i i n kn kop proof first notice that i i n i n i i n n i n and therefore i i n n i n since kn kop i n is positive and i kn kop i n therefore n i n n kn kop we now back to bounding mj let aj x t d j x bj x t d d j x by lemma kd d j max ri j rj i and hence kbj kop rj i where rj then by theorem v polylog n e o n using the fact that we obtain that t t t mj e ketj x d j j x d j ej aj bj r t t x t d e ketj j x d j ej aj bj j q x t d x a b e e etj j j j j j aj bj j q a a b e e etj j j j j j aj bj j the inner matrix can be rewritten as aj j aj bj j aj bj i i aj bj aj aj aj aj i i aj bj aj aj i i aj bj aj aj let nj aj bj aj then knj kop kaj kop kbj kop kaj kop on the event knj kop by lemma i i nj this together with entails that etj aj aj bj ej etj aj i i nj aj ej j aj bj aj aj ej etj j bj aj bj aj ej kaj bj aj bj aj kop since aj i and kbj kop we have j bj aj bj aj kop kaj kop kbj kop n j thus e etj a b a a a b e i j j j j j j j j j t polylog n o ej aj bj b a e j j j j j n on the event since i aj i and aj bj i aj ej j aj bj j aj bj ej j aj bj t ej j ej aj bj n this together with markov inequality implies htat e etj a b a a a b e i j j j j j j j j j p n n polylog n putting pieces together we conclude that q a a b e mj e etj j j j j j aj bj j s t e ej aj aj bj aj aj aj bj ej i s a a b e i e etj a b j j j j j j j j j polylog n n bound of mj similar to by block matrix inversion formula see proposition etj x t d j x x t d j xjt d j i hj xjt d j i hj d j xj t t where hj d j x j x j d j x j x j d j recall that by so we have i hj d j xj xjt d j as for the numerator recalling the definition of i we obtain that t t xj i d j x j x j d j x j x j d j n p t t x i d j x j x j d j x j x j n j p p max i xj max i kxjt d j i hj i i as proved in max i i this entails that kxjt d j i hj polylog n putting the pieces together we conclude that mj ekxjt d j i hj polylog n n summary based on results from section section we have polylog n mj o n note that the bounds we obtained do not depend on j so we conclude that polylog n max mj o n lower bound of var approximating var by var bj by theorem max e bj o j polylog n max o j polylog n n using the fact that bj bj bj bj bj we can bound the difference between and by e bj bj bj e bj q q polylog n e bj o similarly since polylog n ebj bj bj o putting the above two results together we conclude that polylog n var var bj o then it is left to show that var bj n polylog n controlling var bj by var nj recall that nj bj n where n x nj xij ri j n t t t x d j d j x j x j d j x j x j d j xj n j then n var bj e nj nj nj enj enj nj using the fact that a b a we have n var bj e nj enj enj nj controlling the assumption implies that tr cov t var nj xj qj xj n npolylog n it is left to show that tr cov polylog n since this result will also be used later in appendix c we state it in the 
following the lemma lemma under assumptions tr cov min var i n n polylog n proof the implies that var ri j var ri j note that ri j is a function of we can apply again to obtain a lower bound for var ri j in fact by variance decomposition formula using the independence of s var ri j e var ri j i var e ri j i e var ri j i where i includes all but the entry of apply again j var ri j i inf var and hence j var ri j e var ri j i e inf var j now we compute similar to in we have j eti g j ek where g j is defined in in when k i j eti g j ei eti d j d j g j d j d j ei eti d j g j d j ei by definition of g j t t d j x j x j d j g j d j i d j x j x j d j t t let j d j x j and hj j j j j denote by i j the matrix j after removing row then by block matrix inversion formula see proposition t i j j j j eti hj ei j i j t i j i j j t t i j j j i j i j i j t j i j i j j t j i j i j j t j i j i j j j this implies that eti d j g j d j ei eti i hj ei t j i j i j j t t e eti d j x j x i j d i j x i j x j j i t t e eti d j x j x i j x i j x j j i t t e d j i i eti x j x i j x i j x j i t t e eti x j x i j x i j x j i t t e eti x j x i j x i j x j i t t apply the above argument to hj x j x j x j x j we have t t eti i x j x j x j x j ei t x t x e eti x j j i i j x i j thus by and var ri j t t eti i x j x j x j x j ei summing i over n we obtain that n tr cov x t t t e i x j x j x j x j ei min var i n n i t t tr i x j x j x j x j min var i n min var i n by assumption we conclude that since mini var polylog n tr cov n polylog n in summary var nj polylog n recall that t t t xj d j d j x j x j d j x j x j d j xj xjt d j xj t n n we conclude that var nj t polylog n controlling by definition nj e enj enj e e enj nj var enj e e enj var cov nj enj var var nj var var by in the proof of theorem e e q ee o polylog n where the last equality uses the fact that e polylog n as proved in on the other hand let be an independent copy of then var e e since as shown in we have var e var to bound var we propose to using the standard inequality chernoff which is stated as follows proposition let w wn n and f be a twice differentiable function then w var f w e in our case ui wi and hence for any twice differentiable function g var g e t max e i applying it to we have var e for given k n using the chain rule and the fact that db dbb for any square matrix b we obtain that t t d j d j x j x j d j x j x j d j j j t t t t j x j x j d j x j x j d j d j x j x j d j x j x j j t t t t d j x j x j d j x j x j x j x j d j x j x j d j j g j j t t where g j i x j x j d j x j x j d j as defined in last subsection this implies that j xjt gt j g j xj n then entails that n x t t j e xj g j var g j xj n similar to in and recalling the definition of d j in first we compute j k and that of g j in in we have j j diag g j ek diag j g j ek let xj g j xj and xj xj where denotes hadamard product then xjt gt j j j g j xj xjt xj xjt diag j g j ek xj j g j ek here we use the fact that for any vectors x a rn xt diag a x n x ai x x t this together with imply that var n x e j g j ek e j g j n e j g j gt j j note that g j gt j kg j i and j i by lemma in therefore we obtain that e g j op var e g j op j n n e g j op kxj e g j op kxj n n as shown in kg j kop on the other hand notice that the row of g j is i see for definition by definition of we have kxj kg j xj max i xj max i i by and assumption kxj this entails that polylog n var o polylog n n combining with and we obtain that polylog n o n summary putting and together we conclude that polylog n var bj n var bj polylog n 
n polylog n polylog n n combining with var c polylog n n proof of other results proofs of propositions in section proof of proposition let hi first we prove that the conditions d imply that is the unique minimizer of hi for all i in fact since hi using the fact that is even we have hi by for any hi hi as a result is the unique minimizer of hi then for any rp n n n n yi xti xti hi xti hi n n n n the equality holds iff xti for all i since is the unique minimizer of hi this implies that x since x has full column rank we conclude that proof of proposition for any r and rp let n yi xti n g since minimizes it holds that n n g xti g n n note that is the unique minimizer of the above equality holds if and only if t xi x since x has full column rank it must hold that and proofs of corollary proposition suppose that are such that as a function of has a unique minimizer further assume that xjnc contains an intercept term xjn has full column rank and span xj j jn span xj j jnc let n t yi xi arg min min c n then proof let n g yi xti n for any minimizer of g which might not be unique we prove that it follows by the same argument as in proposition that xti x xjn jnc since xjnc contains the intercept term we have xjn span xj j jnc it then follows from that xjn since xjn has full column rank we conclude that the proposition implies that is identifiable even when x is not of full column rank a similar conclusion holds for the estimator and the residuals ri the following two propositions show that under certain assumptions and ri are invariant to the choice of in the presense of multiple minimizers proposition suppose that is convex and twice differentiable with x c for all x let be any minimizer which might not be unique of n f yi xti n then ri yi xi is independent of the choice of for any i proof the conclusion is obvious if f has a unique minimizer otherwise let and be two different minimizers of f denote by their difference since f is convex is a minimizer of f for all v by taylor expansion f f t f o v since both and are minimizers of f we have f f and by letting v tend to we conclude that t f the hessian of f can be written as f t cx t x x diag yi xti x n n thus satisfies that cx t x n this implies that y x y x and hence ri is the same for all i in both cases proposition suppose that is convex and twice differentiable with x c for all x further assume that xjn has full column rank and span xj j jn span xj j jnc let be any minimizer which might not be unique of n f yi xti n then is independent of the choice of proof as in the proof of proposition we conclude that for any minimizers and where decompose the term into two parts we have xjn span xj j jnc it then follows from that xjn since xjn has full column rank we conclude that and hence proof of corollary under assumption xjn must have full column rank otherwise there exists such that xjn in which case xjtn i xjn this violates the assumption that on the other hand it also guarantees that span xj j jn span xj j jnc this together with assumption and proposition implies that is independent of the choice of c c c let and assume that is invertible let such that xjn xjnc xjnc then rank x rank and model can be rewritten as y where let be an which might not be unique based on then proposition shows that is independent of the choice of and an invariance argument shows that in the rest of proof we use to denote the quantity obtained based on first we show that the assumption is not affected by this transformation in fact for any j jn by definition we have span j 
span x j and hence the residuals are not changed by proposition this implies that and qj recall the definition of the condition of entails that x t in particular xjtnc and this implies that for any rn cov xjtnc xjnc qj thus tr xjt qj xj xj xjcn j t qj xj xjnc j tr qj tr qj then we prove that the assumption is also not affected by the transformation the above argument has shown that xj on the other hand let b then b is and xb let b j j denote the matrix b after removing row and column then b j j is also and j x j b j j recall the definition of i we have t t i i j j j j j ei t t t i d j x j b j j b j j x j d j xj b j j b j j x j ei t i d j x j x j d j xj x j ei i on the other hand by definition t t t t x j i x j i d j x j x j d j x j x j ei thus i i xj xjcn j i xj in summary for any j jn and i n i i i xj i putting the pieces together we have c by theorem e j j n o max dtv q var provided that satisfies the assumption now let u be the singular value decomposition of xjnc where u v with u t u v t v ip and diag being the diagonal matrix formed by singular values of xjnc first we consider the case where xjnc has full column rank p then for all j let xjtn xjn xjtn xjn and t then t n n xjtn i xjnc xjtnc xjnc xjnc xjn ni this implies that t n n o max the assumption implies that t o polylog n n t n t n n o min polylog n by theorem we conclude that next we consider the case where xjcn does not have full column rank we first remove the redundant columns from xjcn replace xjnc by the matrix formed by its maximum linear independent subset denote by x this matrix then span x span x and span xj j jn span xj j jn as a consequence of proposition and neither nor is affected thus the same reasoning as above applies to this case proofs of results in section first we prove two lemmas regarding the behavior of qj these lemmas are needed for justifying assumption in the examples lemma under assumptions and kqj kop kqj kf where qj cov as defined in section proof of lemma by definition sup qj where is the unit sphere for given qj cov var it has been shown in in appendix that j eti g j ek t t where g j i x j x j d j x j x j d j this yields that n n n x x j x ri j ri j ri j eti g j j g j by standard inequality see proposition since ui wi n n x x ri j ri j max e var k e j g j gt j j j g j gt j j kg j we conclude from lemma and in appendix that kg j j kop therefore sup var and hence n x ri n lemma under assumptions tr qj k n n polylog n where k n mini var proof this is a direct consequence of lemma in throughout the following proofs we will use several results from the random matrix theory to bound the largest and smallest singular values of z the results are shown in appendix furthermore in contrast to other sections the notation p e var denotes the probability the expectation and the variance with respect to both and z in this section proof of proposition by proposition op op op and thus the assumption holds with high probability by inequality hanson wright rudelson vershynin see proposition for any given deterministic matrix a t p azj ezjt azj t exp min kakop for some universal constant let a qj and conditioning on z j then by lemma we know that k k kqj kf kqj kop and hence t t t p zj qj zj e zj qj zj z j z j exp min note that e zjt qj zj z j tr e zj zjt j qj tr qj tr qj by lemma we conclude that t q z z zjt qj zj t t j j j z j p p z j tr qj nk tr qj tr qj t exp min let t nk and take expectation of both sides over z j we obtain that zjt qj zj k k p exp min tr qj and hence p zjt qj zj min tr qj exp min k k o this 
entails that min zjt qj zj polylog n tr qj thus assumption is also satisfied with high probability on the other hand since zj has entries for any deterministic unit vector rn zj is and and hence p zj t let i i i and since i and are independent of zj a union bound then gives p log n p t log n by fubini s formula durrett lemma z z log n p t dt dt z log n p t dt z p p p log n t log n p t log n dt p log n log n dt o polylog n o polylog n this together with markov inequality guarantees that assumption is also satisfied with high probability proof of proposition it is left to prove that assumption holds with high probability the proof of assumption and is exactly the same as the proof of proposition by proposition op on the other hand by proposition litvak et t z z n p n and thus proof of proposition since jn excludes the intercept term the proof of assumption and is still the same as proposition it is left to prove assumption let rn be rademacher random variables p ri p ri and z diag bn z then z t z z t z it is left to show that the assumption holds for z with high probability note that t bi bi for any r and borel sets bp r p bi r bi bi p bi r p bi r p p p bi r p p p bi r p bi p bi where the last two lines uses the symmetry of then we conclude that has independent entries since the rows of z are independent z has independent entries since bi d are symmetric and with unit variance and bi which is also symmetric and with variance bounded from below z satisfies the conditions of propsition and hence the assumption is satisfied with high probability proof of proposition with proposition being a special case let then has standard gaussian entries by proposition satisfies assumption with high probability thus t op polylog n n n and n n polylog n as for assumption the first step is to calculate e zjt qj zj j let z then vec n i as a consequence j n i where j z j j j j j j j j j thus zj j n where z j j j j j it is easy to see that min max j j it has been shown that qj and hence zjt qj zj zj t qj zj let zj zj and qj then zj n i and zjt qj zj zjt zj by lemma kop kqj kop and hence kf by inequality hanson wright rudelson vershynin see proposition we obtain a similar inequality to as follows p qj zj e zjt qj zj z j t z j t exp min on the other hand e zjt qj zj j e zjt zj j tr by definition tr tr qj tr tr tr qj by lemma tr nk similar to we obtain that zjt qj zj t z j p tr qj nk t exp min let t nk we have p zjt qj zj tr qj exp min k and a union bound together with yields that min zjt qj zj min j tr qj polylog n polylog n n as for assumption let i i i then for i p p i note that zj t zj i zj t i zj i using the same argument as in we obtain that o max polylog n o polylog n j and by markov inequality and e op op polylog n proof of proposition the proof that assumptions and hold with high probability is exactly the same as the proof of proposition it is left to prove tion see corollary let c mini i and z recall the the definition of and we have where t n n rewrite as n n it is obvious that span n n span z as a consequence zt z n it remains to prove that t z z op polylog n n zt z n zt z n polylog n to prove this we let z where and then t t z z n n n and zt z n n it is left to show that t op polylog n n n n polylog n by definition mini and maxi o polylog n then t t n n n n n since has standard gaussian entries by proposition op n moreover n maxi o n polylog n and thus t op polylog n n on the other hand similar to proposition diag bn where bn are rademacher random variables the same argument in the proof of 
proposition implies that has independent entries with norm bounded by and variance lower bounded by by proposition satisfies assumption with high probability therefore holds with high probability proof of proposition let and z be the matrix with entries zij then by proposition or proposition zij satisfies assumption with high probability notice that t t z z z z op polylog n n n and z t z n zt z n polylog n thus z satisfies assumption with high probability conditioning on any realization of the law of zij does not change due to the independence between and z repeating the arguments in the proof of proposition and proposition we can show that zjt zj tr polylog n and e max n p t i zj op polylog n where i i i then zjt qj zj zjt zj tr zjt zj tr qj tr qj tr tr polylog n and max n p e max i max max max j i j i t i zj t i zj n p op polylog n by markov inequality the assumption is satisfied with high probability proof of proposition the concentration inequality of plus a union bound imply that p max log n log n o i thus with high probability t t z z z z log n op polylog n n n let b nc for some then for any subset i of n with size by proposition proposition under the conditions of proposition proposition there exists constants and which only depend on such that t zi zi n p n where zi represents the of z formed by zi i i where zi is the row of z then by a union bound t zi zi n p n n by stirling s formula there exists a constant such that n o n n c exp log log n n n n where for sufficiently small and sufficiently large n log log and hence p zit zi n n for some by lemma lim inf min nc zit zi n on the other hand since f is continuous at then b nc f where k is the largest of i n let i be the set of indices corresponding to the largest b nc then with probability t t t z z zi i zi z z lim inf lim inf b nc lim inf lim inf n n n t zi zi lim inf b nc lim inf min nc n f to prove assumption similar to in the proof of proposition it is left to show that tr min j tr qj polylog n furthermore by lemma it remains to prove that n min tr j polylog n recalling the equation in the proof of lemma we have eti qj ei t t t ei z j z i j z i j z j ei by proposition u u p zjt zj n n on the other hand apply to z i j we have p min nc z i j ti z i j i n n a union bound indicates that with probability np min n o t z j z j z i j ti z i j i min min max j i j nc n n this implies that for any j t z j z j n t z j z j n and for any i and j t z i j z i j n t i z i j z i j n min b nc b nc min nc z i j ti i z i j i n min b nc b nc moreover as discussed above log n min b nc b nc f almost surely thus it follows from that with high probability eti qj ei t z t z e eti z j j i i j z i j t z j et z j ei f i n log n f the above bound holds for all diagonal elements of qj uniformly with high probability therefore n tr b nc p log n f polylog n as a result the assumption is satisfied with high probability finally by we obtain that t e max i zj n p by cauchy s inequality r e max n p t z i j q e max i similar to we conclude that o polylog n and by markov inequality the assumption is satisfied with high probability more results of section the relation between sj x and in section we give a sufficient and almost necessary condition for the coordinatewise asymptotic normality of the estimator ls see theorem in this subsubsection we show that is a generalization of sj x for general mestimators consider the matrix x t dx x t where d is obtain by using general loss functions then by block matrix inversion formula see proposition dx x t dx x t t t t x x dx x 
t t i dx x dx x t dx x t d x d dx x t t i d x x d x x t dx x t d x d dx x where we use the approximation d d the same result holds for all j jn then t t i d x x d x x ketj x t dx x t t t t t t t kej x dx x i d x x d x x t t recall that i is row of i d x x d x x we have max i i ketj x t dx x t i ketj x t dx x t the side equals to sj x in the case therefore although of complicated form assumption is not an artifact of the proof but is essential for the asymptotic normality additional examples benefit from the analytical form of the estimator we can depart from subgaussinity of the entries the following proposition shows that a random design matrix z with entries under appropriate moment conditions satisfies sj z o with high probability this implies that when x is one realization of z the conditions theorem are satisfied for x with high probability over z proposition if zij i n j jn are independent random variables with m for some m var zij for some p z has full column rank o ezj span zj j jnc almost surely for all j jn where zj is the column of z then max sj z op op a typical practically interesting example is that z contains an intercept term which is not in jn and zj has entries for j jn with continuous distribution and sufficiently many moments in which case the first three conditions are easily checked and ezj is a multiple of which belongs to span zj j jnc in fact the condition allows proposition to cover more general cases than the above one for example in a census study a fix effect might be added into the model yi zit where si represents the state of subject i in this case z contains a formed by zi and a with anova forms as mentioned in example the latter is usually incorporated only for adjusting group bias and not the target of inference then condition is satisfied if only zij has same mean in each group for each j ezij j proof of proposition by formula etj z t z z t zjt i hj zjt i hj zj t t where hj z j z j z j z j is the projection matrix generated by z j then sj z ketj z t z z t kzjt i hj q ketj z t z z t zjt i hj zj similar to the proofs of other examples the strategy is to show that the numerator as a linear contrast of zj and the denominator as a quadratic form of zj are both concentrated around their means specifically we will show that there exists some constants and such that n max sup p kazj n p zjt azj n o n a tr a if holds since hj is independent of zj by assumptions we have t kz i h k j j p sj z p q t c z i hj zj j k i hj zj n p zjt i hj zj n t p k i hj zj n z j e p zj i hj zj n z j sup p kazj n p zjt azj n tr a max p kazj n sup tr a zjt azj n n thus with probability o o max sj z and hence max sj z op now we prove the proof although looks messy is essentially the same as the proof for other examples instead of relying on the exponential concentration given by the we show the concentration in terms of moments in fact for any idempotent a the sum square of each row is bounded by since x j j i by jensen s inequality ezij for any j by rosenthal s inequality rosenthal there exists some universal constant c such that n n n x x aij zij ezij e n n x aij ezij n n x cm aij aij let then for given i by markov inequality n x p aij zij n n and a union bound implies that p kazj n n now we derive a bound for zjt azj since there exists such that n p then ezjt azj n x aii ezij tr a n p to bound the tail probability we need the following result lemma bai and silverstein lemma let b be an n n nonrandom matrix and w wn t be a random vector of independent entries assume that ewi and then 
for any q q q t bw tr b cq tr bb t tr bb t where cq is a constant depending on q only it is easy to extend lemma to case by rescaling in fact denote by the variance of wi and let diag y wn then w t bw y t y with cov y i let then t t bb t this entails that tr t tr bb t tr t tr bb t on the other hand q q q q tr t t bb t bb t thus we obtain the following result lemma let b be an nonrandom matrix and w wn t be a random vector of independent entries suppose then for any q q q t bw ew t bw cq tr bb t tr bb t where cq is a constant depending on q only apply lemma with w zj b a and q we obtain that azj ezjt azj cm tr aat tr aat for some constant since a is idempotent all eigenvalues of a is either or and thus aat i this implies that tr aat n tr aat n and hence azj ezjt azj for some constant which only depends on m by markov inequality p azj ezjt azj n combining with we conclude that p zjt azj n o n where notice that both and do not depend on j and a therefore is proved and hence the proposition d additional numerical experiments in this section we repeat the experiments in section by using loss x is not smooth and does not satisfy our technical conditions the results are displayed below it is seen that the performance is quite similar to that with the huber loss coverage of normal coverage of t normal ellip coverage ellip coverage iid iid t hadamard hadamard sample size entry dist normal t sample size hadamard entry dist normal t hadamard figure empirical coverage of with left and right using loss the corresponds to the sample size ranging from to the corresponds to the empirical coverage each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design min coverage of normal min coverage of t normal t ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal sample size t hadamard entry dist normal t hadamard figure mininum empirical coverage of with left and right using loss the corresponds to the sample size ranging from to the corresponds to the minimum empirical coverage each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design bonf coverage of normal bonf coverage of t normal iid iid hadamard hadamard ellip ellip coverage coverage t sample size entry dist normal t sample size hadamard entry dist normal t hadamard figure empirical coverage of after bonferroni correction with left and right using loss the corresponds to the sample size ranging from to the corresponds to the empirical uniform coverage after bonferroni correction each column represents an error distribution and each row represents a type of design the orange solid bar corresponds to the case f normal the blue dotted bar corresponds to the case f the red dashed bar represents the hadamard design e miscellaneous in this appendix we state several technical results for the sake of completeness proposition horn johnson formula let a be an invertible matrix and write a as a block matrix with r being invertible matrices then s s s where s is the schur s complement proposition rudelson vershynin improved version of the original form by hanson wright let x xn rn be a random vector with independent components xi then for every t t p t ax ex t t exp min kakop proposition 
bai yin if zij i n j p are random variables with zero mean unit variance and finite fourth moment and then t t z z z z n n proposition latala suppose zij i n j p are independent random variables with finite fourth moment then sx sx sx q max e z t z c ezij ezij ezij i j j i i j for some universal constant in particular if ezij are uniformly bounded then zt z n r p op n proposition rudelson vershynin suppose zij i n j p are independent random variables then there exists a universal constant such that s r t z z p p nt n n proposition rudelson vershynin suppose zij i n j p are random variables with zero mean and unit variance then for s r t z z p n n for some universal constants c and proposition litvak et suppose zij i n j p are independent random variables such that d zij var zij for some and then there exists constants which only depends on and such that t z z n p n
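The two facts used most heavily above, the Bai-Yin-type bounds on the extreme eigenvalues of Z'Z/n and the Hanson-Wright concentration of quadratic forms, are easy to see numerically. The following is a minimal sketch and not part of the original argument: the Gaussian entries, the dimensions n = 4000 and p = 1000, and the choice of A as a random orthogonal projection are illustrative assumptions made here.

```python
# Illustrative check of the random-matrix propositions above (assumed sizes).
import numpy as np

rng = np.random.default_rng(1)
n, p = 4000, 1000                     # aspect ratio p/n = 0.25
Z = rng.standard_normal((n, p))

# Bai-Yin: extreme eigenvalues of Z'Z/n approach (1 +- sqrt(p/n))^2.
sv = np.linalg.svd(Z, compute_uv=False)
kappa = p / n
print("largest  eig of Z'Z/n:", sv[0] ** 2 / n, " limit:", (1 + kappa ** 0.5) ** 2)
print("smallest eig of Z'Z/n:", sv[-1] ** 2 / n, " limit:", (1 - kappa ** 0.5) ** 2)

# Hanson-Wright: z'Az concentrates around tr(A) at scale ||A||_F.  Here A is a
# rank-p/2 orthogonal projection, so tr(A) = p/2, ||A||_F = sqrt(p/2), and for
# Gaussian z the variance of z'Az is exactly 2 ||A||_F^2.
Q, _ = np.linalg.qr(rng.standard_normal((p, p // 2)))
A = Q @ Q.T
z = rng.standard_normal((2000, p))
quad = ((z @ A) * z).sum(axis=1)      # z_k' A z_k for each of the 2000 draws
print("mean of z'Az:", quad.mean(), " tr(A):", np.trace(A))
print("sd of z'Az:  ", quad.std(ddof=1), " sqrt(2)*||A||_F:",
      2 ** 0.5 * np.linalg.norm(A, "fro"))
```

For completeness, here is a similarly minimal sketch of the Monte Carlo coverage experiment described in the section on asymptotic normality for multiple marginals: one fixed design, repeated error draws, a Huber M-estimator fitted by L-BFGS, per-coordinate confidence intervals, and the minimum and Bonferroni-corrected simultaneous coverages. It is a hypothetical illustration rather than the authors' code; the sizes n and p, the number of repetitions, the t errors, the Huber constant 1.345 and the level alpha = 0.05 are assumptions made here, not values taken from the paper.

```python
# Illustrative sketch of the fixed-design coverage experiment (assumed sizes).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p, reps, alpha, delta = 400, 80, 200, 0.05, 1.345
X = rng.standard_normal((n, p))            # one fixed design matrix
beta = np.zeros(p)                         # true coefficients
J = [0, 1, 2]                              # coordinates of interest

def huber_fit(y):
    """Unpenalized Huber M-estimator via smooth convex minimization."""
    def obj(b):
        r = y - X @ b
        a = np.abs(r)
        rho = np.where(a <= delta, 0.5 * r ** 2, delta * a - 0.5 * delta ** 2)
        grad = -X.T @ np.clip(r, -delta, delta) / n
        return rho.mean(), grad
    return minimize(obj, np.zeros(p), jac=True, method="L-BFGS-B").x

# Repeated error draws on the same design; collect the estimates.
est = np.array([huber_fit(X @ beta + rng.standard_t(df=3, size=n))
                for _ in range(reps)])

# sd_j = sample standard deviation of the j-th estimate over the repetitions.
sd = est[:, J].std(axis=0, ddof=1)
err = np.abs(est[:, J] - beta[J])

z = norm.ppf(1 - alpha / 2)                          # marginal intervals
print("min coverage over J:", (err <= z * sd).mean(axis=0).min())

z_bonf = norm.ppf(1 - alpha / (2 * len(J)))          # Bonferroni correction
print("simultaneous coverage:", np.all(err <= z_bonf * sd, axis=1).mean())
```

Because the design is held fixed across repetitions, the spread of the estimates over the error draws serves directly as the estimate of sd_j, which is what keeps this check cheap.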
| 10 |
dec the classification of kleinian groups of hausdorff dimensions at most one yong institute for advanced study princeton university abstract in this paper we provide the complete classification of kleinian group of hausdorff dimensions less than in particular we prove that every purely loxodromic kleinian groups of hausdorff dimension is a classical schottky group this upper bound is sharp as an application the result of then implies that every closed riemann surface is uniformizable by a classical schottky group the proof relies on the result of hou and space of rectifiable closed curves introduction and main theorem we take kleinian groups to be finitely generated discrete subgroups of psl c the main theorem is theorem classification any purely loxodromic kleinian group with limit set of hausdorff dimension is a classical schottky group this bound is sharp supported by ambrose monell fundation we note that by selberg lemma is not really a restriction since any finitely generated discrete subgroup of psl c has a finite index subgroup as an application we have the following corollary which is a resolution of a folklore problem of bers on classical schottky group uniformization of closed riemann surface corollary follows from the work of hou theorem hou every closed riemann surface is uniformizable by a schottky group of hausdorff dimension every point in moduli space has a hausdorff dimension fiber in the schottky space corollary uniformization every closed riemann surface can be uniformized by a classical schottky group strategy of proof first let us recall the result of hou theorem hou there exists such that any kleinian group with limit set of hausdorff dimension is a classical schottky group define hc sup theorem hc is the maximal parameter such that if is a schottky group of hausdorff dimension hc then is classical schottky group hence theorem can be rephrased as hc we prove by contradiction so from now on and throughout the paper we assume that hc then we will show that hc is not maximal recall that the hausdorff dimension function on the schottky space of c rank g is real analytic it is a consequence of theorem that jh g rankg schottky groups of hausdorff dimension hc see section is a dimensional open and connected submanifold of jg the schottky space c the proof is done as follows first we note that must contain a g schottky group otherwise hc is not maximal by definition see proposition second we show that if hc then every element of the c boundary g is either a classical schottky group or it is not a schottky group lemma this contradicts the first fact hence we must have hc the bulk of the paper is devoted to proof the second fact which we now summarize the idea in the following it is a result of bowen that a schottky group has hausdorff dimension if and only if there exist a rectifiable closed curve let r s w be the space of bounded length closed curves which intersects the compact set w c and equipped with metric it is complete space see section we show that if hc then every with bounded length of is the limit of a sequence of of in r s w we also show that if is a schottky group then every of has an open neighborhood in the relative topology of see section such that every element of the open neighborhood is a of we also define linearity and transversality invariant for and show that of classical schottky groups preserve these invariants and nonclassical schottky groups do not have transverse linear given a of a schottky group we show that there exists an open neighborhood in 
the relative topology of space of rectifiable curves with respect to frechet metric about the such that every point in the open neighborhood is a of see lemma next assume that we have a sequence of classical schottky groups to a schottky group and are all of hasudorff dimensions less than one we then study singularity formations of classical fundamental domains of when these singularities are of three types tangent degenerate and collapsing we show that all these singularities will imply that there exists a such that every open neighborhood about this quasicircle will contain some points which is not a essentially the existence of a singularity will be obstruction to the existence of any open neighborhood that are of see lemma hence it follows from these results that if with classical and all hausdorff dimensions are of less than one then must be a classical schottky group acknowledgement this work is made possible by unwavering supports and insightful conversations from peter sarnak whom i m greatly indebted to it is the groundbreaking works of peter sarnak that has guided the author to study this problem at first place i wish to express my deepest gratitude and sincere appreciation to dave gabai for the continuous of amazing supports and encouragements which allowed me to complete this work i want to express my sincere appreciation to the referee for detailed reading and helpful comments and suggestions i also want express sincere appreciation to ian agol matthew de for reading of the previous draft this paper is dedicated to my father shuying hou and generating jordan curves schottky group of rank g is defined as discrete faithful representation of the free group fg in psl c it follows that is freely generated by purely loxodromic elements this implies we can find collection of open topological disks di i g of disjoint closure in the riemann sphere c with boundary curves ci by definition ci are closed jordan curves in riemann sphere such that ci and di whenever there exists a set of generators with all ci as circles then it is called a classical schottky group with classical generators schottky space jg is defined as space of all rank g schottky groups up to conjugacy by psl c by normalization we can chart jg by complex parameters hence jg is dimensional complex manifold the bihomolomorphic auto jg group is out fg which is isomorphic to quotient of the group denote by jg o the set of all elements of jg that are classical schottky groups note that jg o is open in jg on the other hand it is nontrivial result due to marden that jg o is subset of jg however it follows from theorem jg o is dimensional open connected submanifold here denotes space of schottky groups of hausdorff dimension some notations given a kleinian group we denote by and and its limit set region of discontinuity and hausdorff dimension respectively throughout this paper given a fundamental domain f of we denote the orbit of f under actions of by we also say is a classical fundamental domain of classical schottky group if are disjoint circles definition given a geometrically finite kleinian group a closed jordan curve that contains the limit set is called of remark from now on we make the global assumption throughout this paper that hc and all schottky groups are of hausdorff dimension hc if not stated otherwise next we give a construction of of which is a generalization of the construction by bowen let f be a fundamental domain of and ci be the collection of disjoint jordan curves comprising let denote collection of arcs 
connecting points pi ci for i g and arcs on that connects to pi and to so is a set of g disjoint curves connecting disjoint points on collection of jordan curves of figure figure defines a closed curve containing defines a of obviously there are infinitely many and different gives a different note that the simply connected regions c gives the bers simultaneous uniformization of riemann surface definition generating curve given a of we say a collection of disjoint curves is a generating curve of if can be generated by note that the constructed in which requires that is a imagine of pi under element of is a subset of the collection that we have defined here in fact this generalization is also used for the construction of of schottky groups proposition every of is generated by some generating curves proof let be a of let f be a fundamental domain of set f and then consists of collection of disjoint curves which only intersects along hence we have for with since we have have hence is a generating curve of definition linear we call a linear if consists of points circular arcs or lines note that if is linear then there exists such that and are circular arcs or lines we say an arc is orthogonal if the tangents at intersections on are orthogonal with and an arc is parallel if definition given a linear of if all linear arcs intersect at then we say is definition transverse given a of we say is transverse if intersects orthogonally for some and have no parallel arc otherwise we say is definition parallel given a of we say is parallel if there exists some arc of such that proposition transverse always exists for a given schottky group proof let f be bounded by distinct jordan closed curves and take any curve connecting pi ci such that pi and for i g that intersects ci orthogonally it should be noted that a of in general is not necessarily rectifiable for instance if we take to be some generating curves then will be recall a curve is said to be rectifiable if and only if the hausdorff measure of the curve is finite this is not the only obstruction to rectifiability in fact we have the following result of bowen theorem for a given schottky group the hausdorff dimension of limit set is if and only if there exists a rectifiable for the proof of theorem relies on the fact that the poincare series of converges if and only if proposition let be a schottky group of suppose a given generating curve is a rectifiable curve then is a rectifiable of proof let be the hausdorff measure since we have let let denotes the derivative of then we have x x z z also since if and only if poincare series satisfies implies that is rectifiable if and only if rectifiable this let w c be a compact set denote the space of closed curves with bounded length in c that intersect with w by r s w h s s w h continuous rectifiable map for r s w let be it s respective arclength the distance is defined as df inf sup homeo s for a given compact w c the space of closed curves with bounded length r s w is a metric space with respect to df two curves in r s w are same if there exists parametrization such that and the topology on r s w is defined with respect to the metric df see let be a generating curve for fix a indexing of set let be a parametrization of such that define a parametrization of then df m inf for some m this implies continuity of with respect to generating curve proposition for a given compact w c the space r s w is complete metric space with respect to df proof let r s w be a cauchy sequence are rectifiable curves of bounded 
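The metric d_F whose definition is garbled above appears to be the Fréchet distance between closed rectifiable curves, viewed as maps from the circle S^1; a plausible reconstruction under that assumption:
\[ d_F(\alpha, \beta) \;=\; \inf_{\varphi \in \mathrm{Homeo}(S^1)} \; \sup_{t \in S^1} \bigl| \alpha(t) - \beta(\varphi(t)) \bigr|, \]
with two curves of R(S^1, W) identified whenever some reparametrization \varphi makes them agree pointwise, as stated in the surrounding text.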
length and so there exists lipschitz parameterizations with bounded lipschitz constants such that t are uniformly lipschitz then completeness follows from the fact that all curves of r s w are contained within some large compact subset of hc c proposition there exists a jh g in the closure of jg such that is not a classical schottky group c proof suppose false then every element jh g with hausdorff dimension hc is a classical schottky group since the classical schottky space jg o is c open in the schottky space jg we have a open neighborhood u of jh g in jg c such that u jh g o by definition of hc it is maximal hence there are nonclassical schottky groups of hausdorff dimension arbitrarily close to hc and hc then there exists a sequence of schottky groups of hausdorff dimensions hc let limn since by assumption all schottky groups of hausdorff dimension hc is classical we must have either it is not a schottky group or it is a classical schottky group this implies that for large n we must have u which is a contradiction to the sequence been all schottky groups hence hc is not maximal proposition there exists a schottky group of hc and sequence of classical schottky groups such that with sup proof it follows from that the hausdorff dimension map d jg is real analytic map and by proposition there exists a with c hc theorem implies jh g is open submanifold of jg o hence there c exists a sequence of classical schottkys jh g with schottky space and rectifiable curves take and as given by proposition in particular is a sequence of classical schottky groups such that denote qc c space of quasiconformal maps on it follows from quasiconformal deformation theory of schottky space for classical schottky group of hausdorff dimension hc we can write jg f f psl c qc c c remark from now on throughout rest of the paper we fix to be a classical schottky group with hausdorff dimension hc notations set h to be the collection of all schottky groups of hausdorff dimension hc note that there exists a sequence of quasiconformal maps fn and f of c such that we can write fn and f here we write f f g f for a given kleinian group and quasiconformal map schottky space jg can also be considered as subspace of this provides jg analytic structure as complex analytic manifold proposition let be a sequence of schottky groups with to a schottky group let f be a fundamental domain of there exists a sequence of fundamental domain fn of such that fn proof let ci be the jordan curves which is the boundary of set cn ci then cn is the boundary of a fundamental domain of fn hence we have a fundamental domain fn of defined by cn with fn lemma suppose fn f and f every with bounded length of fn is in r s w for some compact proof let be a of note since limit set and limit set is compact and is rectifiable we can find some compact set w given h with let denote the collection of all bounded df length of we define to be the closure of the set of bounded length of in r s w proposition the subspace is compact proof curves in are bounded length and we have parametrization with bounded lipschitz constants it follows from theorem we have uniform convergence topology on since curves in r s w are all contained in some large compact set of c we have is closed and bounded hence compact we define o open sets about in relative topology given by o for some open set o r s w let be a schottky group let f be a fundamental domain of for o of and suppose every element is and let denote a generating curve of o with respect to then we have o the collection 
of all generating curves of the open set o gives a open set of generating curves of on set of collection of all generating curves of elements of we define the topology as if and only if for we will sometime denote by a curve which is the limit of rectifiable of proposition let h with then every bounded length of is in in addition if are linear then is linear of proof let note that since hc we have hc define for all then is a sequence of jordan closed curves it follows from proposition we have a generating curve of so and we have since and so its where hence is a generating curve of which are of denote by a generating curve for since is rectifiable curve modify if necessary we can assume are rectifiable curves let z since hc we have the hausdorff measure x z where is the derivative of hence are rectifiable it follows that there exists c such that for large n hence are bounded of r s w and by proposition we have and finally if are linear then are linear and since mobius maps preserves linearity we have is linear definition for a given sequence of quasi circles of h with we say is a of if it is convergent sequence and is a of we also call non transverse if all are also non transverse lemma existence let h be a sequence of schottky groups with there exists a of of in addition if is also non transverse then is non transverse proof let fn be maps such that fn let be a of then fn is a of the non transverse property obviously is preserved corollary let be a of linear of is a linear of proof linearity is obviously preserved at corollary let be a of quasicircles of then is a non transverse linear of if and only if is a non transverse proof lemma open let let be a schottky group let be a of then there exists a open neighboredood o of in such that every elements of o is a of proof let f be a fundamental domain of let be the generating curve of with respect to let fn be the fundamental domain of with fn by proposition we have let u to be the open set about the generating curve of and set o u then o is open sets of generating curves in figure let o fn o for o a open set of generating curves for since is a and fn f for large n we can choose sufficiently small neighborhood such that f is a open neighborhood of generating curves of let fn and be generated by assuming is sufficiently small neighborhood we have length c for some small c for large n and generated by all fn since we have is of with bounded length this defines a open set which all elements are in corollary let h with a schottky group let be a of of then there exists an open neighborhood o of such that every element of o is a of figure open set of generating curves about proof follows from lemma and lemma next we analyze the formations of singularities for a given sequence of classical schottky groups converging to a schottky group these types of singularities has been studied in lemma singularity let h with a schottky group assume that is schottky group then there exists such that every o contains a here we say a closed curve is a if it s contains a singularity point or it s not proof for each n let fn be a classical fundamental domain of given a sequence of classical fundamental domains fn the convergence is consider as follows is collection of circles ci n in the riemann sphere c pass to a subsequence if necessary then limn ci n is either a point or a circle we say fn convergents to g if g is a region that have boundary consists of limn ci n for each i which necessarily is either a point or a circle note that g is not necessarily a 
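The length estimate garbled in the proof above is presumably of the following form; the notation \Gamma_n for the groups, \eta_n for the generating curves and \Omega_n for the resulting orbit curves is introduced here only for readability and is an assumption, not the author's:
\[ \mathcal{H}^1(\Omega_n) \;\le\; \sum_{\gamma \in \Gamma_n} \int_{\eta_n} |\gamma'(z)| \, |dz| \;<\; \infty, \]
the right-hand side being finite because the Poincaré series of \Gamma_n converges at exponent 1 whenever the Hausdorff dimension of the limit set is below one; this would give the rectifiability and the uniform length bound claimed for large n.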
fundamental domain nor it s necessarily connected let lim fn by assumption that is not a classical schottky group we have g is not a classical fundamental domain of we have consists of circles or points however these circles may not be disjoint more precisely we have the following possible degeneration of circles of which gives of at least one of following singularities types tangency contains tangent circles degeneration contains a circles degenerates into a point collapsing contains two circles collapses into one circle here we have two concentric circles centered at origin and rest circles squeezed in between these two and these two collapse into a single circle in consider contains a tangency let p be a tangency point let be a of which pass through point assume the lemma is false then all sufficiently small open neighborhood o contains only of it follows from proposition we have a sequence with let be the generating curves of with respect to fn define as follows note that consists of points or linear arcs we define be a generating curve with to consists of linear arcs with one of its end point to be the point that converges into a tangency in in addition we also require the arcs on that with a end points which converges to tangency point be be connect by a arc between the other end points it is clear that every o fn contains a generating curve of this property let be the generated by and set lim then contains a loop singularity at a point of tangency figure hence is of since every o contains such a we must have the lemma to be true for this type of singularity figure tangency singularity consider contains a degeneration in this case we must have any pass through a degeneration point note that we can t have all circles degenerates into a single point there are two possibilities to have degenerate points case a two circles merge into a single degenerate point case b a circle degenerates into a point on a circle consider a in this case any will have two possible properties either there is a point q on some circle of such that every must pass through q or two separate arcs of meet at the second possibility implies there exists a loop singularity at p hence can not be a jordan curve figure therefore we only have the first possibility for but it follows from proposition all rectifiable of is the limit of some sequence of of and hence all must pass through q however q is not a limit point hence we can have some not passing through q a contradiction figure degenerate singularities now consider b here we can assume that there exists at least two circles that do not degenerates into points otherwise we will have the third type collapsing singularity which we will consider next having some circles degenerates into a point on to a circle at p we have a sequence of quasicircles which passes through this is given by the generating curves that have curves connecting the degenerating circle to point pn p converging to linear arc intersecting orthogonally at boundary but any neighboring of will converges to a with a loop singularity at p which is not a jordan curve figure finally we note that if we have a degeneration point p which is not a limit point then there exists a neighboring of that misses the point hence any o will contains some curve which is the limit of a sequence of that do not pass through the point this gives a in o consider contain collapsing let c denote the collapsed circle first suppose that there exists such that it has a fixed point not on then there are infinitely many elements 
with fixed points not on let with since figure degenerate singularities fn g c we must have c either identical or disjoint suppose that not all fixed points of elements of is contained in then we have infinitely many fixed points not in take three points a b c fixed points of elements of with a c and b c for sufficiently large n we must have some such that a b c is contained in three distinct disk of the complement of fn this follows from the fact that the orbit of fn will have images with disks converging to fixed points and since they are distinct fixed points we must have some disks that only contain one of the points only since fn c so converges to a circle but a b c are contained in distinct disks bounded by circles of fn for all large n which implies a b c must lies on c hence c c but they are not identical circles which is a contradiction hence all fixed points of elements of are c which implies since is schottky group we must have is fuchsian group of second kind by is classical schottky group a contradiction proof of theorem proof theorem we proof by contradiction suppose that hc first note that by selberg lemma we can just assume kleinian group to be now note that if a kleinian group of then it must be free to show this assume otherwise since and is purely loxodromic it is of there exists an imbedded surface r in if r is incompressible then subgroup r have r which is contradiction if r is compressible then we can cut along compression disks we either end with incompressible surface as before or after finitely many steps of cutting we obtain topological ball which implies is hence free let h and with a schottky group it follows from lemma that there exists an open set o such that every element is of now if is schottky group then by lemma we must have every open set contains some we must have o contain a but this gives a contradiction hence we must have is a classical schottky group finally sharpness comes from the fact that there exists kleinian groups which is not free of hausdorff dimension equal to one hence we have our result yonghou references y general relativity and the einstein equations oxford university press bowen hausdorff dimension of button j all fuchsian schottky groups are classical schottky groups geometry topology mono vol hou y on smooth moduli space of riemann surfaces hou y kleinian groups of small hausdorff dimension are classical schottky groups geometry topology p hou y all finitely generated kleinian groups of small hausdorff dimension are classical schottky groups http phillips sarnak the laplacian for domains in hyperbolic space and limit sets of kleinian groups acta
aug technische and keyword indexes for string searching by aleksander master s thesis in informatics department of informatics technische and keyword indexes for string searching indizierung von volltexten und keywords textsuche author aleksander supervisor burkhard rost advisors szymon grabowski tatyana goldberg msc master s thesis in informatics department of informatics august declaration of authorship i aleksander confirm that this master s thesis is my own work and i have documented all sources and material used signed date ii abstract string searching consists in locating a substring in a longer text and two strings can be approximately equal various similarity measures such as the hamming distance exist strings can be defined very broadly and they usually contain natural language and biological data dna proteins but they can also represent other kinds of data such as music or images one solution to string searching is to use online algorithms which do not preprocess the input text however this is often infeasible due to the massive sizes of modern data sets alternatively one can build an index a data structure which aims to speed up string matching queries the indexes are divided into ones which operate on the whole input text and can answer arbitrary queries and keyword indexes which store a dictionary of individual words in this work we present a literature review for both index categories as well as our contributions which are mostly the first contribution is the index which is a modification of the a compressed index that trades space for speed in our approach the count table and the occurrence lists store information about selected in addition to the individual characters two variants are described namely one using o n n bits of space with o m log m log log n average query time and one with linear space and o m log log n average query time where n is the input text length and m is the pattern length we experimentally show that a significant speedup can be achieved by operating on albeit at the cost of very high space requirements hence the name bloated in the category of keyword indexes we present the split index which can efficiently solve the problem especially for error our implementation in the language is focused mostly on data compaction which is beneficial for the search speed by being cache friendly we compare our solution with other algorithms and we show that it is faster when the hamming distance is used query times in the order of microsecond were reported for one mismatch for a natural language dictionary on a pc a minor contribution includes string sketches which aim to speed up approximate string comparison at the cost of additional space o per string they can be used in the context of keyword indexes in order to deduce that two strings differ by at least k mismatches with the use of fast bitwise operations rather than an explicit verification acknowledgements i would like to thank szymon grabowski for his constant support advice and mentorship he introduced me to the academia and i would probably not pursue the scientific path if it were not for him his vast knowledge and ability to explain things are simply unmatched i would like to thank burkhard rost and tatyana goldberg for their helpful remarks and guidance in the field of bioinformatics i would like to thank the whole gank incoming dota team the whole radogoszcz football and airsoft pack ester who keeps my ego in check duda whose decorum is still spoiled edwin fung for not letting the corporate giant 
consume him fuchs who still promises the movie florentyna gust who always supported me in difficult times for being a cat from korea jacek krasiukianis for introducing me to the realm of street fighter games madaj for biking tours jakub przybylski who recently switched focus from whisky to beer while remaining faithful to the interdisciplinary field of malt engineering for solving world problems together and wojciech terepeta who put up with my face being his first sight in the morning i am indebted to ozzy osbourne frank sinatra and roger waters for making the world slightly more interesting i would like to thank the developers of the free yuml software http for making my life somewhat easier many thanks also goes to my family as well as to all the giants who lent me their shoulders for a while iv contents declaration of authorship ii abstract iii acknowledgements iv contents v introduction applications natural language bioinformatics other preliminaries sorting trees binary search tree trie hashing data structure comparison compression entropy pigeonhole principle overview string searching problem classification error metrics online searching exact approximate offline searching indexes v vi contents exact suffix tree suffix array modifications other structures transform operation efficiency flavors binary rank superlinear space linear space approximate blast keyword indexes exact bloom filter inverted index approximate the problem permuterm index split index complexity compression parallelization inverted split index keyword selection minimizers string sketches experimental results split index string sketches conclusions future work a data sets b exact matching complexity c split index compression contents vii d string sketches e english letter frequency f hash functions bibliography list of symbols list of abbreviations list of figures list of tables why did the scarecrow win the nobel prize because he was out standing in his unknown for everybody who have devoted their precious time to read this work in its entirety ix chapter introduction the bible which consists of the old and the new testament is composed of roughly thousand words in the english language version bib literary works of such stature were often regarded as good candidates for creating concordances listings of words that originated from the specific work such collections usually included positions of the words which allowed the reader to learn about their frequency and context their assembly was a task that required a lot of effort under a rather favorable assumption that a friar today also referred to as a research assistant would be able to achieve a throughput of one word per minute compilation do not confuse with code generation for the bible would require over thirteen thousand or roughly one and a half years of constant work this naturally ignores additional efforts for instance printing and dissemination such a listing is one of the earliest examples of a data structure constructed with the purpose of faster searches at the cost of space and preprocessing luckily today we are capable of building and using various structures in a much shorter time with the aid of silicone electrons and capable human minds we have managed to decrease the times from years to seconds indexing and from seconds to microseconds searching applications string searching has always been ubiquitous in everyday life most probably since the very creation of the written word in the modern world we encounter text on a regular basis on paper glass 
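In modern terms the concordance described above is simply a map from each word to the list of positions at which it occurs. A minimal sketch in Python; the function name and the sample verse are illustrative only:

def build_concordance(text):
    # Map every word to the list of word positions where it appears.
    concordance = {}
    for position, word in enumerate(text.lower().split()):
        concordance.setdefault(word, []).append(position)
    return concordance

positions = build_concordance("in the beginning was the word").get("the", [])
print(positions)  # [1, 4]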
rubber human skin metal cement and since the century also on electronic displays we perform various operations almost all the time often subconsciously this happens in trivial situations such as looking for interesting introduction news on a website on a slow sunday afternoon or trying to locate information in the bus timetable on a cold monday morning many familiar tasks can be finished faster thanks to computers and powerful machines are also crucial to scientific research specific areas are discussed in the following subsections natural language for years the main application of computers to textual data was natural language nl processing which goes back to the work of alan turing in the the goal was to understand the meaning as well as the context in which the language was used one of the first programs that could actually comprehend and act upon english sentences was bobrow s student which solved simple mathematical problems the first application to text processing where string searching algorithms could really shine was spell checking determining whether a word is written in a correct form it consists in testing whether a word is present in a nl dictionary a set of words such a functionality is required since spelling errors appear relatively often due to a variety of reasons ranging from writer ignorance to typing and transmission errors research in this area started around and the first spell checker available as an application is believed to have appeared in today spell checking is universal and it is performed by most programs which accept user input this includes dedicated text editors programming tools email clients interfaces and web browsers more sophisticated approaches which try to take the context into account were also described due to the fact that checking for dictionary membership is prone to errors mistyping were for where peterson reported that up to of errors might be undetected another familiar scenario is searching for words in a textual document such as a book or an article which allows for locating relevant fragments in a much shorter time than by skimming through the text determining positions of certain keywords in order to learn their context neighboring words may be also useful for plagiarism detection including the plagiarism of computer programs with the use of approximate methods similar words can be obtained from the nl dictionary and correct spelling can be suggested spelling correction is usually coupled with spell checking this may also include proper nouns for example in the case of shopping catalogs relevant products or geographic information systems specific locations cities such techniques are also useful for optical character recognition ocr where they serve as a verification mechanism other applications are in security where it is desirable to check whether a password is not too close to a word from a dictionary and in data cleaning which consists in detecting errors and duplication in introduction data that is stored in the database string matching is also employed for preventing the registration of fraudulent websites having similar addresses the phenomenon known as typosquatting it may happen that the pattern that is searched for is not explicitly specified as is the case when we use a web search engine we would like to find the entire website but we specify only a few keywords which is an example of information retrieval for instance methods form an important component of the architecture of the google engine bioinformatics the biological 
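Dictionary-based spell checking as described above reduces to a set-membership test. A toy sketch, with an invented three-word dictionary purely for illustration:

def find_misspellings(words, dictionary):
    # Flag every word that is absent from the dictionary; set lookup is O(1) on average.
    return [w for w in words if w.lower() not in dictionary]

dictionary = {"where", "were", "wear"}
print(find_misspellings(["Were", "wehre"], dictionary))  # ['wehre']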
data is commonly represented in a textual form and for this reason it can be searched just like any other text most popular representations include dna the alphabet of four symbols corresponding to nucleobases a c g and it can be extended with an additional character n indicating that there might be any nucleobase at a specified position this is used for instance when the sequencing method could not determine the nucleobase with a desired certainty sometimes additional information such as the quality of the read the probability that a specific base was determined correctly is also stored rna four nucleobases a c g and u similarly to the dna additional information may be present proteins symbols corresponding to different amino acids uppercase letters from the english alphabet with additional symbols for amino acids occurring only in some species o and u and placeholders b j x z for situations where the amino acid is ambiguous all letters from the english alphabet are used computational information was an integral part of the field of bioinformatics from the very beginning and at the end of the there was a substantial activity in the development of string sequence alignment algorithms for rna structure prediction alignment methods allow for finding evolutionary relationships between genes and proteins and thus construct phylogenetic trees sequence similarity in proteins is important because it may imply structural as well as functional similarity researchers use tools such as blast which try to match the string in question with similar ones from the database of proteins or genomes approximate methods play an introduction important role here because related sequences often differ from one another due to mutations in the genetic material these include point mutations that is changes at a single position as well as insertions and deletions usually called indels another research area that would not thrive without computers is genome sequencing this is caused by the fact that sequencing methods can not read the whole genome but rather they produce hundreds of gigabytes of strings dna reads whose typical length is from tens to a thousand base pairs whose exact positions in the genome are not known moreover the reads often contain mistakes due to the imperfection of the sequencing itself the goal of the computers is to calculate the correct order using complicated statistical and tools with or without a reference genome the latter being called de novo sequencing this process is well illustrated by its name shotgun sequencing and it can be likened to shredding a piece of paper and reconstructing the pieces string searching is crucial here because it allows for finding repeated occurrences of certain patterns other other data can be also represented and manipulated in a textual form this includes music where we would like to locate a specific melody especially using approximate methods which account for slight variations or imperfections singing out of pitch another field where approximate methods play a crucial role is signal processing especially in the case of audio signals which can be processed by speech recognition algorithms such a functionality is becoming more and more popular nowadays due to the evolution of multimedia databases containing audiovisual data string algorithms can be also used in intrusion detection systems where their goal is to identify malicious activities by matching data such as system state graphs instruction sequences or packets with those from the database string 
searching can be also applied for the detection of arbitrary shapes in images and yet another application is in compression algorithms where it is desirable to find repetitive patterns in a similar way to sequence searching in biological data due to the fact that almost any data can be represented in a textual form many other application areas exist see navarro for more information this diversity of data causes the string algorithms to be used in very different scenarios the pattern size can vary from a few letters nl words to a few hundred dna reads and the input text can be of almost arbitrary size for instance google reported in that their web search index has reached over thousand terabytes bytes goo massive data is also present in bioinformatics where the size of the genome of a single introduction organism is often measured in gigabytes one of the largest animal genomes belong to the lungfish and the salamander each occupying approximately gbp roughly gb assuming that each base is coded with bits as regards proteins the uniprot protein database stores approximately million sequences each composed of roughly a few hundred symbols in and continues to grow exponentially uni it was remarked recently that biological textual databases grow more quickly than the ability to understand them when it comes to data of such magnitude it is feasible only to perform an search meaning that the data is preprocessed which is the main focus of this thesis it seems most likely that the data sizes will continue to grow and for this reason there is a clear need for the development of algorithms which are efficient in practice preliminaries this section presents an overview of data structures and algorithms which act as building blocks for the ones presented later and it introduces the necessary terminology string searching which is the main topic of this thesis is described in the following chapter throughout this thesis data structures are usually approached from two angles theoretical which concentrates on the space and query time and a practical one the latter focuses on performance in scenarios and it is often heuristically oriented and focused on cache utilization and reducing slow ram access it is worth noting that theoretical algorithms sometimes perform very poor in practice because of certain constant factors which are ignored in the analysis moreover they might not be even tested or implemented at all on the other hand a practical evaluation depends heavily on the hardware peculiarities of the cpu cache instruction prefetching etc properties of the data sets used as input and most importantly on the implementation moffat and gog provided an extensive analysis of experimentation in the field of string searching and they pointed out various caveats these include for instance a bias towards certain repetitive patterns when the patterns are randomly sampled from the input text or the advantage of smaller data sets which increase the probability that at least some of the data would fit into the cache the theoretical analysis of the algorithms is based on the big o family of asymptotic notations including o and the relevant lower case counterparts we assume that the reader is familiar with these tools and with complexity classes unless stated otherwise the complexity analysis refers to the scenario and all logarithms are assumed introduction to be base this might be also stated explicitly as when we state that the complexity or the average or the worst case is equal to some value we mean the running 
time of the algorithm on the other hand if the time or space is explicitly mentioned the word complexity might be omitted array string and vector indexes are always and they are assumed to be contiguous and collection indexes are a collection of strings sn we consider a standard hierarchical memory model with ram and a faster cpu cache and we take for granted that the data always fits into the main memory disk is ignored moreover we assume that the size of the data does not exceed bytes which means that it is sufficient for each pointer or counter to occupy bits bytes sizes that are given in kilobytes and megabytes are indicated with abbreviations kb and mb which refer to standard computer science quantities rather than and sorting sorting consists in ordering n elements from a given set s in such a way that the following holds n s i s i that is the smallest element is always in front in reverse sorting the highest element is in front and the inequality sign is reversed popular sorting methods include the heapsort and the mergesort with o n log n worstcase time guarantees another algorithm is the quicksort with average time o n log n although the worst case is equal to o which is known to be times faster in practice than both heapsort and mergesort there also exist algorithms linear in n which can be used in certain scenarios for instance the radix sort for integers with time complexity o wn or o n log n for a radix where w is the machine word size when it comes to sorting n strings of average length m a comparison sorting method would take o n log nm time assuming that comparing two strings is linear in time alternatively we could obtain an o nm time bound by sorting each letter column with a sorting method which is linear for a fixed alphabet essentially performing a radix sort using a counting sort moreover we can even achieve an o n complexity by building a trie with lexicographically ordered children at each level and performing a preorder search dfs see the following subsections for details when it comes to suffix sorting sorting all suffixes of the input text dedicated methods which do not have a linear time guarantee are often used due to reduced space requirements or good practical performance recently linear methods which are efficient in practice have also been described introduction trees a tree contains multiple nodes that are connected with each other with one node designated as the root every node contains zero or more children and a tree is an undirected graph where any two vertexes are connected by exactly one path there are no cycles further terminology which is relevant to trees is as follows sec a parent is a neighbor of a child and it is located closer to the root and vice versa a sibling is a node which shares the same parent leaves are the nodes without children and in the graphical representation they are always shown at the bottom of the diagram the leaves are also called external nodes and internal nodes are all nodes other than leaves descendants are the nodes located anywhere in the subtree rooted by the current node and ancestors are the nodes anywhere on the path from the root inclusive to the current node proper descendants and ancestors exclude the current node if is an ancestor of then is a descendant of and vice versa the depth of a node is the length of the path from this node to the root the height of a tree is the longest path from the root to any leaf the depth of the deepest node the maximum number of children can be limited for each node many 
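A short sketch of the column-by-column sorting of equal-length strings mentioned above. A dictionary of buckets stands in for a proper counting sort here, so this illustrates the idea rather than giving a linear-time implementation:

def radix_sort_strings(strings, length):
    # LSD radix sort: one stable bucket pass per character column,
    # from the last column to the first.
    for col in range(length - 1, -1, -1):
        buckets = {}
        for s in strings:
            buckets.setdefault(s[col], []).append(s)
        strings = [s for key in sorted(buckets) for s in buckets[key]]
    return strings

print(radix_sort_strings(["cab", "abc", "bca", "aab"], 3))
# ['aab', 'abc', 'bca', 'cab']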
structures are binary trees which means that every node has at most two children and a generic term is a tree or multiary for k a full complete tree is a structure where every node has exactly in the case of leaves or k in the case of internal nodes children a perfect tree is a tree where all leaves have the same depth a historical note apparently a binary tree used to be called a bifurcating arborescence in the early years of computer science a balanced tree is a tree whose height is maintained with respect to its total size irrespective of possible updates and deletions the height of a balanced binary tree is logarithmic o log n logk n for a tree it is often desirable to maintain such a balance because otherwise a tree may lose its properties search complexity this is caused by the fact that the time complexity of various algorithms is proportional to the height of the tree there exist many kinds of trees and they are characterized by some additional properties which make them useful for a certain purpose introduction binary search tree the binary search tree bst is used for determining a membership in the set or for storing pairs every node stores one value v the value of its right child is always bigger than v and the value of its left child is always smaller than v the lookup operation consists in traversing the tree towards the leaves until either the value is found or there are no more nodes to process which indicates that the value is not present the bst is often used to maintain a collection of numbers however the values can also be strings they are ordered alphabetically see figure it is crucial that the bst is balanced otherwise in the scenario where every node had exactly one child its height would be linear basically forming a linked list and thus the complexity for the traversal would degrade from o log n to o n the occupied space is clearly linear there is one node per value and the preprocessing takes o n log n time because each insertion costs o log n karen alan bob tom erin sasha zelda figure a binary search tree bst storing strings from the english alphabet the value of the right child is always bigger than the value of the parent and the value of the left child is always smaller than the value of the parent trie the trie digital tree is a tree in which the position of a node more specifically a path from the root to the node describes the associated value see figure the nodes often store ids or flags which indicate whether a given node has a word which is required because some nodes may be only intermediary and not associated with any value the values are often strings and the paths may correspond to the prefixes of the input text a trie supports basic operations such as searching insertion and deletion for the lookup we check whether each consecutive character from the query is present in the trie while moving towards the leaves hence the search complexity is directly proportional to the length of the pattern in order to build a trie we have to perform a full lookup for each introduction word thus the preprocessing complexity is equal to o n for words of total length the space is linear because there is at most one node per input character t t a e o te n ten e tea a i i n to d in n ted inn figure a trie which is one of the basic structures used in string searching constructed for strings from the set a in inn tea ted ten to each edge corresponds to one character and the strings are stored implicitly here shown for clarity additional information such as ids here shown 
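A minimal trie with insertion and membership lookup, built over the same word set as the example discussed above; the node layout and names are a sketch, not the structures used later in the thesis:

class TrieNode:
    def __init__(self):
        self.children = {}   # one outgoing edge per character
        self.is_word = False

def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def trie_contains(root, word):
    node = root
    for ch in word:
        node = node.children.get(ch)
        if node is None:
            return False
    return node.is_word

root = TrieNode()
for w in ["a", "in", "inn", "tea", "ted", "ten", "to"]:
    trie_insert(root, w)
print(trie_contains(root, "ten"), trie_contains(root, "te"))  # True False

The lookup visits one node per pattern character, which is the O(m) search time quoted in the comparison of basic data structures later on.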
inside the parentheses is sometimes kept in the nodes various modifications of the regular trie exist an example could be the patricia trie whose aim is to reduce the occupied space the idea is to merge every node which has no siblings with its parent thus reducing the total number of nodes and resulting edge labels include the characters from all edges that were merged the complexities are unchanged hashing a hash function h transforms data of arbitrary size into the data of fixed size typical output sizes include and bits the input can be in principle of any type although hash functions are usually designed so that they work well for a particular kind of data for strings or for integers hash functions often have certain desirable properties such as a limited number of collisions where for two chunks of data and the probability that h h should be relatively low h is called universal if p r h h for an hash table there exists a group of cryptographic hash functions which offer certain guarantees regarding the number of collisions they also provide which means that it is hard in the introduction mathematical sense for example the problem may be to deduce the value of the input string from the hash value such properties are provided at the price of reduced speed and for this reason cryptographic hash functions are usually not used for string matching a perfect hash function guarantees no collisions the fks scheme with o n space all keys have to be usually known beforehand although dynamic perfect hashing was also considered a minimal perfect hash function mphf uses every bucket in the hash table there is one value per bucket the lower space bound for describing an mphf is equal to roughly bits for n elements the complexity of a hash function is usually linear in the input length although it is sometimes assumed that it takes constant time a hash function is an integral part of a hash table ht which is a data structure that associates the values with buckets based on the key the hash of the value this can be represented with the following relation ht h v v for any value hash tables are often used in string searching because they allow for quick membership queries see figure the size of the hash table is usually much smaller than the number of all possible hash values and it is often the case that a collision occurs the same key is produced for two different values there exist various methods of resolving such collisions and the most popular ones are as follows chaining each bucket holds a list of all values which hashed to this bucket probing if a collision occurs the value is inserted into the next unoccupied bucket this may be linear probing where the consecutive buckets are scanned linearly until an empty bucket is found or quadratic probing where the gaps between consecutive buckets are formed by the results of a quadratic polynomial double hashing gaps between consecutive buckets are determined by another hash function a simple approach could be for instance to locate the next bucket index i using the formula i v v mod for any two hash functions and in order to resolve the collisions the keys have to be usually stored as well the techniques which try to locate an empty bucket as opposed to chaining are referred to as open addressing a key characteristic of the hash table is its load factor lf which is defined as the number of entries divided by the number of buckets let us note that lf for open addressing the performance degrades rapidly as lf however in the case of chaining it holds that 
lf n for n entries introduction keys hash function buckets john smith lisa smith sandra dee figure a hash table for strings reproduced from wikimedia data structure comparison in the previous subsections we introduced data structures which are used by more sophisticated algorithms described in the following chapters still they can be also used on their own for exact string searching in figure we present a comparison of their complexities together with a linear direct access array it is to be noted that even though the worst case of a hash table lookup is linear iterating over one bucket which stores all the elements it is extremely unlikely and any popular hash function offers reasonable guarantees against building such a degenerate hash table data structure array balanced bst hash table trie lookup o n o log n o m avg o n o m preprocessing o o n log n o n o n space o n o n o n o n table a comparison of the complexities of basic data structures which can be used for exact string searching here we assume that string comparison takes constant time compression compression consists in representing the data in an alternative encoded form with the purpose of reducing the size after compression the data can be decompressed decoded in order to obtain the original representation typical applications include reducing storage sizes and saving bandwidth during transmission compression can be jorge stolfi available at http cc introduction either lossless or lossy depending on whether the result of decompression matches the original data the former is useful especially when it comes to multimedia frequently used by methods such as those based on human perception of images where the lower quality may be acceptable or even indiscernible and storing the data in an uncompressed form is often infeasible for instance the original size of a full hd movie with bits per pixel and frames per second would amount to more than one terabyte data that can be compressed is sometimes called redundant one of the most popular compression methods is character substitution where the selected symbols bit are replaced with ones that take less space a classic algorithm is called huffman coding and it offers an optimal substitution method based on frequencies it produces a codebook which maps more frequent characters to shorter codes in such a way that every code is uniquely decodable and are uniquely decodable but and are not for data huffman coding offers compression rates that are close to the entropy see the following subsection and it is often used as a component of more complex algorithms we refer the reader to salomon s monograph for more information on data compression entropy we can easily determine the compression ratio by taking the size a number of occupied bits of the original data and dividing it by the size of the compressed data r it may seem that the following should hold r however certain algorithms might actually increase the data size after compressing it when operating on an inconvenient data set which is of course highly undesirable a related problem is how to determine the compressibility of the data the optimal compression ratio the highest r this brings us to the notion of entropy sometimes also called shannon s entropy after the name of the author it describes the amount of information which is contained in a message and in the case of strings it determines the average number of bits which is required in order to encode an input symbol under a specified alphabet and frequency distribution this means 
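A compact sketch of Huffman code construction as described above. Ties between equal frequencies are broken by insertion order here, so the exact codes, and even individual code lengths, can differ between implementations, while the average code length stays optimal:

import heapq
from collections import Counter

def huffman_codes(text):
    # Greedily merge the two least frequent subtrees; frequent symbols end up
    # with shorter, uniquely decodable (prefix-free) codes.
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

print(huffman_codes("abracadabra"))  # 'a', the most frequent symbol, receives the shortest code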
that the entropy describes a theoretical bound on data compression one that can not be exceeded by any algorithm higher entropy means that it is more difficult to compress the data when multiple symbols appear with equal frequency the formula is presented in figure e n x pi log pi figure the formula for shannon s entropy where e is the entropy function pi is the probability that symbol i occurs and k is any constant introduction a variation of entropy which is used in the context of strings is called a order entropy it takes the context of k preceding symbols into account and it allows for the use of different codes based on this context ignoring a symbol which always appears after the symbol shannon s entropy corresponds to the case of k denoted as or hk in general when we increase the k value we also increase the theoretical bound on compressibility although the size of the data required for storing context information may at some point dominate the space pigeonhole principle let us consider a situation where we have x buckets and n items which are to be positioned inside those buckets the pigeonhole principle often also called dirichlet principle states that if n x then at least one of the buckets must store more than one item the name comes from an intuitive representation of the buckets as boxes and items as pigeons despite its simplicity this principle has been successfully applied to various mathematical problems it is also often used in computer science for example to describe the number of collisions in a hash table later we will see that the pigeonhole principle is also useful in the context of string searching especially when it comes to string partitioning and approximate matching overview this thesis is organized as follows chapter provides an overview of the field of string searching deals with the underlying theory introduces relevant notations and discusses related work in the context of online search algorithms chapter includes related work and discusses current algorithms for indexing as well as our contribution to this area chapter does the same for keyword indexes chapter describes the experimental setup and presents practical results chapter contains conclusions and pointers to the possible future work appendix a offers information regarding the data sets which were used for the experimental evaluation introduction appendix b discusses the complexity of exact string comparison appendix c discusses the compression of the split index section in detail appendix d contains experimental results for string sketches section when used for the alphabet with uniform letter frequencies appendix e presents the frequencies of english alphabet letters appendix f contains internet addresses where the reader can obtain the code for hash functions which were used to obtain experimental results for the split index section chapter string searching this thesis deals with strings which are sequences of symbols over a specified alphabet the string is usually denoted as s the alphabet as and the length size as n or for strings and or for the alphabet an arbitrary string s is sometimes called a word which is not to be confused with the machine word a basic data unit in the processor and it is defined over a given alphabet that is s belongs to the set of all words specified over the said alphabet s both strings and alphabets are assumed to be finite and and alphabets are totally ordered a string with a specified value is written with the teletype font as in abcd the brackets are usually used 
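The garbled figure above presumably states Shannon's formula E = -K * sum_i p_i log p_i. With K = 1 and base-2 logarithms, the order-0 entropy of a string can be computed as follows; this is a sketch, not code from the thesis:

import math
from collections import Counter

def order0_entropy(text):
    # H_0 = -sum(p_i * log2(p_i)), the average number of bits per symbol
    # needed by an optimal order-0 encoder.
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

print(round(order0_entropy("abracadabra"), 2))  # roughly 2.04 bits per symbol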
to indicate the character at a specified position and the index is for instance if string s text then s a substring sometimes referred to as a factor is written as s an inclusive range for the previous example s ex and a single character is a substring of length usually denoted with c the last character is indicated with and p p indicates that the string is a substring of conversely indicates that is not a substring of the subscripts are usually used to distinguish multiple strings and two strings may be concatenated merged into one recorded as s or s in which case removing one substring from another is indicated with the subtraction sign s provided that and as a result occ for occ occurrences of in the equality sign indicates that the strings match exactly which means that the following relation always holds string searching or string matching refers to locating a substring a pattern p or a query q with length m in a longer text t the textual data t that is searched is called the input input string input text text database and its length is denoted by n o n indicates that the complexity is linear with respect to the size of the original data the pattern is usually much smaller than the input often in multiple orders of string searching magnitude m n based on the position of the pattern in the text we write that p occurs in t with a shift s p m t s s m s n as mentioned before the applications may vary see section and the data itself can come from many different domains still string searching algorithms can operate on any text while being oblivious to the actual meaning of the data the field concerning the algorithms for string processing is sometimes called stringology two important notions are prefixes and suffixes where the former is a substring s i and the latter is a substring s i for any i let us observe at this point that every substring is a prefix of one of the suffixes of the original string as well as a suffix of one of the prefixes this simple statement is a basis for many algorithms which are described in the following chapters a proper prefix or suffix is not equal to the string itself the strings can be lexicographically ordered which means that they are sorted according to the ordering of the characters from the given alphabet for the english alphabet letter a comes before b b comes before c etc formally for two strings of respective lengths and if s i i s s s min or when it comes to strings we often mention and which are lists of contiguous characters strings or substrings the former is usually used in general terms and the latter is used for biological data especially dna reads a or is a of length problem classification the match between the pattern and the substring of the input text is determined according to the specified similarity measure which allows us to divide the algorithms into two categories exact and approximate the former refers to direct matching where the length as well as all characters at the corresponding positions must be equal to each another this relation can be represented formally for two strings and of the same length n which are equal if and only if n i i or simply in the case of approximate matching the similarity is measured with a specified distance also called an error metric between the two strings it is to be noted that the word approximation is not used here strictly in the mathematical sense since approximate search is actually harder than the exact one when it comes to strings in general given two strings and the distance d is the minimum 
cost of edit operations that would transform into or vice versa the edits are usually defined as a finite set of rules e r r s s and each rule can be associated with a different cost when error metrics are used the results of string matching are limited to string searching those substrings which are close to the pattern this is defined by the threshold k that is we report all substrings s for which d s p for metrics with fixed penalties for errors k is called the maximum allowed number of errors this value may depend both on the data set and the pattern length for instance for spell checking a reasonable number of errors is higher for longer words it should hold that k m since otherwise the pattern could match any string and k corresponds to the exact matching scenario see subsection for detailed descriptions of the most popular error metrics the problem of searching also called a lookup can vary depending on the kind of answer that is provided this includes the following operations match determining the membership deciding whether p t a decision problem when we consider the search complexity we usually implicitly mean the match query count stating how many times p occurs in t this refers to the cardinality of the set containing all indexes i t i i m is equal to p specific values of i are ignored in this scenario the time complexity of the count operation often depends on the number of occurrences denoted with occ locate reporting all occurrences of p in t returning all indexes i t i i m is equal to p display showing k characters which are located before and after each match that is for all aforementioned indexes i we display substrings t i k i and t i m i m k in the case of approximate matching it might refer to showing all text substrings or keywords s d s p string searching algorithms can be also categorized based on whether the data is preprocessed one such classification adapted from melichar et al is presented in table offline searching is also called searching because we preprocess the text and build a data structure which is called an index this is opposed to online searching where no preprocessing of the input text takes place for detailed descriptions of the examples from these classes consult chapters and offline and section online error metrics the motivation behind error metrics is to minimize the score between the strings which are somehow related to each other character differences that are more likely to occur string searching text prepr no no yes yes pattern prepr no yes no yes algorithm type online online offline offline examples naive dynamic programming pattern automata rolling hash methods signature methods table algorithm classification based on whether the data is preprocessed should carry a lower penalty depending on the application area for instance in the case of dna certain mutations appear in the real world much more often than others the most popular metrics include hamming distance relevant for two strings of equal length n calculates the number of differing characters at corresponding positions hence it is sometimes called the problem throughout this thesis we denote the hamming distance with ham and given that n ham where e i i n i i and ham without preprocessing calculating the hamming distance takes o n time applications of the hamming distance include bioinformatics biometrics cheminformatics circuit design and web crawling levenshtein distance measures the minimum number of edits here defined as insertions deletions and substitutions it was first 
described in the context of error correction for data transmission it must hold that lev max the calculation using the dynamic programming algorithm takes o time using o min space see subsection ukkonen recognized certain properties of the dp matrix and presented an algorithm with o k min time for k errors and an approximation algorithm in a time was also described levenshtein distance is sometimes called simply the edit distance when the distance for approximate matching is not explicitly specified we assume the levenshtein distance other edit distances these may allow only a subset of edit actions longest common subsequence lcs which is restricted to indels or the episode distance with deletions another approach is to introduce additional actions examples include the distance which counts a transposition as one edit operation a distance which allows for matching one character with two and vice versa specifically designed for ocr or a distance which has weights for substitutions based on the probability that a user may mistype one character for another string searching sequence alignment there may exist gaps other characters in between substrings of and moreover certain characters may match each other even though they are not strictly equal the gaps themselves their lengths or positions as well as the inequality between individual characters are quantified the score is calculated using a similarity matrix which is constructed based on statistical properties of the elements from the domain in question the matrix for the sequence alignment of proteins the problem can be also formulated as where the width of the gaps is at most and for a set p of positions of corresponding characters it should hold that p this means that absolute values of numerical differences between certain characters can not exceed a specified threshold sequence alignment is a generalization of the edit distance and it can be also performed for multiple sequences although this is known to be regular expression matching the patterns may contain certain metacharacters with various meanings these can specify ranges of characters which can match at certain positions or use additional constructs such as the wildcard symbol which matches or more consecutive characters of any type online searching in this section we present selected algorithms for online string searching and we divide them into exact and approximate ones online algorithms do not preprocess the input text however the pattern may be preprocessed we assume that the preprocessing time complexity is equal to o and the time required for pattern preprocessing is subsumed under search complexity which means that we consider a scenario where the patterns are not known beforehand search time refers to the match query exact faro and lecroq provided a survey on online algorithms for exact matching and remarked that over algorithms have been proposed since the they categorized the algorithms into the following three groups character comparisons automata string searching bit parallelism the naive algorithm attempts to match every possible substring of t of length m with the pattern p this means that it iterates from left to right and checks whether t i p for each i right to left iteration would be also possible and the algorithm would report the same results time complexity is equal to o nm in the worst case although to o n on average see appendix b for more information and there is no preprocessing or space overhead even without text preprocessing the performance of the 
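The dynamic-programming formulation mentioned above can be written down in a few lines. The sketch below assumes unit costs for insertions, deletions and substitutions and keeps only two rows of the matrix, which gives the O(min(n, m)) space bound quoted earlier; function and variable names are ours.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Levenshtein distance via dynamic programming in O(|a| * |b|) time.
// Only two rows of the DP matrix are kept, giving O(min(|a|, |b|)) extra space.
std::size_t levenshtein(std::string a, std::string b) {
    if (a.size() < b.size())
        std::swap(a, b);                       // make b the shorter string
    std::vector<std::size_t> prev(b.size() + 1), curr(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j)
        prev[j] = j;                           // distance from the empty prefix of a
    for (std::size_t i = 1; i <= a.size(); ++i) {
        curr[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t subst = prev[j - 1] + (a[i - 1] != b[j - 1]);
            curr[j] = std::min({ prev[j] + 1,      // deletion from a
                                 curr[j - 1] + 1,  // insertion into a
                                 subst });         // match or substitution
        }
        std::swap(prev, curr);
    }
    return prev[b.size()];
}

int main() {
    std::cout << levenshtein("text", "taxi") << '\n';   // prints 2
}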
naive algorithm can be improved significantly by taking advantage of the information provided by the mismatches between the text and the pattern classical solutions for matching include the kmp and the bm algorithm the kmp uses information regarding the characters that appear in the pattern in order to avoid repeated comparisons known from the naive approach it reduces the time complexity from o nm to o n m in the worst case at the cost of o m space when a mismatch occurs at position i in the pattern p i t s i the algorithm shifts p by i l where l is the length of the longest proper prefix of ps p i which is also a suffix of ps instead of just position and it starts matching from the position i l instead of i information regarding ps is precomputed and stored in a table of size let us observe that the algorithm does not skip any characters from the input string interestingly and navarro reported that in practice the kmp algorithm is roughly two times slower than the search although this depends on the alphabet size the bm algorithm on the other hand omits certain characters from the input it begins the matching from the end of the pattern and allows for forward jumps based on mismatches thanks to the preprocessing the size of each shift can be determined in constant time one of the two rules for jumping is called a bad character rule which given that p i t s i t s i c aligns t s i with the rightmost occurrence of c in p p j c where j i or shifts the pattern by m if c p the other rule is a complex good suffix rule whose description we omit here and which is also not a part of the bmh algorithm which uses only the bad character rule this is because the good suffix rule requires extra cost to compute and it is often not practical the time complexity of the bm algorithm is equal to o nm with o min m m average the same holds for bmh and the average number of comparisons is equal to roughly this can be improved to achieve a linear time in the worst case by introducing additional rules string searching one of the algorithms developed later is the rk algorithm it starts with calculating the hash value of the pattern in the preprocessing stage and then compares this hash with every substring of the text sliding over it in a similar way to the naive algorithm verification takes place only if two hashes are equal to each other the trick is to use a hash function which can be computed in constant time for the next substring given its output for the previous substring and the next character a socalled rolling hash viz h t s t h t a simple p example would be to simply add the values of all characters h s s i there exist other functions such as the rabin fingerprint which treats the characters as n p ci the polynomial variables ci and the indeterminate x is a fixed base r s rk algorithm is suitable for matching since we can quickly compare the hash of the current substring with the hashes of all patterns using any efficient set data structure in this way we obtain the average time complexity of o n m assuming that hashing takes linear time however it is still equal to o nm in the worst case when the hashes do match and verification is required another approach is taken by the algorithm it builds a finite state machine fsm an automaton which has a finite number of states the structure of the automaton resembles a trie and it contains edges between certain nodes which represent the transitions it is constructed from the queries and attempts to match all queries at once when sliding over the text the 
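The rolling-hash mechanism of the RK algorithm can be illustrated with a short sketch. The polynomial hash below is computed modulo 2^64 through unsigned overflow and the base 257 is an arbitrary choice, so this is one possible instantiation rather than the canonical Rabin fingerprint; verification happens only when the hashes collide.

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Rabin-Karp: compare a rolling hash of each length-m window of t with the
// hash of the pattern; verify character-by-character only on hash equality.
std::vector<std::size_t> rabinKarp(const std::string &t, const std::string &p) {
    std::vector<std::size_t> occ;
    const std::size_t n = t.size(), m = p.size();
    if (m == 0 || m > n)
        return occ;
    const std::uint64_t base = 257;            // arbitrary base; hash is mod 2^64
    std::uint64_t hp = 0, ht = 0, pow = 1;     // pow = base^(m-1)
    for (std::size_t i = 0; i < m; ++i) {
        hp = hp * base + static_cast<unsigned char>(p[i]);
        ht = ht * base + static_cast<unsigned char>(t[i]);
        if (i + 1 < m)
            pow *= base;
    }
    for (std::size_t s = 0; ; ++s) {
        if (ht == hp && t.compare(s, m, p) == 0)   // explicit verification
            occ.push_back(s);
        if (s + m == n)
            break;
        // roll the hash: drop t[s], append t[s + m]
        ht = (ht - pow * static_cast<unsigned char>(t[s])) * base
             + static_cast<unsigned char>(t[s + m]);
    }
    return occ;
}

int main() {
    for (std::size_t s : rabinKarp("banana", "ana"))
        std::cout << s << ' ';                 // prints 1 3
    std::cout << '\n';
}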
transitions indicate the next possible pattern which can be still fully matched after a mismatch at a specified position occurs the search complexity is equal to o n log m z which means that it is linear with respect to the input length n the length of all patterns m for building the automaton and the number of occurrences z an example of a algorithm is the algorithm by and gonnet which aims to speed up the comparisons the pattern length should be smaller than the machine word size which is usually equal to or bits during the preprocessing a mismatch mask m is computed for each character c from the alphabet where m i if p i c and m i otherwise moreover we maintain a state mask r initially set to all which holds information about the matches so far we proceed in a similar manner to the naive algorithm trying to match the pattern with every substring but instead of comparisons we use bit operations at each step we shift the state mask to the left and or it with the m for the current character t i a match is reported if the most significant bit of r is equal to provided that m w the time complexity is equal to o n and the masks occupy o space based on the practical evaluation faro and lecroq reported that there is no superior algorithm and the effectiveness depends heavily on the size of the pattern and string searching the size of the alphabet the differences in performance are substantial algorithms which are the fastest for short patterns are often among the slowest for long patterns and vice versa approximate in the following paragraphs we use to denote the complexity of calculating the distance function between two strings consult subsection for the description of the most popular metrics navarro presented an extensive survey regarding approximate online matching where he categorizes the algorithms into four categories which resemble the ones presented for the exact scenario dynamic programming automata bit parallelism filtering the naive algorithm works in a similar manner to the one for exact searching that is it compares the pattern with every possible substring of the input text it forms a generic idea which can be adapted depending on the edit distance which is used and for this reason the time complexity is equal to o the oldest algorithms are based on the principle of dynamic programming dp this means that they divide the problem into subproblems these are solved and their answers are stored in order to avoid recomputing these answers it is applicable when the subproblems overlap one of the most examples is the nw algorithm which was originally designed to compare biological sequences starting from the first character of both strings it successively considers all possible actions insertion mis match deletion and constructs a matrix which holds all alignment scores it calculates the global alignment and it can use a substitution matrix which specifies alignment scores penalties the situation where the scores triplet is equal to for gaps matches and mismatches respectively corresponds directly to the levenshtein distance consult figure the nw method can be invoked with the input text as one string and the pattern as the other a closely related variation of the nw algorithm is the sw algorithm which can also identify local and not just global alignments by not allowing negative scores this means that the alignment does not have to cover the entire length string searching of the text and it is therefore more suitable for locating a pattern as a substring both algorithms can be adapted 
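The bit-parallel scheme sketched above translates almost directly into code. The following version assumes a byte alphabet and a pattern of at most 64 characters (one machine word), with the 0-for-match mask convention described earlier; variable names are ours.

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Shift-Or bit-parallel matching (Baeza-Yates & Gonnet style).
// Requires m <= 64 so that the whole state fits into one machine word.
std::vector<std::size_t> shiftOr(const std::string &t, const std::string &p) {
    std::vector<std::size_t> occ;
    const std::size_t m = p.size();
    if (m == 0 || m > 64)
        return occ;
    std::uint64_t mask[256];
    for (auto &x : mask)
        x = ~0ULL;                              // 1 = mismatch at every position
    for (std::size_t i = 0; i < m; ++i)
        mask[static_cast<unsigned char>(p[i])] &= ~(1ULL << i);   // 0 = match
    std::uint64_t r = ~0ULL;                    // state mask, initially all ones
    for (std::size_t j = 0; j < t.size(); ++j) {
        r = (r << 1) | mask[static_cast<unsigned char>(t[j])];
        if ((r & (1ULL << (m - 1))) == 0)       // bit m-1 clear => full match
            occ.push_back(j + 1 - m);           // starting position of the match
    }
    return occ;
}

int main() {
    for (std::size_t s : shiftOr("abracadabra", "abra"))
        std::cout << s << ' ';                  // prints 0 7
    std::cout << '\n';
}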
to other distance metrics by manipulating the scoring matrix for example by assigning infinite costs in order to prohibit certain operations the time complexity of the nw and sw approaches is equal to o nm and it possible to calculate them using o min n m space despite their simplicity both methods are still popular for sequence alignment because they might be relatively fast in practice and they report the true answer to the problem which is crucial when the quality of an alignment matters t a x i t e x t figure calculating an alignment with levenshtein distance using the wunsch nw algorithm we follow the path from the to the corner selecting the highest possible score underlined the optimal global alignment is as follows text taxi no gaps multiple other dynamic programming algorithms were proposed over the years and they gradually tightened the theoretical bounds the difference lies mostly in their flexibility that is a possibility of being adapted to other distance metrics as well as practical performance notable results for the edit distance include the algorithm with o average time using o m space and the algorithm with the time o n m k c with c for periodic patterns and c otherwise taking o m os space where os refers to occurrences of certain substrings of the pattern in the text the analysis is rather lengthy for the lcs metric grabowski provided the algorithm with o nm log log n time bound and linear space a significant achievement in the automata category is the algorithm that uses the four russians technique which consists in partitioning the matrix into blocks precomputing the values for each possible block and then using a lookup table it implicitly constructs the automaton where each state corresponds to the values in the dp matrix they obtain an o log n expected time bound using o n space as regards bit parallelism myers presented the calculation of the dp matrix in o average time an important category is formed by the filtering algorithms which try to identify parts of the input text where it is not possible to match any substrings with the pattern after string searching parts of the text are rejected a algorithm is used on the remaining parts numerous filtering algorithms have been proposed and one of the most significant is the algorithm with the time bound of o n k m for the error level when it holds that for very large as regards the problem a notable example is the porat algorithm which can answer the locate query in o n k log k time this p was refined to o log k in the word ram model where w log n recently clifford et al described an algorithm with search time complexity o nk log n polylog m offline searching an online search is often infeasible for data since the time required for one lookup might be measured in the order of seconds this is caused by the fact that any online method has to access at least characters from the input text and it normally holds that m this thesis is focused on offline methods where a data structure an index pl indexes or indices we opt for the former term is built based on the input text in order to speed up further searches which is a classic example of data preprocessing this is justified even if the preprocessing time is long since the same text is often queried with multiple patterns the indexes can be divided into two following categories indexes keyword dictionary indexes the former means that we can search for any substring in the input text string matching text matching whereas the latter operates on individual words word matching 
keyword matching dictionary matching matching in dictionaries keyword indexes are usually appropriate where there exist boundaries between the keywords which are often simply called words for instance in the case of a natural language dictionary or individual dna reads it is worth noting that the number of distinct words is almost always smaller than the total number of words in a dictionary d all of which are taken from a document or a set of documents heaps law states that o where n is the text size and is an empirical constant usually in the interval and keyword indexes are actually related to each other because they are often based on similar concepts the pigeonhole principle and they may even use the other kind as the underlying data structure string searching the indexes can be divided into static and dynamic ones depending on whether updates are allowed after the initial construction another category is formed by external indexes these are optimized with respect to disk and they aim to be efficient for the data which does not fit into the main memory we can also distinguish compressed indexes see subsection for more information on compression which store the data in an encoded form one goal is to reduce storage requirements while still allowing fast searches especially when compared to the scenario where a naive decompression of the whole index has to be performed on the other hand it is also possible to achieve both space saving and a speedup with respect to the uncompressed index this can be achieved mostly due to reduced and rather surprisingly fewer comparisons required for the compressed data navarro and note that the most successful indexes can nowadays obtain both almost optimal space and query time a compressed data structure usually also falls into the category of a succinct data structure this is a rather loose term which is commonly applied to algorithms which employ efficient data representations with respect to space often close to the theoretic bound thanks to reduced storage requirements succinct data structures can process texts which are an order of magnitude bigger than ones suitable for classical data structures the term succinct may also suggest that the we are not required to decompress the entire structure in order to perform a lookup operation moreover certain indexes can be classified as which means that they implicitly store the input string in other words it is possible to transform decompress the index back to s and thus the index can essentially replace the text the main advantage of indexes when compared to the online scenario are fast queries however this naturally comes at a price indexes might occupy a substantial amount of space sometimes even orders of magnitude more than the input they are expensive to construct and it is often problematic to support functionality such as approximate matching and updates still navarro et al point out that in spite of the existence of very fast both from a practical and a theoretical point of view online algorithms the data size often renders online algorithms infeasible which is even more relevant in the year methods are explored in detail in the following chapters indexes in chapter and keyword indexes in chapter experimental evaluation of our contributions can be found in chapter chapter indexes indexes allow for searching for an arbitrary substring from the input text formally for a string t of length n having a set of x substrings s sx over a given alphabet i s is a index supporting matching with a specified 
distance for any query pattern p it returns all substrings s from t d p s k with k for exact matching in the following sections we describe data structures from this category divided into exact section and approximate ones section our contribution in this field is presented in subsection which describes a variant of the called exact suffix tree the suffix tree st was introduced by weiner in it is a trie see subsection which stores all suffixes of the input string that is n suffixes in total for the string of length moreover the suffix tree is compressed which in this context means that each node which has only one child is merged with this child as shown in figure searching for a pattern takes o m time since we proceed in a way similar to the search in a regular trie suffix trees offer a lot of additional functionality beyond string searching such as calculating the compression or searching for string repeats the st takes linear space with respect to the total input size o if uncompressed however it occupies significantly more space than the original string in a implementation around bytes on average in practice and even up to in the worst case which might be a bottleneck when dealing with massive data moreover the space complexity given in bits is actually equal indexes to o n log n which is also the case for the suffix array rather than o n log required to store the original text when it comes to preprocessing there exist algorithms which construct the st in linear time as regards the implementation an important consideration is how to represent the children of each node a straightforward approach such as storing them in a linked list would degrade the search time since in order to achieve the overall time of o m we have to be able to locate each child in constant time this can be accomplished for example with a hash table which offers an o average time for a lookup a a banana banana na a ana ana na na na nana na anana figure a suffix tree st which stores all suffixes of the text banana with an appended terminating character which prevents a situation where a suffix could be a prefix of another suffix a common variation is called generalized suffix tree and it refers to a st which stores multiple strings that is all suffixes for each string sx additional information which identifies the string si is stored in the nodes and the complexities are the same as for a regular st compressed suffix trees which reduce the space requirements were also described they are usually based on a compressed suffix array suffix array the suffix array sa comes from manber and myers and it stores indexes of sorted suffixes of the input text see figure for an example according to suffix arrays perform comparably to suffix trees when it comes to string indexes matching however they are slower for other kinds of searches such as regular expression matching even though the sa takes more space than the original string bytes in its basic form and the original string has to be stored as well it is significantly smaller than the suffix tree and it has better locality properties the search over a sa takes o m log n time since we perform a binary search over n suffixes and each comparison takes at most m time although the comparison is constant on average see appendix b the space complexity is equal to o n there are n suffixes and we store one index per suffix and it is possible to construct a sa in linear time see puglisi et al for an extensive survey of multiple construction algorithms with a practical evaluation 
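A suffix array and the accompanying binary search can be prototyped with standard library calls alone. The construction below simply sorts suffix indexes, which is far from the linear-time algorithms surveyed above but sufficient to illustrate the O(m log n) count query; names are ours.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// Build a suffix array by sorting suffix start positions lexicographically.
// O(n^2 log n) worst case; linear-time constructions exist but are much longer.
std::vector<std::size_t> buildSuffixArray(const std::string &t) {
    std::vector<std::size_t> sa(t.size());
    std::iota(sa.begin(), sa.end(), 0);
    std::sort(sa.begin(), sa.end(), [&t](std::size_t a, std::size_t b) {
        return t.compare(a, std::string::npos, t, b, std::string::npos) < 0;
    });
    return sa;
}

// Count occurrences of p in t with two binary searches over the sorted suffixes.
std::size_t countOccurrences(const std::string &t,
                             const std::vector<std::size_t> &sa,
                             const std::string &p) {
    auto lo = std::lower_bound(sa.begin(), sa.end(), p,
        [&t](std::size_t suffix, const std::string &pat) {
            return t.compare(suffix, pat.size(), pat) < 0;   // prefix of suffix < pat
        });
    auto hi = std::upper_bound(sa.begin(), sa.end(), p,
        [&t](const std::string &pat, std::size_t suffix) {
            return t.compare(suffix, pat.size(), pat) > 0;   // prefix of suffix > pat
        });
    return hi - lo;
}

int main() {
    std::string t = "banana";
    auto sa = buildSuffixArray(t);
    std::cout << countOccurrences(t, sa, "ana") << '\n';   // prints 2
}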
which concludes that the algorithm by maniscalco and puglisi is the fastest one parallel construction algorithms which use the gpu were also considered let us point out a similarity of sa to the st since sorted suffixes correspond to the traversal over the main disadvantage with respect to the st is a lack of additional functionality such as this mentioned in the previous subsection suffix a ana anana banana na nana index figure a suffix array sa which stores indexes of sorted suffixes of the text banana the suffixes are not stored explicitly although the entire input text has to be stored modifications multiple modifications of the original sa have been proposed over the years their aim is to either speed up the searches by storing additional information or to reduce space requirements by compressing the data or omitting a subset of the data most notable examples are presented below the enhanced suffix array esa is a variant where additional information in the form of a longest common prefix lcp table is stored for a suffix array sa over the string of length n the lcp table l holds integers from the range n and it has the following properties l and l i holds the length of the longest common prefix of suffixes from sa i and sa i esa can essentially replace the suffix tree since it offers the same functionality and it can deal with the same problems in the same indexes time complexity although constant alphabet size is assumed in the analysis for certain applications it also required to store the transform see subsection of the input string and an inverse suffix array sa sa i i the size of the index can be always reduced using any compression method however such a naive approach would certainly have a negative impact on search performance because of the overhead associated with decompression and a much better approach is to use a dedicated solution presented a compact suffix array cosa with average search time o n where is the length of the cosa and practical space reduction of up to by replacing repetitive suffixes with links to other suffixes grossi and vitter introduced a compressed suffix array csa which uses o n log bits instead of o n log n bits it is based on a transformation of the sa into the array which points to the position of the next suffix in the text for instance for the text banana and the suffix ana the next suffix is na see figure these transformed values are compressible because of certain properties such as the fact that number of increasing sequences is in o the search takes o n n time and the relation between search time and space can be using certain parameters for more information on compressed indexes including the modifications of the sa we refer the reader to the survey by navarro and the which is presented in subsection can be also regarded as a compressed variant of the sa i suffix a ana anana banana na nana sa index csa index figure a compressed suffix array csa for the text banana which stores indexes pointing to the next suffix from the text the sa is shown for clarity and it is not stored along with the csa the sparse suffix array stores only suffixes which are located at the positions in the form iq for a fixed q value in order to answer a query q searches and q explicit verifications are required and it must hold that m q another notable example of a modified suffix array which stores only a subset of data is the sampled suffix array the idea is to select a subset of the alphabet denoted with and extract corresponding substrings from the text the array is 
constructed only indexes over those suffixes which start with a symbol from the chosen subalphabet although the sorting is performed on full suffixes only the part of the pattern which contains a character c is searched for there is one search in total and the matches are verified by comparing the rest of the pattern with the text the disadvantage is that the following must hold p c practical reduction in space in the order of was reported recently grabowski and raniszewski proposed an alternative sampling technique based on minimizers see section which allows for matching all patterns p q where q is the minimizer window length and requires only one search other structures the suffix tray combines just as the name suggests the suffix tree with the suffix array the structure is a st whose nodes are divided into heavy and light depending on whether their subtrees have more or fewer leaves than some predefined threshold light children of heavy nodes store their corresponding sa interval the query time equals o m log and preprocessing and space complexities are equal to o n the authors also described a dynamic variant which is called a suffix trist and allows updates yet another modification of the classical st is called suffix cactus sc here reworks the compaction procedure which is a part of the construction of the st instead of collapsing only the nodes which have only one child every internal node is combined with one of its children various methods of selecting such a child exist alphabetical ordering and thus the sc can take multiple forms for the same input string the original article reports the best search times for the dna whereas the sc performed worse than both st and sa for the english language and random data the space complexity is equal to o n the is a compressed succinct index which was introduced by ferragina and manzini in the year it was applied in a variety of situations for instance for sequence assembly or for ranked document retrieval multiple modifications of the were described throughout the years some are introduced in the following subsections the strength of the original lies in the fact that it occupies less space than the input text while still allowing fast queries the search time of its unmodified version is linear with respect to the pattern length although a alphabet is assumed and the space complexity indexes is equal to o hk t log log log n bits per input symbol taking the alphabet size into account grabowski et al provide a more accurate total size bound of o hk t n log log log n logn n bits for transform is based on the transform bwt which is an ingenious method of transforming a string s in order to reduce its entropy bwt permutes the characters of s in such a way that duplicated characters often appear next to each other which allows for easier processing using methods such as or encoding as is the case in the compressor sew most importantly this transformation is reversible as opposed to straightforward sorting which means that we can extract the original string from the permuted order bwt could be also used for compression based on the order entropy described in subsection since basic context information can be extracted from bwt however the loss of speed renders such an approach impractical in order to calculate the bwt we first append a special character we describe it with but in practice it can be any character c s to s in order indicate its end the character is lexicographically smaller than all c c s the next step is to take all rotations of s 
rotations in total and sort them in a lexicographic order thus forming the bwt matrix where we denote the first column sorted characters with f and the last column the result of the bwt t bwt with in order to finish the transform we take the last character of each rotation as demonstrated in figure let us note the similarities between the bwt and the suffix array described in subsection since the sorted rotations correspond directly to sorted suffixes see figure the calculation takes o n time assuming that the prefixes can be sorted in linear time and the space complexity of the naive approach is equal to o but it is linear if optimized in order to reverse the bwt we first sort all characters and thus obtain the first column of the matrix at this point we have two columns namely the first and the last one which means that we also have all character from the original string sorting these gives us the first and the second column and we proceed in this manner later we sort etc until we reach and thus reconstruct the whole transformation matrix at this point s can be found in the row where the last character is equal to indexes a e n p r t t p t r a n e t a t n p t r e t e a t p n r t r p t e a n e n a t r t p r t e n t a p n p t r e t a figure calculating a transform bwt for the string pattern with an appended terminating character it is required for reversing the transform the rotations are already sorted and the result is in the last column bwt pattern nptr eta operation important aspects of the are as follows count table c which describes the number of occurrences of lexicographically smaller characters for all c s see figure rank operation which counts the number of set bits in a bit vector v before a certain position i we assume that v i is included as well that is rank i v i v select operation used only in some variants the rlfm which reports the position of the set bit in the bit vector v that is select i v p if and only if p v i note that both rank and select operations can be generalized to any finite alphabet when we perform the search using the we iterate the pattern characterwise in a reverse order while maintaining a current range r s e initially i m and r n that is we start from the last character in the pattern and the range covers the whole input string here the input string corresponds to t bwt that is a text after the bwt at each step we update s and e using the formulae presented in figure the size of the range after the last iteration gives us the number of occurrences of p in t or it turns out that p t if s e at any point this mechanism is also known as the efficiency we can see that the performance of c lookup and rank is crucial to the complexity of the search procedure in particular if these operations are constant the search takes indexes i bwt n p t r e t a sa suffix attern ern n pattern rn tern ttern figure a relation between the bwt and the sa for the string pattern with an appended terminating character let us note that bw t i s sa i where s corresponds to the last character in s that is a character at the position i in bwt is a character preceding a suffix which is located at the same position in the sa c c i m p s figure count table c which is a part of the for the text mississippi the entries describe the number of occurrences of lexicographically smaller characters for all c for instance for the letter m there are occurrences of i and occurrence of in s hence c m it is worth noting that c is actually a compact representation of the f column s c p i rank s p i 
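The construction just described (append a terminator, sort all rotations, take the last column) can be written as a short, deliberately naive routine. Here '$' stands in for the terminator and is assumed to be absent from the input and smaller than its characters; real implementations derive the BWT from a suffix array rather than sorting rotations explicitly.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// Naive Burrows-Wheeler transform: append a terminator, sort all rotations,
// and take the last column. O(n^2 log n) here, purely for illustration.
std::string bwt(std::string s) {
    s += '$';                                   // terminator, assumed absent from s
    const std::size_t n = s.size();
    std::vector<std::size_t> rot(n);
    std::iota(rot.begin(), rot.end(), 0);
    std::sort(rot.begin(), rot.end(), [&s, n](std::size_t a, std::size_t b) {
        for (std::size_t i = 0; i < n; ++i) {   // compare rotations character-wise
            char ca = s[(a + i) % n], cb = s[(b + i) % n];
            if (ca != cb)
                return ca < cb;
        }
        return false;
    });
    std::string last(n, ' ');
    for (std::size_t i = 0; i < n; ++i)
        last[i] = s[(rot[i] + n - 1) % n];      // last column = preceding character
    return last;
}

int main() {
    std::cout << bwt("pattern") << '\n';        // prints nptr$eta
}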
e c p i rank e p i figure formulae for updating the range during the search procedure in the fmindex where p i is the current character and c is the count table rank is invoked on t bwt and it counts occurrences of the current character p i o m time for the count table we can simply precompute the values and store them an array of size with o lookup as regards rank a naive implementation which would iterate the whole array would clearly take o n time on the other hand if we were to precompute all values we would have to store a table of size o one of the popular solutions for an efficient rank uses two structures which are introduced in the following paragraphs the rrr from authors names raman raman and rao is a data structure which can answer the rank query in o time for bit vectors where while providing compression at the same time it divides a bit vector v of size n into blocks each of size b and groups each consecutive s blocks into one superblock see figure for each block we store a weight w which describes the number of set bits and offset o which describes its position in a table tr the maximum value of o depends on w in tr for each w and each o we store a value of rank for each index i where indexes i b see figure this means that we have to keep b w entries each of size b for each of the b consecutive weights such a scheme provides compression with respect to storing all n bits explicitly we achieve the o query time by storing a rank value for each superblock and thus during the search we only iterate at most s blocks s is constant the space complexity is equal to v o n log log log n bits figure an example of rrr blocks for b and s where the first superblock is equal to and the second superblock is equal to offset block value rank figure an example of an rrr table for w and b where the number of all block values of length with weight is equal to with rank presented for successive indexes i block values do not have to be stored explicitly the wavelet tree wt from grossi et al is a balanced tree data structure that stores a hierarchy of bit vectors instead of the original string which allows the use of rrr or any other bit vector with an efficient rank operation starting from the root we recursively partition the alphabet into two subsets of equal length if the number of distinct characters is even until we reach single symbols which are stored as leaves characters belonging to the first subset are indicated with and characters belonging to the second subset are indicated with consult figure for an example thanks to the wt we can implement a rank query for any fixed size alphabet in o log time assuming that a binary rank is calculated in constant time since the height of the tree is equal to log for a given character c we query the wt at each node and proceed left or right depending on the subset to which c belongs each subsequent rank is called in the form rank c p where p is the result of the rank at the previous level ferragina et al described generalized wts for instance a multiary wt with o log log log n traversal time consult bowe s thesis for more information and a practical evaluation flavors multiple flavors of the were proposed over the years with the goal of decreasing the query time having o m time without the dependence on or reducing the occupied space the structures which provide asymptotically optimal bounds are often indexes abracadabra a b abaaaba a a c d r rcdr b b c d r rdr c d d r r figure a wavelet tree wt over the string abracadabra the alphabet is divided into 
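Put together, the count table and rank give the backward search. The sketch below uses a deliberately naive O(n) rank so that it stays self-contained, whereas the whole point of the structures discussed next (RRR vectors, wavelet trees) is to answer that query in (near) constant time; a half-open range [s, e) is used, which is one common way of writing the update formulae above.

#include <array>
#include <cstddef>
#include <iostream>
#include <string>

// Backward search (count query) over a Burrows-Wheeler transformed text.
std::size_t rankOf(const std::string &bwt, char c, std::size_t pos) {
    std::size_t r = 0;
    for (std::size_t i = 0; i < pos; ++i)       // occurrences of c in bwt[0, pos)
        r += (bwt[i] == c);
    return r;
}

std::size_t countOccurrences(const std::string &bwt, const std::string &p) {
    std::array<std::size_t, 256> C{};           // C[c] = #characters smaller than c
    for (unsigned char c : bwt)
        ++C[c];
    for (std::size_t sum = 0, i = 0; i < 256; ++i) {
        std::size_t cnt = C[i];
        C[i] = sum;
        sum += cnt;
    }
    std::size_t s = 0, e = bwt.size();          // half-open range [s, e) of rows
    for (std::size_t i = p.size(); i-- > 0; ) { // iterate the pattern backwards
        unsigned char c = p[i];
        s = C[c] + rankOf(bwt, c, s);
        e = C[c] + rankOf(bwt, c, e);
        if (s >= e)
            return 0;                           // the pattern does not occur
    }
    return e - s;
}

int main() {
    // BWT of "pattern$" (cf. the construction sketch above).
    std::cout << countOccurrences("nptr$eta", "tt") << '\n';   // prints 1
}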
two subsets at each level with corresponding to one subset and to the other not practical due to the very large constants which are involved for this reason many authors focus on practical performance and these structures are usually based on a fast rank operation and take advantage of compressed representations of bit vectors the following paragraphs present selected examples consult navarro and for an extensive survey of compressed indexes which discusses the whole family of one of the notable examples where the query time does not depend on is the alphabetindependent by grabowski et al the idea is to first compress the text using huffman coding and then apply the bwt transform over it obtaining a bit vector this vector is then used for searching in a manner corresponding to the fmindex the array c stores the number of zeros up to a certain position and the relation c c rank t bwt i c is replaced with i rank v i if c and rank v rank v i if c where is the length of the text compressed with huffman the space complexity is equal to o n t bits and the average search time is equal to o m t under reasonable assumptions on the practical front grabowski et al recently described a rank with cache miss moreover they proposed the index with several indexes variants for instance one which stores a separate bit vector for each alphabet symbol vectors in total other variants include using certain dense codes as well as using multiary wavelet trees with different arity values a wavelet tree is unbalanced and the paths for frequent characters are shorter which translates to a smaller number of rank queries on bit vectors moreover the operations which are performed in the same manner as for the regular wavelet tree are faster on average they reported search times which are times faster than those for other methods at the cost of using times more space data structures which concentrate on reducing space requirements rather than the query time include the compressed bit vectors from et al where different compression methods are used for blocks depending on the type of the block for instance encoding for blocks with a small number of runs another notable example is a by huo et al which encodes the bit vectors resulting from the wt using gamma coding a kind of coding and thus obtain one of the best compression ratios in practice binary rank as described in the previous subsection in order to achieve good overall performance it is sufficient to design a data structure which supports an efficient rank query for bit vectors thanks to the use of a wavelet tree rrr being a notable example jacobson originally showed that it is possible to obtain a rank operation using o n extra bits for n the same holds for select vigna proposed to interleave store next to one another blocks and superblocks concepts which were introduced for the rrr structure for uncompressed bit vectors in order to reduce the number of cache and translation lookaside buffer tlb misses from to this was extended by gog and petri who showed better practical performance by using a slightly different layout with counters gonzalez and navarro provided a discussion of the dynamic scenario where insertions and deletions to the vector are allowed and they obtain a space bound of v o n log bits and o log n log log log n time for all operations queries and updates one of the crucial issues when it comes to the performance of the is the number of cpu cache misses which occur during the search this comes from the fact that in order to calculate the access to 
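The block-and-popcount idea behind practical binary rank structures can be shown in miniature: the sketch below stores one cumulative counter per 64-bit word, which already yields constant-time rank at the cost of O(n) extra bits; the block/superblock layouts discussed above refine exactly this trade-off. It assumes C++20 for std::popcount.

#include <bit>
#include <cstdint>
#include <iostream>
#include <vector>

// Rank over a bit vector: bits are packed into 64-bit words and a cumulative
// popcount is stored per word, so rank(pos) touches one counter and one word.
class RankVector {
public:
    explicit RankVector(const std::vector<bool> &bits)
        : words_((bits.size() + 63) / 64, 0), prefix_(words_.size() + 1, 0) {
        for (std::size_t i = 0; i < bits.size(); ++i)
            if (bits[i])
                words_[i / 64] |= 1ULL << (i % 64);
        for (std::size_t w = 0; w < words_.size(); ++w)
            prefix_[w + 1] = prefix_[w] + std::popcount(words_[w]);
    }

    // Number of set bits in positions [0, pos).
    std::size_t rank(std::size_t pos) const {
        std::size_t w = pos / 64, r = pos % 64;
        std::uint64_t partial = (r == 0) ? 0 : (words_[w] & ((1ULL << r) - 1));
        return prefix_[w] + std::popcount(partial);
    }

private:
    std::vector<std::uint64_t> words_;
    std::vector<std::size_t> prefix_;
};

int main() {
    RankVector rv({true, false, true, true, false, false, true});
    std::cout << rv.rank(4) << '\n';   // set bits among the first four: 3
}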
the bwt sequence is often required in the order of m misses during the search for a pattern of length m indexes even for a small alphabet the problem of cache misses during the backward search was identified as the main performance limiter by et al who proposed to perform the with several symbols at a time in practice at most for the dna alphabet for which the scheme was described this solution allowed for example to improve the search speed by a factor of for the price of occupying roughly times the size of the here we address the problem of cache misses during the pattern search count query in a way related to the et al solution we also work on yet the algorithmic details are different two following subsections describe two variants of our approach and experimental results can be found in section superlinear space is a variation of the which aims to speed up the queries at the cost of additional space we start by calculating the bwt for the input string in the same way as for the regular however the difference is that we operate on rather than on individual characters and the count table stores results for each sampled from the bwt matrix this is the case for all q where q is the power of up to some predefined value qmax for instance namely for each suffix t i n we take all in the following form t i t i i t i i etc the are extracted until we reach qmax or one of the contains the terminating character such a with the terminating character is discarded consult figure for an example let q denote a collection of all for all i for each distinct item s from q we create a list ls of its occurrences in the sorted suffix order simply called sa order this resembles an inverted index on yet the main difference is that the elements in the lists are arranged in sa rather than the text order for the t in figure the list of occurrences corresponding to rows would be as follows row bwt pattern attern p ern patt n patter pattern rn patte tern pat ttern pa n rn tern p t tt patt r er tter e te atte t at a pa figure extraction in the structure with superlinear space for the text t pattern all are extracted qmax indexes for a given pattern p we start the with its longest suffix ps qmax for some c z the following backward steps deal with the remaining prefix of p in a similar way note that the number of steps is equal to the number of in the binary representation of m it is in the order of o log m and if m is a power of two then the result for match and count queries can be reported in constant time we simply return when qmax is bigger the overall index size is bigger but the search is faster for patterns of sufficient length because it allows for farther jumps towards the beginning of the pattern in our representation each step translates to performing two predecessor queries on a list ls a naive solution is a binary search with o log n time or even a linear search which may be faster if the list is short yet the predecessor query can be also handled in o log log n time using a trie hence the overall average search complexity is equal to o m log m log log n with o log m log log n cache misses where cl is the cache line size in bytes provided that each symbol from the pattern occupies one byte as regards the space complexity there is a total of n log n occurrences log n positions for each of n rows of the bwt matrix hence the total length of all occurrence lists is equal to n log n and the total complexity is equal to o n n bits since we need log n bits to store one position from the bwt matrix of n rows as 
regards the implementation in the language our focus is on data compaction each acts as a key in a hash table where collisions are resolved with chaining and the are stored implicitly as a pointer length pair where the pointer refers to the original string the values in the hash table include the count and the list of occurrences which are stored in one contiguous array we use a binary search for calculating rank on lists whose length is greater than or equal to an empirically determined value and a linear search otherwise linear space in this variant instead of extracting all etc for each row of the bwt matrix we extract only selected with the help of minimizers consult subsection for the description of minimizers the first step is to calculate all q minimizers for the input text t with some fixed and q parameters and lexicographic ordering where ties are resolved in favor of the leftmost of the smallest substrings next we store both the count table and the occurrence lists for all single characters in the same way as for the regular using a wavelet tree moreover we store information about the counts and occurrences of all which are located in between the minimizers from the set m t these are referred to as phrases for the set of minimizer indexes i t consecutive phrases pi are constructed in the following indexes manner pi t i i i i consult figure it is worth noting that this approach resembles the recently proposed samsami index a sampled suffix array on minimizers t m i phrases phrase ranges appearance ap ar an appe ar figure constructing phrases for the text appearance with the use of the search proceeds as follows we calculate all minimizers for the pattern we search for the pattern suffix ps p sr where sr is the starting position of the rightmost minimizer using the regular mechanism processing character at a time we operate on the phrases between the minimizers rather than individual characters and the search for these is performed in the same way as for the superlinear variant if it turns out that the phrase is a a faster mechanism for single characters can be used we search for the pattern prefix pp p sl where sl is the starting position of the leftmost minimizer using the regular mechanism processing character at a time the use of minimizers ensures that the phrases are selected from p in the same way as they are selected from t during the index construction the overall average search complexity is equal to o m log log n again assuming that a trie is used and the space complexity is linear approximate navarro et al provided an extensive survey of indexes for approximate string matching they categorized the algorithms into three categories based on the search procedure indexes neighborhood generation all strings in s s d s p k for a given pattern p are searched for directly partitioning into exact searching pies substrings of the pattern are searched for in an exact manner and these matches are extended into approximate matches intermediate partitioning substrings of the pattern are searched for approximately but with a fewer number of errors this method lies in between the two other ones in the neighborhood generation approach we generate the k of the pattern which contains all strings which could be possible matches over a specified alphabet if the alphabet is finite the amount of such strings is finite as well these strings can be searched for using any exact index such as a suffix tree or a suffix array the main issue is the fact that the size of k grows exponentially o mk k 
which means that basically all factors and especially k should be small when the suffix tree is used as an index for the input text cobbs proposed a solution which reduces the amount of nodes that have to be processed it runs in o mq time and occupies o q space where q n q depends on the problem instance and is the size of the output when the pattern is partitioned and searched for exactly pies we have to again store the index which can answer these exact queries let us note that this approach is based on the pigeonhole principle in the context of approximate string searching this means that for a given k at least one of k parts of average length k must match the text exactly more generally s parts match if k s parts are created the value of k should not be too large otherwise it could be the case that a substantial part of the input text has to be verified especially if the pattern is small alternatively the pattern can be divided into m q overlapping and these are searched for using the locate query against the index of extracted from the text see figure for an example of extraction these which are stored by the index are situated at fixed positions with an interval h and it must hold that h b k c for occurrences of p in t to contain s samples sutinen and tarhio suggested that the optimal value for q is in the order of m if it turns out that the positions of subsequent may correspond to a match explicit verification is performed similarly to the scenario any index can be used in order to answer the exact queries let us note that this approach with pattern substring lookup and verification can be also used for exact searching indexes in the case of intermediate partitioning we split the pattern into s pieces and we search for these pieces in an approximate manner using neighborhood generation the case of s corresponds to pure neighborhood generation whereas the case of s k is almost like pies in general this method requires more searching but less verification when compared to pies and thus lies in between the two approaches which were previously described consult and nowak in order to see a detailed comparison of the complexities of modern text indexing methods for approximate matching notable structures from the theoretical point of view include the trie by cole et al which is based on the suffix tree and the lcp structure see subsection for a description of the lcp it can be used in various contexts including and keyword indexing as well as wildcard matching for indexing and the problem it uses o n logk space and offers o m logk occ query time this was extended by tsur who described a structure similar to the one from cole et al with time complexity o m log log n occ for constant k and o space for a constant as regards a solution which is dedicated for the hamming distance gabriele et al provided an index with average search time o m occ and o n logl n space for some l let us note that these indexes can be usually easily adapted to the keyword matching scenario which is described in the following chapter an interesting category of data structures are indexes which can be used for approximate matching and especially sequence alignment they employ heuristic approaches in order to speed up the searching and for this reason they are not guaranteed to find the optimal match this means that they are also approximate in the mathematical sense they do not return the true answer to the problem their popularity is especially widespread in the context of bioinformatics where the massive sizes of 
the databases often force the programmers to use efficient filtering techniques notable examples include blast and fasta tools blast blast stands for basic local alignment search tool and it was published by altschul et al in with the purpose of comparing biological sequences see subsection for more information about biological data the name may refer to the indexes algorithm or to the whole suite of string searching tools for bioinformatics which are based on the said algorithm blast relies heavily on various heuristics and for this reason it is highly domain specific in fact there exist various flavors of blast for different data sets for instance one for protein data blastp and one for the dna blastn another notable modification is the which is combined with dynamic programming in order to identify distant protein relationships the basic algorithm proceeds as follows certain regions are removed from the pattern these include repeated substrings and regions of low complexity measured statistically using dust for dna we create a set q containing with overlaps that is all available see figure which are extracted from the pattern each s q is scored against all possible these can be precomputed and ones with the highest scores are retained creating a candidate set qc each word from qc is searched for in an exact manner against the database using for instance an inverted index see subsection these exact matches create the seeds which are later used for extending the matches the seeds are extended to the left and to the right as long as the alignment score is increasing alignment significance is assessed using statistical tools size te ex xt ti in ng tex ext xti tin ing text exti xtin ting texti extin xting figure selecting all overlapping with the shift of from the text t texting it must always hold that q in general blast is faster than other alignment algorithms such as the sw algorithm see subsection due to its heuristic approach however this comes at a price of reduced accuracy and shpaer et al state that there is a substantial chance that blast will miss a distant sequence similarity moreover implementations of the sw have been created and in certain cases they can match the performance of blast still blast is currently the most common tool for sequence alignment using massive biological data and it is openly available via its website bla which means that it can be conveniently run without consuming local resources chapter keyword indexes keywords indexes operate on individual words rather than the whole input string formally for a collection d dx of x strings words of total length n over a given alphabet i d is a keyword index supporting matching with a specified distance for any query pattern p it returns all words d from d d p d k with k for exact matching approximate dictionary matching was introduced by minsky and papert in in the following sections we describe algorithms from this category divided into exact section and approximate ones section our contribution in this field is presented in subsection which describes an index for approximate matching with few mismatches especially mismatch exact if the goal were to support only the match query for a finite number of keywords we could use any efficient set data structure such as a hash table or a trie see subsections and in order to store all those keywords boytsov reported that depending on the data set either one of these two may be faster in order to reduce space requirements we could use minimal perfect hashing see subsection 
and we could also compress the entries in the buckets bloom filter alternatively we could provide only approximate answers in a mathematical sense in order to occupy even less space a relevant data structure is the bloom filter bf keyword indexes which is a probabilistic data structure with possible false positive matches but no false negatives and an adjustable error rate the bf uses a bit vector a of size n where no bits are initially set each element e is hashed with k different hash functions hi in the form h e i where i z i n and a hi e when the lookup is performed the queried element is hashed with the same functions and it is checked whether a i for all i and if that is the case a possible match is reported consult figure broder and mitzenmacher provided the following formula for the expected false positive rate fp ln where m is the size of the filter in bits and n is the number of elements they note that for example when m the false positive probability is slightly above recently fan et al described a structure based on cuckoo hashing which takes even less space than the bf and supports deletions unlike the bf x y z w figure a bloom filter bf for approximate membership queries with n and k holding the elements from the set x y z the element w is not in the set since w and a reproduced from wikimedia inverted index an inverted index is a keyword index which contains a mapping from words d d to the lists which store all positions pi of their occurrences in the text d pn these positions can be for instance indexes in a string of characters or if a more approach were sufficient they could identify individual documents or databases see figure for an example with a single input string the positions allow a search on the whole phrase multiple words by searching for each word separately and checking whether the positions describe consecutive words in the text that is by looking for list intersections with a shift it could be also used for searching for a query which may cross the boundaries of the words by searching for substrings of a pattern and comparing the respective positions consult section for more information this means that the goal of an inverted index is to support various kinds of queries locate see section efficiently david eppstein available at http file in public domain keyword indexes word this is a banana occurrence list figure an inverted index which stores a mapping from words to their positions in the text this banana is a banana main advantage of the inverted index are fast queries which can be answered in constant average time using for example a hash table an inverted index is a rather generic idea which means that it could be also implemented with other data structures such as binary trees on the other hand there is a substantial space overhead in the order of o n and the original string has to be stored as well for this reason one of the key challenges for inverted indexes is how to succinctly represent the lists of positions while still allowing fast access multiple methods were proposed and they are often combined with one other the most popular one is to store gaps that is differences between subsequent positions for the index in figure the list for banana would be equal to instead of the values of the gaps are usually smaller than the original positions and for this reason they can stored using a fewer amount of bits another popular approach is to use coding here each byte contains a flag which is set if the number is bigger or equal to that is when it does 
not fit into bits and the other seven bits are used for the data if the number does not fit bits are stored in the original byte and the algorithm tries to store the remaining bits in the next byte proceeding until the whole number has been exhausted in order to reduce the average length in bits of the occurrence list one could also divide the original text into multiple blocks of fixed size instead of storing exact positions only block indexes are stored and after the index is retrieved the word is searched for explicitly within the block if the size of the data is so massive that it is infeasible to construct a single index as is often the case for web search engines sometimes only the most relevant data is selected for being stored in the index thus forming a pruned index approximate boytsov presented an extensive survey of keyword indexes for approximate searching including a practical evaluation he divided the algorithms into two following categories keyword indexes direct methods like neighborhood generation see section where certain candidates are searched for exactly filtering methods the dictionary is divided into many disjoint or overlapping clusters during the search a query is assigned to one or several clusters containing candidate strings and thus an explicit verification is performed only on a fraction of the original dictionary notable results from the theoretical point of view include the trie by cole et al which was already mentioned in the previous chapter for the hamk ming distance and dictionary matching it uses o n d logk d space and offers o m log d k k log log n occ query time where d this also holds for the edit distance but with larger constants another theoretical work describing the algorithm which is similar to our split index which we describe in subsection was given by shi and widmayer who obtained o n preprocessing time and space complexity and o n expected time if k is bounded by o log m they introduced the notion of home strings for a given which is the set of strings in d that contain the in the exact form the value of q is set to k in the search phase they partition p into k disjoint and use a candidate inspection order to speed up finding the matches with up to k edit distance errors on the practical front bocek et al provided a generalization of the fraenkel mf algorithm for k which is called fastss to check if two strings and match with up to k errors we first delete all possible ordered subsets of k symbols for all k k from and then we conclude that and may be in edit distance at most k if and only if the intersection of the resulting lists of strings is explicit verification is still required for instance if abbac and k then its neighborhood is as follows abbac bbac abac abac abbc abba abb aba abc aba abc aac bba bbc bac and bac of course some of the resulting strings are repeated and they may be removed if baxcy then its respective neighborhood for k will contain the string bac but the following verification will show that and are in edit distance greater than if however lev then it is impossible not to have in the neighborhood of at least one string from the neighborhood of hence we will never miss a match the lookup requires o kmk log nk time where m is the average word length from the dictionary and the index occupies o nk space another practical filter was presented by karch et al and it improved on the fastss method they reduced space requirements and query time by splitting long words similarly to fastblockss which is a variant of the 
Another practical filter was presented by Karch et al., and it improved on the FastSS method: they reduced space requirements and query time by splitting long words (similarly to FastBlockSS, which is a variant of the original method) and by storing the neighborhood implicitly, with indexes and pointers to the original dictionary entries. They claimed to be faster than other approaches such as the aforementioned FastSS. Recently, Chegrane and Belazzougui described another practical index and reported better results when compared to Karch et al.; their structure is based on the dictionary by Belazzougui for the edit distance of 1 (see the following subsection). An approximate (in the mathematical sense) data structure for approximate matching, based on the Bloom filter (see the earlier subsection), was also described.

The 1-error problem
It is important to consider methods for detecting a single error, since the large majority of errors occurring in practice are within k = 1 for the edit distance with transpositions. Belazzougui and Venturini presented a compressed index whose space is bounded in terms of the k-th order empirical entropy of the indexed dictionary. It can be based either on perfect hashing, having O(m + occ) query time, or on a compressed permuterm index, with a slightly higher query time (involving an additional log log n factor) when the alphabet size is polylogarithmic in n, but improved space requirements. The former is a compressed variant of a dictionary presented by Belazzougui, which is based on neighborhood generation, occupies O(n log σ) space, and can answer queries in O(m) time. Chung et al. showed a theoretical work where external memory is used; their focus is on I/O operations, whose number they bound in terms of the machine word size w and the block size B (the number of words within a block, a basic unit of I/O), and they also bound the number of blocks occupied by their structure. In the category of filters, Mor and Fraenkel described a method based on the deletion neighborhood for the 1-error problem. Yao and Yao described a data structure for binary strings of fixed length m with O(m log log n) query time and O(nm log m) space requirements. This was later improved by Brodal and Gasieniec with a data structure with O(m) query time which occupies O(n) space, and subsequently with a structure with O(1) query time and O(n log m) space in the cell probe model, where only memory accesses are counted. Another notable example is a recent theoretical work of Chan and Lewenstein, who introduced an index with the optimal query time, using additional space on the order of wd bits beyond the dictionary itself, assuming a constant-size alphabet.

Permuterm index
A permuterm index is a keyword index which supports queries with one wildcard symbol. The idea is to store all rotations of a given word appended with the terminating character $; for instance, for the word text the index would consist of the following permuterm vocabulary: text$, ext$t, xt$te, t$tex, $text. When it comes to searching, the query is first rotated so that the wildcard appears at the end, and subsequently its prefix is searched for using the index; this could be, for example, a trie or any other data structure which supports a prefix lookup. The main problem with the standard permuterm index is its space usage, as the number of strings inserted into the data structure is the number of words multiplied by the average string length. Ferragina and Venturini proposed a compressed permuterm index in order to overcome the limitations of the original structure with respect to space. They explored the relation between the permuterm index and the BWT (see the earlier subsection) applied to the concatenation of all strings from the input dictionary, and they provided a modification of the LF-mapping known from the FM-index in order to support the functionality of the permuterm index.
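As an illustration of the permuterm idea (a minimal sketch, not Ferragina and Venturini's compressed variant; a plain Python dictionary scan stands in for the trie or other prefix-searchable structure that would be used in practice), every rotation of word + '$' is stored, and a query with a single wildcard is rotated so that the wildcard lands at the end, after which a prefix lookup suffices.

def rotations(word):
    # All rotations of word terminated with '$', e.g. text$ -> ext$t -> xt$te -> ...
    w = word + "$"
    return [w[i:] + w[:i] for i in range(len(w))]

def build_permuterm(words):
    # In a real index this would be a trie or another prefix-searchable structure.
    vocab = {}
    for word in words:
        for rot in rotations(word):
            vocab.setdefault(rot, set()).add(word)
    return vocab

def query_one_wildcard(vocab, pattern):
    # Answer queries of the form pre*suf with exactly one '*' wildcard.
    pre, suf = pattern.split("*")
    rotated_prefix = suf + "$" + pre      # rotate so that '*' ends up at the end
    return {w for rot, words in vocab.items()
              if rot.startswith(rotated_prefix) for w in words}

index = build_permuterm(["text", "test", "ten"])
print(query_one_wildcard(index, "te*t"))   # {'text', 'test'}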
Split index
One of the practical approximate indexes was described by the thesis author and Grabowski; experimental results for this structure can be found in the experimental chapter. As indexes supporting approximate matching tend to grow exponentially in k, the maximum number of allowed errors, it is also a worthwhile goal to design efficient indexes supporting only a small k. For this reason, we focus on the problem of dictionary matching with few mismatches, especially one mismatch, where we report all words d_i such that Ham(d_i, P) ≤ k, for a collection of words D, a pattern P, and the Hamming distance Ham.

The algorithm that we are going to present is uncomplicated and based on the Dirichlet principle, ubiquitous in approximate string matching techniques. We partition each word d into k + 1 disjoint pieces of average length |d|/(k + 1), hence the name split index, and each such piece acts as a key in a hash table HT. The size of each piece p_i of the word d is determined by a formula proportional to |d|/(k + 1) (the exact coefficient was chosen based on the practical evaluation), the piece size is rounded to the nearest integer, and the last piece covers the characters which are not in other pieces; this means that the pieces might in fact be unequal in length. For k = 1, the values in HT are the lists of words which have one of their pieces as the corresponding key. In this way, every word occurs on exactly k + 1 lists. This seemingly bloats the space usage; still, in the case of small k the occupied space is acceptable. Moreover, instead of storing full words on the respective lists, we only store their missing prefix or suffix. For instance, for the word table and k = 1, we would have a relation tab → le on one list (tab would be the key and le would be the value) and le → tab on the other. In the case of k = 1, we first populate each list with the pieces without the prefix and then with the pieces without the suffix; additionally, we store, as an index, the position on the list where the latter part begins. In this way we traverse only half of a list on average during the search. We can also support k larger than 1: in this case we ignore the piece order on a list, and we store a few additional bits with each piece which indicate which part of the word is the list key. Let us note that this approach would also work for k = 1; however, it turned out to be less efficient in our implementation.

Our focus is on data compactness. In the hash table we store the buckets, which contain word pieces as keys (e.g., le) and pointers to the lists which store the missing pieces of the word (e.g., tab). These pointers are always located right next to the keys, which means that, unless we are very unlucky, a specific pointer should already be present in the CPU cache during the traversal. The memory layouts of these substructures are fully contiguous: successive strings are represented by multiple characters with a prepended counter which specifies the length, and a counter with the value 0 indicates the end of the list. During the traversal, each length can be compared with the length of the piece of the pattern; as mentioned before, the words are partitioned into pieces of fixed length, which means that on average we calculate the Hamming distance for only half of the pieces on the list, since the rest can be ignored based on their length. Any hash function for strings can be used, and two important considerations are the speed and the number of collisions, since a high number of collisions results in longer buckets, which may in turn have a negative effect on the query time; this subject is explored in more detail, along with the results, in the chapter with experimental results. The figure below illustrates the layout of the split index.
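A minimal illustration of the contiguous list layout described above (illustrative Python rather than the actual C++ implementation, which additionally interleaves pointers and separates missing prefixes from missing suffixes): each missing piece is stored as a one-byte length followed by its characters, a zero length terminates the list, and the length acts as a cheap filter during traversal.

def pack_list(missing_pieces):
    # Contiguous layout: a one-byte length, then the characters; 0 ends the list.
    out = bytearray()
    for piece in missing_pieces:
        out.append(len(piece))
        out.extend(piece.encode("ascii"))
    out.append(0)
    return bytes(out)

def traverse(packed, wanted_length):
    # Walk the list, skipping (by length) the pieces that cannot match the pattern piece.
    pos, candidates = 0, []
    while packed[pos] != 0:
        length = packed[pos]
        if length == wanted_length:      # length filter: only these need verification
            candidates.append(packed[pos + 1:pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return candidates

packed = pack_list(["tab", "le", "ft"])
print(list(packed))            # [3, 116, 97, 98, 2, 108, 101, 2, 102, 116, 0]
print(traverse(packed, 2))     # ['le', 'ft']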
The preprocessing stage proceeds as follows:
1. Duplicate words are removed from the dictionary. The following steps refer to each word d from D.
2. The word d is split into k + 1 pieces.
3. For each piece p_i: if p_i ∉ HT, we create a new list L_n containing the missing pieces (later simply referred to as a missing piece; in the case of k = 1 this is always one contiguous piece built from the pieces p_j, j ≠ i) and add it to the hash table, that is we append p_i and the pointer to L_n to the bucket. Otherwise, if p_i ∈ HT, we append the missing piece to the already existing list L_i.

Figure: split index for keyword indexing, showing the insertion of the word table for k = 1. The index also stores the words left and tablet; only selected lists containing pieces of these two words are shown, and the arrows indicate pointers to the respective lists. The first cell of each list indicates a word position (the word count from the left) where the missing prefixes begin (for k = 1 we deal with two parts, namely prefixes and suffixes), and a zero means that the list has only missing suffixes. Adapted from Wikimedia Commons (original figure by Jorge Stolfi, CC license).

As regards the search:
1. The pattern P is split into k + 1 pieces (for k = 1, the prefix and the suffix).
2. For each piece p_i, the list L_i is retrieved from the hash table, or we continue with the next piece if p_i ∉ HT.
3. We traverse each missing piece p_j from L_i; if the lengths agree, the verification is performed, and the result is returned if the Hamming distance between p_j and the remaining part of P (P without p_i) is at most k; the pieces are combined into one word in order to form the answer.

Complexity
Let us consider the average word length ℓ. The average time complexity of the preprocessing stage is equal to O(kn), where k is the allowed number of errors and n is the total input dictionary size, that is the length of the concatenation of all words from D. This is because for each word and for each piece p_i we either add the missing pieces to a new list or append them to the already existing one in O(ℓ) time; let us note that n = |D| · ℓ. We assume that adding a new element to a bucket takes constant time on average, and the calculation of all hashes takes O(n) time in total. This is true irrespective of which list layout is used (there are two layouts, for k = 1 and k > 1, see the preceding paragraphs). The occupied space is equal to O(kn), because each piece appears on exactly k + 1 lists and in exactly one bucket.

The average search complexity is equal to O(kt), where t is the average length of a list. We search for each of the k + 1 pieces of the pattern of length m, and when the list corresponding to the piece p_i is found, it is traversed and at most t verifications are performed. Each verification takes at most O(min(m, d_max)) time, where d_max is the length of the longest word in the dictionary, or O(k) time in theory, using the old technique from Landau and Vishkin after O(n log n) preprocessing, but O(1) time on average. Again, we assume that determining the location of the specific list, that is iterating a bucket, takes O(1) time on average. As regards the list, its average length t is higher when there is a higher probability that two words d_1 and d_2 from D have two parts of the same length l which match exactly. Since all words are sampled from the same alphabet, t depends on the alphabet size, that is t = f(σ); still, the dependence is rather indirect, and in dictionaries which store words from a given language, t will rather depend on the higher-order entropy of the language.
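The preprocessing and search procedures described above can be condensed into the following sketch (plain Python dictionaries and lists instead of the compact contiguous layout, only the k = 1 case, and a piece-splitting rule based on ceiling division, consistent with the table → tab, le example; these are assumptions of the illustration rather than the actual implementation).

from collections import defaultdict

def split_into_pieces(word, k):
    # Ceiling division keeps the first k pieces equal; the last piece absorbs the rest.
    base = -(-len(word) // (k + 1))
    return [word[i:i + base] for i in range(0, base * k, base)] + [word[base * k:]]

def ham(a, b):
    # Hamming distance; strings of different lengths can never match.
    return sum(x != y for x, y in zip(a, b)) if len(a) == len(b) else float("inf")

def build_split_index(dictionary, k=1):
    # Each piece maps to the list of missing pieces of the words it came from.
    index = defaultdict(list)
    for word in set(dictionary):                       # duplicates are removed first
        pieces = split_into_pieces(word, k)
        for i, piece in enumerate(pieces):
            missing = "".join(pieces[:i] + pieces[i + 1:])
            index[piece].append((i, missing))          # i records which part is the key
    return index

def search(index, pattern, k=1):
    # Report dictionary words within Hamming distance k of the pattern (k = 1 case).
    results = set()
    pieces = split_into_pieces(pattern, k)
    for i, piece in enumerate(pieces):
        rest = "".join(pieces[:i] + pieces[i + 1:])
        for j, missing in index.get(piece, []):
            # The exactly matching piece pins down the alignment; verify the remainder.
            if i == j and ham(missing, rest) <= k:
                results.add(piece + missing if i == 0 else missing + piece)
    return results

index = build_split_index(["table", "tablet", "left"])
print(search(index, "tible"))    # {'table'}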
Compression
In order to reduce storage requirements, we apply a basic compression technique: we find the q-grams which are the most frequent in the word collection and replace their occurrences on the lists with unused symbols (byte values). The values of q can be specified at the preprocessing stage; different values are reasonable for the English alphabet and for DNA, respectively. Different q values can also be combined, depending on the distribution of q-grams in the input text: we may try all possible combinations of q-grams up to a certain q value and select the ones which provide the best compression. In such a case, longer q-grams should be encoded before shorter ones. For example, the word compression could be encoded with single-byte symbols standing for com, re and sion, together with the remaining literal characters, using the following substitution list: com, re, co, om, sion; note that not all q-grams from the substitution list are used. Possibly even a recursive approach could be applied, although this would certainly have a substantial impact on the query time; see the experimental section for the results and a further discussion.

The space usage could be further reduced by the use of a different character encoding. For the DNA, assuming 4 symbols only, it would be sufficient to use 2 bits per character, and for the basic English alphabet 5 bits; in the latter case there are 26 letters, which in a simplified text can be augmented only with a space character, a few punctuation marks, and a capital letter flag. Such an approach would also be beneficial for space compaction, and it could have a further positive impact on cache usage. The compression naturally reduces the space while increasing the search time, and a sort of middle ground can be achieved by deciding which additional information to store in the index. This can be, for instance, the length of an encoded (compressed) piece after decoding, which could eliminate some pieces based on their size without performing the decompression and explicit verification.
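The q-gram substitution coding described above can be illustrated as follows (a simplified sketch: the actual index works on raw bytes and selects the q-grams by measured compression gain, whereas here the substitution list and the placeholder symbols are fixed by hand purely for illustration).

def build_codec(qgrams, first_code=128):
    # Assign each frequent q-gram an unused byte value; longer q-grams are tried first.
    ordered = sorted(qgrams, key=len, reverse=True)
    return {g: chr(first_code + i) for i, g in enumerate(ordered)}

def encode(word, codec):
    out, i = [], 0
    while i < len(word):
        for gram, symbol in codec.items():            # longest q-grams were ordered first
            if word.startswith(gram, i):
                out.append(symbol)
                i += len(gram)
                break
        else:
            out.append(word[i])                       # literal character, no substitution
            i += 1
    return "".join(out)

def decode(encoded, codec):
    reverse = {symbol: gram for gram, symbol in codec.items()}
    return "".join(reverse.get(ch, ch) for ch in encoded)

codec = build_codec(["com", "re", "co", "om", "sion"])
packed = encode("compression", codec)
print(len("compression"), len(packed))     # 11 characters shrink to 5 symbols; co and om stay unused
assert decode(packed, codec) == "compression"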
Parallelization
The algorithm could be sped up by means of parallelization, since index access during the search procedure is read-only. In the most straightforward approach, we could simply distribute individual queries between multiple threads. A more fine-grained variation would be to concurrently operate on word pieces after the word has been split up, with the number of pieces being dependent on the k parameter. We could even access in parallel the lists which contain missing pieces (prefixes and suffixes for k = 1), although the gain would probably be limited, since these lists usually store at most a few words. If we had a sufficient number of threads at our disposal, these approaches could be combined. Still, it is to be noted that the use of multiple threads has a negative effect on cache utilization.

Inverted split index
The split index could be extended in order to include the functionality of an inverted index for approximate matching. As mentioned in the earlier subsection, the inverted index could in practice be any data structure which supports an efficient word lookup. Let us consider the compact list layout of the split index presented in the figure above, where each piece is located right next to other pieces. Instead of storing only the counter which specifies the length of the piece, we could also store, right next to this piece, its position in the text. Such an approach would increase the average length of the list only by a constant factor, it would not break the contiguity of the lists, and it would keep the O(kn) space complexity. Moreover, the position should already be present in the CPU cache during the list traversal.

Keyword selection
Keyword indexes can also be used in the scenario where there are no explicit boundaries between the words. In such a case, we would like to select the keywords according to a set of rules and form a dictionary D from the input text T. Such an index, which stores q-grams sampled from the input text, may be referred to as a q-gram index. It is useful for answering keyword rather than full-text queries, which might be required, for example, due to time requirements, when we would like to trade space for speed. Examples of input which cannot be easily divided into words include some natural languages (e.g., Chinese, where it is not possible to clearly distinguish the words, as their boundaries depend on the context) or other kinds of data such as a complete genome. Let us consider the input text T which is divided into n − q + 1 overlapping q-grams. The issue lies in the amount of space occupied by all tuples (s, l_i), where s is the q-gram and l_i identifies its positions, which is in the order of O(n) for a single q, or O(n · q_max) for all possible q-grams up to some q_max value. General compression techniques are usually not sufficient, and thus a dedicated solution is required; this is especially the case in the context of bioinformatics, where data sets are substantial. The applications could include, for instance, retrieving the seeds in the algorithm described in an earlier section. One of the approaches was proposed by Kim et al., and it aims to eliminate the redundancy in position information: consecutive q-grams are grouped into subsequences, and each q-gram is identified by the position of the subsequence within the documents and the position of the q-gram within the subsequence, which forms a two-level index structure. This concept was also extended by the original authors to include the functionality of approximate matching.

Minimizers
The idea of minimizers was introduced by Roberts et al., with applications in genome sequencing with de Bruijn graphs and in k-mer counting, and it consists in storing only selected q-grams rather than all q-grams from the input text. The goal is to choose such q-grams from a given string s (forming a set M(s)) so that, for two strings s1 and s2, if the length of a shared pattern P is above some threshold, then it should also hold that M(s1) and M(s2) share a q-gram. In order to find the minimizers, we slide a window of a fixed number of consecutive q-grams over T, shifting it by 1 character at a time, and at each window position we select the q-gram which is the smallest one lexicographically; ties may be resolved, for instance, in favor of the leftmost of the smallest substrings. The figure below demonstrates this process.

Figure: selecting minimizers while sliding a window over the text texting; the selected 2-grams belong to the following set: {ex, in}.

Let us repeat an important property of the minimizers which makes them useful in practice: if two strings s1 and s2 share a substring whose length is at least the span of one full window, then it is guaranteed that s1 and s2 share a minimizer q-gram, because they share one full window. This means that for certain applications we can still ensure that no exact matches are overlooked by storing the minimizers rather than all q-grams.
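Minimizer selection can be sketched as follows (an illustrative Python version; with q = 2 and a window of three consecutive 2-grams — assumptions chosen so that the output reproduces the {ex, in} example from the figure above — and with ties resolved in favour of the leftmost smallest q-gram).

def minimizers(text, q, window):
    # Slide a window of `window` consecutive q-grams over the text and keep the
    # lexicographically smallest q-gram of each window position.
    qgrams = [text[i:i + q] for i in range(len(text) - q + 1)]
    selected = set()
    for start in range(len(qgrams) - window + 1):
        selected.add(min(qgrams[start:start + window]))   # min() keeps the leftmost on ties
    return selected

print(sorted(minimizers("texting", 2, 3)))   # ['ex', 'in']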
String sketches
We introduce the concept of string sketches, whose goal is to speed up string comparisons at the cost of additional space. For a given string s, a sketch S(s) is constructed as S(s) = f(s), using some function f which returns a block of data. In particular, for two strings s1 and s2 we would like to determine with certainty that s1 ≠ s2, or that Ham(s1, s2) > k, when comparing only the sketches S(s1) and S(s2). There exists a similarity between sketches and hash functions; however, hash comparison would work only in the context of exact matching. When the sketch comparison is not decisive, we still have to perform an explicit verification on s1 and s2, but the sketches allow for reducing the number of such verifications. Since the sketches refer to individual words, they are relevant in the context of keyword indexes, assuming that each word d from D is stored along with its sketch. Sketches could be especially useful if the queries are known in advance or the number of required comparisons is relatively high, since sketch calculation itself might be costly.

Sketches use individual bits in order to store information about q-gram frequencies in the string. Various approaches exist, and the main properties of the said q-grams include:
- size: for instance, individual letters are sensible for the English alphabet, but pairs might be better for the DNA;
- frequency: we can store binary information in each bit, indicating whether a certain q-gram appears in the string (we call this approach an occurrence sketch), or we can store the counts of the q-grams using a few bits per q-gram (we call this approach a count sketch);
- selection, that is which q-grams should be included in the sketches: these could be, for instance, the q-grams which occur most commonly in the sample text.

For instance, let us consider an occurrence sketch which is built over the most common letters of the English alphabet, namely e, t, a, o, i, n, s, h (consult Appendix E to see the frequencies). For the word instance, the sketch, where each bit corresponds to one of the letters from the aforementioned set, marks exactly those letters from the set which occur in the word. We can quickly compare two sketches by taking a binary XOR operation and counting the number of bits which are set in the result, that is calculating the Hamming weight (HW); note that HW can be determined in constant time using a lookup table indexed by the byte values of the XOR result. We denote the sketch difference with HS, where HS(S1, S2) = HW(S1 xor S2). Let us note that HS does not determine the number of mismatches: for instance, for run and ran, HS might be equal to 2 (occurrence differences in a and u), but there is still only one mismatch. On the other extreme, for two strings of length n, where each string consists of a repeated occurrence of one different letter, HS might be equal to 2, but the number of mismatches is n. In general, HS can be used to provide a lower bound on the true number of errors: for sketches which record information about single characters, it holds that Ham ≥ ⌈HS / 2⌉, and the right-hand side can be calculated quickly using a lookup table. The true number of mismatches is underestimated, especially by count sketches, since we calculate the Hamming weight of the bit representations instead of comparing the counts themselves. Still, even though the true error can be higher than suggested by HS, sketches can be used to speed up the comparisons, because certain strings will be compared and rejected in constant time using fast bitwise operations and array lookups. As regards the space overhead incurred by the sketches, it is equal to O(1) per word, since we have to store one sketch per word, together with the lookup tables which are used to speed up the processing. Consult the experimental chapter in order to see the results.

Chapter: Experimental results
The results were obtained on a machine with an Intel processor and the Ubuntu (Linux) operating system. Programs were written in the C++ programming language, with certain prototypes in the Python language, using features from the modern C++ standard; they use the standard library, the Boost libraries, and Linux system libraries. Correctness was analyzed using Valgrind, a tool for error checking and profiling; no errors or memory leaks were reported. The source code was compiled with the clang compiler, which turned out to produce a slightly faster executable than gcc when checked under the optimization flag that was used.

For the description of the structure, consult the relevant subsection; here we present experimental results for the superlinear index version. As regards the hash function, xxHash was used (available on the Internet, consult Appendix F), and the load factor
was equal to the length of the pattern has a crucial impact on the search time since the number steps is equal the number of in the binary representation of this means that the search will be the fastest for m in the form constant time for m up experimental results to a certain maximum value and the slowest for m in the form where c z we can see in figure that the query time also generally decreases as the pattern length increases mostly due to the fact that the times are given per character the results are the average times calculated for one million queries which were extracted from the input text query time per char ns pattern length figure query time per character vs pattern length and for the english text of size mb let us point out notable differences between pattern lengths and and and we also compare our approach with other structures consult figure we used the implementations from the sdsl library available on the internet sds and the implementations of structures by grabowski et al available on the internet ran as regards the space the structure just as the name suggests is roughly two order of magnitude bigger than other indexes the index size for other methods ranged from approximately to where n is the input text size on the other hand occupied the amount of space equal to almost for qmax split index in this section we present the results which appeared in a preprint by thesis author and grabowski for the description of the split index consult subsection experimental results superlinear huffman wt csa compressed bit vector query time per char ns pattern length figure query time per character vs pattern length and for different methods for the english text of size mb note the logarithmic one of the crucial components of the split index is a hash function ideally we would like to minimize the average length of the bucket let us recall that we use chaining for collision resolution however the hash function should be also relatively fast because it has to be calculated for each of k parts of the pattern of total length m we investigated various hash functions and it turned out that the differences in query times are not negligible although the average length of the bucket was almost the same in all cases relative differences were smaller than we can see in table that the fastest function was the xxhash available on the internet consult appendix f and for this reason it was used for the calculation of other results hash function xxhash sdbm superfast city farsh farm query time table evaluated hash functions and search times per query for the english dictionary of size mb and k a list of common english misspellings was used as queries max lf experimental results decreasing the value of the load factor did not strictly provide a speedup in terms of the query time as demonstrated in figure this can be explained by the fact that even though the relative reduction in the number of collisions was substantial the absolute difference was equal to at most a few collisions per list moreover when the lf was higher pointers to the lists could be possibly closer to each other which might have had a positive effect on cache utilization the best query time was reported for the maximum lf value of hence this value was used for the calculation of other results index size mb query time s load factor load factor figure query time and index size vs the load factor for the english dictionary of size mb and k a list of common english misspellings was used as queries the value of lf can be higher than 
because we use chaining for collision resolution in table we can see a linear increase in the index size and an exponential increase in query time with growing even though we concentrate on k and the most promising results are reported for this case our index might remain competitive also for higher k values k query time index size kb table query time and index size vs the error value k for the english language dictionary of size mb a list of common english misspellings was used as queries substitution coding provided a reduction in the index size at the cost of increased query time were generated separately for each dictionary d as a list of experimental results which provided the best compression for d they minimized the size of all encoded words se di for the english language dictionaries we also considered using only or only and for the dna only a maximum of and since mixing the of various sizes has a further negative impact on the query time for the dna queries were generated randomly by introducing noise into words sampled from dictionary and their length was equal to the length of the particular word up to errors were inserted each with a probability for the english dictionaries we opted for the list of common misspellings and the results were similar to the case of randomly generated queries the evaluation was run times and the results were averaged we can see the relation for the english dictionaries in figure and for the dna in figure in the case of english using the optimal from the compression point of view minimizing the index size combination of mixed provided almost the same index size as using only substitution coding methods performed better for the dna where because the sequences are more repetitive let us note that the compression provided a higher relative decrease in index size with respect to the original text as the size of the dictionary increased for instance for the dictionary of size mb the compression ratio was equal to and the query time was still index size mb query time s around consult appendix c for more information about the compression dictionary size mb no mixed dictionary size mb figure query time and index size vs dictionary size for k with and without coding mixed refer to the combination of which provided the best compression and for the three dictionaries these were equal to grams and respectively english language dictionaries and the list of common english misspellings were used index size mb query time s experimental results dictionary size mb no mixed dictionary size mb figure query time and index size vs dictionary size for k with and without coding mixed refer to the combination of which provided the best compression and these were equal to grams due to computational constraints they were calculated only for the first dictionary but used for all four dictionaries dna dictionaries and the randomly generated queries were used tested on the english language dictionaries promising results were reported when compared to methods proposed by other authors others consider the levenshtein distance as the edit distance whereas we use the hamming distance which puts us at the advantageous position still the provided speedup is significant and we believe that the more restrictive hamming distance is also an important measure of practical use see subsection for more information the implementations of other authors are available on the internet boy che as regards the results reported for the mf and boytsov s reduced alphabet neighborhood generation it was 
not possible to accurately calculate the size of the index both implementations by boytsov and for this reason we used rough ratios based on index sizes reported by boytsov for similar dictionary sizes let us note that we compare our algorithm with chegrane and belazzougui who published better results when compared to karch et al who in turned claimed to be faster than other methods we have not managed to identify any indexes for matching in dictionaries over any fixed alphabet dedicated for the hamming distance which could be directly compared to our split index the times for the algorithm are not listed since they were roughly orders of magnitude higher than the ones presented consult figure for details we also evaluated different word splitting schemes for instance for k one could experimental results our method our method compression chegrane and belazzougui boytsov query time s index size mb figure query time vs index size for different methods the method with compression encoded mixed we used the hamming distance and the other authors used the levenshtein distance for k english language dictionaries of size mb mb and mb were used as input and the list of common misspellings was used for queries split the word into two parts of different sizes instead of however unequal splitting methods caused slower queries when compared to the regular one as regards hamming distance calculation it turned out that the naive implementation simply iterating and comparing each character was the fastest one the compiler with automatic optimization was simply more efficient than other implementations ones based directly on sse instructions that we have investigated string sketches string sketches which were introduced in section allow for faster string comparison since in certain cases we can deduce for two strings and that d k for some k without performing an explicit verification in our implementation a sketch comparison requires performing one bitwise operation and one array lookup constant operations in total we analyze the comparison time between two strings using various sketch types versus an explicit verification the sketch is calculated once per query and it is then reused for the comparison with consecutive words we examine the situation where a single query is compared against a dictionary of words the dictionary size for which a speedup was reported was around words or more since in the case of fewer words sketch construction was too slow in relation with the comparisons when the experimental results sketch comparison was not decisive a verification was performed and it contributed to the elapsed time the words were generated over the english alphabet consult appendix e in order to see letter frequencies and each sketch occupied bytes sketches were not effective figures and contain the results for occurrence and count sketches respectively consult appendix d for more information regarding the letter distribution in the alphabet comparison time ns no sketches occurrence sketch common occurrence sketch mixed occurrence sketch rare word size figure comparison time vs word size for mismatch using occurrence sketches for words generated over the english alphabet each sketch occupies bytes and time refers to average comparison time between a pair of words common sketches use most common letters rare sketches use least common letters and mixed sketches use most common and least common letters note the logarithmic experimental results comparison time ns no sketches count sketch common count sketch 
mixed count sketch rare word size figure comparison time vs word size for mismatch using count sketches for words generated over the english alphabet each sketch occupies bytes and time refers to average comparison time between a pair of words common sketches use most common letters rare sketches use least common letters and mixed sketches use most common and least common letters note the logarithmic chapter conclusions string searching algorithms are ubiquitous in computer science they are used for common tasks performed on home pcs such as searching inside text documents or spell checking as well as for industrial projects genome sequencing strings can be defined very broadly and they usually contain natural language and biological data dna proteins but they can also represent various kinds of data such as music or images an interesting aspect of string matching is the diversity and complexity of the solutions which have been presented over the years both theoretical and practical despite the simplicity of problem formulation one of the most common ones being check if pattern p exists in text t we investigated string searching methods which preprocess the input text and construct a data structure called an index this allows to reduce the time required for searching and it is often indispensable when it comes to massive sizes of modern data sets the indexes are divided into ones which operate on the whole input text and can answer arbitrary queries and keyword indexes which store a dictionary of individual these can corresponds to words in a natural language dictionary or dna reads key contributions include the structure called which is a modification of the a compressed index that trades space for speed two variants of the were described one using o n n bits of space with o m log m log log n average query time and one with linear space and o m log log n average query time where n is the input text length and m is the pattern length we experimentally show that by operating on in addition to individual characters a significant speedup can be achieved albeit at the cost of very high space requirements hence the name bloated conclusions the split index is a keyword index for the problem with a focus on the case it performed better than other solutions for the hamming distance and times in the order of microsecond were reported for one mismatch for a natural language dictionary on a pc a minor contribution includes string sketches which aim to speed up approximate string comparison at the cost of additional space o per string future work we presented results for the superlinear variant of the index in order to demonstrate its potential and capabilities multiple modifications and implementations of this data structure can be introduced let us recall that we store the count table and occurrence lists for selected in addition to individual characters from the regular this selection process can be the more we store the faster the search should be but the index size grows as well for instance the linear space version could be augmented with additional etc which start at the position of each minimizer up to an where s is the maximum gap size between two minimizers this would eliminate two phases of the search for prefixes and suffixes cf subsection where individual characters have to be used for the mechanism moreover the comparison with other methods could be augmented with an inverted index on whose properties should be more similar to than those of variants especially when it comes to space 
requirements as regards the split index we describe possible extensions in subsections and these include using multiple threads and introducing the functionality of an inverted index on moreover the algorithm could be possibly extended to handle the levenshtein distance as well although this would certainly have a substantial impact on space usage another desired functionality could include a dedicated support for a binary alphabet in such a case individual characters could be stored with bits which should have a positive effect on cache usage thanks to further data compaction and possibly an alignment with the cache line size appendix a data sets the following tables present information regarding the data sets that were used in this work table describes data sets from the popular pizza chili p c corpus piz which were used for indexes was extracted from table describes data sets which were used for keyword indexes the english dictionaries come from linux packages and the webpage by foster fos and the list of common misspellings which were used as queries was obtained from the wikipedia typ the dna dictionaries contain which were extracted from the genome of drosophila melanogaster that was collected from the flybase database fly the provided sizes refer to the size of the dictionary after preprocessing for keyword indexes duplicates as well as delimiters usually newline characters are removed the abbreviation nl refers to natural language name source p c p c p c type nl english nl english nl english size mb mb mb table a summary of data sets which were used for the experimental evaluation of indexes data sets name iamerican foster misspellings source linux package foster linux package wikipedia flybase flybase flybase flybase nl nl nl nl type english english english english dna dna dna dna size mb mb mb kb words mb mb mb mb table a summary of data sets which were used for the experimental evaluation of keyword indexes appendix b exact matching complexity in the theoretical analysis we often mention exact string comparison determining whether it must hold that and the complexity of this operation is equal to o n all characters have to be compared when the two strings match on the other hand the average complexity depends on the alphabet if for instance we have probability that characters and match that characters and match as well etc in the case of uniform letter frequencies more generally the probability that there is a match between all characters up to a position i is equal to and the average number of required comparisons ac is equal to for any we can derive the following relation lim ac and hence treating the average time required for exact comparison of two random strings from the same alphabet as o is justified for any in figure we present the relation between the average number of comparisons and the value in the case of such as the english language alphabet context information in the form of order entropy should be taken into account in a simplified analysis let us consider the frequencies from appendix e the probability that two characters sampled at random match is equal to for a for t etc proceeding in this manner the probability for the match between the first pair of characters is equal to for the first and the second pair etc as regards an empirical evaluation on the text the average number of comparisons between a random pair of strings was equal to approximately exact matching complexity avg number of comparisons alphabet size figure average number of character comparisons 
when comparing two random strings for exact matching from the same alphabet with uniform letter frequency vs the alphabet size appendix c split index compression this appendix presents additional information regarding the compression of the split index consult subsection for the description of this data structure and section for the experimental results in figures and we can see the relation between the index size and the selection of and for the english alphabet where the clearly provided a better compression and and for the dna index size mb count figure index size vs the number of used for the compression for the english dictionary were used and the remaining were split index compression index size mb count figure index size vs the number of used for the compression for the dna dictionary were used and the remaining were appendix d string sketches in section we discussed the use of string sketches for the english alphabet where we could take advantage of the varying letter frequency here we present the results for the alphabet with uniform distribution and instead of selecting the most or the least common letters the sketches contain information regarding occurrence or count randomly selected letters we can see in figure that in this case the sketches do not provide the desired speedup no sketches occurrence sketch count sketch comparison time ns word size figure comparison time vs word size for mismatch using various string sketches generated over the alphabet with uniform letter frequency and each sketch occupies bytes and time refers to average comparison time between a pair of words note the logarithmic appendix e english letter frequency frequencies presented in table were used for the generation of random queries where the letter distribution corresponded to the english use letter e t a o i n s h r d l c u frequency letter m w f g y p b v k j x q z frequency table frequencies of english alphabet letters appendix f hash functions table contains internet addresses of hash functions which were used to obtain experimental results for the split index section if the hash function is not listed it means that our own implementation was used name city farm farsh superfast xxhash address https https https https http http https table a summary of internet addresses of hash functions bibliography alfred aho and margaret corasick efficient string matching an aid to bibliographic search communications of the acm stephen altschul and bruce erickson optimal sequence alignment using affine gap costs bulletin of mathematical biology stephen altschul warren gish webb miller eugene myers and david lipman basic local alignment search tool journal of molecular biology mohamed ibrahim abouelhoda stefan kurtz and enno ohlebusch the enhanced suffix array and its applications to genome analysis in algorithms in bioinformatics pages springer mohamed ibrahim abouelhoda stefan kurtz and enno ohlebusch replacing suffix trees with enhanced suffix arrays journal of discrete algorithms alexandr andoni robert krauthgamer and krzysztof onak polylogarithmic approximation for edit distance and the asymmetric query complexity in foundations of computer science annual ieee symposium on pages ieee amihood amir moshe lewenstein and ely porat faster algorithms for string matching with k mismatches journal of algorithms vo ngoc anh and alistair moffat inverted index compression using wordaligned binary codes information retrieval mohamed ibrahim abouelhoda enno ohlebusch and stefan kurtz optimal exact string matching based 
on suffix arrays in string processing and information retrieval pages springer bibliography gregory bard tolerant via the distance metric in proceedings of the fifth australasian symposium on acsw pages australian computer society horst bunke and urs applications of approximate string matching to shape recognition pattern recognition djamal belazzougui fabiano botelho and martin dietzfelbinger hash displace and compress in algorithms esa annual european symposium copenhagen denmark september proceedings pages djamal belazzougui and fabio cunial detection of unusual words arxiv preprint djamal belazzougui faster and edit distance dictionary in combinatorial pattern matching pages springer gerth brodal and leszek approximate dictionary queries in combinatorial pattern matching pages springer thomas bocek ela hunt burkhard stiller and fabio hecht fast similarity search in large dictionaries technical report department of informatics university of zurich switzerland bib the holy bible king james version walter burkhard and robert keller some approaches to file searching communications of the acm bla blast basic local alignment search tool http online accessed burton bloom in hash coding with allowable errors communications of the acm robert boyer and strother moore a fast string searching algorithm communications of the acm eric brill and robert moore an improved error model for noisy channel spelling correction in proceedings of the annual meeting on association for computational linguistics pages association for computational linguistics bibliography andrei broder and michael mitzenmacher network applications of bloom filters a survey internet mathematics alexander bowe multiary wavelet trees in practice honours thesis rmit university australia boy leonid boytsov s software http software online accessed leonid boytsov indexing methods for approximate dictionary searching comparative analysis journal of experimental algorithmics sergey brin and lawrence page the anatomy of a hypertextual web search engine computer networks and isdn systems gerth brodal and srinivasan venkatesh improved bounds for dictionary with one error information processing letters djamal belazzougui and rossano venturini compressed string dictionary with edit distance one in combinatorial pattern matching pages springer michael burrows and david wheeler a lossless data compression algorithm technical report systems research center ricardo and gaston gonnet a new approach to text searching communications of the acm ricardo and gonzalo navarro text searching theory and practice in formal languages and applications pages springer manolis christodoulakis and gerhard brey edit distance with singlesymbol combinations and splits in prague stringology conference pages ibrahim chegrane and djamal belazzougui simple compact and robust approximate string dictionary journal of discrete algorithms clifford allyx fontaine ely porat benjamin sach and tatiana starikovskaya the problem revisited arxiv preprint bibliography aleksander and szymon grabowski a practical index for proximate dictionary matching with few mismatches arxiv preprint surajit chaudhuri kris ganjam venkatesh ganti and rajeev motwani robust and efficient fuzzy match for online data cleaning in proceedings of the acm sigmod international conference on management of data pages acm richard cole gottlieb and moshe lewenstein dictionary matching and indexing with errors and don t cares in proceedings of the thirtysixth annual acm symposium on theory of computing pages acm 
richard cole and ramesh hariharan approximate string matching a simpler faster algorithm siam journal on computing che simple compact and robust approximate string dictionary https online accessed maxime crochemore costas iliopoulos christos makris wojciech rytter athanasios tsakalidis and tsichlas approximate string matching with gaps nordic journal of computing richard cole tsvi kopelowitz and moshe lewenstein suffix trays and suffix trists structures for faster text indexing in automata languages and programming pages springer william chang and jordan lampe theoretical and empirical comparisons of approximate string matching algorithms in combinatorial pattern matching pages springer timothy chan and moshe lewenstein fast string dictionary lookup with one error in combinatorial pattern matching pages springer david clark compact pat trees phd thesis university of waterloo canada rayan chikhi antoine limasset shaun jackman jared simpson and paul medvedev on the representation of de bruijn graphs in research in computational molecular biology pages springer bibliography thomas cormen charles leiserson ronald rivest and clifford stein introduction to algorithms the mit press edition william chang and thomas marr approximate string matching and local similarity in combinatorial pattern matching pages springer alejandro juan carlos moure antonio espinosa and porfidio for faster pattern matching procedia computer science francisco claude gonzalo navarro and alberto efficient compressed wavelet trees over large alphabets arxiv preprint francisco claude gonzalo navarro hannu peltola leena salmela and jorma tarhio string matching with alphabet sampling journal of discrete algorithms archie cobbs fast approximate matching using suffix trees in combinatorial pattern matching pages springer richard cole tight bounds on the complexity of the string matching algorithm siam journal on computing standard for programming language technical report shane culpepper matthias petri and falk scholer efficient document retrieval in proceedings of the international acm sigir conference on research and development in information retrieval pages acm chung yufei tao and wei wang dictionary search with one edit error in string processing and information retrieval pages springer lorinda cherry and william vesterman writing tools the style and diction programs technical report rutgers university usa lawrence carter and mark wegman universal classes of hash functions in proceedings of the ninth annual acm symposium on theory of computing pages acm bibliography fred damerau a technique for computer detection and correction of spelling errors communications of the acm sebastian deorowicz context exhumation after the transform information processing letters gautam das rudolf fleischer leszek dimitris gunopulos and juha episode matching in combinatorial pattern matching pages springer george davida yair frankel and brian matt on enabling secure applications through biometric identification in security and privacy proceedings ieee symposium on pages ieee mrinal deo and sean keely parallel suffix array and least common prefix for the gpu in acm sigplan notices volume pages acm sebastian deorowicz marek kokot szymon grabowski and agnieszka kmc fast and counting bioinformatics martin dietzfelbinger anna karlin kurt mehlhorn friedhelm meyer auf der heide hans rohnert and robert tarjan dynamic perfect hashing upper and lower bounds siam journal on computing sean eddy where did the alignment score matrix come from nature 
biotechnology elliman and lancaster a review of segmentation and contextual analysis techniques for text recognition pattern recognition bin fan dave andersen michael kaminsky and michael mitzenmacher cuckoo filter practically better than bloom in proceedings of the acm international conference on emerging networking experiments and technologies pages acm martin farach optimal suffix tree construction with large alphabets in annual symposium on foundations of computer science focs miami beach florida usa october pages bibliography kimmo fredriksson and szymon grabowski fast convolutions and their applications in approximate string matching in combinatorial algorithms pages springer paolo ferragina rodrigo gonzalo navarro and rossano venturini compressed text indexes from theory to practice journal of experimental algorithmics michael l fredman and endre storing a sparse table with o worst case access time journal of the acm simone faro and thierry lecroq the exact online string matching problem a review of the most recent results acm computing surveys darren flower on the properties of bit measures of chemical similarity journal of chemical information and computer sciences fly flybase homepage http online accessed paolo ferragina and giovanni manzini opportunistic data structures with applications in foundations of computer science proceedings annual symposium on pages ieee paolo ferragina and giovanni manzini indexing compressed text journal of the acm paolo ferragina giovanni manzini veli and gonzalo navarro compressed representations of sequences and indexes acm transactions on algorithms fos http txt online accessed edward fredkin trie memory communications of the acm paolo ferragina and rossano venturini the compressed permuterm index acm transactions on algorithms zvi galil on improving the worst case running time of the string matching algorithm communications of the acm bibliography eugene garfield the permuterm subject index an autobiographical review journal of the american society for information science simon gog timo beller alistair moffat and matthias petri from theory to practice plug and play with succinct data structures in international symposium on experimental algorithms sea pages roberto grossi ankur gupta and jeffrey scott vitter text indexes in proceedings of the fourteenth annual symposium on discrete algorithms pages society for industrial and applied mathematics patrick girard christian landrault serge pravossoudovitch and daniel severac reduction of power consumption during test application by test vector ordering electronics letters szymon grabowski veli and gonzalo navarro first man then a simple in string processing and information retrieval pages springer alessandra gabriele filippo mignosi antonio restivo and marinella sciortino indexing structures for approximate string matching in algorithms and complexity pages springer rodrigo and gonzalo navarro on dynamic compressed sequences and applications theoretical computer science szymon grabowski gonzalo navarro przywarski alejandro salinger and veli a simple international journal of foundations of computer science travis gagie gonzalo navarro simon puglisi and jouni relative compressed suffix trees arxiv preprint simon gog compressed suffix trees design construction and applications phd thesis university of ulm germany goo how search works the story http online accessed bibliography simon gog and matthias petri optimized succinct data structures for massive data software practice and experience szymon 
grabowski and marcin raniszewski sampling the suffix array with minimizers arxiv preprint szymon grabowski new algorithms for exact and approximate text matching zeszyty naukowe nr politechnika available at http szymon grabowski new tabulation and sparse dynamic programming based techniques for sequence similarity problems in proceedings of the prague stringology conference prague czech republic september pages szymon grabowski marcin raniszewski and sebastian deorowicz fmindex for dummies arxiv preprint ryan gregory genome size and developmental complexity genetica dan gusfield algorithms on strings trees and sequences computer science and computational biology cambridge university press roberto grossi and jeffrey scott vitter compressed suffix arrays and suffix trees with applications to text indexing and string matching siam journal on computing hongwei huo longgang chen heng zhao jeffrey scott vitter yakov nekrich and qiang yu a in proceedings of the seventeenth workshop on algorithm engineering and experiments alenex san diego ca usa january pages harold stanley heaps information retrieval computational and theoretical aspects academic press daniel hirschberg a linear space algorithm for computing maximal common subsequences communications of the acm nigel horspool practical fast searching in strings software practice and experience david huffman a method for the construction of minimum redundancy codes proceedings of the ire bibliography guy jacobson static trees and graphs in foundations of computer science annual symposium on pages ieee juha suffix cactus a cross between suffix tree and suffix array in combinatorial pattern matching pages springer karlsson beyond the standard library an introduction to boost stefan kurtz jomuna choudhuri enno ohlebusch chris schleiermacher jens stoye and robert giegerich reputer the manifold applications of repeat analysis on a genomic scale nucleic acids research juha dominik kempa and simon puglisi hybrid compression of bitvectors for the in data compression conference pages ieee daniel karch dennis luxen and peter sanders improved fast similarity search in dictionaries in string processing and information retrieval pages springer donald knuth james morris and vaughan pratt fast pattern matching in strings siam journal on computing donald knuth the art of computer programming volume addisonwesley jesse kornblum identifying almost identical files using context triggered piecewise hashing digital investigation richard karp and michael rabin efficient randomized patternmatching algorithms ibm journal of research and development sandeep kumar and eugene h spafford a pattern matching model for misuse intrusion detection technical report department of computer science purdue university usa juha and esko ukkonen sparse suffix trees in computing and combinatorics pages springer stefan kurtz reducing the space requirement of suffix trees software practice and experience bibliography kim k whang and lee a ngram inverted index structure for approximate string matching computer systems science and engineering kim whang lee and lee a space and time efficient inverted index structure in proceedings of the international conference on very large data bases pages vldb endowment vladimir levenshtein binary codes capable of correcting deletions insertions and reversals in soviet physics doklady volume pages robert lewand cryptological mathematics maa lin liu yinhu li siliang li ni hu yimin he ray pong danni lin lihua lu and maggie law comparison of sequencing 
systems biomed research international david lipman and william pearson rapid and sensitive protein similarity searches science gad landau jeanette schmidt and dina sokol an algorithm for approximate tandem repeats journal of computational biology ben langmead cole trapnell mihai pop steven salzberg et al ultrafast and alignment of short dna sequences to the human genome genome biology gad landau and uzi vishkin fast parallel and serial approximate string matching journal of algorithms veli compact suffix array in combinatorial pattern matching pages springer tyler moore and benjamin edelman measuring the perpetrators and funders of typosquatting in financial cryptography and data security pages springer moshe mor and aviezri fraenkel a hash code method for detecting and correcting spelling errors communications of the acm bibliography giovanni manzini and paolo ferragina engineering a lightweight suffix array construction algorithm algorithmica alistair moffat and simon gog string search experimentation using massive data philosophical transactions of the royal society of london a mathematical physical and engineering sciences aleksandr morgulis michael gertz alejandro and richa agarwala a fast and symmetric dust implementation to mask lowcomplexity dna sequences journal of computational biology melichar jan holub and polcar text searching algorithms department of computer science and engineering czech technical university in prague czech republic roger mitton spelling checkers spelling correctors and the misspellings of poor spellers information processing management gurmeet singh manku arvind jain and anish das sarma detecting for web crawling in proceedings of the international conference on world wide web pages acm udi manber and gene myers suffix arrays a new method for string searches siam journal on computing moritz and johannes nowak text indexing with errors in combinatorial pattern matching pages springer veli and gonzalo navarro succinct suffix arrays based on encoding in combinatorial pattern matching pages springer donald morrison patricia practical algorithm to retrieve information coded in alphanumeric journal of the acm marvin minsky and seymour papert perceptrons mit press cambridge massachusetts michael maniscalco and simon puglisi an efficient versatile approach to suffix sorting journal of experimental algorithmics svetlin manavski and giorgio valle cuda compatible gpu cards as efficient hardware accelerators for sequence alignment bmc bioinformatics suppl bibliography udi manber and sun wu an algorithm for approximate membership checking with application to password security information processing letters udi manber and sun wu glimpse a tool to search through entire file systems in usenix winter pages gene myers a fast algorithm for approximate string matching based on dynamic programming journal of the acm gonzalo navarro a guided tour to approximate string matching acm computing surveys gonzalo navarro ricardo erkki sutinen and jorma tarhio indexing methods for approximate string matching ieee data engineering bulletin alexandros ntoulas and junghoo cho pruning policies for inverted index with correctness guarantee in proceedings of the annual international acm sigir conference on research and development in information retrieval pages acm gonzalo navarro and veli compressed indexes acm computing surveys ge nong practical o suffix sorting for constant alphabets acm transactions on information systems gonzalo navarro and mathieu raffinot fast and simple character 
classes and bounded gaps pattern matching with applications to protein searching journal of computational biology nicholas nethercote and julian seward valgrind a program supervision framework electronic notes in theoretical computer science saul needleman and christian wunsch a general method applicable to the search for similarities in the amino acid sequence of two proteins journal of molecular biology christos ouzounis and alfonso valencia early bioinformatics the birth of a discipline a personal view bioinformatics james peterson computer programs for detecting and correcting spelling errors communications of the acm bibliography james peterson a note on undetected typing errors communications of the acm alan parker and hamblen james computer algorithms for plagiarism detection ieee transactions on education piz pizza chili corpus compressed indexes and their testbeds http online accessed victor pankratius ali jannesari and walter tichy parallelizing a case study in multicore software engineering software ieee simon puglisi william smyth and andrew turpin a taxonomy of suffix array construction algorithms acm computing surveys joseph pollock and antonio zamora automatic spelling correction in scientific and scholarly text communications of the acm michael rabin fingerprinting by random polynomials technical report department of mathematics the hebrew university of jerusalem israel ran http online accessed michael roberts wayne hayes brian hunt stephen mount and james yorke reducing storage requirements for biological sequence comparison bioinformatics stuart russell and peter norvig artificial intelligence a modern approach prentice hall edition rajeev raman venkatesh raman and srinivasa rao succinct indexable dictionaries with applications to encoding trees and multisets in proceedings of the thirteenth annual symposium on discrete algorithms pages society for industrial and applied mathematics david salomon data compression the complete reference springer science business media bibliography jared simpson and richard durbin efficient construction of an assembly string graph using the bioinformatics sds https online accessed sew julian seward http online accessed claude elwood shannon a mathematical theory of communication the bell systems technical journal steven skiena the algorithm design manual volume springer science business media eugene shpaer max robinson david yee james candlin robert mines and tim hunkapiller sensitivity and selectivity in protein similarity searches a comparison of in hardware to blast and fasta genomics erkki sutinen and jorma tarhio on using locations in approximate string matching in algorithms esa third annual european symposium corfu greece september proceedings pages temple smith and michael waterman identification of common molecular subsequences journal of molecular biology fei shi and peter widmayer approximate multiple string searching by clustering genome informatics nathan tuck timothy sherwood brad calder and george varghese deterministic string matching algorithms for intrusion detection in infocom annual joint conference of the ieee computer and communications societies volume pages ieee dekel tsur fast index for approximate string matching journal of discrete algorithms alan turing computing machinery and intelligence mind bibliography typ lists of common misspellings http wikipedia online accessed esko ukkonen algorithms for approximate string matching information and control esko ukkonen finding approximate patterns in strings journal of 
algorithms esko ukkonen construction of suffix trees algorithmica uni uniprot http online accessed sebastiano vigna broadword implementation of queries in experimental algorithms pages springer peter weiner linear pattern matching algorithms in switching and automata theory swat ieee conference record of annual symposium on pages ieee dan willard range queries are possible in space n information processing letters lusheng wang and tao jiang on the complexity of multiple sequence alignment journal of computational biology sun wu udi manber and gene myers a subquadratic algorithm for approximate limited expression matching algorithmica andrew yao and frances yao dictionary with small errors in combinatorial pattern matching pages list of symbols b block either an unit or a piece of a bit vector c character a string of length c count table in the cl cache line size d distance metric time required for calculating d for two strings over the same alphabet d dictionary of keywords for keyword indexes d word string from a dictionary enc d encoded compressed word d f first column of the bwt matrix h hash function hk s order entropy of string s ht hash table hw hamming weight number of in a bit vector ham hamming distance bie number i rounded to the nearest integer i index for string matching k number of errors in approximate matching l last column of the bwt matrix lf load factor list of symbols lev levenshtein distance m pattern size m n input size occ number of occurrences of the pattern m s set of minimizers of string s p pattern p piece of a word in the case of word partitioning p r e probability of event e q collection of s string s string sketch over s s substring s set of substrings for indexes sa suffix array alphabet set of all strings over the alphabet alphabet size t input string text tr rrr table t bwt input text t after applying the bwt w size of the machine word typically or bits v bit vector list of abbreviations bf bloom filter lf load factor bm algorithm mf algorithm bmh algorithm mphf minimal perfect hash function bst binary search tree nl natural language bwt transform nw algorithm csa compressed suffix array ocr optical character recognition cosa compact suffix array p c pizza chili corpus dfs search pies partitioning into exact searching dp dynamic programming rk algorithm esa enhanced suffix array sa suffix array fsm finite state machine sc suffix cactus tlb translation lookaside buffer kmp algorithm st suffix tree lcp longest common prefix sw algorithm lcs longest common subsequence wt wavelet tree list of figures a binary search tree bst storing strings from the english alphabet a trie which is one of the basic structures used in string searching a hash table for strings the formula for shannon s entropy calculating an alignment with levenshtein distance using the wunsch nw algorithm a suffix tree st which stores all suffixes of a given text a suffix array sa which stores indexes of sorted suffixes of a given text a compressed suffix array csa which stores indexes pointing to the next suffix from the text calculating the transform bwt a relation between the bwt and the sa count table c which is a part of the formulae for updating the range during the search procedure in the fmindex an example of rrr blocks an example of an rrr table a wavelet tree wt extraction in the structure with superlinear space constructing phrases with the use of minimizers selecting all overlapping from a given text a bloom filter bf for approximate membership queries an inverted index which stores a 
mapping from words to their split index for keyword indexing selecting minimizers from a given text query time per character vs pattern length for the english text of size mb query time per character vs pattern length for different methods for the english text of size mb query time and index size vs the load factor split index query time and index size vs dictionary size with and without coding for english dictionaries split index query time and index size vs dictionary size with and without coding for dna dictionaries split index query time vs index size for different methods split index positions list of figures comparison time vs word size for mismatch using occurrence sketches for words generated over the english alphabet comparison time vs word size for mismatch using count sketches for words generated over the english alphabet average number of character comparisons when comparing two random strings from the same alphabet with uniform letter frequency vs the alphabet size index size vs the number of used for the compression for the english dictionary split index index size vs the number of used for the compression for the dna dictionary split index comparison time vs word size for mismatch using various string sketches generated over the alphabet with uniform letter frequency and list of tables a comparison of the complexities of basic data structures which can be used for exact string searching algorithm classification based on whether the data is preprocessed evaluated hash functions and search times per query split index query time and index size vs the error value k split index a summary of data of indexes a summary of data of keyword indexes sets which were used sets which were used for for the experimental the experimental evaluation evaluation frequencies of english alphabet letters a summary of internet addresses of hash functions
| 8 |
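The row that ends above is the back matter of a string-matching thesis: its bibliography, list of symbols (e.g. lev for the Levenshtein distance, sa for the suffix array, k for the number of errors in approximate matching), abbreviations, and lists of figures and tables. Purely as an illustration of the edit-distance computation those symbols refer to — this is the textbook Wagner–Fischer dynamic program, not code taken from the thesis, and the function name and test strings below are my own — a minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Textbook Wagner-Fischer dynamic program for the Levenshtein (edit)
    distance: the minimum number of single-character insertions, deletions
    and substitutions turning a into b.  Only two rows of the DP table are
    kept, so space is O(min(|a|, |b|)) while time stays O(|a| * |b|)."""
    if len(b) > len(a):
        a, b = b, a                          # keep the shorter string in the inner loop
    prev = list(range(len(b) + 1))           # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution or match
        prev = curr
    return prev[len(b)]


if __name__ == "__main__":
    print(levenshtein("kitten", "sitting"))   # 3 edits
```

The k-errors approximate matching and the NW alignment named in the abbreviation list are variations on the same table (thresholding at k errors, respectively general gap and substitution scores).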
nov artin approximation property and the general neron desingularization dorin popescu abstract this is an exposition on the general neron desingularization and its applications we end with a recent constructive form of this desingularization in dimension one key words artin approximation neron desingularization conjecture quillen s question smooth morphisms regular morphisms smoothing ring morphisms mathematics subject classification primary secondary introduction let k be a field and r khxi x xm be the ring of algebraic power series in x over k that is the algebraic closure of the polynomial ring k x in the formal power series ring k x let f fq in y yn over r and be a solution of f in the completion of theorem artin for any c n there exists a solution y c in r such that y c mod x c in general we say that a local ring a m has the artin approximation property if for every system of polynomials f fq a y q y yn a solution of f in the completion and c n there exists a solution y c in a of f such that y c mod mc in fact a has the artin approximation property if every finite system of polynomial equations over a has a solution in a if and only if it has a solution in the completion of a we should mention that artin proved already in that the ring of convergent power series with coefficients in c has the artin approximation property as it was later called a ring morphism u a of noetherian rings has regular fibers if for all prime ideals p spec a the ring is a regular ring its localizations are regular local rings it has geometrically regular fibers if for all prime ideals p spec a and all finite field extensions k of the fraction field of the ring k is regular a flat morphism of noetherian rings u is regular if its fibers are geometrically regular if u is regular of finite type then u is called smooth a localization of a smooth algebra is called essentially smooth we gratefully acknowledge the support from the project granted by the romanian national authority for scientific research cncs uefiscdi a henselian noetherian local ring a is excellent if the completion map a is regular for example a henselian discrete valuation ring v is excellent if the completion map v induces a separable fraction field extension theorem artin let v be an excellent henselian discrete valuation ring and v hxi the ring of algebraic power series in x over v that is the algebraic closure of the polynomial ring v x in the formal power series ring v x then v hxi has the artin approximation property the proof used the so called the desingularization which says that an unramified extension v v of valuation rings inducing separable field extensions on the fraction and residue fields is a filtered inductive union of essentially finite type subextensions v a which are regular local rings even essentially smooth v of v desingularization is extended by the following theorem theorem general neron desingularization popescu teissier swan spivakovski let u a be a regular morphism of noetherian rings and b an of finite type then any v b factors through a smooth c that is v is a composite b c the smooth c given for b by the above theorem is called a general neron desingularization note that c is not uniquely associated to b and so we better speak about a general neron desingularization the above theorem gives a positive answer to a conjecture of artin theorem an excellent henselian local ring has the artin approximation property this paper is a survey on the artin approximation property the general neron desingularization and their 
applications it relies mainly on some lectures given by us within the special semester on artin approximation of the chaire jean morlet at cirm luminy spring see http artin approximation properties first we show how one recovers theorem from theorem indeed let f be a finite system of polynomial equations over a in y yn and a solution of f in set b a y f and let v b be the morphism given by y by theorem v factors through a smooth c that is v is a composite b c thus changing b by c we may reduce the problem to the case when b is smooth over a since is local b by bb for some b b v we may assume that g i mb for g gr from f and a r m of the jacobian matrix thus g and m is invertible by the implicit function theorem there exists y a such that y modulo the following consequence of theorem was noticed and hinted by radu to this was the origin of s interest to read our theorem and to write later corollary let u a be a regular morphism of noetherian rings then the differential module is flat for the proof note that by theorem it follows that is a filtered inductive limit of some smooth c and so is a filtered inductive limit of the last modules being free modules definition a noetherian local ring a m has the strong artin approximation property if for every finite system of polynomial equations f in y yn over a there exists a map n n with the following property if y an satisfies f y modulo c c n then there exists a solution y an of f with y y modulo mc greenberg proved that excellent henselian discrete valuation rings have the strong artin approximation property and is linear in this case theorem artin the algebraic power series ring over a field has the strong artin approximation property note that in general is not linear as it is showed in the following theorem was conjectured by artin in theorem let a be an excellent henselian discrete valuation ring and ahxi the ring of algebraic power series in x over a then ahxi has the strong artin approximation property theorem see also the noetherian complete local rings have the strong artin approximation property in particular a has the strong artin approximation property if it has the artin approximation property thus theorem follows from theorem and theorem gives that excellent henselian local rings have the strong artin approximation property an easy direct proof of this fact is given in using theorem and the ultrapower methods what about the converse implication in theorem it is clear that a is henselian if it has the artin approximation property on the other hand if a is reduced and it has the artin approximation property then is reduced too indeed if is nonzero and satisfies r then choosing c n such that mc we get a z a such that z r and z modulo mc it follows that z which contradicts our hypothesis it is easy to see that a local ring b which is finite as a module over a has the artin approximation property if a has it it follows that if a has the artin approximation property then it has so called reduced formal fibers in particular a must be a so called universally japanese ring using also the strong artin approximation property it is possible to prove that given a system of polynomial equations f a y r y yn and another one g a y z t z zs then the sentence la there exists y an such that f y and g y z for all z as holds in a if and only if holds in provided that a has the artin approximation property in this way it was proved in that if a has the artin approximation property then a is a normal domain if and only if is a normal domain too this was 
actually the starting point of the quoted paper later cipu and myself used this fact to show that the formal fibers of a are the so called geometrically normal domains if a has the artin approximation property finally rotthaus proved that a is excellent if a has the artin approximation property next let a m be an excellent henselian local ring its completion and mcm a resp mcm be the set of isomorphism classes of maximal cohen macaulay modules over a resp assume that a is an isolated singularity then a maximal module is free on the punctured spectrum since is also an isolated singularity we see that the map mcm a given by m m is surjective by a theorem of elkik theorem theorem is bijective proof let m n be two finite p we may suppose that m an u p n an v uk n ukj ej k t vr n vrj ej r p where ukj vrj a and ej is the canonical basis of an let f an an be an map defined by an invertible n xij with respect to ej then f induces a bijection m n if and only if f maps u onto v that is there exist ykr zrk a k t r p such that p f uk p ykr vr k t and p f t zrk uk vr r p note pthat are pequivalent to n uki xij p ykr vrj k t j n p p t zrk n uki xij vrj r p j n therefore if m n there exist in such that det m and p n uki p vrj k t j n p p t n uki vrj r p j n then by the artin approximation property there exists a solution of let us say xij ykr zrk in a such that xij ykr zrk modulo it follows that det xij det modulo and so m corollary in the hypothesis of the above theorem if m mcm a is indecomposable then m is indecomposable too proof assume that m then mcm and by the surjectivity of we get ni for some ni mcm a then m and the injectivity of gives m remark if a is not henselian then the above corollary is false for example let a c x y x y y x x then m x y a is indecomposable in mcm a but m is decomposable indeed for x we have m y y remark let a be the so called of a then induces also an inclusion a see remark it is known that mcm is finite if and only if is a simple singularity what about a complex unimodal singularity r certainly in this case mcm r is infinite but maybe there exists a special property which characterizes the unimodal singularities for this purpose it would be necessary to describe somehow mcm r at least in some special cases small attempts are done by andreas steenpass for most of the cases when we need the artin approximation property it is enough to apply artin s theorem sometimes we might need a special kind of artin approximation the so called artin approximation in nested subring condition namely the following result which was also considered as possible by artin in theorem theorem let k be a field a khxi x xm f fr khx y ir y yn and sn m c be some integers suppose that f has a solution in k x such that k xsi for all i then there exists a solution y yn of f in a such that yi xsi i for all i n and y mod x c k x corollary the weierstrass preparation theorem holds for the ring of algebraic power series over a field proof let f khxi x xm be an algebraic power series such that f xm by weierstrass preparation theorem f is associated in dip i visibility with a monic polynomial xpm xm k xm for some p n k thus the system p i f y xpm zi xm y u has a solution in k x such that k by theorem there exists a solution y u zi in khxi such that zi i and is congruent modulo x with the previous one thus p i y is invertible and f yg where g xpm zi xm i xm by the unicity of the formal weierstrass preparation theorem it follows that y and g now we see that theorem is useful to get algebraic versal 
deformations see let d khzi a kht z zs t tn and n fd a deformation of n over a is a p kht j a d t z h l such that l k n l is flat over a where above b h denotes the henselization of a local ring b the condition says that l has the form fd with fi kht zi fi fi modulo t and says that tora l k by the local flatness criterion since l is t ideal separated because p is local noetherian let pe pd p l be part of a free resolution of l over p where the map p d p is given by fd then says that tensorizing with k the above sequence we get an exact sequence d e d d d n because p is flat over a therefore is equivalent to p for all g d d with gi fi there exists g kht zid with g g p modulo t such that g modulo j im that is gi fi j we would like to construct a versal deformation l see pages that is for any u p d u z h and f a deformation of n to there exists a morphism a such that p l where the structural map of p over p is given by if we replace above the algebraic power series with formal power series then this problem is solved by schlessinger in the infinitesimal case followed by some theorems of elkik and artin set k t j d t z h we will assume that we have already l such that l is versal in the frame of complete local rings how to get the versal property for l in the frame of algebraic power series let p be as above since is versal in the frame of complete local rings there exists such that where the structure of as a is given by assume that is given by t u k u n then we have i j modulo j on the other hand we may suppose that induces an isomorphism which is given by t z for some u z k u z s with z modulo u z and the ideals f f of k u z coincide thus there exists an invertible d over k u z with p ii fj by theorem we may find t u khuin and z u z khu zis cij khu zi satisfying i ii and such that t z cij modulo u z note that det cij det modulo u z and so cij is invertible it follows that a given by t t is the wanted one that is p l where the structure of p as a p is given by next we give an idea of the proof of theorem in a particular but essential case proposition let k be a field a khxi x xm f fr khx y ir y yn and s m q n c be some integers suppose that f has a solution in k x such that k xs for all i q then there exists a solution y yn of f in a such that yi xs i for all i q and y mod x c k x proof note that b k xs xm i is excellent henselian and so it has the artin approximation property thus the system of polynomials f yn has a solution q in b with modulo x c now it is enough to apply the following lemma for a xs i lemma let a m be an excellent henselian local ring its completion a x h x xm x h be the henselizations of a x m x respectively x m x f fr a system of polynomials in y yn over a x h and q n c be some positive integers suppose that f has a solution in x h such that for all i q then there exists a solution y yn of f in a x h such that yi a for all i q and y mod mc x h proof x h is a union of etale neighborhoods of x take an etale neighborhood b of x m x such that b for all q i then b x t m x t for some monic polynomial in t over x with m x and m x let us say x x e t xk t j u for some u high enough and note that and changing if necessary u we may suppose that x x xk t j mod u for some q i actually we should take as a fraction but for an easier expression we will skip the denominator substitute yi q i n by x x yijk xk t j u in f and divide by the monic polynomial g te x x zjk xk t j u in x t yt yij zj where yijk zjk are new variables we get fp yq y x x fpjk yq yijk zjk xk t j mod g u p then is a 
solution of f in b if and only if is a solution of fpjk in as a has the artin approximation property we may choose a solution yq yijk zjk of fpjk in a which coincides modulo mc with the former one then x x yi yijk xk t j u q i n together with yi i q form a solution of f in the etale neighborhood b a x t g m x t e x x zjk xk t j u of a x m x which is contained in a x h clearly y is the wanted solution applications to the conjecture let r t t tn be a polynomial algebra in t over a regular local ring r m an extension of serre s problem proved by quillen and suslin is the following conjecture every finitely generated projective module over r t is free theorem lindel the conjecture holds if r is essentially of finite type over a field swan s unpublished notes on lindel s paper see proposition contain two interesting remarks lindel s proof works also when r is essentially of finite type over a dvr a such that its local parameter p the conjecture holds if r m is a regular local ring containing a field or p providing that the following question has a positive answer question swan is a regular local ring a filtered inductive limit of regular local rings essentially of finite type over z indeed suppose for example that r contains a field and r is a filtered inductive limit of regular local rings ri essentially of finite type over a prime field p a finitely generated projective r t m is an extension of a finitely generated projective ri t mi for some i that is m r t t mi by theorem we get mi free and so m is free too theorem swan s question holds for regular local rings r m k which are in one of the following cases r contains a field the characteristic p of k is not in r is excellent henselian proof suppose that r contains a field we may assume that k is the prime field of r and so a perfect field then the inclusion u k r is regular and by theorem it is a filtered inductive limit of smooth k morphisms k ri thus ri is a regular ring of finite type over k and so over z therefore r is a filtered inductive limit of regular local rings essentially of finite type over z similarly we may treat first assume that r is complete by the cohen structure theorem we may also assume that r is a factor of a complete local ring of type a z p xm for some prime integer by we see that a is a filtered inductive limit of regular local rings ai essentially of finite type over z since r a are regular local rings we see that r x for a part x of a regular system of parameters of a then there exists a system of elements of a certain ai which is mapped into x by the limit map ai a it follows that is part of a regular system of parameters of at for all t j for some j i and so rt at are regular local rings now it is enough to see that r is a filtered inductive limit of rt t j next assume that r is excellent henselian and let be its completion using or it is enough to show that given a finite type e of r the inclusion e r factors through a regular local ring e essentially of finite type over z that is there exists e r such that is the composite map e as above is a filtered inductive limit of regular local rings and so the composite map e r factors through a regular local ring f essentially of finite type over z we may choose a finite type d f such that f dq for some q spec d and the map e f factors through d d is an as d is excellent its regular locus reg d is open and so there exists d d q such that dd is a regular ring changing d by dd we may assume that d is regular let e z bn for some bi e r and let d z y h y yn for some 
polynomials since d is an we may write bi pi y modulo h i n for some polynomials pi e y r y note that there exists such that bi pi h because factors through as r has the artin approximation property by theorem there exists y rn such that bi pi y h y let d r be the map given by y y clearly factors through d and we may take e m more precisely we have the following diagram which is commutative except in the right square roo d f corollary the conjecture holds if r is a regular local ring in one of the cases of the above theorem remark theorem is not a complete answer to question but says that a positive answer is expected in general since there exists no result similar to lindel s saying that the conjecture holds for all regular local rings essentially of finite type over z we decided to wait with our further research so we have waited already years another problem is to replace in the conjecture the polynomial algebra r t by other the tool is given by the following theorem theorem vorst let a be a ring a x x xm a polynomial algebra i a x a monomial ideal and b a x then every finitely generated projective m is extended from a finitely generated projective n that is m b n if for all n n every finitely generated projective a t t tn is extended from a finitely generated projective corollary let r be a regular local ring in one of the cases of theorem i r x be a monomial ideal with x xm and b r x then any finitely generated projective is free for the proof apply the above theorem using corollary the conjecture could also hold when r is not regular as the following corollary shows corollary let r be a regular local ring in one of the cases of theorem i r x be a monomial ideal with x xm and b r x then every finitely generated projective b t t tn is free this result holds because b t is a factor of r x t by the monomial ideal ir x t remark if i is not monomial then the conjecture may fail when replacing r by b indeed if b r then there exist finitely generated projective b t of rank one which are not free see now let r m be a regular local ring and f m question quillen is free a finitely generated projective module over rf theorem quillen s question has a positive answer if r is essentially of finite type over a field theorem quillen s question has a positive answer if r contains a field this goes similarly to corollary using theorem instead of theorem remark the paper was not accepted for publication in many journals since the referees said that relies on a theorem that is theorem which is still not recognized by the mathematical community since our paper was quoted as an unpublished preprint in we published it later in the romanian bulletin and it was noticed and quoted by many people see for instance general neron desingularization using artin s methods from ploski gave the following theorem which is the first form of a possible extension of neron desingularization in dim theorem let c x x xm f fs be some convergent power series from c x y y yn and c x n with be a solution of f then the map v b c x y f c x given by y factors through an of type b c x z for some variables z zs that is v is a composite map b b c x using theorem one can get an extension of the above theorem theorem let a m be an excellent henselian local ring its completion b a finite type and v b an then v factors through an of type a z h for some variables z zs where a z h is the henselization of a z m z suppose that b a y y yn if f fr r n is a system of polynomials from i then by the ideal generated by all r denote of the jacobian 
matrix after elkik let be the radical of the x f i b where the sum is taken over all systems of polynomials f ideal f from i with r then bp p spec b is essentially smooth over a if and only if p by the jacobian criterion for smoothness thus measures the non smooth locus of b over a in the linear case we may easily get cases of theorem when dim a lemma let a be a ring and a weak regular sequence of a that is is a divisor of a and is a divisor of let be a flat and set b a f where f then is the radical of and any b factors through a polynomial in one variable proof note that all solutions of f in a are multiples of by flatness any solution of f in is a linear combinations of some solutions of f in a and so again a multiple of let h b be a map given by yi yi then z and so h factors through a z that is h is the composite map b a z the first map being given by z and the second one by z z pn proposition lemma let fi aij yj a yn i r be k k a system of linear homogeneous polynomials and y k yn k p be a complete system of solutions of f fr in a let b br ar and c a solution of f b in a let be a flat and b a yn f b then any b factors through a polynomial in p variables proof let h b be a map given by y y since is flat over a we see that y c is a linear combinations of y k that is there exists z zp p such that y h c zk h y k therefore h factors through a zp that is h is the b a zp where the first map is given by y c zk y k and the second one by z z another form of theorem is the following theorem which is a positive answer to a conjecture of artin theorem let u a be a regular morphism of noetherian rings b an of finite type v b an and d spec b the open smooth locus of b over a then there exist a smooth c and two t b c w c such that v wt and c is smooth over b at d spec c spec b being induced by there exists also a form of theorem recalling us the strong artin approximation property theorem let a m be a noetherian local ring with the completion map a regular b an of finite type and the artin function over associated to the system of polynomials f defining b then there exists a function n n such that for every positive integer c and every morphism v b c there exists a smooth c and two morphisms v t b c w c such that wt is the composite map b c sometimes we may find some information about and so about let a be a discrete valuation ring x a local parameter of a its completion and b a y y yn an of finite type if f fr r n is a system of polynomials from i then we consider a r m of the jacobian matrix let c suppose that there exists an v b and n f i such that v nm x c where for simplicity we write v nm instead of v nm i theorem theorem there exists a c which is smooth over a such that every v b with v v modulo that is v y v y modulo factors through corollary theorem in the assumptions and notation of corollary there exists a canonical bijection for some s v homa b v v modulo let k be a field and f a of finite type let us say f k u u un an arc spec k x spec f is given by a f k x assume that this happens for example when f is reduced and k is perfect set a k x x b a f let f fr r n be a system of polynomials from j and m a r of the jacobian matrix let c assume that there exists an g f and n f j such that g nm x c note that a induces a bijection homk f homa b by adjunction corollary corollary the set g homk f g g modulo is in bijection with an affine space over for some s next we give a possible extension of greenberg s result on the strong artin approximation property let a m be a local ring for example a 
reduced ring of dimension one the completion of a b a y y yn an of finite type and c e suppose that there exists f fr in i a m of the jacobian matrix n f i and an v b such that v mn me then we may construct a general neron desingularization in the idea of theorem which could be used to get the following theorem theorem popescu there exists an v b such that v v modulo mc that is v y i v y i modulo mc moreover if a is also excellent henselian there exists an v b a such that v v modulo mc remark the above theorem could be extended for noetherian local rings of dimension one see in this case the statement depends also on a reduced primary decomposition of in a using we end this section with an algorithmic attempt to explain the proof of theorem in the frame of noetherian local domains of dimension one let u a be a flat morphism of noetherian local domains of dimension suppose that a q and the maximal ideal m of a generates the maximal ideal of then u is a regular morphism moreover we suppose that there exist canonical inclusions k a k such that u k k if a is essentially of finite type over q then the ideal can be computed in singular by following its definition but it is easier to describe only the ideal p f i b defined above this is the case considered in our algorithmic part f let us say a k x for some variables x xm and the completion of a is k f when v is defined by polynomials y from k x then our problem is easy let l be the field obtained by adjoining to k all coefficients of y then r l x f is a subring of containing im v which is essentially smooth over a then we may take b as a standard smooth such that r is a localization of b consequently we suppose usually that y is not in k x we may suppose that v indeed if v then v induces an amorphism v b and we may replace b v by b v applying this trick several times we reduce to the case v however the fraction field of im v is essentially smooth over a by separability that is him and in the worst case our trick will change b by im v after several steps choose p f i for some system of polynomials f fr from i and v p a moreover we may choose p to be from then v p z v a m f i where m is a r of for some z set b z where z and let be the map of given by z z it follows that f i and then d p modulo i for p p z replace b and the jacobian matrix j will be now the new j given j by thus we reduce to the case when d a but how to get d with a computer if y is not polynomial defined over k then the algorithm is complicated because we are not able to tell the computer who is y and so how to get we may choose an element a m and find a minimal c n such that ac v m this is possible because dim a set ac it follows that v m v m and so v m that is v m z for some z certainly we can not find precisely z but later it is enough to know just a kind of truncation of it modulo thus we may suppose that there exist f fr r n a system of polynomials from i a r m of the jacobian matrix n f i such that d p mn modulo i we may assume that m det i r set u b clearly is a regular morphism of artinian local rings and it is easy to find a general neron desingularization in this frame thus there exists a c which is smooth over such that factors through moreover we may suppose that c u for some polynomials k u which are not in m u note that k a then d a u is smooth over a and u factors through usually v does not factor through d though factors through c n let y d be such that the composite map c is given by y y thus i y modulo we have d p modulo i and so p y d modulo thus p y ds for a 
certain s d with s modulo let h be the n obtained by adding down to as a border the block let be the adjoint matrix of h and g we have gh hg nmidn p idn and so dsidn p y idn g y h y set h s y y dg y t where t tn are new variables since s y y dg y t modulo h and f y f y x y yj j j modulo higher order terms in yj by taylor s formula we see that for p maxi deg fi we have sp f y sp f y dp y t q modulo h where q t d t r this is because g p idr we have f y b for some b dd r set gi sp bi sp ti qi i r then we may take b to be a localization of d y t i h g remark an algorithmic proof in the frame of all noetherian local rings of dimension one is given in references cinq exposes sur la desingularisation handwritten manuscript ecole polytechnique federale de lausanne artin on the solutions of analytic equations invent artin algebraic approximation of structures over complete local rings publ math ihes artin constructions techniques for algebraic spaces actes congres intern t artin versal deformations and algebraic stacks invent artin algebraic structure of power series rings contemp math ams artin denef smoothing of a ring homomorphism along a section arithmetic and geometry vol ii boston basarab nica popescu approximation properties and existential completeness for ring morphisms manuscripta math bhatwadeckar rao on a question of quillen trans amer math cipu popescu some extensions of neron s and approximation rev roum math pures et cipu popescu a desingularization theorem of neron type ann univ ferrara decker greuel pfister singular a computer algebra system for polynomial elkik solutions d equations a coefficients dans un anneaux henselien ann sci ecole normale greenberg rational points in henselian discrete valuation rings publ math ihes grothendieck dieudonne elements de geometrie algebrique iv part publ math ihes kashiwara vilonen microdifferential systems and the conjecture ann of kurke mostowski pfister popescu roczen die approximationseigenschaft lokaler ringe springer lect notes in york lam serre s conjecture springer lect notes in berlin lindel on the conjecture concerning projective modules over polynomial rings invent modeles minimaux des varietes abeliennes sur les corps locaux et globaux publ math ihes pfister popescu die strenge approximationseigenschaft lokaler ringe inventiones math pfister popescu constructive general neron desingularization for one dimensional local rings in preparation ploski note on a theorem of artin bull acad polon des xxii popescu popescu a method to compute the general neron desingularization in the frame of one dimensional local domains arxiv popescu a strong approximation theorem over discrete valuation rings rev roum math pures et popescu algebraically pure morphisms et popescu general neron desingularization nagoya math popescu general neron desingularization and approximation nagoya math popescu polynomial rings and their projective modules nagoya math popescu letter to the editor general neron desingularization and approximation nagoya math popescu artin approximation in handbook of algebra vol ed hazewinkel elsevier popescu variations on desingularization in sitzungsberichte der berliner mathematischen gesselschaft berlin popescu on a question of quillen bull math soc sci math roumanie popescu around general neron desingularization arxiv popescu roczen indecomposable modules and irreducible maps compositio math quillen projective modules over polynomial rings invent rond sur la de la fonction de artin ann sci ecole norm rotthaus rings with the 
property of approximation math spivakovski a new proof of popescu s theorem on smoothing of ring homomorphisms amer math steenpass algorithms in singular parallelization syzygies and singularities phd thesis kaiserslautern swan desingularization in algebra and geometry ed kang international press cambridge teissier resultats recents sur l approximation des morphismes en algebre commutative d apres artin popescu et spivakovski sem bourbaki vorst the serre problem for discrete hodge algebras math dorin popescu simion stoilow institute of mathematics of the romanian academy research unit university of bucharest bucharest romania address
| 0 |
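The survey that ends above (Popescu, on the Artin approximation property and General Néron desingularization) has lost its displayed formulas to text extraction. As a hedged reconstruction of its two central statements — standard formulations matching the prose of that row, with my own choice of notation (a hat for the completion of A, a gothic m for the maximal ideal, y^(c) for the approximating solution), not the author's exact typesetting:

```latex
% Artin approximation property of a noetherian local ring (A, \mathfrak{m}):
% for every polynomial system f = (f_1, ..., f_q) in A[Y], Y = (Y_1, ..., Y_n),
% every solution \hat{y} of f = 0 in the completion \widehat{A}, and every c,
% there is a solution over A agreeing with \hat{y} up to order c:
\[
  f(\hat{y}) = 0,\ \hat{y} \in \widehat{A}^{\,n}
  \;\Longrightarrow\;
  \exists\, y^{(c)} \in A^{n}\colon\quad
  f\bigl(y^{(c)}\bigr) = 0,
  \qquad y^{(c)} \equiv \hat{y} \pmod{\mathfrak{m}^{c}} .
\]

% General Neron desingularization (Popescu, Teissier, Swan, Spivakovsky):
% if u : A -> A' is a regular morphism of noetherian rings and B is an
% A-algebra of finite type, then every A-morphism v : B -> A' factors
% through a smooth A-algebra C:
\[
  v \colon B \longrightarrow C \longrightarrow A',
  \qquad C \ \text{smooth over } A .
\]
```

As the introduction of that row explains, the first statement follows from the second: replacing B by the smooth factor C, a suitable Jacobian minor becomes invertible and the implicit function theorem then produces a solution over A congruent to the formal one modulo the prescribed power of the maximal ideal.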
rank three geometry and positive curvature jul fuquan fang karsten grove and gudlaugur thorbergsson abstract an axiomatic characterization of buildings of type due to tits is used to prove that any cohomogeneity two polar action of type on a positively curved simply connected manifold is equivariantly diffeomorphic to a polar action on a rank one symmetric space this includes two actions on the cayley plane whose associated type geometry is not covered by a building the rank or size of a coxeter matrix m coincides with the number of generators of its associated coxeter system the basic objects in tits local approach to buildings are the chamber systems c of type m see also ro indeed if any spherical residue subchamber system of c of rank is covered by a building so is c recall that a polar g action on a riemannian manifold m is an isometric action with a socalled section an immersed submanifold of m that meets all g orbits orthogonally since the action by the identity component of g is polar as well we assume throughout without stating it that g is connected it is a key observation of fgt that the study of polar g actions on positively curved manifolds m in essence is the study of a certain class of connected chamber systems c m g moreover when the universal tits cover of c m g is a building it has the structure of a compact spherical building in the sense of burns and spatzier bsp this was utilized in fgt to show theorem a any polar g action of cohomogeneity at least two on a simply connected closed positively curved manifold m is equivariantly diffeomorphic to a polar g action on a rank one symmetric space if the associated chamber system c m g is not of type we note here that when the action has no fixed points the rank of c m g is dim g one more than the cohomogeneity of the action in the above theorem the cayley plane emerges only in cohomogeneity two and when g has fixed points moreover there are indeed chamber systems with type m whose universal cover is not a building see ne fgt ly kl and below in our case a polar g action on m is of type if and only if its orbit space g is a geodesic with angles and our aim here is to take care of this exceptional case and prove theorem b any polar g action on a simply connected positively curved manifold m of type is equivariantly diffeomorphic to a polar action on a rank one symmetric space this the first author is supported in part by an nsfc grant and he is grateful to the university of notre dame for its hospitality the second author is supported in part by an nsf grant a research chair at the hausdorff center at the university of bonn and by a humboldt research award the third author is grateful to the university of notre dame and the capital normal university in beijing for their hospitality fuquan fang karsten grove and gudlaugur thorbergsson includes two actions on the cayley plane where the universal covers of the associated chamber systems are not buildings combining these results of course establishes the corollary any polar g action of cohomogeneity at least two on a simply connected closed positively curved manifold m is equivariantly diffeomorphic to a polar g action on a rank one symmetric space this is in stark contrast to the case of cohomogeneity one where in dimensions seven and thirteen there are infinitely many manifolds even up to homotopy the classification work in gwz also lead to the discovery and construction of a new example of a positively curved manifold see de and gvz by necessity as indicated above the proof of 
theorem b is entirely different from the proof of theorem a in general the geometric realization of our chamber systems c m g utilized in the proof of theorem a are not simplicial however in fgt it was proved that in fact theorem the geometric realization m g of a chamber system c m g of type or associated with a simply connected polar m is simplicial when the geometric realization of a chamber system of type m is simplicial it is called a tits geometry of type this allows us to use an axiomatic characterization of geometries that are buildings see proposition so rather than considering the universal cover m g directly we construct in all but two cases a suitable cover of c m g possibly c m g itself and prove that it satisfies the building axiom of tits the two cases where this methods fails are then recognized as being equivalent to two type polar actions on the cayley plane cf pth gk we note that since all our chamber systems c m g are homogeneous and those of type and are tits geometries an independent alternate proof of theorem b follows from kl preliminaries the purpose of this section is threefold while explaining the overall approaches to the strategies needed in the proof of theorem b we recall the basic concepts and establish notation throughout g denotes a compact connected lie group acting on a closed connected positively curved manifold m in a polar fashion and of type fix a chamber c in a section for the action then c is isometric to the orbit spaces g and where w is the reflection group of and w acts simply transitively on the chambers of since the action is of type c is a convex positively curved with geodesic sides faces and opposite its vertices r t and q with angles and respectively by the reconstruction theorem of gz recall that any polar g manifold m is completely determined by its polar data in our case this data consist of g and all its isotropy groups together with their inclusions along a chamber c cf also lemma in go we denote the principal isotropy group by h and the isotropy groups at vertices and opposite faces by gr gt gq and respectively what remains after removing g from this data will be referred to as the local data for the action rank three geometry and positive curvature with two exceptions it turns out that only partial data are needed to show that the action indeed is equivalent to a polar action on a rank one symmetric space since the data in the two exceptional cases coincide with those of the exceptional actions on the cayley plane this will then complete the proof of theorem a in addition it is worth noting that since the groups g derived from those data in and are maximal connected subgroups of the identity component of the isometry group of the cayley plane their actions are uniquely determined and turn out to be polar the proof of theorem a in all but the two exceptional cases is based on showing that the universal cover of the chamber system c c m g associated to the polar action is a spherical tits building fgt here the homogeneous chamber system c m g is the union g c of all chambers with three adjacency relations one for each face specifically c and c are i adjacent if their respective i faces are the same in this chamber system with the thin topology induced from the its path metric is a simplicial complex by theorem c and hence c m g is a geometry as indicated the fundamental theorem of tits used in fgt to show that is a building yields nothing for rank three chamber systems as well as rank three geometries instead we will show that c or a 
cover we construct of c is a building and hence simply connected by verifying an axiomatic incidence characterization see section of such buildings due also to tits the construction of chamber system covers we utilize is equivalent in our context to the principal bundle construction of gz theorem for coxter polar actions and manifolds specifically for our case given the data h g j i j t r q and g for m g the data for p l g consists of graphs j in l g of compatible homomorphisms from h g j i j t r q to in particular the local data for p l g are isomorphic to the local data for m g clearly l acts freely as a group of automorphisms and c p l g c m g cb m g c p l g is a chamber system covering of c m g in our case l will be or in one case basic tools and obstructions the aim of this section is to establish a number of properties and restrictions of the data to be used throughout unless otherwise stated g will be a compact connected lie group and m a closed simply connected positively curved manifold without any curvature assumptions we have the possibly well known lemma orbit equivalence let m be a simply connected polar g manifold then the slice representation of any isotropy group is orbit equivalent to that of its identity component proof recall that the slice representation of an isotropy group k g p g restricted to the orthogonal complement t of the fixed point set of k inside the normal space to the orbit g p is a polar representation clearly the finite group acts isometrically on the orbit space s t which is isometric to a chamber c of the polar action on the sphere s t since c is convex with boundary its soul point the unique point at maximal distance to fuquan fang karsten grove and gudlaugur thorbergsson the boundary is fixed by this soul point however corresponds to a principal orbit and hence to an exceptional k orbit unless acts trivially on however by theorem at there are no exceptional orbits of a polar action on a simply connected manifold because of this when subsequently talking casually about a slice representation we refer to the slice representation of its identity component unless otherwise stated using positive curvature the following basic fact was derived in fgt theorem lemma primitivity the group g is generated by the identity components of the face isotropy groups of any fixed chamber naturally the slice representations of gt gq and gr play a fundamental role we denote the respective kernels of these representations by kt kq and kr and their quotients by and since in particular the slice representation of gt is of type it follows that the multiplicity triple of the polar g manifold m the dimensions of the unit spheres in the normal slices along the edges is d d k where d or for the kernels kt and kq which are usually large groups we have lemma slice kernel let m be a simply connected polar of type if g acts effectively then the kernel kt respectively kq acts effectively on the slices t and t respectively t and t proof note that kt fixes all sections through t since kt acts trivially on the slice t we must prove that kt kq kt kr and kq kr we consider only kt kq since the arguments for the remaining cases are similar note that since g is assumed to act effectively on m and kt kq is contained in the principal isotropy group it suffices to prove that kt kq is normal in by the primitivity see g q pt i where pq gq is the quotient homomorphism and is the identity component of and similarly for pt thus it suffices to show that kt kq is normal in each of t and pq in each case 
assuming the effective vertex isotropy group is connected does not alter the proof only simplifies notation accordingly we proceed to assume that is connected and will show that kt kq is a normal subgroup of gt note that kt kq is a normal subgroup of kt acting trivially on both the slices t and t by assumption the quotient map gt is surjective when restricted to the identity component of gt a finite central cover of is isomorphic to the product t is locally isomorphic where is locally isomorphic to the identity component of kt and t covering where to in particular gt contains a connected and closed subgroup t t commutes with is the cover map moreover every element of the subgroup t the conjugation by h gives rise to an the elements in on the other hand for every h t element in the automorphism group aut kt since kt is normal hence defines a homomorphism aut k since has a trivial image in aut k under the forgetful t t t homomorphism aut kt aut the group is finite and hence trivial because is connected this implies that the elements of commute with the elements of t t rank three geometry and positive curvature i and k k is normal in k it then follows that k k is a normal kt since gt hkt t t q t t q subgroup of gt as mentioned above the same arguments show that kt kq is normal in t in case is not connected the same arguments also show that kt kq is normal in q remark it turns out that in all cases is connected in fact this is automatic whenever d since acts transitively on a projective plane up to local isomorphism its identity component is one of the groups so su sp or corresponding to d and respectively and the slice representation is its standard polar representation of type see also table in view of the transversality lemma below gt is connected whenever k in the case the connectedness of gr again by lemma implies that also in this case is connected see proposition the following simple topological consequence of transversality combined with the fact that the canonical deformation retraction of the orbit space triangle minus any side to its opposite vertex lifts to m or alternatively of the work wie will also be used frequently lemma transversality given a multiplicity triple d d m then the inclusion maps g gr m g gq m and g m are g m and g m are min d m connected and g gt m is recall here that a continuos map is said to be k connected if the induced map between the ith homotopy groups is an isomorphism for i k and a surjection for i another connectivity theorem theorem using positive curvature la synge is very powerful lemma wilking let m be a positively curved and n a totally geodesic closed codimension k submanifold then the inclusion map n m is n connected if in addition n is fixed by an isometric action of a compact lie group k with principal orbit of dimension m k then the inclusion map is n m k connected we conclude this section with two severe restrictions on g stemming from positive curvature the first follow from the well known synge type fact that an isometric tk action has orbits with dim in odd dimensions and in even dimensions when m has positive curvature cf su in particular since gq has maximal rank among the isotropy groups and the euler characteristic g gq if and only if rk g rk gq hs page we conclude lemma rank lemma the dimension of m is even if and only if rk g rk gq and otherwise rank rk g rk gq when adapting wilking s isotropy representation lemma from for positively curved g manifolds to polar manifolds of type we obtain lemma sphere transitive subrepresentations 
let li i q r t be a simple normal subgroup and u an irreducible isotropy subrepresentation of g then u li is isomorphic to a standard defining representation in particular li acts transitively on the sphere s u fuquan fang karsten grove and gudlaugur thorbergsson proof let u be an irreducible isotropy subrepresentation of g not isomorphic to a summand of the slice representation of li on t by u is isomorphic to a summand of the isotropy representation of where is a vertex isotropy group on the other hand the almost effective factor of is well understood cf the tables and which are all the standard defining representation the desired result follows the building axiom recall that tits has provided an axiomatic characterization of buildings of irreducible type m when the geometric realization c with the thin topology of the associated chamber system c is a simplicial complex this characterization is given in terms of the incidence geometry associated with c the purpose of this section is to describe this characterization when m and translate it to our context here by definition vertices x y are incident denoted x y if and only if x and y are contained in a closed chamber of clearly the incidence relation not an equivalence relation is preserved by the action of g in our case to describe the needed characterization we will use the following standard terminology the shadow of a vertex x on the set of vertices of type i i denoted shi x is the union of all vertices of type i incident to x following tits when m we call the vertices of type q r and t points lines and planes respectively we denote by q r and t the set of points lines and planes in c m g notice that g acts transitively on q r and t with this terminology the axiomatic characterization cf proposition and the proof of the case on alluded to above states theorem axiom a connected tits geometry of type is a building if and only if the following axiom holds ll if two lines are both incident to two different points they coincide equivalently if shq r shq has cardinality at least two then r or for any q q with q shr q shr has cardinality at most one in our case if r r and q q are incident ll is clearly equivalent to for any gq r r we have gr q q q or for any gr q q we have gq r r r we proceed to interpret ll in terms of the isotropy groups data this will be used either directly for c m g or for a suitably constructed cover m g as described at the end of section for notational simplicity we will describe it here only for c m g for the general case see remark below rank three geometry and positive curvature proposition if c m g is a building of type then the following holds for any pair of different points q q both incident to an r r we have gq grq where grq denotes the isotropy group of the unique edge between r and q cf theorem c proof note that every line in the orbit gq r is incident to both q and axiom ll implies that the orbit contains only one line r and hence gq gr since c m g is a building we have gr gq grq and gr the desired result follows we will see that the condition together with an assumption on a suitable reduction of the g action implies that c m g is a building of type to describe the reduction let r r be a line and let q be the normal sphere in the summand in the slice t then the shadow of r in q is exp q moreover the isotropy group gr acts transitively on q let kr q denote the identity component of the kernel of the transitive gr action on q it is clear that the fixed point connected component m kr q containing r is a 
cohomogeneity one kr q submanifold of m where kr q is the identity component of the normalizer n kr q of kr q in the corresponding chamber system denoted c m kr q is a subcomplex of c m c m g that inherits an incidence structure which gives rise to a tits geometry of rank lemma reduction the connected chamber system c m g of type is a building if for any r r the reduction c m kr q is a and holds proof if not by axiom ll there are two points q q which are both incident to two different lines r by we know that gq grq and gq q therefore the configuration rq q is contained in the fixed point set m gq since by definition clearly kr q is a subgroup of gq we have that m gq m kr q this implies that there is a length circuit in the building c m kr q a contradiction the following technical criterion will be more useful to us lemma building criterion the connected chamber system c m g is a building if for any r r the reduction c m kr q is a and the following property p holds p for any q sh q r and any lie group l with kr q l gq but l grq the normalizer n kr q l is not contained in grq either proof by the previous lemma it suffices to verify suppose is not true then there is an r r and a pair of points q both incident to r such that gq is not a subgroup of grq let l gq by assumption p there is an n kr q l so that grq however gr gq kr q grq kr q since m kr q is an building in particular gr and so there is a length circuit rq r r r in the building c m kr q a contradiction remark for an cover c p g of c m g constructed as above note that the property is inherited from m g likewise the group being the graph of the fuquan fang karsten grove and gudlaugur thorbergsson homomorphism gr to restricted to k kr q satisfies property p when k does for this note that by construction the local data for the reduction are isomorphic to the local data for m k it then follows as in the proofs above that if a component of the reduction c c p g is a then the corresponding component of will be a building covering c m g and our main result theorem from the fgt applies remark if q kr q k is a subgroup then the assumption of c m k being a building in the above criterion may be replaced by the fixed point component c m k c m k being a building or a rank building for the latter we notice that by cl theorem a rank spherical building is a cat space hence any two points of distance less than are joined by a unique geodesic this clearly excludes a length circuit in the above proof since its perimeter is remark note that clearly kt and similarly for the other kernels of vertex and edge isotropy groups in particular for the identity component of kt we have k where k kr q is the identity component of the kernel of gr acting on sd consequently the reduction m k is a cohomogeneity two manifold of type either or containing the cohomogeneity one manifold m k cf above classification outline and organization the subsequent sections are devoted to a proof of the following main result of the paper theorem let m be a compact simply connected positively curved polar with associated chamber system c m g of type then the universal cover of c m g is a building if and only if m g is not equivariantly diffeomorphic to one of the exceptional polar actions on by g su su or g so this combined with the main result of fgt proves theorem b in the introduction the purpose of this section is to describe how the proof is organized according to four types of scenarios driven by the possible compatible types of slice representations for gt and gq at the 
vertices t and q of a chamber the common feature in each scenario and all cases is the determination of all local data the basic input for this is indeed knowledge of the slice representations at the vertices t and q of a chamber c and lemma the local data identifies the desired k gr reduction m k with its cohomogeneity one action by n k referred to in the building criteria lemma with property p being essentially automatic the main difficulty is to establish that c m k c m g or the corresponding reduction in a cover which by construction has the same local data is a building the first step for this frequently uses the following consequence of the classification work on positively curved cohomogeneity one manifolds in gwz and ve lemma any simply connected positively curved cohomogeneity one manifold with multiplicity pair different from and is equivariantly diffeomorphic to a rank one symmetric space as already pointed out and used there are only four possible effective slice representations at t in particular forcing the codimensions of the orbit strata corresponding to and to rank three geometry and positive curvature be d d and k where d or in table respectively h are the singular respectively principal isotropy groups for the effective slice representation by restricted to the unit sphere and are the codimensions of the singular orbits n so psu h w s o o s o o ad s u u s u u sp sp sp sp sp sp spin spin spin table effective representations on sn similarly see table the identity component of possible effective type slice representations at q which are compatible with the multiplicity restrictions in table are known as well see table e of gwz in which we have corrected an error for the exceptional so spin representation see also gkk main theorem n k sp sp k even su su sp sp k su su k sp sp su k h sp sp k su k w su su k su k su k k odd su su k k k even u su zk u su so so u su k zk su su k so zk t zk s so so k so k so k k k so so k so k so k k odd so so k so so su su su so spin so su su so ad u so so su sp su su su u s sp s su su s su table effective representation on sn aside from a few exceptional representations they are the isotropy representations of the grassmannians k of in where k r c or the pairs of multiplicities fuquan fang karsten grove and gudlaugur thorbergsson that occur for the exceptional representations are corresponding to so spin so su u or so note that effectively there are only four exceptional gq slice representations corresponding to the last four rows of table however special situations occur also when the slice representation of is the isotropy representation of the real grassmann manifold when its multiplicity k happen to have k d or we will refer to these as flips as may be expected the low multiplicity cases and play important special roles the latter two are where the exceptional cayley plane emerges the only cases where complete information about the polar data are required accordingly we have organized the proof of into four sections depending on the type of slice representations we have along q three grassmann flips three grassmann series two non minimal two minimal grassmann representations and four exceptional representations grassmann flip gq slice representation this section will deal with the multiplicity cases d d with d and leaving d minimal and odd for section we have the following common features lemma the isotropy groups gq and gr are connected and the reducible slice representation on sd is the standard action by so so d for the kernels of the 
slice representations we have that kt kq and kr proof the transversality lemma implies that the orbits q g q and r g r are simply connected since m is in particular gq and gr are connected since g is the second claim follows since d is even cf appendix in fgt for a description of reducible polar representations since cf table as well as act effectively on the respective normal spheres d s we see that kq and kr also since kt we have kt kq kt but kt kq by the kernel lemma and hence kt recall that k is the identity component of the kernel of the gr action restricted to sd lemma clearly and k acts transitively on the corresponding normal sphere with kernel identity component of kr moreover k kq and hence k gq is injective the reduction m k is a positively curved irreducible cohomogeneity one k manifolds with multiplicity pair d proof note that k kq acts trivially on sd so k kq kr the second claim follows since kr kq since k is injective we see from table that n k gq n k s and hence m k is cohomogeneity one with multiplicity pair d to complete the proof assume by contradiction that the action is reducible that the action by k on m k is equivalent to the sum action of so so d on sd where the isotropy k q is so so d in all cases it is easy to see that the center of gq intersects the center of k in a nontrivial subgroup this together with primitivity implies that rank three geometry and positive curvature is in the center of notice that as a subgroup of gq can not be in kq because kq h and the factor so acts freely on the unit sphere of the slice t thus the fixed point set m s coincides with the orbit g q g gq from the classification of positively curved homogeneous spaces we get immediately that g is the product of or if d with one of a few orthogonal groups or unitary groups each of which is not big enough to contain the simple group gt the desired result follows although what remains is in spirit the same for all the flip cases we will cary out the arguments for each case individually beginning with d proposition in the flip case c m g or an covering is a building with the isotropy representation of as a linear model proof from lemma and tables and we obtain the following information about the local data gt spin h spin spin gq spin spin and gr spin also gr gq and from lemma and lemma we see that the corresponding reduction m k is or with the tensor product representation by so so of type or induced by it it is easily seen that the assumption p in lemma is satisfied as well in particular if m k the associated chamber system c m k is the a building of type and by lemma we conclude that c m g is a building for the latter two cases we will use the bundle construction for polar actions to obtain a free covering of c m g guided by our knowledge of the cohomogeneity one diagrams data for the cohomogeneity one manifolds or we proceed as follows note that since gt and are simple groups only the trivial homomorphism to exists now let be the graphs of the projection homomorphisms gq and gr we denote the total space of the corresponding principal bundle over m by then p is a polar g manifold and c p g covers c m g let be the graph of k in from and our choice of data in g it follows that m k is the hopf bundle if m k and the bundle if m k in the former case c is the building c so so and we are done by lemma via in the latter case the action on the reduction is not primitive so c is not connected however each connected component is the building c so so and hence by the corresponding component of c p is a 
building covering c m when combined with the previous section this in turn shows that m k can not be a lens space when m is simply connected proposition in the flip case c m g or an covering is a building with the isotropy representation of so u as a linear model proof from lemma and tables and we obtain the following information about the local data modulo a common kernel gt sp sp h sp sp sp sp gq spin sp spin sp and gr spin sp in this case gr sp gq and from lemma and lemma we see that the corresponding reduction m k is or with the linear tensor product fuquan fang karsten grove and gudlaugur thorbergsson representation by so so of type or induced by it it is easily seen that the assumption p in lemma is satisfied as well in particular if m k the associated chamber system c m k is the a building of type and by lemma we conclude that c m g is a building if m k or a lens space we proceed as above with an bundle construction again only the trivial homomorphism to exists from gt and and we choose to be the graphs of the projection homomorphisms gq and gr we denote the total space of the corresponding principal bundle over m by as above p is a polar g manifold and c p g covers c m g from and our choice of data in g it follows that m k is the hopf bundle if m k and the bundle if m k the proof is completed as above proposition in the flip case c m g or an covering is a building with the isotropy representation of su s u u as a linear model proof we begin by verifying our earlier claim see that is connected also in this case from we already know that gr and hence is connected and that its slice representation is the product action of so so on the singular isotropy group along away from origin is so hence the isotropy group so on the other hand suppose is not connected then by gt psu and s u u in particular the slice representation along is by psu acting on where acts by complex conjugation contradicting so the above and tables and yield the following information about the local data modulo the kernel gt su h s u u u s u u u gq u u moreover and gr u where the u factor in gr is the face isotropy group of here gr gq and from lemma and lemma we see that the corresponding reduction m k is or with the linear tensor product representation by so so of type or induced by it again the assumption p in lemma is easily checked to hold in particular if m k we conclude as above that c m g is a building for the latter two cases we are again guided by the reduction for our bundle construction for we have no choice but gt we let be the graph of the homomorphism u u defined by sending a b to det a det b and the graph of the projection homomorphism gr u this yields a compatible choice of data for a polar g action on a principal bundle p over m whose corresponding chamber system c p g is a free cover of c m g again from and our choice of data in g it follows that m k is the hopf bundle if m k and the bundle if m k and the proof is completed as above remark the tensor representation of su su on is not polar but it is polar on the projective space p on the other hand it is necessary in the above construction rank three geometry and positive curvature of the covering that both gq and gr have factors since the face isotropy groups u which are subgroups in gt su hence a compatible homomorphism to will be trivial on the face isotropy groups non minimal grassmann series for gq slice representation recall that there are three infinite families of cases k k k and k corresponding the real complex and quaternion grassmann 
series for the gq slice representation we point out that is special in two ways there are two scenarios one of them corresponding to the flip case of d not covered in the previous subsection the other being standard yet the standard does not appear as a reduction in any of the general cases k k for the case there are two scenarios as well both with the same local data one of them belonging to the family the other not moreover each of the cases with k admit a reduction to the flip case whereas does not for the reasons just provided this subsection will deal with the multiplicity cases k k k and k each of which has a uniform treatment although the case is significantly different from the other general cases to be treated here we begin by pointing out some common features for all the cases k k k and k including the case to describe the information we have about the local data in a uniform fashion we use gd k to denote so k su k and sp k k according to d and d with the exceptional convention that or depending on whether the center of kt is finite or not and also we use the symbol to mean isomorphic up to a finite connected covering lemma in all cases gt is connected as are gq and gr when d moreover kt gd k with the additional possibility that kt gd k when d for the q and r vertex isotropy groups we have gq gd gd k gd gr gd gd k gd moreover the normal subgroup k gr is gd k gd where gd k is a block subgroup of gd k gq and if d gd denotes a nontrivial extension in particular gq s o o k proof the connectedness claim is a direct consequence of transversality the proof follows the same strategy in all cases just simpler when all vertex isotropy groups are conneceted the two possibilities for gt when d correspond to the different rank possibilities for cf table for these reasons we only provide the proof in the most subtle case of d first notice that the effective slice representation so on t is of type with principal isotropy group hence h is an extension of by the kernel kt on the other hand so so k cf table and o o k up to a possible quotient by a diagonal in the center if k is even therefore h is also an extension of so k so k or so k by kq this together with lemma implies that kt so k and hence gt so so k in particular h so k we conclude that o so k and similarly o so k acting on the normal sphere with principal isotropy group thus so k kt since kq fuquan fang karsten grove and gudlaugur thorbergsson we get easily that kq or since kq kt on the other hand as a subgroup of gq o so k hence gq contains exactly two connected components whose identity component is so so k all in all it follows that gq s o o k the rest of the proof is straightforward note that kt contains gd k as a normal subgroup the fact that the reduction m gd k with the action by the identity component of its normalizer gd k will give a geometry of type or will play an important role in the d cases below cf in what follows we will consider the reduction m k by gd k gd k gd k gr rather than the one by lemma the cohomogeneity one n manifold m k has multiplicity pair d and the action is not equivalent to the reducible cohomogeneity one action on sd proof for simplicity we give a proof for d all other cases are the same first note that the orbit space of the cohomogeneity one n is rq and the two singular isotropy groups mod kernel are su and su respectively with principal isotropy group hence the multiplicity pair is to prove that it is not reducible we argue by contradiction indeed if m k is equivariantly diffeomorphic to with the 
product action of su u it follows that the normal subgroup su gq is also normal in n by primitivity g hgr gq i hn gq i and hence su is normal in on the other hand the face isotropy group gt contains a subgroup su which sits as su gq therefore the projection homomorphism p g su is an epimorphism on su however since it sits in su gt it must be trivial because any homomorphism from su to su is trivial a contradiction when d this is not immediately of much help since there are several positively curved irreducible cohomogeneity one manifolds with multiplicity pair cf tables a and e in gwz whose associated chamber system is not of type however when d respectively d corresponding to multiplicity pairs respectively we read off from the classification in gwz that corollary the universal covering of m k is equivariantly diffeomorphic to a linear action of type on or when d and on when d we are now ready to deal with each family individually beginning with d with the standard k case where the almost effective slice representation at q q is the defining tensor product representation of so so k proposition in the standard k case with k the associated chamber system c m g is a building with the isotropy representation of so k so so k as a linear model proof by lemma kt so k which is a normal subgroup of the principal isotropy group consider the reduction m kt with the action of its normalizer n kt once again a polar action with the same section by lemma it is clear that the identity component of n kt gq is hence the subaction by kt the identity component of n kt is of type with a rank three geometry and positive curvature right angle at q therefore from the classification of geometries cf section in fgt it is immediate that the universal cover of m kt is equivariantly difffeomorphic to with the linear action of so so in particular if the section then m kt and the chamber complex for the subaction is a building of type and we are done by remark since property p is clearly satisfied for k so k gr it remains to prove that m kt is simply connected consider the normal subgroup so gq and the fixed point component m so a homogeneous manifold of positive curvature with dimension at least two since m so m kt m kt is of dimension since the identity component of the isotropy group gq so so k we see that m so or according to m kt m so or equivalently according to m kt or we argue by contradiction if m so then the identity connected component of the normalizer n so acts transitively on it with principal isotropy group so o k gq hence gq so o k a contradiction since gq s o o k proposition in the standard case with k the chamber system c m g is covered by a building with the isotropy representation of u k u k u as a linear model proof first note that the reduction m su k where su k kt k is a positively curved cohomogeneity two manifold of type with multiplicity triple moreover su k is a block subgroup in k where su k su k gq and of course m k m su k we will prove that both reductions above are simply connected by appealing to the nectivity lemma of wilking to do this we now proceed to prove that codimm k m su k and codimm su k m by the spherical isotropy lemma every irreducible isotropy subrepresentation of su k is the defining representation from table b in gwz and the above fact that su k it follows that there is a simple normal subgroup l g such that su k g q projects to a block subgroup of l where l su n if k l su n or so n if k and finally l su n so n or one of the exceptional lie groups if k on the other hand 
by the flip proposition the normalizer n su k is either su su or u su modulo kt since su k kt is a block subgroup in this together with the above implies that in fact l su k for all k and only one such factor exist in particular the representation along contains exactly copies of one copy along the normal slice t and two copies along the orbit g therefore the codimension of m k in m is k and hence the codimension of m su k in m is by the connectivity lemma of wilking we conclude that m m su k for i by induction on in particular m su k is simply connected and hence if dim m is odd and if dim m is even by the flip proposition since assumption p in lemma is satisfied we conclude from that c m g is a building if dim m is odd it remains to prove that c m g is covered by a building if dim m is even in this case by the above we know that m m su k z on the other hand from the transversality lemma it follows that m g gt and hence gt contains at least an in its center su u k gt by lemma we get that both gq and gr have at least a factor and we are now in the same situation as in the proof of lemma above as a consequence we fuquan fang karsten grove and gudlaugur thorbergsson can proceed with the same construction of a principal bundle p over m and conclude that its associated chamber system is a building covering c m g proposition in the standard case where k the chamber system c m g is a building with the isotropy representation of sp k sp k sp as a linear model proof since the assumption p for sp k in lemma is easily seen to be satisfied it suffices by corollary to prove that m k is simply connected as in the proof of the general case above this is achieved via wilkings connectivity lemma consider the normal subgroup sp gq it is clear that m sp is a homogeneous space with a transitive action by the identity component of its normalizer sp with isotropy group gq by the classification of positively curved homogeneous spaces we get that m sp is either or moreover the universal cover sp is sp k sp sp and in particular has the same rank as g by the rank lemma on the other hand by lemma and table b in gwz it follows that g contains a normal subgroup isomorphic to sp n so that sp k sp k sp n is in a chain of block subgroups up to a finite cover we let g sp n on the other hand by corollary we know that sp sp this together with the information on sp implies that g sp k as in the proof of the case we see that the isotropy representation of along contains exactly three copies of one copy along the normal slice t and two copies along the orbit g in particular the codimension of m k in m is k recalling that the dimension of m k is it follows again by connectivity and induction on k as before that m k is simply connected minimal grassmann gq slice representation this section will deal with the multiplicity cases and including the appearance of an exceptional cayley plane action in all previous cases all reductions considered have been irreducible polar actions here however we will encounter reductions that are reducible cohomogeneity two actions and we will rely on the independent classification of such actions in sections and of fgt we begin with the d case where by we know that the universal covering k of the reduction m k is diffeomorphic to or the first two scenarios follow the outline of the general case whereas the latter is significantly different proposition in the case of multiplicities c m g is covered by a building with the isotropy representation of u u u as a linear model provided m k is not 
diffeomorphic to proof by lemma gt is either su or u depending on whether kt is finite or in the latter case the reduction m kt is a positively curved cohomogeneity two manifold of type with multiplicity triple as in the general case where k cf therefore kt su su or u su by the flip proposition the desired result follows as in the proof of proposition rank three geometry and positive curvature from now on we assume that up to finite kernel gt su and correspondingly gq u su and gr u su moreover su and from our assumption on the duction m k by corollary the normalizer n contains su su su as its semisimple part on the other hand by the rank lemma we know that rk g resp rk g if dim m is odd resp even in particular su su su is a maximal rank subgroup of g if rk g in this case it is immediate by borel and de siebenthal bs see the table on page that g is not a simple group of rank similarly we claim that g is not a simple group when its rank is indeed if so by lemma and table b in gwz it would follow that g su and su su gq is a block subgroup this however is not possible since then n would contain su thus g where are nontrivial lie groups without loss of generality we assume that the projection of su gq to has nontrivial image but then su must be contained in because otherwise the normalizer n would be much smaller than su su su by primitivity it is easy to see that gt is diagonally imbedded in since g hgt i hgt i in particular both and have rank at least two since the projections from gt are almost imbeddings i e have finite kernel if both and have rank two it is easy to see that su and where su or neither scenario is possible for the latter since by the primitivity g su i su su while for the former the semisimple part of n is therefore rk g and once again by lemma and table b in gwz g su su note that dimm and the principal orbit of in m is of dimension at least in lar it follows from wilkings connectivity that m k is simply connected thus as in the general case the desired result follows from lemma proposition in the case of multiplicities m is equivariantly diffeomorphic to the cayley plane with an isometric polar action by su su provided m k is diffeomorphic to proof recall that su by lemma and the slice representation of it follows that every irreducible subrepresentation of on the normal space to m k is the standard representation on in particular the codimension of m k is a multiple of and so m has dimension divisible by by the isotropy group gt su or u and correspondingly gq u su or u u and gr u su or u u by the rank lemma rk g rk gq or by lemma the isotropy representations of su gq as well as of su gt are spherical transitive by table b in gwz it follows that g can not be a simple group of rank and moreover g can not contain sp so and so as a normal subgroup since if so the semisimple part of would not be su a contradiction to our assumption on the reduction m k for which su on the other hand note that the identity component of the normalizer gt gt since gt is a maximal isotropy group and hence gt gt acts freely on the positively curved fixed point set m gt of even dimension therefore g can not contain su as a normal subgroup since otherwise gt would be a block subgroup in su and hence gt gt would not be trivial consequently g is not a simple group and moreover g where su gt is diagonally imbedded in in particular both and contain su as subgroups it is easy to see that su gq g is a subgroup in either or say in hence and n it follows that fuquan fang karsten grove and gudlaugur 
thorbergsson su furthermore can neither be a group of rank or since otherwise contains a rank semisimple group hence is su or u the latter however is impossible indeed in this case gt u and the center z g would be contained in kt and hence in every principal isotropy groups the center is invariant under conjugation thus m s in summary we have proved that g su su indeed a quotient group by with gt su diagonally imbedded in we claim that this combined with the above analysis of the isotropy groups modulo conjugation will force the polar data gt gq gr g noting that face isotropy groups are intersections of vertex isotropy groups to be gt gq gr su u su s u u where u su is the upper block subgroup in su and s u u su su is the product of the lower block subgroups in other words by the recognition theorem for polar actions gz there is at most one such polar action on the other hand the unique action by the maximal subgroup su su the isometry group of the cayley plane is indeed polar of type pth to prove the above claim by conjugation we may assume that gt su and gq u su as claimed moreover up to conjugation by an element of the face isotropy group gt gq we may further assume that gq is the lower block subgroup in the second factor su note that is a normal subgroup of gr indeed the second factor of su su gr su su since su gr it follows that su su gr is the product of the lower block subgroups since gr hsu su hi where h is the principal isotropy group the desired assertion follows next we deal with the case of multiplicity where there are two scenarios one is naturally viewed as part of the infinite family k whereas the other should be viewed as the flip case with d we point out that unlike all other cases an chamber system cover arises in the first case corresponding to a polar action of so so on proposition for the multiplicity case the chamber system c m g is covered by a building with the isotropy representation of either so so so or of sp u as a linear model proof recall that so and we first claim that the identity component gt so to see this recall that the kernel kt and is either so so or s o o the claim follows since if dim kt or gt then kt kq is nontrivial a contradiction to lemma from this we also conclude that gq is not since otherwise again kt kq is hence it is isomorphic to either so so the standard case or to the fold covering u of so so the flip case by the rank lemma it follows that rank g we start with the following observation let z be cyclic subgroup of the principal isotropy group h with image z then the action by z on the reduction m z is a reducible polar action of cohomogeneity to see this note that the type t orbit in the reduction is no longer a vertex indeed the normalizer of z so is o rank three geometry and positive curvature in addition note that the identity component of every face isotropy group is by the dual generation lemma in fgt we conclude that the semisimple part of z has rank at most one to proceed we will prove that a g is not a simple group of rank this is a direct consequence of combined with the following algebraic fact if g is a rank simple group one of so su so or sp up to center then the normalizer of any order subgroup so gt contains a semisimple subgroup of rank at least the algebraic fact is easily established by noticing that the inclusion map so g either can be lifted to a homomorphism into one of the four matrix groups or so sits in the quotient image of a diagonally imbedded su in one of the matrix groups next we are going to prove that b if g is 
a rank group then either m g su or so so up to equivariant diffeomorphism exactly as in case a we can exclude g being so since a subgroup gq will have a normalizer containing so we now exclude g being the exceptional group otherwise gq must be u and contained in either an so or an su by bs the center u is in kq for the same reason as above u is not in so finally if u su the q orbit g q in the reduction m contains su again by the dual generation lemma of fgt this is impossible since the identity component of the isotropy group of the face opposite of q is a circle which can not act transitively on the orbit g q therefore up to local isomorphism g is so so or su respectively one checks that the corresponding isotropy group data are given by gt so so so and gq o so so so respectively by gt so su inclusion induced by the field homomorphism and gq u su as a block subgroup the recognization theorem then yields b c now suppose g where li is a rank i lie group if acts freely on m then so or and acts on in a polar fashion of type hence is even dimensional and thus or by b in either case we know that the universal cover of the chamber system c is a building since c m g is a connected chamber system covering c it follows that is the universal cover of c m g now consider the remaining case where does not act freely on m and we let zm be a cyclic group such that m zm note that g can not be so since then gt and so gq would be the same simple group factor which is absurd in particular the part of g has rank at least two thus from now on we may assume that is a rank two group moreover by the argument in case b it is immediate that in fact is either so or su notice that if kt is not trivial then m kt kt is a polar manifold with the same section which is of type by the connectivity lemma it follows that m kt is simply connected hence from the classification of geometries m kt is diffeomorphic to and the chamber system of m kt kt is a building by c m g is a building fuquan fang karsten grove and gudlaugur thorbergsson therefore we may assume in the following that kt hence gt so it follows that gq is either s o o or u we split the rest of the proof according to abelian or not in either case note that the normalizer zm is from this we get immediately that zm h by appealing to ci g it suffices to prove that the action is free since then the situation reduces to the previous rank case note that zm is normal in from this and the above it follows that zm is neither in gt nor in to see this if zm then would contain a normal subgroup of contradicting table the proof in the other case is similar but simpler hence m zm is either the orbit g r or g q assuming m zm g gr it is immediate that su from the list of positively curved homogeneous spaces on the other hand notice that gr is not connected indeed gr and gr o it follows that g gr is not simply connected however g gr is a totally geodesic submanifold in m which has dimension a contradiction to wilking s connectivity lemma assuming m zm g gq corresponding to su or so the universal cover of g gq is a sphere of dimension either or the latter case is ruled out as follows if gq u then kq is in the center of u hence also in the center of this is impossible since kq h and g acts effectively on m by assumption if gq s o o there are no nontrivial homomorphisms to hence gq so which is impossible for the former case gq u and g u with action on g gq equivalent to the standard linear action on a spherical space form with zm in the kernel thus gq zm u a contradiction cii g where 
is a simple rank one group either or so we will show that in this case g so so with local data gq s o o g and gt so g forcing all data to coincide with those of the isotropy representation of so so so and hence m with the action of g is determined via recognition we first prove that so if not we start with an observation that so and moreover gt is a diagonally imbedded subgroup in indeed otherwise an order element z h gt will have a normalizer z which contains a rank semisimple subgroup contradicting for the same reason as above we see that gq u and hence gq s o o similarly by so gq must be diagonally embedded in this is impossible since then n so so is finite but gq n so finally given that so it follows as above that gq u hence gq s o o since gt so and o sits diagonally in gq it follows that gt sits diagonally in in particular so using the same arguments as above we see that so gq is in all together all isotropy data are determined exceptional gq slice representation this section will deal with the remaining cases all of which are exceptional with multiplicities and all but the latter will occur and the case of will include an exceptional action on the cayley plane rank three geometry and positive curvature proposition in the case of the multiplicities where the effective slice representation at t is the tensor representation of so on either m is equivariantly diffeomorphic to the cayley plane with an isometric polar action by so or c m g is a building with the tensor product representation of so spin on as a linear model proof by the transversality lemma we conclude that gt is connected since g gt is simply connected the kernel kt is a normal subgroup in gt as well as of the principal isotropy group h with quotients gt so and h respectively cf table by the slice lemma kt acts effectively on the combining this with table where so it follows that the identity components kt thus gt so or spin the latter however is impossible since then kt h where is the quaternion group of order on the other hand by table the slice representation at q is the natural tensor representation of o on where the center is in the kernel kq and so in kt kq a contradiction therefore gt so and consequently gq o gr o su and su by lemma we have rk g case i assume rk g by lemma again dimm is even by bs table on page so is not a subgroup in any rank simple group therefore g l where l is a rank one group by table the face isotropy group su so is diagonally embedded in so gq it follows that the composition homomorphism gt g l is nontrivial hence surjective onto l because gt so hence l so and g so since the only proper nontrivial normal lie subgroup of so is with quotient so by the above we already know that gt so is a diagonal subgroup given by an epimorphism so so and a monomorphism so it is clear that up to conjugation gq o g where o so is the standard upper block matrices subgroup as in the proof of proposition we now claim that there is at most one polar action with the data as above since we are dealing with a non classical lie group however we proceed as follows given another type polar action of g so with isomorphic local data along a chamber c with vertices without loss of generality we may assume that gq so and moreover gt since any two so subgroups in are conjugate moreover we can further assume that since the singular isotropy groups pair for the slice representation at q is unique up to conjugation in particular the principal isotropy groups h we prove now so su this clearly implies the assertion since gr is 
generated by and recall that gt so g its composition with the projection to g is a monomorphism so is the composition of gr so su to hence is a diagonal subgroup of gr whose projection to the factor su is injective hence it suffices to show that the projection images of and in su coincide on the other hand note that the projection image of in su is the normalizer in su where is the identity component of the principal isotropy group the above assertion follows fuquan fang karsten grove and gudlaugur thorbergsson as for existence we again note that so is a maximal subgroup of the isometry group of the cayley plane the corresponding unique isometric action is indeed polar as proved in gk and of type case ii assume rk g by lemma dimm is odd consider the reduction m with the action of the identity component of the normalizer note that this is also a type polar action but the multiplicity triple is by appealing to lemma the codimension of m is divisible by thus from the case it follows that the universal cover is and the identity component is either u or so so modulo kernel we are going to prove that m is simply connected it suffices to show that m m is this follows trivially by the connectivity lemma of wilking if the codimension of m is at most if gq is a normal subgroup of g then g l where l is a rank group then is isomorphic to l so hence l so it is easy to count the codimension to see that it is strictly less than if is not a normal subgroup by lemma the isotropy representation of su g is spherical transitive hence g contains a normal simple lie subgroup l such that spin l is spherical we claim that l spin if not l contains spin such that spin l is a block subgroup in spin and hence contains spin which contradicts the above this proves that g spin where is a rank group from this we get that the isotropy subrepresentation of g contains exactly three copies of the standard defining representation of su hence the desired estimate for the codimension in summary we conclude that m so so and hence from the multiplicity case the chamber system for the action of is a building of type by remark we conclude that c m g is a building proposition there is no polar action of type type with multiplicities where the effective slice representation at t is the tensor product representation of so spin on proof we will prove that if there is such a slice representation at q the chamber system c m g is a building the desired claim follows from the classification of buildings indeed there is no such a building to proceed note that from table so spin and the principal isotropy group su it follows that up to local isomorphism gt su so with kt su notice that the reduction m kt kt is of cohomogeneity with the same section it is clear that it is of type since the q vertex is a vertex with angle because kt gq is by the classification of geometries it follows that m kt is either or we claim that m kt and hence the chamber system for m kt kt is a building by appealing to it follows that c m g is a building to see the claim it suffices to prove that m kt is orientable and hence simply connected thanks to the positive curvature by the isotropy representation of kt su is the defining complex representation from this it is immediate that m kt m t and hence oriented where kt is a maximal torus rank three geometry and positive curvature proposition when the multiplicity triple is there are two scenarios in either case c m g is a building with linear model the adjoint polar representation of either so or of sp on proof by 
lemma we know that all vertex isotropy groups are connected notice that by table the slice representation at q is the adjoint representation of so on together with proposition up to local isomorphism the local isotropy group data are determined as follows gt u gq so and gr so u moreover h so so and so let so h consider the reduction m so n so it is once again a polar manifold with the same section for such a reduction notice that the face has multiplicity the face is exceptional with normal sphere and gq so so therefore the action of n so is reducible with fundamental chamber where is a reflection image of q and rq is of exceptional orbit type in particular the multiplicities at are hence the slice representation at for the n so is again the adjoint representation of so on this clearly implies that is a fixed point on the other hand notice that m so is orientable and hence simply connected therefore by theorem of fgt we know that m so since property p holds for so it follows from remark that c m g is a building remark we remark that in the above proof the chamber system of m so n so is a building of type but the one for m so so is not proposition in the case of the multiplicities the chamber system c m g is covered by a building with the isotropy representation of so u as a linear model proof by lemma we know that all isotropy groups are connected note that sp and su or u by lemma it is easy to see that if gt is semisimple then up to local isomorphism gt sp gr sp su gq su sp and su sp where sp kq is a subgroup of gt if gt is not semisimple then kt and all isotropy groups data are the product of with the corresponding data above we now prove that g contains su as a normal subgroup by lemma the isotropy representations of g sp and g su are both spherical where sp su are normal factors of face isotropy groups hence a normal factor l of g is either so n or su n by table b in gwz moreover the subgroup kq gt is contained in a block subgroup so l resp a block subgroup su l if l so n resp l su n since kq contains gq it follows that n resp n if l so n resp su n to rule out the former case consider the fixed point set m kq with the polar action of kq it is clearly a reducible cohomogeneity action with q a vertex of angle by the dual generation lemma of fgt it follows that kq is either gq the fixed point case or the product of su gq with the face isotropy group opposite to q in the reduction m kq kq from this it is immediate that l su note that if gt is semisimple or dim m is even then rank g by the rank lemma and hence g su for the remaining case dim m being odd and gt sp fuquan fang karsten grove and gudlaugur thorbergsson we now prove that g u up to local isomorphism indeed it is clear that rank g and hence g su where is a rank group it suffices to prove that let su it is clear that the projection g is trivial when restricted to either of sp gt and gq by primitivity g hgt i hgt i therefore gt and hence to complete the proof we split into two cases dim m being even or odd for the former kt and g su it is clear that gt sp is a subgroup of u su and gq su sp is the normalizer n sp in g where sp gt this forces all isotropy groups data to be the same as for the linear cohomogeneity polar action on induced from the isotropy representation of so u hence in particular the chamber system c m g is covered by a building for the latter g su or u depending on kt or the fixed point set m k is odd dimensional since the isotropy representation of is the defining complex representation note that su ti i and m k is 
equivariantly diffeomorphic to with a standard linear cohomogeneity one action of type hence by lemma c m g is a building references at alexandrino and singular riemannian foliations on simply connected spaces differential geom appl bs borel and j de siebenthal les fermes de rang maximum des groupes de lie clos comment math helv bsp burns and spatzier on topological tits buildings and their classification inst hautes sci publ math cl charney and lytchak metric characterizations of spherical and euclidean buildings geom topol de dearricott a with positive curvature duke math j eh eschenburg and heintze on the classification of polar representations math z fgt fang grove and thorbergsson tits geometry and positive curvature preprint gk gorodski and kollross some remarks on polar actions preprint go gozzi low dimensional polar actions arxiv gvz grove verdiani and ziller an exotic t with positive curvature geom funct anal gwz grove wilking and ziller positively curved cohomogeneity one manifolds and geometry j differential geom gz grove and ziller polar actions and manifolds the journal of fixed point theory and applications gkk knarr and kramer compact connected polygons ii geom dedicata hs hopf and samelson ein satz die geschlossener liescher gruppen comment math helv kl kramer and lytchak homogeneous compact geometries transform groups ly lytchak polar foliations of symmetric spaces geom funct anal ne neumaier some sporadic geometries related to pgl arch math pth and thorbergsson polar actions on symmetric spaces j differential geom ro ronan lectures on buildings perspectives in mathematics academic press boston ma rank three geometry and positive curvature su ve wie sugahara the isometry group and the diameter of a riemannian manifold with positive curvature math japon tits buildings of spherical type and finite lecture notes in mathematics springerverlag york tits a local approach to buildings the geometric vein the coxeter festschrift edited by davis and sherk springer new verdiani cohomogeneity one manifolds of even dimension with strictly positive sectional curvature j differential wiesendorf taut submanifolds and foliations j differential geom wilking nonnegatively and positively curved manifolds surveys in differential geometry vol xi surv differ geom int press somerville ma wilking positively curved manifolds with symmetry ann of math wilking torus actions on manifolds of positive sectional curvature acta math no department of mathematics capital normal university beijing china address fuquan fang department of mathematics university of notre dame notre dame in usa address mathematisches institut zu weyertal germany address gthorber
Rejection and Mitigation of Time Synchronization Attacks on the Global Positioning System

Ali Khalajmehrabadi, Student Member, IEEE, Nikolaos Gatsis, Member, IEEE, David Akopian, Senior Member, IEEE, and Ahmad Taha, Member, IEEE. The authors are with the Electrical and Computer Engineering Department, University of Texas at San Antonio, San Antonio, TX, USA.

Abstract—This paper introduces the Time Synchronization Attack Rejection and Mitigation (TSARM) technique for time synchronization attacks (TSAs) over the Global Positioning System (GPS). The technique estimates the clock bias and drift of the GPS receiver along with the possible attack, contrary to previous approaches. Having estimated the time instants of the attack, the clock bias and drift of the receiver are corrected. The proposed technique is computationally efficient and can be easily implemented in real time, in a fashion complementary to standard algorithms for position, velocity, and time estimation in receivers. The performance of this technique is evaluated on a set of collected data from a real GPS receiver. Our method renders excellent time recovery consistent with the application requirements. The numerical results demonstrate that the TSARM technique outperforms competing approaches in the literature.

Index Terms—Global Positioning System, time synchronization attack, spoofing detection.

I. INTRODUCTION

Infrastructures such as road tolling systems, terrestrial digital video broadcasting, cell phone and air traffic control towers, industrial control systems, and phasor measurement units (PMUs) heavily rely on synchronized, precise timing for consistent and accurate network communications, to maintain records, and to ensure their traceability. The Global Positioning System (GPS) provides a time reference of microsecond precision for these systems. The systems use the civilian GPS channels, which are open to the public. The unencrypted nature of these signals makes them vulnerable to unintentional interference and intentional attacks; unauthorized manipulation of GPS signals thus leads to disruption of correct readings of time references and is therefore called a time synchronization attack (TSA).

To address the impact of malicious attacks, for instance on PMU data, the Electric Power Research Institute published a technical report that recognizes the vulnerability of PMUs to GPS spoofing under its "GPS time signal compromise" scenario. These attacks introduce erroneous time stamps, which are eventually equivalent to inducing wrong phase angles in the PMU measurements. The impact of TSAs on generator trip control, transmission line fault detection, voltage stability monitoring, disturbing event locationing, and power system state estimation has been studied and evaluated both experimentally and through simulations.

Intentional unauthorized manipulation of GPS signals is commonly referred to as GPS spoofing and can be categorized based on the spoofer mechanism as follows:

1) Jamming/blocking: The spoofer sends high-power signals to jam the normal operation of the receiver. By disrupting the normal operation of the victim receiver (often referred to as losing lock), the victim receiver may then lock onto the spoofer signal after jamming.
2) Data-level spoofing: The spoofer manipulates the navigation data, such as the orbital parameters (ephemerides) that are used to compute satellite locations.
3) Signal-level spoofing: The spoofer synthesizes signals that carry the same navigation data as concurrently broadcast by the satellites.
4) Record-and-replay attack: The spoofer records the authentic GPS signals and retransmits them with selected delays and at higher power. Typically, the
spoofer starts from low power transmission and increases its power to force the receiver to lock onto the spoofed delayed signal the spoofer may change the transmitting signal properties such that the victim receiver miscalculates its estimates common gps receivers lack proper mechanisms to detect these attacks a group of studies have been directed towards evaluating the requirements for successful attacks theoretically and experimentally for instance the work in has designed a real spoofer as a software defined radio sdr that records authentic gps signals and retransmits fake signals it provides the option of manipulating various signal properties for spoofing spoofing detection techniques in the literature the first level of countermeasures to reduce the effect of malicious attacks on gps receivers typically relies on the receiver autonomous integrity monitoring raim gps receivers typically apply raim consistency checks to detect the anomalies exploiting measurement redundancies for example raim may evaluate the variance of gps solution residuals and consequently generate an alarm if it exceeds a predetermined threshold similar variance authentication techniques have been proposed in table i gps s poofing d etection t echniques d etection d omain and i mplementation a spects method ekf cusum attack detection domain gps navigation domain gps baseband signal domain attack not estimated not estimated ref gps baseband power grid domains not estimated spree ref ref ref ref tsarm gps baseband signal domain gps baseband signal domain gps navigation domain gps navigation domain gps navigation domain gps navigation domain not estimated not estimated not estimated not estimated not estimated estimated based on hypothesis testing on the kalman filter innovations however they are vulnerable to smarter attacks that pass raim checks or the innovation hypothesis testing a plethora of countermeasures have been designed to make the receivers robust against more sophisticated attacks vector tracking exploits the signals from all satellites jointly and feedbacks the predicted position velocity and time pvt to the internal lock loops if an attack occurs the lock loops become unstable which is an indication of attack cooperative gps receivers can perfrom authentication check by analyzing the integrity of measurements through communications also a quick sanity check for stationary time synchronization devices is to monitor the estimated location as the true location can be known a priori any large shift that exceeds the maximum allowable position estimation error can be an indication of attack the receiver receiver can be used as an indicator of spoofing attack in the difference between the ratios of two gps antennas has been proposed as a metric of pmu trustworthiness in addition some approaches compare the receiver s clock behavior against its statistics in normal operation b existing literature gaps as discussed above prior research studies addressed a breadth of problems related to gps spoofing however there are certain gaps that should still be addressed most of the works do not provide analytical models for different types of spoofing attacks the possible attacking procedure models are crucial for designing the countermeasures against the spoofing attacks although some countermeasures might be effective for a certain type of attack a comprehensive countermeasure development is still lacking for defending the gps receiver this is practically needed as the receiver can not predict the type of attack the 
main effort in the literature is in detection of possible spoofing attacks however even with the spoofing detection the gps receiver can not resume its normal operation especially in pmu applications where the network s normal operation can not be interrupted so the spoofing countermeasures should not only detect the attacks but also mitigate their effects so that the network can resume its normal operation there is a need for simpler solutions which can be integrated with current systems implementation aspects benchmark for most common gps receivers applies hypothesis testing on packets of received signal combines the statistics of ratio difference between two gps antennas applies auxiliary peak tracking in the correlators of receiver applies a vector tracking loop needs collaboration among multiple gps receivers applies an particle filter applies hypothesis testing on a gps clock signature applies a optimization technique relevant yes no no no no no yes yes contributions of this work this work addresses the previously mentioned gaps for stationary time synchronization systems to the best of our knowledge this is the first work that provides the following major contributions the new method is not a mere spoofing detector it also estimates the spoofing attack the spoofed signatures clock bias and drift are corrected using the estimated attack the new method detects the smartest attacks that maintain the consistency in the measurement set a descriptive comparison between our solution and representative works in the literature is provided in table i a review of the spoofing detection domain shows that most of the prior art operates at the baseband signal processing domain which necessitates manipulation of the receiver circuitry hence the approach in the present paper is compared only to those works whose detection methodology lies in navigation domain the proposed tsa detection and mitigation approach in this paper consists of two parts first a dynamical model is introduced which analytically models the attacks in the receiver s clock bias and drift through a proposed novel time synchronization attack rejection and mitigation tsarm approach the clock bias and drift are estimated along with the attack secondly the estimated clock bias and drift are modified based on the estimated attacks so that the receiver would be able to continue its normal operation with corrected timing for the application the proposed method detects and mitigates the effects of the smartest and most consistent reported attacks in which the position of the victim receiver is not altered and the attacks on the pseudoranges are consistent with the attacks on pseudorange rates different from outlier detection approaches in the proposed method detects the anomalous behavior of the spoofer even if the measurement integrity is preserved the spoofing mitigation scheme has the following desirable attributes it solves a small quadratic program which makes it applicable to commonly used devices it can be easily integrated into existing systems without changing the receiver s circuitry or necessitating mulitple gps receivers as opposed to it can run in parallel with current systems and provide an alert if spoofing has occurred without halting the normal operation of the system corrected timing estimates can be computed the proposed technique has been evaluated using a commercial gps receiver with measurements access these measurements have been perturbed with spoofing attacks specific to pmu operation applying the proposed 
technique shows that the clock bias of the receiver can be corrected to within the maximum allowable error of the IEEE PMU standard.

Paper organization: a brief description of GPS is given in Section II. We then provide models for possible spoofing attacks in Section III. Section IV elaborates on the proposed solution to detect and correct the effect of these attacks. Our solution is numerically evaluated in Section V, followed by the conclusions in Section VI.

II. GPS PVT Estimation

In this section, a brief overview of GPS position, velocity, and time (PVT) estimation is presented. The main idea of localization and timing through GPS is trilateration, which relies on the known locations of the satellites as well as distance measurements between the satellites and the GPS receiver. In particular, the GPS signal from satellite $n$ contains a set of navigation data comprising the ephemeris and the almanac (typically updated every few hours and once a week, respectively), together with the signal's time of transmission $t_n$. These data are used to compute the satellite's position $\mathbf{p}_n = [x_n(t_n),\, y_n(t_n),\, z_n(t_n)]^T$ in earth-centered earth-fixed (ECEF) coordinates through a function known to the GPS receiver. Let $t_r$ denote the time at which the signal arrives at the GPS receiver. The distance between the user (GPS receiver) and satellite $n$ can be found by multiplying the signal propagation time $t_r - t_n$ by the speed of light. This quantity is called the pseudorange,
$$\rho_n = c\,(t_r - t_n), \qquad n = 1, \ldots, N,$$
where $N$ is the number of visible satellites. The pseudorange is not the exact distance, because the receiver and satellite clocks are both biased with respect to the absolute GPS time. Let the receiver and satellite clock biases be denoted by $b_u$ and $b_n$, respectively. The times of reception and transmission are then related to their absolute values in GPS time as
$$t_r = t_r^{\mathrm{GPS}} + b_u, \qquad t_n = t_n^{\mathrm{GPS}} + b_n.$$
The $b_n$'s are computed from the received navigation data and are considered known. However, the bias $b_u$ must be estimated and subtracted from the measured $t_r$ to yield the receiver's absolute GPS time $t_r^{\mathrm{GPS}}$, which can be used as a time reference for synchronization. Time synchronization systems time-stamp their readings based on coordinated universal time (UTC), which has a known offset from GPS time, $t_r^{\mathrm{UTC}} = t_r^{\mathrm{GPS}} + \Delta$, where $\Delta$ is available to the receiver.

Let $\mathbf{p}_u = [x_u, y_u, z_u]^T$ be the coordinates of the GPS receiver and $d_n$ its true range to satellite $n$. This distance is expressed via the locations $\mathbf{p}_u$, $\mathbf{p}_n$ and the times $t_r^{\mathrm{GPS}}$, $t_n^{\mathrm{GPS}}$ as $d_n = \|\mathbf{p}_n - \mathbf{p}_u\| = c\,(t_r^{\mathrm{GPS}} - t_n^{\mathrm{GPS}})$. Therefore, the pseudorange measurement equation becomes
$$\rho_n = \|\mathbf{p}_n - \mathbf{p}_u\| + c\,(b_u - b_n) + \varepsilon_n, \qquad n = 1, \ldots, N,$$
where $\varepsilon_n$ represents the noise. The unknowns are $x_u$, $y_u$, $z_u$, and $b_u$; therefore, measurements from at least four satellites are needed to estimate them.

Furthermore, the nominal carrier frequency $f_c$ of the signals transmitted by the satellites experiences a Doppler shift at the receiver due to the relative motion between the receiver and the satellite. Hence, in addition to the pseudoranges, pseudorange rates are estimated from the Doppler shift; they are related to the satellite velocity $\mathbf{v}_n$ and the user velocity $\mathbf{v}_u$ via
$$\dot{\rho}_n = \frac{(\mathbf{p}_n - \mathbf{p}_u)^T (\mathbf{v}_n - \mathbf{v}_u)}{\|\mathbf{p}_n - \mathbf{p}_u\|} + c\,\dot{b}_u + \dot{\varepsilon}_n,$$
where $\dot{b}_u$ is the clock drift. In most cases there are more than four visible satellites, resulting in an overdetermined system of equations. Typical GPS receivers use nonlinear weighted least squares (WLS) to solve the pseudorange and pseudorange-rate equations and provide an estimate of the location, velocity, clock bias, and clock drift of the receiver, often referred to as the PVT solution. To additionally exploit the consecutive nature of the estimates, a dynamical model is used.
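Before turning to that dynamical model, the snapshot WLS solution just mentioned can be made concrete with a small sketch. The following Python fragment is an editorial illustration rather than code from the paper: the function name, the variable names, and the simplification that the known satellite clock biases and atmospheric corrections have already been removed from the pseudoranges are assumptions made here.

```python
# Illustrative sketch: Gauss-Newton / iterative WLS solution of the pseudorange equations
# for the unknowns (x_u, y_u, z_u, c*b_u).  Satellite positions `sat_pos` (N x 3, meters),
# corrected pseudoranges `rho` (N,), and an optional weight matrix `W` are assumed given.
import numpy as np

def wls_pvt(sat_pos, rho, W=None, n_iter=10):
    N = len(rho)
    W = np.eye(N) if W is None else W
    x = np.zeros(4)                        # [x_u, y_u, z_u, c*b_u] in meters, initial guess
    for _ in range(n_iter):
        p_u, cb_u = x[:3], x[3]
        diff = sat_pos - p_u               # (N, 3) vectors from receiver to satellites
        d = np.linalg.norm(diff, axis=1)   # geometric ranges at the current estimate
        rho_hat = d + cb_u                 # predicted pseudoranges
        H = np.hstack([-diff / d[:, None], np.ones((N, 1))])   # Jacobian of rho w.r.t. x
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ (rho - rho_hat))
        x = x + dx
    return x[:3], x[3]                     # position (m) and clock bias expressed as c*b_u (m)
```

As noted above, at least four satellites are required for the normal equations in this sketch to be solvable.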
The conventional dynamical model for stationary receivers is a random-walk model,
$$x_{u,l} = x_{u,l-1} + w_{x,l}, \qquad y_{u,l} = y_{u,l-1} + w_{y,l}, \qquad z_{u,l} = z_{u,l-1} + w_{z,l},$$
$$b_{u,l} = b_{u,l-1} + T\,\dot{b}_{u,l-1} + w_{b,l}, \qquad \dot{b}_{u,l} = \dot{b}_{u,l-1} + w_{\dot{b},l},$$
where $l$ is the time index, $T$ is the time resolution (typically on the order of one second), and the $w$ terms are noise. This dynamical system, together with the measurement equations above, is the basis for estimating the user PVT using the extended Kalman filter (EKF). Previous works have shown that simple attacks are able to mislead the solutions of the WLS or the EKF. Stationary GPS-based time synchronization systems are currently equipped with a position-hold mode option, which can potentially detect an attack if the GPS position differs from the known receiver location by more than a maximum allowed error. This can be used as a first indication of attack, but more advanced spoofers, such as the ones developed in the literature, have the ability to manipulate the clock bias and drift estimates of the stationary receiver without altering its position and velocity (the latter should be zero). So, even with an EKF running on the conventional dynamical model, perturbations on the pseudoranges and pseudorange rates can be designed so that they directly result in clock bias and drift perturbations without altering the position and velocity of the receiver.

III. Modeling Time Synchronization Attacks

This section puts forth a general attack model that encompasses the attack types discussed in the literature. This model is instrumental for designing the technique discussed in the next section. While TSAs have different physical mechanisms, they manifest themselves as attacks on the pseudoranges and pseudorange rates. These attacks can be modeled as direct perturbations,
$$\tilde{\rho}_{n,l} = \rho_{n,l} + \mu_l, \qquad \tilde{\dot{\rho}}_{n,l} = \dot{\rho}_{n,l} + \nu_l,$$
where $\mu_l$ and $\nu_l$ are the spoofing perturbations on the pseudoranges and pseudorange rates, respectively, and $\tilde{\rho}_{n,l}$ and $\tilde{\dot{\rho}}_{n,l}$ are the spoofed pseudoranges and pseudorange rates.

[Fig.: Type I attack on (a) a pseudorange and (b) a pseudorange rate versus local observation time.]

A typical spoofer follows practical considerations to introduce feasible attacks. These considerations can be formulated as follows. An attack is meaningful only if it infringes the maximum allowed error defined in the system specification; for instance, in PMU applications the attack should exceed the maximum allowable error tolerance specified by the IEEE standard, given as a total vector error (TVE) and equivalently expressible as a phase-angle error, a clock-bias error, or a distance-equivalent bias error in meters. CDMA cellular networks impose their own timing-accuracy requirement. Moreover, due to the peculiarities of GPS receivers, the internal feedback loops may lose lock on the spoofed signal if the spoofer's signal properties change too rapidly. The spoofers designed in the literature have the ability to manipulate the clock drift by manipulating the Doppler frequency, and the clock bias by manipulating the code delay. These perturbations can be applied separately; however, the smartest attacks maintain the consistency of the spoofer's transmitted signal, meaning that the perturbations on the pseudoranges are the time integral of the perturbations on the pseudorange rates. Here, distinguishing between two attack procedures is advantageous, as the literature includes very few research reports on the technical intricacies of the spoofer constraints.

Type I: the spoofer manipulates the authentic signal so that the bias changes abruptly in a very short time. The figure above illustrates this attack: the attack on the pseudoranges appears suddenly at a given time and offsets the pseudoranges by a fixed amount; the equivalent attack on the pseudorange rates is a Dirac delta function.

Type II: the spoofer gradually manipulates the authentic signals and changes the clock bias through time.
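Both attack profiles can be sketched numerically; the Type II model is formalized immediately after this sketch. The fragment below is an editorial illustration: the step size, the velocity and acceleration bounds, and the attack start time are placeholders, not values taken from the paper.

```python
# Illustrative sketch: consistent Type I and Type II perturbations on the pseudoranges
# (mu, meters) and pseudorange rates (nu, m/s).  All numerical limits are placeholders.
import numpy as np

T = 1.0                                   # epoch duration (s), assumed
t = np.arange(0.0, 300.0, T)
attack_start = 100.0
rng = np.random.default_rng(0)

# Type I: abrupt step on the pseudoranges; the rate perturbation is an impulse.
mu_type1 = np.where(t >= attack_start, 300.0, 0.0)
nu_type1 = np.diff(mu_type1, prepend=0.0) / T        # discrete-time "Dirac delta"

# Type II: gradual drag with bounded distance-equivalent velocity and random acceleration;
# consistency means mu is the running integral of nu.
v_max, a_max = 40.0, 1.0                             # placeholder bounds (m/s, m/s^2)
nu_type2 = np.zeros_like(t)
mu_type2 = np.zeros_like(t)
for k in range(1, len(t)):
    if t[k] >= attack_start:
        nu_type2[k] = min(nu_type2[k - 1] + rng.uniform(0.0, a_max) * T, v_max)
    mu_type2[k] = mu_type2[k - 1] + T * nu_type2[k - 1]
```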
This attack can be modeled by
$$\mu_l = \mu_{l-1} + T\,\nu_{l-1}, \qquad \nu_l = \nu_{l-1} + T\,\alpha_l,$$
where $\nu_l$ and $\alpha_l$ are respectively called the distance-equivalent velocity and the distance-equivalent acceleration of the attack.

[Fig.: Type II attack on (a) a pseudorange and (b) a pseudorange rate versus local observation time.]

To maintain the victim receiver's lock on the spoofer's signals, the attack should not exceed a certain distance-equivalent velocity; two such limiting values are reported in the literature, together with the acceleration needed to reach the maximum spoofing velocity. The spoofer acceleration can be random, which makes the Type II attack quite general. The distance-equivalent velocity can be converted to an equivalent bias change rate by dividing the velocity by the speed of light. The figure above illustrates this attack: the attack on the pseudoranges starts at a given time and perturbs the pseudoranges gradually, with a distance-equivalent velocity not exceeding the reported maximum and a random distance-equivalent acceleration satisfying the reported bound.

The introduced attack models are quite general and can mathematically capture most attacks on the victim receiver's measurements (pseudoranges and pseudorange rates) discussed in Section I. In other words, Type I and Type II attacks can be the result of data-level spoofing, signal-level spoofing, or a combination of the aforementioned attacks. The main difference between Type I and Type II attacks is the spoofing speed. The speed of the attack depends on the capabilities of the spoofer with respect to manipulating various features of the GPS signals; indeed, attacks of different speeds have been reported in the literature cited earlier in the present section. This work does not deal with jamming, which disrupts the navigation functionality completely, whereas spoofing misleads it. In the next section, a dynamical model for the clock bias and drift is introduced which incorporates these attacks; based on this dynamical model, an optimization problem to estimate these attacks along with the clock bias and drift is proposed.

IV. Dynamical Model, TSA Rejection and Mitigation

This section introduces a dynamical model to accommodate the spoofing attack and a method to estimate the attack. Afterwards, a procedure for approximately nullifying the effects of the attack on the clock bias and drift is introduced.

A. Novel dynamical model

Modeling of the attack on the pseudoranges and pseudorange rates is motivated by the attack types discussed in the previous section. These attacks do not alter the position or velocity, but only the clock bias and clock drift. Our model does not follow the conventional dynamical model for stationary receivers, which allows the position of the receiver to follow a random walk; instead, the known position and velocity of the victim receiver are exploited jointly. The state vector contains the clock bias and clock drift, and the attacks are explicitly modeled on these components, leading to the following dynamical model:
$$\underbrace{\begin{bmatrix} c\,b_{u,l} \\ c\,\dot{b}_{u,l} \end{bmatrix}}_{x_l} = \underbrace{\begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}}_{F} \begin{bmatrix} c\,b_{u,l-1} \\ c\,\dot{b}_{u,l-1} \end{bmatrix} + \underbrace{\begin{bmatrix} c\,s_{b,l} \\ c\,s_{\dot{b},l} \end{bmatrix}}_{s_l} + \underbrace{\begin{bmatrix} c\,w_{b,l} \\ c\,w_{\dot{b},l} \end{bmatrix}}_{w_l},$$
that is, $x_l = F x_{l-1} + s_l + w_l$, where $s_{b,l}$ and $s_{\dot{b},l}$ are the attacks on the clock bias and clock drift, and $w_{b,l}$ and $w_{\dot{b},l}$ are colored Gaussian noise samples with the covariance function defined in the cited reference. Here both sides are multiplied by $c$, which is a typically adopted convention. The state noise covariance matrix $Q_l$ is particular to the crystal oscillator of the device. Similarly, define $\rho_l = [\rho_{1,l}, \ldots, \rho_{N,l}]^T$ and $\dot{\rho}_l = [\dot{\rho}_{1,l}, \ldots, \dot{\rho}_{N,l}]^T$. The measurement equation can then be written as
$$\underbrace{\begin{bmatrix} \rho_l \\ \dot{\rho}_l \end{bmatrix}}_{y_l} = H\, x_l + \underbrace{\begin{bmatrix} \left\{ \|\mathbf{p}_{n,l} - \mathbf{p}_u\| - c\,b_{n,l} \right\}_{n=1}^{N} \\[4pt] \left\{ \dfrac{(\mathbf{p}_{n,l} - \mathbf{p}_u)^T (\mathbf{v}_{n,l} - \mathbf{v}_u)}{\|\mathbf{p}_{n,l} - \mathbf{p}_u\|} \right\}_{n=1}^{N} \end{bmatrix}}_{c_l} + \,\epsilon_l,$$
where $H$ maps the clock bias $c\,b_{u,l}$ into each pseudorange and the clock drift $c\,\dot{b}_{u,l}$ into each pseudorange rate. Explicit modeling of $\mathbf{p}_u$ and $\mathbf{v}_u$ in $c_l$ indicates that the dynamical model benefits from using the stationary victim receiver's known position and velocity, the latter being zero.
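A minimal sketch of this state-space model is given below, assuming placeholder noise levels and a placeholder Type II attack profile; it shows how a consistent attack enters the recursion $x_l = F x_{l-1} + s_l + w_l$ and accumulates in the clock bias, which is the effect the estimator and correction scheme described next are designed to undo. The sketch is an editorial illustration, not code from the paper.

```python
# Illustrative sketch: propagation of the clock-state model x_l = F x_{l-1} + s_l + w_l under
# an attack.  States are in meters (scaled by the speed of light, as in the model above);
# the noise levels and the attack profile are placeholders.
import numpy as np

T = 1.0
F = np.array([[1.0, T],
              [0.0, 1.0]])
L = 300
t = np.arange(L) * T
rng = np.random.default_rng(1)

# Placeholder Type II attack: drift perturbation nu (m/s) ramping up after t = 100 s,
# with the pseudorange perturbation mu as its running integral (consistent attack).
nu = np.clip(t - 100.0, 0.0, 40.0)
mu = np.zeros(L)
for l in range(1, L):
    mu[l] = mu[l - 1] + T * nu[l - 1]

x_normal = np.zeros((L, 2))               # [c*b_u, c*b_u_dot] without attack
x_spoofed = np.zeros((L, 2))              # same states with the attack injected
for l in range(1, L):
    w = np.array([rng.normal(0.0, 0.5), rng.normal(0.0, 0.05)])
    # Attack increment s_l chosen so that the spoofed bias/drift track (mu_l, nu_l);
    # for a perfectly consistent Type II attack its bias component is exactly zero.
    s = np.array([mu[l] - mu[l - 1] - T * nu[l - 1],
                  nu[l] - nu[l - 1]])
    x_normal[l] = F @ x_normal[l - 1] + w
    x_spoofed[l] = F @ x_spoofed[l - 1] + s + w

bias_offset_m = x_spoofed[:, 0] - x_normal[:, 0]     # equals mu (meters) by construction
```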
The measurement noise covariance matrix $R_l$ is obtained from the measurements in the receiver; a detailed explanation of how to obtain the state and measurement covariance matrices $Q_l$ and $R_l$ is provided in Section V. It should be noted that the state covariance $Q_l$ only depends on the victim receiver's clock behavior and does not change under spoofing. However, the measurement covariance matrix $R_l$ experiences contraction; the reason is that, to ensure that the victim receiver maintains lock on the fake signals, the spoofer typically applies a power advantage over the real incoming GPS signals at the victim receiver's front end. Comparing the attacked measurements with the dynamical model, TSAs which do not alter the position and velocity transfer the attack on the pseudoranges and pseudorange rates directly to the clock bias and clock drift; thus the attack components $c\,s_{b,l}$ and $c\,s_{\dot{b},l}$ are determined directly by the pseudorange perturbation $\mu_l$ and the pseudorange-rate perturbation $\nu_l$.

B. Attack detection

Let $l \in \{k, \ldots, k+L-1\}$ define the time index within the observation window of length $L$, where $k$ is the running time index. The solution to the dynamical model is obtained by stacking the $L$ measurements and forming the following optimization problem:
$$\{\hat{x}, \hat{s}\} = \arg\min_{x,\,s} \;\; \sum_{l} \left\| y_l - H x_l - c_l \right\|^2_{R_l^{-1}} \; + \; \sum_{l} \left\| x_l - F x_{l-1} - s_l \right\|^2_{Q_l^{-1}} \; + \; \lambda \left\| D s \right\|_1,$$
where $\hat{x} = [\hat{x}_k^T, \ldots, \hat{x}_{k+L-1}^T]^T$ are the estimated states, $\hat{s}$ are the estimated attacks, $\lambda$ is a regularization coefficient, and $D$ is a total-variation (first-difference) matrix which forms the variation of the signal over time. The first term is the weighted residual of the measurement equation, and the second term is the weighted residual of the state equation; the last, regularization term promotes sparsity of the total variation of the estimated attack. In this problem, the clock bias and clock drift are estimated jointly with the attack.

Here, the models of the two introduced attacks should be considered. In a Type I attack, a step attack is applied over the pseudoranges; the solution for the clock bias (equivalently, for the bias attack) experiences a step at the attack time, and the total-variation term $[D s_b]_l = s_{b,l} - s_{b,l-1}$ indicates a rise, as it tracks the significant differences between two subsequent time instants. If the magnitude of the estimated attack does not change significantly between two adjacent times, the total variation of the attack is close to zero; otherwise, in the presence of an attack, the total variation of the attack includes a spike at the attack time. In a Type II attack, the total variation of the attack does not show significant changes initially, as the attack magnitude is small at the beginning and the sparsity is not yet evident. Although we explained why it is meaningful to expect only a few nonzero entries in the total variation of the attacks, in general this is not a necessary condition for capturing the attacks during the initial period of small total-variation magnitudes; that is, explicit modeling of the attacks and their estimation through the optimization above does not require the attacks to exhibit sparsity over the total variation. Furthermore, when the bias and bias drift are corrected using the estimated attack (one such mechanism is provided in the state-correction subsection below), sparsity over the total variation appears for subsequent time instants: in these time instants the attack appears more prominent, and in effect the low-dynamic behavior of the attack is magnified, a fact that facilitates the attack detection and will also be verified numerically. This effect is a direct consequence of the dynamical model and of the correction scheme discussed in the next section. The optimization problem boils down to solving a simple quadratic program; specifically, the epigraph trick in convex optimization can be used to transform the $\ell_1$ regularizer into linear
constraints the observation window l slides for a lag time tlag l which can be set to tlag for realtime operation the next section details the sliding window operation of the algorithm and elaborates on how to use the solution of in order to provide corrected bias and drift state correction in observation window of length l the estimated attack is used to compensate the impact of the attack on the clock bias clock drift and measurements revisiting the attack model in the bias at time l depends on the clock bias and clock drift at time this dependence successively traces back to the initial time therefore any attack on the bias that occurred in the past is accumulated through time a similar observation is valid for the clock drift the clock bias at time l is therefore contaminated by the cumulative effect of the attack on both the clock bias and clock drift in the previous times the correction method takes into account the previously mentioned effect and modifies the bias and drift by subtracting the cumulative outcome of the clock bias and drift attacks as follows l x x l l l l l l l x l l l l l where and are respectively the corrected clock bias and are respectively the corrected pseudorange clock drift and and pseudorange rates and is an all one vector of length n in l l for the first observation window k and k l tlag l k l for the observation windows afterwards this ensures that the measurements and states are not doubly corrected the corrected measurements are used for solving for the next observation window the overall attack detection and modification procedure is illustrated in algorithm after the receiver collects l measurements problem is solved based on the estimated attack the clock bias and clock drift are cleaned using this process is repeated for a sliding window and only the clock bias and drift of the time instants that have not been cleaned previously are corrected in another words there is no duplication of modification over the states the proposed technique boils down to solving a simple quadratic program with only few variables and can thus be performed in real time for example efficient implementations of quadratic programming solvers are readily available in lowlevel programming languages the implementation of this technique in gps receivers and electronic devices is thus straightforward and does not necessitate creating new libraries n umerical r esults we first describe the data collection device and then assess three representative detection schemes in the literature that fail to detect the tsa attacks these attacks mislead the clock bias and clock drift while maintaining correct location and velocity estimates the performance of our detection and modification technique over these attacks is illustrated afterwards algorithm tsa rejection and mitigation tsarm set k while true do batch yl k k l construct h cl f k k l compute ql and rl details provided in section v estimate via assign l m m and l m m k k l assign l m m and l m m k k l modify l l l and l via l for the first window and k l tlag l k l windows afterwards l set yl k k l l output tutc l tr l l t c r k k l to the user for time stamping slide the observation window by setting k k tlag end while gps data collection device a set of real gps signals has been recorded with a google nexus tablet at the university of texas at san antonio on june the ground truth of the position is obtained through taking the median of the wls position estimates for a stationary device this device has been recently equipped with a 
gps chipset that provides raw gps measurements an android application called gnss logger has been released along with the matlab codes by the google android location team of interest here are the two classes of the package the provides the gps receiver clock properties and the provides the measurements from the gps signals both with accuracies to obtain the pseudorange measurements the transmission time is subtracted from the time of reception the function getreceivedsvtimenanos provides the transmission time of the signal which is with respect to the current gps week midnight the signal reception time is available using the function gettimenanos to translate the receiver s time to the gps time and gps time of week the package provides the difference between the device clock time and gps time through the function getfullbiasnanos the receiver clock s covariance matrix ql is dependent on the statistics of the device clock oscillator the following our data is available at https gps https accessed https accessed bias m ekf normal ekf spoofed pf time s drift ekf normal ekf spoofed time s fig the effect of type ii attack on the ekf and the particle filter on a clock bias and b clock drift the attack started at t panel b does not include the drift no attack consistent attack inconsistent attack a attack started b attack started c time s fig performance of hypothesis testing based on statistic under type i attack for different false alarm probabilities a no attack b inconsistent attack c consistent attack model is typically adopted ql where and and we select and chap for calculating the measurement covariance matrix rl the uncertainty of the pseuodrange and pseudorange rates are used these uncertainties are available from the device together with the respective in the experiments we set because the distance magnitudes are in tens of thousands of meters the estimated clock bias and drift through ekf in normal operation is considered as the ground truth for the subsequent analysis in what follows reported times are local b failure of prior work in detecting consistent attacks this section demonstrates that three relevant approaches from table i may fail to detect consistent attacks that is attacks where is the integral of in the performance of the ekf and the particle filter of subject to a type ii attack is reported first the perturbations over gps measurements are the same as in fig and are used as input to the ekf and the particle filter the attack starts at t fig depicts the effect of attack on the clock bias and drift the ekf on the dynamical model in and blindly follows the attack after a short settling time the particle filter only estimates the clock bias and assumes the clock drift is known from wls similarly to the ekf the particle filter is not able to detect the consistent spoofing attack the maximum difference between the receiver estimated position obtained from the ekf on under type ii attack and under normal operation is xdiff m ydiff m and zdiff the position estimate has thus not been considerably altered by the attack the third approach to be evaluated has been proposed in and monitors the statistics of the receiver clock as a typical spoofing detection technique considering that gps receivers compute the bias at regular intervals a particular approach is to estimate the gps time after k time epochs and confirm that the time elapsed is indeed to this hend the following statistic can bepformulated id k k s s tgp k tgp k k the r r test statistic d is normally distributed with mean 
zero when there is no attack, and may have a nonzero mean depending on the attack, as will be demonstrated shortly. Its variance needs to be estimated from a few samples under normal operation. The detection procedure relies on statistical hypothesis testing: a false-alarm probability $P_{FA}$ is defined, and each $P_{FA}$ corresponds to a threshold against which $d(k)$ is compared; if the threshold is exceeded, the receiver is considered to be under attack. The result of this method is shown in the corresponding figure for different false-alarm probabilities. Panel (a) depicts $d(k)$ when the system is not under attack; the time signature lies between the thresholds only for low false-alarm probabilities. The system can detect the attack in the case of an inconsistent Type I attack, in which the pseudorange perturbation is not the integral of the pseudorange-rate perturbation and only the pseudoranges are attacked: panel (b) shows that the attack is detected right away. However, for smart attacks where the spoofer maintains the consistency between the pseudoranges and pseudorange rates, panel (c) illustrates that the signature $d(k)$ fails to detect the attack. This example shows that the statistical behavior of the clock can remain untouched under smart spoofing attacks. In addition, even if an attack is detected, the previous methods cannot provide an estimate of the attack.

C. Spoofing detection on a Type I attack

The corresponding figure shows the result of solving the proposed optimization problem using the GPS measurements perturbed by the Type I attack introduced earlier. The spoofer has the capability to attack the signal in a very short time, so that the clock bias experiences a jump at the attack time. The estimated total variation of the bias attack renders a spike right at the attack time, and the modification procedure corrects the clock bias using the estimated attack.

[Fig.: The result of attack detection and modification for a Type II attack over an initial time window; the attack starts partway through. From top to bottom: (a) normal clock bias (blue) and spoofed bias (red); (b) estimated bias attack; (c) total variation of the estimated bias attack; and (d) true bias (blue) and modified bias (magenta).]

[Fig.: Comparison of (a) normal and spoofed pseudorange changes and (b) normal and spoofed pseudorange rates under a Type II attack, for some of the visible satellites; the attack starts partway through the record.]

[Fig.: The result of attack detection and modification for a Type I attack. From top to bottom: (a) normal clock bias (blue) and spoofed bias (red); (b) total variation of the estimated bias attack; (c) total variation of the estimated drift attack; and (d) true bias (blue) and modified bias (magenta).]

D. Spoofing detection on a Type II attack

[Fig.: The result of attack detection and modification for a Type II attack over a later time window. From top to bottom: (a) normal clock bias (blue) and spoofed bias (red); (b) estimated bias attack; (c) total variation of the estimated bias attack; and (d) true bias (blue) and modified bias (magenta).]

The impact of the Type II attack on the pseudoranges and pseudorange rates
is shown in fig specifically fig a illustrates the normal and spoofed pseudorange changes with respect to their initial value at t s for some of the visible satellites in the receiver s view fig b depicts the corresponding pseudorange rates the tag at the end of each line indicates the satellite id and whether the pseudorange or pseudorange rate corresponds to normal operation or operation under attack the spoofed pseudoranges diverge quadratically starting at t s following the type ii attack for the type ii attack algorithm is implemented for an sliding window with l s with tlag fig shows the attacked clock bias starting at t since the attack magnitude is small at initial times of the spoofing neither the estimated attack nor the total variation do not show significant values the procedure of sliding window is to correct the current clock bias and clock drift for all the times that have not been modified previously hence at the first run the estimates of the whole window are modified fig shows the estimated attack and its corresponding total variation after one tlag as is obvious from the figure the modification of the previous clock biases transforms the low dynamic behavior of the spoofer to a large jump at t s which facilitates the detection of attack through the total variation component in the clock bias and drift have been modified for the previous time instants and need to be cleaned only for t s in the present work the set of gps signals are obtained from an actual gps receiver in a real environment but the attacks are simulated based on the characteristics of real spoofers reported in the literature experimentation on the behavior of the proposed detection and mitigation approach under real spoofing scenarios is the subject of future research rmse m r eferences l fig the rmse of tsarm for various values of l and tlag analysis of the results let k be the total length of the observation time in this experiment k the q root mean square error rmse is introduced rmse kc k k which shows the average error between the clock bias that is output from the spoofing detection technique and the estimated clock bias from ekf under normal operation which is considered as the ground truth comparing the results of the estimated spoofed bias from the ekf and the normal bias shows that rmseekf this error for the antispoofing particle filter is rmsepf having applied tsarm the clock bias has been modified with a maximum error of rmsetsarm fig illustrates the rmse of tsarm for a range of values for the window size l and the lag time tlag when the observation window is smaller fewer measurements are used for state estimation on the other hand when l exceeds s the number of states to be estimated grows although more measurements are employed for estimation the numerical results illustrate that models the clock bias and drift attacks effectively which are subsequently estimated using and corrected through vi c oncluding r emarks and f uture w ork this work discussed the research issue of time synchronization attacks on devices that rely on gps for time tagging their measurements two principal types of attacks are discussed and a dynamical model that specifically models these attacks is introduced the attack detection technique solves an optimization problem to estimate the attacks on the clock bias and clock drift the spoofer manipulated clock bias and drift are corrected using the estimated attacks the proposed method detects the behavior of spoofer even if the measurements integrity is preserved the 
numerical results demonstrate that the attack can be largely rejected and the bias can be estimated within of its true value which lies within the standardized accuracy in pmu and cdma applications the proposed method can be implemented for operation office of electricity delivery energy reliability http accessed official government information about the global positioning system gps and related topics http parkinson spilker axelrad and enge global positioning system theory and applications american institute of aeronautics and astronautics vol i global positioning system theory and applications american institute of aeronautics and astronautics vol ii misra and enge global positioning system signals measurements and performance ed press lincoln ma yaesh and shaked hinf inity estimation and its application to improved tracking in gps receivers ieee trans ind vol no pp mar li and xu a reliable fusion positioning strategy for land vehicles in environments based on sensors ieee trans ind vol pp apr electric sector failure scenarios and impact analyses version electric power research institute tech schmidt radke camtepe foo and ren a survey and analysis of the gnss spoofing threat and countermeasures acm computer survey vol no pp may moussa debbabi and assi security assessment of time synchronization mechanisms for the smart grid ieee commun surveys vol no pp thirdquarter shepard humphreys and a fansler evaluation of the vulnerability of phasor measurement units to gps spoofing attacks int crit infrastruct vol pp zhang gong dimitrovski and li time synchronization attack in smart grid impact and analysis ieee trans smart grid vol no pp mar jiang zhang harding makela and spoofing gps receiver clock offset of phasor measurement units ieee trans power systems vol pp risbud gatsis and taha assessing power system state estimation accuracy with pmu measurements in ieee trans smart grid to be published nighswander ledvina diamond brumley and brumley gps software attacks in proc of the acm conf on comput and commun security pp tippenhauer rasmussen and on the requirements for successful gps spoofing attacks in proc of the acm conf on comput and commun security pp wesson gross humphreys and evans gnss signal authentication via power and distortion monitoring ieee trans on aeros and elect systems vol pp no pp psiaki and humphreys gnss spoofing and detection proc of the ieee vol no pp june papadimitratos and jovanovic positioning attacks and countermeasures in proc of the ieee military commun san diego ca usa pp zeng li and qian gps spoofing attack on time synchronization in wireless networks and detection scheme design in ieee military communications conference pp y fan zhang trinkle dimitrovski j b song and li a defense mechanism against gps spoofing attacks on pmus in smart grids ieee trans smart grid vol no pp ranganathan and capkun spree a spoofing resistant gps receiver in proc of the annual int conf on mobile comput and pp chou heng and gao robust timing for phasor measurement units a approach in proc of the int tech meeting of the sat division of the institute of navigation pp ng and gao advanced vector tracking for robust gps time transfer to pmus in proc of the institute of navigation conf ion pp yu ranganathan locher and basin short paper detection of gps spoofing attacks in power grids in proc of the acm conf on security and privacy in wireless mobile pp jansen tippenhauer and gps spoofing detection error models and realization in proc of the annual conf on comput security pp han luo meng and 
li a novel method based on particle filter for gnss in proc of the ieee int conf on commun icc june pp zhu youssef and hamouda detection techniques for datalevel spoofing in phasor measurement units in proc of the int conf on selected topics in mobile wireless netw mownet apr pp shepard and humphreys characterization of receiver response to a spoofing attacks in proc of the int tech meeting of the sat division of the institute of navigation ion gnss portland or pp humphreys ledvina psiaki o hanlon and kintner assessing the spoofing threat development of a portable gps civilian spoofer in proc of the int tech meeting of the sat division of the institute of navigation ion gnss savannah ga pp motella pini fantino mulassano nicola fortunyguasch wildemeersch and symeonidis performance assessment of low cost gps receivers under civilian spoofing attacks in esa workshop on sat nav tech and european workshop on gnss signals and signal pp teunissen quality control in integrated navigation systems ieee aeros and elect sys magazine vol pp july heng makela bobba sanders and gao reliable timing for power systems a architecture in power and energy conf at illinois peci pp heng b work and gao gps signal authentication from cooperative peers ieee trans on intell transportation vol no pp radin swaszek seals and hartnett gnss spoof detection based on pseudoranges from multiple receivers in proceedings of the international technical meeting of the institute of navigation pp masreliez and martin robust bayesian estimation for the linear model and robustifying the kalman filter ieee trans autom control vol no pp june farahmand giannakis and angelosante doubly robust smoothing of dynamical processes via outlier sparsity constraints ieee trans signal vol pp android gnss https accessed ieee standard for synchrophasor measurements for power systems ieee std revision of ieee std pp model gps satellite clock ns http php accessed zhang gong dimitrovski and li time synchronization attack in smart grid impact and analysis ieee trans on smart grid vol no pp march karahanoglu bayram and ville a signal processing approach to generalized total variation ieee trans signal vol no pp boyd and vandenberghe convex optimization cambridge university press brown and hwang introduction to random signals and applied kalman filtering with matlab exercises and solutions ed new york ny wiley kay fundamentals of statistical signal processing volume ii detection theory inc ali khalajmehrabadi s received the degree from the babol noshirvani university of technology iran in and the degree from university technology malaysia malaysia in where he was awarded the best graduate student award he is currently pursuing the degree with the department of electrical and computer engineering university of texas at san antonio his research interests include indoor localization and navigation systems collaborative localization and global navigation satellite system he is a student member of the institute of navigation and the ieee nikolaos gatsis s received the diploma with hons degree in electrical and computer engineering from the university of patras greece in and the degree in electrical engineering and the degree in electrical engineering with minor in mathematics from the university of minnesota in and respectively he is currently an assistant professor with the department of electrical and computer engineering university of texas at san antonio his research interests lie in the areas of smart power grids communication networks and cyberphysical 
systems with an emphasis on optimal resource management and statistical signal processing he has symposia in the area of smart grids in ieee globalsip and ieee globalsip he has also served as a editor for a special issue of the ieee journal on selected topics in signal processing on critical infrastructures david akopian m received the degree in electrical engineering in he is a professor with the university of texas at san antonio he was a senior research engineer and a specialist with nokia corporation from to from to he was a researcher and an instructor with the tampere university of technology finland he has authored and coauthored over patents and publications his current research interests include digital signal processing algorithms for communication and navigation receivers positioning dedicated hardware architectures and platforms for software defined radio and communication technologies for healthcare applications he served in organizing and program committees of many ieee conferences and annual spie multimedia on mobile devices conferences his research has been supported by the national science foundation national institutes of health usaf navy and texas foundations ahmad taha s received the and degrees in electrical and computer engineering from the american university of beirut lebanon in and purdue university west lafayette indiana in in summer summer and spring he was a visiting scholar at mit university of toronto and argonne national laboratory currently he is an assistant professor with the department of electrical and computer engineering at the university of texas san antonio taha is interested in understanding how complex systems operate behave and misbehave his research focus includes optimization and control of power system observer design and dynamic state estimation and
information capacity of direct detection optical transmission systems nov antonio mecozzi fellow osa fellow ieee and mark shtaif fellow osa fellow ieee show that the spectral efficiency of a direct detection transmission system is at most less than the spectral efficiency of a system employing coherent detection with the same modulation format correspondingly the capacity per complex degree of freedom in systems using direct detection is lower by at most bit a tx noise propagation channel rx index capacity optical detection modulation i ntroduction r ecently the field of optical communications is witnessing a revival of interest in direct detection receivers which are often viewed as a promising alternative to their expensive coherent counterparts this process stimulates an interesting fundamental question to whose answer the present paper is dedicated what is the difference between the information capacity of a direct detection system and that of a system using coherent detection in order to answer this question we consider the channel schematic illustrated in fig which consists of a transmitter that is capable of generating any desirable complex waveform whose spectrum is contained within a bandwidth b a noise source of arbitrary spectrum and statistics a propagation channel and a receiver although the linearity of the channel and the additivity of the noise are immaterial to our analysis we will assume these properties in the beginning while postponing the generalization of our discussion to sec iv the direct detection receiver in our definition is one that recovers the communicated data from the intensity absolute square value of the received electric field while using a single photodiode as illustrated in fig it consists of a square optical filter of width b that rejects out of band noise a whose output current is proportional to the received optical and a processing unit that recovers the information the benchmark to which we compare the direct detection receiver is the coherent receiver in whose case the received optical field is reconstructed intuitively it is tempting to conclude that since a direct detection receiver ignores one of the two degrees of freedom that are necessary for uniquely characterizing the electric field its capacity should be close to half of the capacity of a coherent system surprisingly this notion turns out to be incorrect and as we show in this paper the capacity per complex degree of freedom in systems using direct detection is lower by not more mecozzi is with the department of physical and chemical sciences university of l aquila l aquila italy shtaif is with the department of physical electronics tel aviv university tel aviv israel for simplicity in what follows we will assume that the proportionality coefficient is b obpf pd processing and data recovery fig a the setup considered in this paper it consists of an transmitter tx that can generate any complex waveform whose spectrum is contained in a bandwidth b a stationary noise source of arbitrary spectrum and distribution a propagation channel and a receiver rx b the schematic of a direct detection receiver the incoming optical field is filtered with an optical filter obpf to reject out of band noise and square law detected without any manipulation of the field by a single photodiode pd the photodiode and subsequent electronics are assumed to have bandwidth of at least so as to accommodate the bandwidth of the intensity waveform than bit than that of fully coherent systems correspondingly the loss in 
terms of spectral efficiency is limited to be no greater than throughout the paper in order to simplify the notation we will assume that the transmitted field is scalar this assumption does not affect the generality of the results as the transmission of orthogonal polarization components through linear channels is independent ii r elation to prior work our result stating that a direct detection channel is characterized by almost the same capacity as a coherent channel requires some clarification in view of its being in an apparent contradiction to prior work where the capacity of a seemingly similar channel was found to be lower by approximately a factor of two this work consists of ref published by the authors of the current manuscript as well as a number of more recent works the most relevant of which are contained in refs in order to avoid confusion we will adopt the terminology of and refer to the channels studied in those papers as various flavors of the intensity channel whereas the term direct detection channel will only be used in reference to the channel that we study here the reason for this apparent contradiction boils down to the fact that all the versions of the intensity channel assume that the information is encoded at a given rate b directly onto the intensity of the transmitted optical signal and then it is recovered by sampling the received signal s intensity at exactly the same rate in all cases the channel is assumed to be memoryless and the optical bandwidth and hence also the spectral efficiency do not play any role the studies in can be given practical justification when considering very or very old optical systems that used a optical source such as a diode indeed with such sources the optical phase is far too noisy to be used for transmitting information and the source linewidth is so much greater than the modulation bandwidth that relating to spectral efficiency in the modern sense is not meaningful in contrast to the above the direct detection channel is inspired by modern communications systems the vast majority of which relies on a highly coherent laser source one whose linewidth is substantially smaller than the bandwidth of the modulation this is the reason for our assumption that the transmitter in the direct detection channel can encode information into any complex waveform with the only constraint being that its spectrum is contained in some bandwidth b in addition since the process of involves frequency doubling the spectrum of the measured intensity is contained in a bandwidth of and hence sampling at the rate of is imperative in order to extract the information present in the current in order to further clarify the difference between the direct detection channel and the intensity channel we denote the field received after optical filtering by e t since the spectrum of e t is contained in a bandwidth b it can be rigorously expressed as e t x ek sinc bt k where sinc x sin x and where ek e t are the samples carrying the transmitted information the detected is proportional to i t if this photocurrent were to be sampled at a rate of b as in the samples at t would have been equal to in and the phase information would have been lost in this case the drop in the amount of extracted information and hence the capacity would have been roughly a factor of similarly to the results obtained in yet in the direct detection channel the sampling of the is done at a rate of so that the samples that are taken at t n are also obtained these middle samples are given by x sinc 
n m k n k em ek and they are clearly affected by the phase differences between the various samples in fact as our final result indicates knowledge of all intensity samples in and for all n allows one to collect almost all of the information contained in the complex optical field we note that the idea of increasing the information rate by sampling the received analog signal at a rate that is higher than b has been considered previously this is a natural idea in cases where the receiver contains a nonlinear element that expands the analog bandwidth of the received waveform so that sampling at b is no longer sufficient in order to collect all the information from the analog waveform in our case the nonlinearity is that of detection and it expands the analog bandwidth by exactly a factor of hence unlike the case studied in where the doubling of the sampling rate produced only minuscule benefits here sampling at is sufficient in order to extract all the information contained in the analog intensity waveform and there is no benefit in increasing the sampling rate farther finally it is instructive to relate to the most widespread example where the additive noise of fig is white gaussian in this case our theory implies that the capacity of the direct detection channel is within bit of log snr where the snr is the ratio between the average power of the information carrying signal and the variance of the filtered noise summed over both quadratures conversely as demonstrated in and the capacity of the intensity channel one that samples the received intensity at the rate b in the limit of high snr is log roughly half of the direct detection channel s iii t he information capacity of a direct detection receiver a the definition of distinguishable waveforms usually in engineering practice two waveforms t and t are said to be distinguishable when the energy of the difference between them is greater than z t t dt in the context of optical communications this definition is too restrictive because in all cases of interest optical receivers including coherent receivers are unable to distinguish between waveforms that differ only by a constant time independent owing to this reality we define waveforms to be distinguishable only when they can be told apart by an ideal coherent receiver formally this means that t and t are distinguishable only when they remain distinguishable according to eq even if one of them is rotated in the complex plane by some arbitrary constant phase z t t dt in ref the capacity in the high snr limit is written as log but the difference is only in the snr definition which relates to the noise variance in one quadrature in order to overcome this limitation the transmitter and reciever would have to share an exact on the scale of a fraction of a single optical cycle in principle this can be achieved by means of an atomic clock however the costs of such a solution on the one hand and the minuscule potential benefit in terms of information rate on the other hand ensure that this solution isn t deployed where m b fk m z the fourier coefficients are given by e t e m x dt en ei m m with en e t and where the second equality in takes advantage of the relation between the fourier series coefficients and the discrete fourier transform in periodic signals we now assign a z m x fk z k to be the of the fourier coefficients fk m clearly e t a exp and hence special attention needs to be paid to the cases where the value of z is on the unit circle when fm a z is an m degree polynomial which admits m 
zeros and it can be expressed as a z c m y z zk with c qm consider now the functions uk z fig an example of different waveforms in the case of m so that all to b with b and all having the intensity shown in a in b the phases t of these eight waveforms are plotted as explained in the text this is the largest number of distinguishable waveforms that are to b and have the same intensity we stress that waveforms that only differ by a constant phase are not counted as distinguishable in our definition for all values of notice that distinguishability by means of a coherent receiver doesn t necessarily imply distinguishability by means of a direct detection receiver the gap between the two is the subject of the subsection that follows b the multiplicity of complex waveforms having the same intensity we consider a complex signal e t whose spectrum is contained within a bandwidth b and which is periodic in time with a period with m being an integer the assumption of periodicity is not a limiting factor in our arguments as once the main results are established the case can be addressed by assuming the limit of m a direct detection receiver can only exploit the intensity i t t in order to extract the transmitted data our first claim which is key to proving the main arguments of this paper is that there are at most distinguishable legitimate waveforms ej t with j whose intensities t are equal to i t an illustration of this idea in the case of m can be found in figure in order to formally prove our statements we express e t as a fourier series having at most m elements e t m x fk z zk one for each zero of a z since exp functions have the property that the action of uk exp on e t produces a pure phase modulation and hence they can be considered as the dual of filters where time and frequency are interchanged when any combination of these functions multiplies a z it does not change the degree of the resulting polynomial as the corresponding zeros of a z are simply reflected with respect to the unit circle as illustrated below for example if we multiply a z by the product of z z and z the zeros and are replaced by and respectively yet the modulus of the product a z z z z remains identical to the modulus of a z when z is on the unit circle and in particular when z exp since there is a total of m functions uk z there are functions aj z that have the same modulus on the unit circle thus end up with time waveforms ej t aj exp whose intensity is i t identical to the intensity of e t note that uk z m are the only functions applying a pure phase modulation to e t that also preserve the number of elements in the the degree of the polynomial and consequently the spectral width of the resulting time m waveforms for this reason ej t are the only temporal waveforms whose intensity is i t and whose spectrum is fully contained within the bandwidth b a further discussion m of the uniqueness of the waveforms ej t is provided in the appendix prior to concluding this section it is interesting to stress that while is the highest possible number of distinguishable waveforms whose intensity equals i t the actual number of such waveforms is where is the number of zeros that are not located on the unit circle that is because when a zero zl falls on the unit circle ul z can be easily verified to be a constant phasefactor whose application to a z does not produce a new waveform note also that in this situation e tl with tl arg zl the implications to capacity we now prove the following relation between cd the information capacity 
of the direct detection channel and the capacity cc of a system using coherent detection cc cd cc where in all cases we are referring to the capacity per complex degree of we denote by x the input alphabet of our channel and by y the output alphabet available to a coherent receiver the output alphabet that is available to a direct detection receiver is denoted by y since no constraints are imposed on the transmitter the alphabet x contains all complex waveforms without restriction the alphabet y on the other hand contains only those complex waveforms that are to b whereas y contains all waveforms that can be obtained by squaring the absolute value of the waveforms contained in y communication requires that a probability px x is prescribed to the transmission of each individual waveform x the effect of the communications channel noise distortions etc is characterized by the conditional probabilities of detecting a given element y y in the case of coherent detection or y y in the case of direct detection given that a particular element x x was transmitted these conditional probabilities are denoted by y and respectively the mutual information per complex degree of freedom between the transmitter and each of the two receivers equals h y h y m i x y h y h y m where the entropy h y and the conditional entropy h y are given x py y py y h y i x y y x x h y px x x y and where the corresponding equations h y and h y are obtained by replacing y with y in all places the capacities cc and cd are obtained by maximizing the mutual information interestingly when e t equals exactly m times within the time period then it is also the only waveform that is to b and has that particular intensity the number of complex degrees of freedom is m which is the product of the temporal duration of the signal and the bandwidth b since x is an element of the alphabet x it represents a time dependent waveform nonetheless in order to keep the notation simple we avoid writing x t leaving the time dependence of x implicit additionally in order to avoid the notation we denote the probability distribution of x simply by px x a similar practice is used with the elements of y and y in line with our simplified notation summation over x and y should be p interpreted in a generalized sense in addition py y x px x values of eqs and with respect to the transmitted distribution px x in order to derive eq we take advantage of the relation i x y i x y y i x y i x y m i x y m where the first equality follows from the fact that i x y and the second equality follows from the relations h x h y i x y y m i x y h x h m i x y h h y m the last inequality is true because y can take no more than functional values for any given y in the limit of large m eq reduces to i x y i x y i x y note that expressions and hold for any distribution of the transmitted alphabet px x this means that for any modulation format the information per complex degree of freedom that can be extracted when using a direct detection receiver is at most one bit less than the information per channel use that can be extracted with coherent detection when px x is set to be the distribution that maximizes i x y we arrive at cc ip x y cc where ip x y is the mutual information i x y that corresponds to the distribution px x for which cc is attained clearly cd ip x y and hence cd cc nonetheless cd remains smaller or equal to cc as follows from the rightside inequality of eq this concludes the proof of eq finally we note that the capacity per complex degree of freedom which we have 
evaluated in sec is identical to the spectral efficiency which is the more commonly used term in the context of fiber communications hence the spectral efficiency of a direct detection system is at most smaller than that of a system using coherent detection in order to see that the two are exactly the same note that b is both the bandwidth of the optical signal as well as the number of complex degrees of freedom that are transmitted per second iv e xtension to n onlinear systems or to non additive noise communications are often affected by the nonlinear propagation phenomena taking place in optical fibers their effect is not only to distort the signal itself but also to cause a nonlinear interaction between the signal and noise in which case the noise can no longer be modeled as additive from the standpoint of our current study the only difficulty that is imposed by this situation is that it is impossible to relate to the spectrum occupied by the signal as a constant and hence the definition of spectral efficiency becomes problematic nonetheless it must be stressed that our analysis of the received waveforms in sec iii did not explicitly assume anything about the type of noise or propagation therefore our results with respect to the capacity of the optically filtered signal in fig remain perfectly valid in other words after filtering the information per degree of freedom that is contained in the received complex optical signal is at most one bit larger than the information contained in its intensity with this said it must be noted that we do not claim that positioning of a square filter in front of the receiver is an optimal practice in the nonlinear case nonetheless in practical situations encountered in fiber communications the inclusion of such a filter is practically unavoidable d iscussion while eq corresponds to the only relevant case of m the opposite limit of m which can be deduced from eq may challenge one s physical intuition as it predicts equality between the mutual information values corresponding to direct and coherent detection for this reason the discussion of this special case is interesting in spite of the fact that it is of no practical importance whatsoever in order to resolve this apparent conundrum note that the case m represents a situation in which the complex field e is time independent in particular the phase difference between any two possible fields is also time independent implying that the fields are distinguishable only provided that their intensities differ hence in this artificial situation the coherent receiver has no advantage over the direct detection receiver and therefore their capacities are identical another curious point related to the assumption of periodicity is that it is not the only convenient choice for arriving at the result of sec since e t is band limited and its spectrum is contained within b it can be written as e t x en sinc bt n where as noted earlier en e t and where sinc x sin x if we impose the requirement that en for n and for n m we end up with a bandlimited but e t nonetheless the number of waveforms ej t whose intensity equals that of e t remains at most in order to see that consider a time interval of m that contains the interval at its center assume also that m m so that the tails of the various sinc functions decay to the extent that the signal within m can be extended periodically without introducing any bandwidth broadening we may now apply the reasoning of sec iii to the signal in the interval m according to which the 
number of equal intensity waveforms is to the power of the number of zeros in a z that do not coincide with the unit circle evidently the number of such zeros is at most m because there are at least m m zeros that fall on the unit circle these are the zeros of e t at the times tl with l being outside of the range of to m which correspond to zeros in a z at zl exp on the unit circle finally it is important to stress the consequences of our definition of direct detection which requires that the incoming optical signal is detected by a single per polarization and without any manipulation of the signal prior to this definition excludes not only the use of a local oscillator as in coherent detection but also all selfcoherent schemes such as the ones proposed in and schemes of the kind considered in vi acknowledgement mecozzi acknowledges financial support from the italian government under project incipict shtaif acknowledges financial support from israel science foundation grant a ppendix a our discussion in sec iii involved the statement that the number of distinguishable complex waveforms that are characterized by a bandwidth b and a period can not be greater than one justification for this claim is that the only functions in that produce pure phase modulation and do not increase the order of the polynomial a z and hence they do not increase the bandwidth of e t are of the form ck z zk where zk is one of the zeros of a z and ck a constant in order for these functions not to change the intensity waveform of e t one must have and so that the amplitude of the function s transfer function on the unit circle is this means that the only functions that can be applied to a z without changing neither the order of the polynomial nor the intensity waveform are the m functions specified in eq indeed the number of such function combinations does not exceed here we also present an alternative proof that is based on the uniqueness of functions given a periodic waveform we have shown that by reflecting any of its zeros with respect to the unit circle as described in sec one obtains a different waveform having the same intensity bandwidth and timeperiod if we look at an arbitrary given waveform t p i t exp t of the above specified characteristics and identify all of its zeros we can chose to reflect only the zeros that are inside thep unit circle thereby producing a new waveform em t i t exp t having the same intensity i t but a different phase since the spectrum of em t is contained between and b and since all of its zeros in are outside the unit circle it belongs to a special class of functions that is famously known as functions of one of the most well known properties of such functions is that up to an immaterial constant their the requirement that there is no manipulation of the signal prior to photodetection can be replaced by the requirement that no manipulation other than filtering dispersion is applied prior to the reason is that all pass filtering can also be done at the transmitter and hence it does not affect the assumption of this work phase is uniquely determined by their intensity means h by i of p the the hilbert transform namely t h log i t c where h designates the hilbert transform and where c is an unknown constant since waveforms differing only by a constant phase are indistinguishable in our definition see sec we conclude that the minimum phase function that corresponds to a given intensity profile is unique the uniqueness of the function implies that each waveform in the set of 
distinguishable equal intensity waveforms that are to b and periodic in can be obtained from any other waveform in the set by means of functions of the form given in eq whose effect is to reflect the zeros of the waveforms that it acts upon in had it not been so different waveforms in the set would have produced different minimum phase functions therefore the total number of distinguishable waveforms in the set can not exceed r eferences randel breyer lee and walewski advanced modulation schemes for optical communications ieee sel topics quantum electron takahara tanaka nishihara kai li tao and rasmussen discrete for optical access networks in optical fiber communication conference osa technical digest online optical society of america paper weiss yeredor and shtaif iterative symbol recovery for power efficient dc biased optical ofdm systems ieee of lightwave technol lowery and armstrong multiplexing for dispersion compensation of optical systems opt express schmidt lowery and armstrong experimental demonstrations of electronic dispersion compensation for transmission using optical ofdm lightwave technol li che chen and shieh spectrally efficient optical transmission based on stokes vector direct detection opt express randel pilori chandrasekhar raybon and winzer transmission over ssmf using modulation with novel scheme proc of european conference of optical communications valencia spain paper schuster randel bunge lee breyer spinnler and petermann spectrally efficient compatible modulation for ofdm transmission with direct detection ieee photon technol letters mecozzi antonelli and shtaif kk coherent receiver optica antonelli mecozzi and shtaif pam transceiver in optical fiber communication conference osa technical digest online optical society of america paper chen antonelli chandrasekhar raybon sinsky mecozzi shtaif and winzer singlepolarization transmission over of standard singlemode fiber using detection in optical fiber communication conference osa technical digest online optical society of america post deadline paper li erkilinc shi sillekens galdino thomsen bayvel and killey ssbi mitigation and scheme in transmission with electronic dispersion compensation lightwave technol mecozzi and shtaif on the capacity of intensity modulated systems using optical amplifiers ieee photon technol lett lapidoth on phase noise channels at high snr in proc ieee information theory workshop itw bangalore india pp hranilovic and kschischang capacity bounds for and optical intensity channels corrupted by gaussian noise ieee transactions on information theory katz and shamai on the distribution of the noncoherent and partially coherent awgn channels ieee transactions on information theory lapidoth moser and wigger on the capacity of freespace optical intensity channels ieee transactions on information theory gilbert increased information rate by oversampling ieee transactions on information theory shannon mathematical theory of communications bell system technical journal pp july october xiang liu chandrasekhar and andreas leven digital detection opt express shechtman eldar cohen chapman miao and segev phase retrieval with application to optical imaging a contemporary overview ieee signal processing magazine gang wang georgios giannakis yonina eldar solving systems of random quadratic equations via truncated amplitude flow available at https alan oppenheim and ronald schafer signal processing ed upper saddle river nj signal processing series
| 7 |
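The counting argument in the row above — that every waveform with the same intensity arises from reflecting zeros of the polynomial A(z) across the unit circle, giving at most 2^(M-1) waveforms a direct-detection receiver cannot distinguish — is easy to check numerically. The sketch below is not taken from the paper: the value of M, the random test signal, and the NumPy polynomial routines are choices made here purely for illustration. It builds a band-limited periodic field from M complex samples, flips one zero of A(z) to 1/conj(z_k), and verifies that the intensity waveform is unchanged while the complex field is not.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                     # complex degrees of freedom in one period
e = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # samples e_0 ... e_{M-1}

# E(t) on a fine grid over one period; with z = exp(i*2*pi*t/T), the field is
# the polynomial A(z) = sum_n e_n z^n evaluated on the unit circle.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
z = np.exp(2j * np.pi * t)
A = e[::-1]                               # np.polyval wants the highest power first
E = np.polyval(A, z)

# Reflect one zero z_k of A(z) to 1/conj(z_k); the extra factor |z_k| keeps
# |A(z)| unchanged on the unit circle, so the intensity |E(t)|^2 is preserved.
zeros = np.roots(A)
k = int(np.argmin(np.abs(zeros)))         # pick the zero deepest inside the unit circle
zk = zeros[k]
zeros_flipped = zeros.copy()
zeros_flipped[k] = 1.0 / np.conj(zk)
A_flipped = A[0] * abs(zk) * np.poly(zeros_flipped)
E_flipped = np.polyval(A_flipped, z)

print(np.max(np.abs(np.abs(E)**2 - np.abs(E_flipped)**2)))   # ~0 up to roundoff: same intensity
print(np.max(np.abs(E - E_flipped)))                          # order 1: a different complex field
# Repeating the flip for every zero that lies off the unit circle generates the
# (at most) 2**(M-1) equal-intensity waveforms discussed above.
```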
extracting three dimensional surface model of human kidney from the visible human data set using free software kirana kumara p centre for product design and manufacturing indian institute of science bangalore india kiranakumarap corresponding author phone fax abstract three dimensional digital model of a representative human kidney is needed for a surgical simulator that is capable of simulating a laparoscopic surgery involving kidney buying a three dimensional computer model of a representative human kidney or reconstructing a human kidney from an image sequence using commercial software both involve sometimes significant amount of money in this paper author has shown that one can obtain a three dimensional surface model of human kidney by making use of images from the visible human data set and a few free software packages imagej and meshlab in particular images from the visible human data set and the software packages used here both do not cost anything hence the practice of extracting the geometry of a representative human kidney for free as illustrated in the present work could be a free alternative to the use of expensive commercial software or to the purchase of a digital model keywords visible human data set kidney surface model free introduction laparoscopic surgery is often a substitute for a traditional open surgery when human kidney is the organ that is to be operated upon choosing laparoscopic surgery over an open surgery reduces trauma and shortens the recovery time for the patient but since laparoscopic surgery needs highly skilled surgeons it is preferable to use a surgical simulator for training and evaluating surgeons a surgical simulator that can simulate a laparoscopic surgery over a human kidney needs a virtual kidney a computer digital three dimensional model of a representative human kidney currently mainly two approaches are being practiced to obtain the geometry of a representative human kidney the first approach is to buy a readily available model of a human kidney from an online store the second approach is to use commercial software packages such as mimics amira to reconstruct a geometry of a kidney from a two dimensional image sequence one can see that both of these approaches cost sometimes significant amount of money present work shows that it is possible to obtain a three dimensional surface model of a representative human kidney completely for free present approach is to make use of a few free software packages to extract geometry of human kidney from images from the visible human data set vhd also known as the visible human project image data set or the visible human project data sets the free software packages used are imagej and meshlab one can note that images from vhd may be downloaded for free after obtaining a free license from national library of medicine nlm which is a part of national institutes of health nih vhd is a part of the more ambitious visible human project vhp approaches similar to the approach presented in the present paper may be found in the present author s previous works although the free software packages used in those works are the same as the ones used in the present work the images were not from vhd images used in and were downloaded from the images are no longer accessible as of now but images were downloadable sometime back also and discuss the reconstruction of a pig liver while the present work deals with the reconstruction of a human kidney upon conducting literature review one can see that there are authors who have used 
vhd together with commercial software packages also there are authors who have used images from sources other than vhd and have used commercial software packages to perform reconstruction of biological organs also there are authors who have used free software packages to extract geometry of biological organs but the present author could not find any source in the literature where the three free software packages imagej and meshlab were used to obtain surface model of human kidney from images from the vhd the practice of extracting the geometry of a representative human kidney for free as presented in the present work could be a free alternative to the use of expensive commercial software or to the purchase of a digital model material and method for the present work images from vhd and the three free and open source software packages imagej and meshlab form the material as far as method is concerned the three software packages are used to reconstruct models of human kidney from the images from the vhd vhd contains ct mri and cryosection images in this work only normal images of visible human male and female are used present work uses images in the png format since this is the format recommended by vhp file size of images is small and the images are good enough for reconstructing a model of whole kidney inner or finer details of kidney are not present reconstructed model represents just the outer surface of kidney one can easily identify human kidney in the images of vhd in the present work imagej is used to form an image stack which contains kidney version is used for segmentation and reconstruction to the correct scale meshlab is used to control the level of detail in the reconstructed model it also serves as a tool to smoothen the model and reduce its file size now the method is explained in a bit detail in the following subsections using imagej to form an image stack images for visible human male and female are available from head to toe out of these images one has to identify the images which belong to kidney upon viewing individual images in imagej and upon consulting and one can conclude that for visible human male both left are right kidneys are contained between the images and images in total similarly for visible human female both left and right kidneys are contained between the images and images in total now these images for male and images for female have to be copied into two separate empty folders now to form an image stack for male select the menu item file import image browse to the location of the folder containing images and select the first image in the folder and follow the prompts with default options all images are now displayed in imagej as an image stack now select the menu item file save as raw to save the image stack in the format where is the name given similar procedure may be followed to obtain an image stack for the female using to perform segmentation and reconstruction does the segmentation and reconstruction to the correct scale hence header information for the images in the image stack is essential vhd contains header information for each of the images in its database upon going through the header files of each of the images of male one can note that the following header information is identical for all the images image matrix size x image matrix size y image dimension x mm image dimension mm image pixel size x image pixel size y screen format bit spacing between scans mm similarly the following header information is identical for all the female images image 
matrix size x image matrix size y image dimension x mm image dimension mm image pixel size x image pixel size y screen format bit spacing between scans mm now the method of reconstructing the left kidney of the male is explained in detail with illustrations the same method may be employed to reconstruct the right kidney of the male and the left and right kidneys of the female select the menu item file open greyscale browse to the location of the image stack for male follow the prompts and supply the header information as noted in the first paragraph of this subsection the missing header information to be supplied for the image stack for male is image dimensions x y z voxel spacing x y z voxel representation bit unsigned once the header information is supplied image stack is displayed in one can browse through all the images in the image stack for illustration purposes image and image in the image stack and in the vhd are shown in figure and figure respectively also the left and right kidneys are identified in figure and figure by making use of illustrations from and right kidney left kidney figure the image in the image stack right kidney left kidney figure the image in the image stack now the task is to do the segmentation select polygon tool from the iris toolbox for manual segmentation select continuous radio button under polygon tool now click and drag the mouse cursor along the edge of the left kidney as seen in the axial view window carefully this draws the contour of the edge of the left kidney now right click on the image and select the accept button to create the segmentation for the image on display this process has to be repeated for all images in the image stack which contain pixels that belong to the left kidney for illustration purposes image and image in the image stack after segmentation are shown in figure and figure respectively segmented left kidney figure the image in the image stack after segmentation segmented left kidney figure the image in the image stack after segmentation once the segmentation is over reconstruction is to be carried out this is accomplished by the menu item segmentation save as following prompts browsing to the location where the reconstructed model is to be stored and giving a name in the format for the file that represents the reconstructed model where path is the complete path c and is any file name now a reconstruction of the left kidney for the visible human male is over similar process may be followed to reconstruct the right kidney of the visible human male and the left and right kidneys of the visible human female using meshlab to reduce the total number of faces describing the model the model of kidney obtained through the use of typically is of very large size and typically is described by a very large number of surface triangles meshlab could be very helpful in reducing the total number of surface triangles that are needed to describe the model satisfactorily it also serves as a tool to smoothen the reconstructed geometry after using smoothing features provided by meshlab it may be necessary to scale the reconstructed models to the correct dimensions if the original dimensions are to be strictly retained meshlab can also improve the triangle quality of surface triangles of the model it can also reduce the file size the models of kidney after undergoing processing with meshlab are shown in the next section results reconstructed left kidney of the male after undergoing processing through meshlab is shown in figure similarly reconstructed 
right kidney of the male after undergoing processing through meshlab is shown in figure reconstructed left kidney of the female is shown in figure reconstructed right kidney of the female is shown in figure all the four models are made up of surface triangles while obtaining these four models job of meshlab is to smoothen the models reconstructed through and to reduce the total number of surface triangles to figure reconstructed left kidney of male figure reconstructed right kidney of male figure reconstructed left kidney of female figure reconstructed right kidney of female discussion in this work model of human kidney is extracted from images from the vhd using free software packages the free software packages used are imagej itksnap meshlab the organs reconstructed are left kidney of visible human male right kidney of visible human male left kidney of visible human female right kidney of visible human female all the four models are in stl format use of free software packages together with images that may be obtained for free as has been done in the present work makes it possible to obtain the geometry of a representative human kidney completely for free buying a model of a human kidney or using a commercial software package to extract models from image sequences cost sometimes significant amount of money also in the present approach user can control how finely the geometry should be described using the free software package meshlab since meshlab can improve the quality of the surface mesh that describes a reconstructed model the reconstructed model that has undergone processing with meshlab can be used in a finite element analysis after converting the surface model to a solid model using software packages like rhinoceros also the method used to extract the geometry of a kidney as illustrated in the present work may possibly be used to extract other whole biological organs from vhd it may be noted that the method given here to obtain the models of human kidney need not be followed rigidly it is good to read the documentation for the software packages used here and one can experiment with the various options provided by the software packages instead of rigidly following the method illustrated in this work for example instead of tracing the boundary of the kidney in each of the images through the mouse pointer the paintbrush tool provided by can be tried out to carry out the segmentation itksnap also provides a tool that can do segmentation as to the limitations present work uses only images although these are found to be sufficient to obtain the geometry of a whole kidney whenever the reconstructed geometry should include the finer details of the kidney or whenever some other organ is to be extracted from vhd there is a possibility that other types of images mri images are more suited in some cases also multiple software packages need to be downloaded installed and used here future work is to extract other biological organs from vhd using free software packages aim is to reconstruct biological organs with inner details not obtaining just the outer surface of the organs and to use other types of images like mri cryosection images from the vhd if need be conclusion it is possible to obtain the surface model of a representative human kidney from images from the vhd using free software packages only the free software packages needed are imagej and meshlab the practice of extracting the geometry of a representative human kidney completely for free as illustrated in the present work could be a 
free alternative to the use of expensive commercial software packages or to the purchase of a digital model acknowledgements author is grateful to the robotics lab department of mechanical engineering centre for product design and manufacturing indian institute of science bangalore india for providing the necessary infrastructure to carry out this work author acknowledges ashitava ghosal robotics lab department of mechanical engineering centre for product design and manufacturing indian institute of science bangalore india for providing the images from the visible human data set vhd author acknowledges national library of medicine nlm and visible human project vhp for providing free access to the visible human data set vhd to ashitava ghosal visible human data set vhd is an anatomical data set developed under a contract from the national library of medicine nlm by the departments of cellular and structural biology and radiology university of colorado school of medicine references jay bishoff louis kavoussi online laparoscopic surgery of the kidney available at http accessed july issenberg sb mcgaghie wc hart ir mayer jw felner jm petrusa er waugh ra brown dd safford rr gessner ih gordon dl ewy simulation technology for health care professional skills training and assessment the journal of the american medical association http accessed july http accessed july http accessed july http accessed july http accessed july http accessed july rasband imagej national institutes of health bethesda maryland usa http abramoff magelhaes ram image processing with imagej biophotonics international volume issue pp http accessed july paul yushkevich joseph piven heather cody hazlett rachel gimpel smith sean ho james gee guido gerig active contour segmentation of anatomical structures significantly improved efficiency and reliability neuroimage http accessed july meshlab visual computing lab isti cnr http accessed july http accessed july http accessed july http accessed july kirana kumara p ashitava ghosal a procedure for the reconstruction of biological organs from image sequences proceedings of beats international conference on biomedical engineering and assistive technologies beats dr b r ambedkar national institute of technology jalandhar india kirana kumara p online reconstructing solid model from scanned images of biological organs for finite element simulation available at http accessed july http accessed july aimee sergovich marjorie johnson timothy wilson explorable threedimensional digital model of the female pelvis pelvic contents and perineum for anatomical education anatomical sciences education dong sun shin jin seo park shin min suk chung surface models of the male urogenital organs built from the visible korean using popular software anatomy cell biology amy elizabeth kerdok characterizing the nonlinear mechanical response of liver to surgical manipulation thesis the division of engineering and applied sciences harvard university li lou shu wei liu zhen mei zhao pheng ann heng yu chun tang zheng ping li yong ming xie yim pan chui segmentation and reconstruction of hepatic veins and intrahepatic portal vein based on the coronal sectional anatomic dataset surgical and radiologic anatomy chen g li xc wu gq zhang sx xiong xf tan lw yang rg li k yang sz dong reconstruction of digitized human liver based on chinese visible human chinese medical journal gao reconstruction of liver slice images based on mitk framework the international conference on bioinformatics and biomedical engineering icbbe doi 
http accessed july henry gray anatomy of the human body philadelphia lea febiger
| 5 |
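The reconstruction workflow in the row above is entirely GUI-based (ImageJ for stacking, ITK-SNAP for manual segmentation and export, MeshLab for smoothing and decimation). As a rough scripted analogue only, the sketch below shows how a stack of binary mask slices could be turned into an STL surface with free Python libraries. The file names, folder layout, and placeholder voxel-spacing values are assumptions of this illustration, not part of the original procedure, and the manual segmentation step is still assumed to have been done beforehand (e.g., masks exported from ITK-SNAP).

```python
import glob
import numpy as np
from skimage import io, measure

# Hypothetical input: one binary mask PNG per VHD slice, exported after the
# manual segmentation step (the folder name and file pattern are illustrative).
mask_files = sorted(glob.glob("left_kidney_masks/mask_*.png"))
volume = np.stack([io.imread(f, as_gray=True) > 0 for f in mask_files], axis=0)

# Voxel spacing (z, y, x) in mm. Placeholders only: substitute the "spacing
# between scans" and "image pixel size" values from the VHD header files.
SLICE_SPACING_MM = 1.0
PIXEL_SIZE_MM = 1.0
spacing = (SLICE_SPACING_MM, PIXEL_SIZE_MM, PIXEL_SIZE_MM)

# Marching cubes turns the stacked binary mask into a triangulated surface.
verts, faces, normals, _ = measure.marching_cubes(
    volume.astype(np.uint8), level=0.5, spacing=spacing)

# Write a plain ASCII STL; the result can then be smoothed, decimated and
# rescaled in MeshLab, as in the workflow described above.
with open("left_kidney.stl", "w") as out:
    out.write("solid kidney\n")
    for tri in faces:
        a, b, c = verts[tri]
        n = np.cross(b - a, c - a)
        n = n / (np.linalg.norm(n) + 1e-12)
        out.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
        out.write("    outer loop\n")
        for v in (a, b, c):
            out.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
        out.write("    endloop\n  endfacet\n")
    out.write("endsolid kidney\n")
```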
face rings of cycles associahedra and standard young tableaux aug anton dochtermann abstract we show that jn the ideal of the has a free resolution supported on the n simplicial associahedron an this resolution is not minimal for n in this case the betti numbers of jn are strictly smaller than the f of an we show that in fact the betti numbers of jn are in bijection with the number of standard young tableaux of shape d this complements the fact that the number of d faces of an are given by the number of standard young tableaux of super shape d d a bijective proof of this result was first provided by stanley an application of discrete morse theory yields a cellular resolution of jn that we show is minimal at the first syzygy we furthermore exhibit a simple involution on the set of associahedron tableaux with fixed points given by the betti tableaux suggesting a morse matching and in particular a poset structure on these objects introduction in this paper we study some intriguing connections between basic objects from commutative algebra and combinatorics for k an arbitrary field we let r k xn denote the polynomial ring in n variables we let jn denote the edge ideal of the complement of the cn by definition jn is the ideal generated by the degree monomials corresponding to the diagonals of cn one can also realize jn as the ideal of the cycle cn now thought of as a simplicial complex figure i the ideals jn are of course very simple algebraic objects and their homological properties are one can verify that is a gorenstein ring the dimension of is and hence the projective dimension of is n in fact a minimal free resolution can be described explicitly and cellular realizations have been provided by biermann and more recently by sturgeon date august we wish to further investigate the combinatorics involved in the resolutions of jn our original interest in cellular resolutions of jn came from the fact that the ideal jn has an almost linear resolution in the sense that the nonzero entries in the differentials of its minimal resolution are linear forms from r except at the last syzygy where the nonzero entries are all degree recent work in combinatorial commutative algebra has seen considerable interest in cellular resolutions of monomial and binomial ideals see for example but in almost all cases the ideals under consideration have linear resolutions here we seek to extend some of these constructions in the construction of any cellular resolution one must construct a cw with faces labeled by monomials that generate the ideal in the case of jn there is a well known geometric object whose vertices are labeled by the diagonals of an namely the simplicial associahedron an by definition an is the simplicial complex with vertex set given by diagonals of an with faces given by collections of diagonals that are the facets of an are triangulations of the of which there are a catalan number many it is well known that an is spherical and in fact can be realized as the boundary of a convex polytope in addition there is a natural way to associate a monomial to each face of an and in the first part of the paper we show that this labeled facial structure of an considered as a polytope with a single interior cell encodes the syzygies of jn theorem with its natural monomial labeling the complex an supports a free resolution of the ideal jn the resolution of jn supported on the associahedron an is not minimal for n and in particular in this case we have faces f g with the same monomial labeling the f of an is 
completely understood a closed form can be written down and in fact the number of d faces of an is equal to the number of standard young tableaux of shape d d a bijective proof of this was first provided by stanley since a resolution of jn is supported on an we know that the f of an provides an upper bound on the betti numbers with equality in the case of an in the second part of the paper we show that the betti numbers of are given by standard young tableaux on a set of subpartitions involved in the stanley bijection theorem the total betti numbers of the module are given by the number of standard young tableaux of shape d this bijection along with an application of the hook formula leads to a closed form expression for the betti numbers of in addition the fact that the partition d is conjugate to provides a nice combinatorial interpretation of the palindromic property for the betti numbers of the gorenstein ring the fact that we can in theory identify the betti numbers of with certain faces of an suggests that it may be possible to collapse away faces of an to obtain a minimal resolution of jn employing an algebraic version of morse theory due to batzies and welker indeed certain geometric properties of any subdivision of an along with the almost linearity of jn imply that certain faces must be matched away for d we are able to write down a morse matching involving the edges and of an such that the number of unmatched critical cells is precisely corresponding to the first syzygy module of see proposition this leads to minimal resolutions of jn for the cases n in addition our identification of both the betti numbers of and the faces of an with standard young tableaux leads us to consider a partial matching on the set of associahedron tableaux such that the unmatched elements correspond to the betti numbers the hope would be to import a poset structure from the face poset of an to extend this matching to a morse matching the trouble with this last step is that the stanley bijection does not give us an explicit labeling of the faces of an by standard young tableaux there are choices involved and the bijection itself is recursively defined however we can define a very simple partial matching on the set of standard young tableaux of shape d d such that the unmatched elements can naturally be thought of as standard young tableaux of shape d by deleting the largest entries see proposition this suggests a poset structure on the set of standard young tableax that extends this covering relation the rest of the paper is organized as follows we begin in section with some basics regarding the commutative algebra involved in our study in section we discuss associahedra and their role in resolutions of jn we turn to standard young tableaux in section and here establish our results regarding the betti numbers of in section we discuss our applications of discrete morse theory and related matchings of stand young tableaux we end with some open questions some commutative algebra as above we let jn denote the ideal of the by definition the ideal in r k xn generated by degree monomials corresponding to the diagonals we are interested in combinatorial interpretations of certain homological invariants of jn and in particular the combinatorial structure of its minimal free resolution recall that a free resolution of an m is an exact sequence of m fp where each fd r j is free and the differential maps are graded the resolution is minimal if each of the j are minimum among all resolutions in which case the j are 
called the graded betti numbers of m also in this case the number p length of the minimal resolution is called the projective dimension of m our main tool in calculating betti numbers will be hochster s formula see for example which gives a formula for the betti numbers of the ring associated to a simplicial complex theorem hochster s formula for a simplicial complex on vertex set n we let denote its ring then for d the betti numbers j of are given by x j dimk w k n w j here w denotes the simplicial complex induced on the vertex set w a cellular resolution of m is a x with a monomial labeling of its faces such that the algebraic chain complex computing the cellular homology of x supports a resolution of m we refer to section for details and more precise definitions we next collect some easy observations regarding the betti numbers of jn since jn is the ideal of a triangulated sphere we see that is gorenstein and has krull dimension the formula then implies that the projective dimension of is n which says that j whenever d n an easy application of hochster s formula also implies that a minimal resolution of jn is linear until the last nonzero term by which we mean if d n then j for all j d also we have n and j for j in this sense the ideals jn have an almost linear resolution as mentioned in the introduction convention since for any d we have j for at most one value of j we we will without loss of generality sometimes drop the j and use j to denote the betti numbers of jn asssociahedra for each n we let an denote the dual associahedron the n simplicial complex whose vertices are given by diagonals of a labeled regular with facets given by triangulations collections of diagonals that do not intersect in their interior it is well known that an is homeomorphic to a sphere and in fact is polytopal and several embeddings most often of the dual simple polytope are described throughout the literature see for a good account of the history from here on we will use an to denote the n simplicial polytope including the interior we wish to describe a monomial labeling of the faces of an recall that each vertex of an corresponds to some diagonal i j of an so we simply label that vertex with the monomial xi xj we label the faces of an with the least common multiple of the vertices contained in that face we wish to show that with this simple labeling the associahedron an supports a resolution of jn figure the complex with its monomial labeling partially indicated let us first clarify our terms to simplify notation we will associate to any monomial xinn r the vector in nn and will freely move between notations we define a labeled polyhedral complex to be a polyhedral complex x together with an assignment af nn to each face f x such that for all i n we have af i max ag i g f if x is a labeled polyhedral complex we can consider the ideal m mx k xn generated by the monomials corresponding to its vertices as usual we identify an element nn as the exponent vector of a monomial the topological space underlying x with a chosen orientation has an associated chain complex fx of spaces that computes cellular homology since x has monomial labels on each of its cells we can homogenize the differentials with respect to this basis and in this way fx becomes a complex of free modules over the polynomial ring r k xn we say that the polyhedral complex x supports a resolution of the ideal m if fx is in fact a graded free resolution of m for more details and examples of cellular resolutions we refer to for any nn we let 
denote the subcomplex of x consisting of faces f for which af componentwise we then have the following criteria also from lemma let x be a labeled polyhedral complex and let m mx k xn denote the associated monomial ideal generated by the vertices then x supports a cellular resolution of m if and only if the complex is or empty for all nn futhermore the resolution is minimal if and only if af ag for any pair of faces f with this criteria in place we can establish the following theorem for each n the associahedron an with the monomial labeling described above supports a cellular resolution of the edge ideal jn proof let an denote the simplicial associahedron with this monomial labeling by construction the vertices of an correspond to the generators of jn to show that an supports a resolution of jn according to lemma it is enough to show that for any nn we have that the subcomplex an is let nn and let an denote the subcomplex of an consisting of all faces with a monomial labeling that divides as usual thinking of as the exponent vector of the monomial in particular a face f an is an element of an if and only if for every diagonal xi xj f we have and we claim that an is contractible and hence note that since jn is squarefree we may assume has entries and hence we can identify with a subset of n also if for all i so that n then we have an an which is a convex polytope and hence contractible if has fewer than nonzero entries then an is empty without loss of generality we may then assume that and let j be the largest integer such that j and now since j n and j we see that the diagonal j is a vertex of the simplicial complex an in fact j is an element of every facet of an since no other diagonal picked up by the elements of intersects j we conclude that an is a cone and hence contractible for n one can check that this resolution is in fact minimal but for n this is no longer the case in particular for n we have faces f g in an with the same monomial label standard young tableaux it turns out that the number of faces of the associahedron an the entries of the face vector of an are given by the number of standard young tableau syt of certain shapes recall that if is a partition of n a standard young tableaux of shape is a filling of the young diagram of with distinct entries n such that rows and columns are increasing see example for d n we let f n d denote the number of ways to choose d diagonals in a convex such that no two diagonals intersect in their interior we see that f n d is precisely the number of d faces of the polytope an a result attributed to cayley according to asserts that f n d d using the hook length formula one can see that this number is also the number of standard young tableaux of shape d d where as usual z denotes a sequence of n d entries with value this fact was apparently first observed by o hara and zelevinsky unpublished and a simple bijection was given by stanley example if we take d n we obtain n n f n n n n n the n nd catalan number example if n and d the f shape are given by standard young tableaux of these correspond to the diagonals of a it turns out the betti numbers of the rings are also counted by the number of standard young tableaux of certain related sub shapes to establish this result we will employ hochster s formula theorem from above recall that the ring can be recovered as the ring of the thought of as a simplicial complex note that when n the only nonzero contribution to equation comes from reduced homology the number of connected components of the 
induced complex on w minus one n for n let j denote the betti numbers of the ring equation implies that n for d we have j unless d n and j n or d n and j d n another application of equation gives cases we have the following result n n n and n for the remaining theorem for all n and d n the betti numbers of are given by n the number of standard young tableau of shape d proof we will establish the equality in equation by showing that for n and d n both sides of the equation satisfy the recursion f n d f n d f n d for the betti numbers the left hand side we use hochster s formula for each d the n via equation involves subcomplexes given by subsets w of n computation of of size d first suppose we have chosen w n with n w then we recover the contribution to equation from the homology of induced subsets of size d in the cycle on the vertices n namely d however if and n are both not in w then we get an additional contribution given by the isolated point there are such instances d next suppose n w then again we recover contribution from the homology of induced subsets of size d in the cycle n this quantity is given by in this case we have an additional contribution coming from the subsets w including both and n since as subsets of the these will be disconnected there are of these putting this together we have n d d d d recovering equation we next consider the right hand side of equation namely the number of standard young tableaux of shape d recall that the fillings involve picking entries one each from the set n if n is an entry in the first row necessarily in the last column then we recover all such fillings from standard young tableaux of shape d d if n is the only entry in the last row then we recover all such fillings from standard young tableaux of shape d d with these counts we miss the standard tableaux with n as the entry in the second row necessarily in the second column in this case we must have as the entry in the first row first column but are free to choose any increasing sequence of length d to fill the remaining entries of the first row with the rest of the entries determined there are d such choices adding these three counts gives us the desired recursion from equation we next check the initial conditions for n hochster s formula again gives us one can check see example that there are precisely standard young tableau of shape and of its conjugate shape n for arbitrary n and d we have n given by the number of generators of jn on the other hand in a standard young tableau of shape we can have any pair i j with i j occupy the second row except n or hence the number of such fillings is also given by similarly for arbitrary n and d hochster s formula implies that the betti numbers n are given by all choices of n vertices of the corresponding to complements of diagonals since these remaining pair of vertices with be disconnected hence again n n n this also follows from the fact that the ring is gorenstein and therefore has a palindromic sequence of betti numbers in terms of tableaux we see that the shape n is conjugate to and hence both shapes have the same number of fillings remark an application of the hook length formula gives an explicit value for the betti numbers of n d n d n after a version this paper was posted on the arxiv it was pointed out to the author that this formula had previously been established in with a combinatorial proof given in remark as we have seen the rings are gorenstein and hence the betti numbers of are palindromic in the sense that n the realization of the 
betti numbers of in terms of standard young tableaux theorem provides a nice combinatorial interpretation of this property the partition d is conjugate to the partition and hence they have the same number of fillings example for n the resolution of can be represented as r r in each homological degree we have a basis for the free module given by all standard young tableaux of the indicated shape note that is conjugate to discrete morse theory and matchings as we have seen the associahedron an with the monomial labeling described above supports a resolution of the ideal jn we have also seen that the resolution is not minimal and in particular the labeling of an produces distinct faces f g with the same monomial labeling in fact as n increases the resolution becomes further and further from minimal in n the sense that the number of facets of an a catalan number on the order of dominates the dimension of the second highest syzygy module of which is on the order of example face numbers versus betti numbers for n are indicated below here f n j refers to the number of faces in the associahedron an f d f d f d f d morse matchings and first syzygies batzies and welker and others see and have developed a theory of algebraic morse theory that allows one to match faces of a labeled complex in order to produce resolutions that become closer to minimal in the usual combinatorial description of this theory one must match elements in the face poset of the labeled complex that have the same monomial labeling the matching must also satisfy a certain acyclic condition described below we refer to for further details a closer analysis of our monomial labeling of an reveals certain faces that must be matched away in any minimal resolution in the sense that the associated monomial has the wrong degree in particular since we know that has an almost linear resolution as described above it must be the case that in any minimal cellular resolution x each face of x is labeled by a monomial of degree j for j n our labeling of an has the property that the monomial m associated to a face f is given by the product of the variables involved in the choice of diagonals and in particular a properly labeled face corresponds to a subdivision of cn with j diagonals involving precisely j vertices this motivates the following definition suppose s is a subdivision of the cn by which we mean a collection of d diagonals we will say that s is proper if the set of endpoints of the diagonals has exactly d elements as vertices of cn we will say that s is superproper if uses more than d vertices and subproper if it uses less figure for n the three superproper subdivisions with d and the two subproper subdivisions with d all other subdivisions of the are proper in fact we can explicitly describe a partial morse matching on the face poset of an that is perfect for rank d a superproper subdivision of an with d is simply a pair of disjoint diagonals say e ij k with i j k and i in the face poset of an we match this with the f where ij k j if j k f ij k i otherwise a subproper subdivision with d is an inscribed triangle say with diagonals ij ik jk i j we match this face with the proper ij jk recall that the hasse diagram of the face poset of an is a graph with vertices given by all faces of an and with edges given by all cover relations x y it is easy to that our association is a matching of the hasse diagram of the face poset of an and it is clearly algebraic in the sense that matched faces have the same monomial labeling as is typical we 
think of the hasse diagram as a directed graph with the orientation on a matched edge pointing up increasing dimension and with all unmatched edges pointing down the collection of faces not involved in the matching are called the critical cells they form a subposet of the original poset the main theorem of algebraic discrete morse theory says that if we have an acyclic algebraic matching on the hasse diagram of a cellular resolution then the critical cells form a that also supports a cellular resolution in this way one can obtains a resolution that is closer to being minimal in our case we have the following result proposition for all n the matching on the monomial labeled face poset of an described above is acyclic furthermore the number of unmatched critical edges is given by proof we first make the simple observation that if f is any of an corresponding to a subproper subdivision in other words an inscribed triangle there for any e with e f we must have that e is a path of length a proper subdivision with d similarly if e is a path of length and e f is an upward oriented edge then it must be the case that f is an inscribed triangle with the same vertex set as this implies that there can not be any cycles in the oriented face poset involving proper subdivisions with d paths of length next suppose e f is an upward oriented edge in the face poset of an where e consists of two disjoint diagonals a superproper subdivision with d then according to our matching it must be the case that f is a path of length to form a cycle in the face poset there must be some downward edge from f to e with e f but then according to our matching it must be the case that e is a path of length hence our observation from the previous paragraph implies that no cycles exist we conclude that the matching is acyclic we next count the unmatched edges first observe that the number of proper subdivisions of an with diagonals is given by n to see this note that the diagonals involved in such a subdivision must form a path of length once we designate the middle vertex in this path of which there are n choices we have choices for the remaining two vertices next we claim that the the number of subproper subdivisions of an with d diagonals necessarily forming an inscribed triangle is given by n to see this we first count inscribed triangles with ordered vertex set we are free to choose the first vertex from among the n nodes of the cycle for the second vertex we have two cases if we choose from among the two vertices that are distance from we are left with n choices for if we choose from among the vertices more than distance from of which there are n choices we are then left with n choices for in total there are n n n n n n n inscribed triangles with the ordered vertex set dividing out by to forget the ordering gives us the desired count as described above we match each of the d superproper subdivisions with a d proper subdivision and we match each of the d subproper subdivisions with a d proper subdivision hence after matching the number of critical edges is given by n n n n n n n n which is precisely see remark this completes the proof hence our simple matching leaves precisely the number of critical that we require the rank of the first free module in the resulting cellular resolution will be equal to the rank of the first syzygy module of example for n this matching in fact leads to a minimal resolution of in this case we have three superproper subdivisions with d namely and and two subproper subdivisions with d namely 
and figure the monomial labeled with five pairs of faces matched the shaded faces are the improper subdivisions the resulting complex on the right supports a minimal resolution of we remark that the procedure described above can be extended to the case n we leave the details to the reader but point out that in this case we have superproper subdivisions with d pairs of disjoint diagonals corresponding to edges of that we match with subproper subdivisions with d inscribed triangles in a corresponding to seven that each get matched with an edge superproper subdivisions with d forests consisting of three edges and two components corresponding to seven that get matched to a subproper subdivisions with d inscribed triangles with a pendant edge corresponding to fourteen that get matched down to a the resulting has edges faces and faces as desired unfortunately we do not know how to extend this matching procedure in general see the next section for some comments regarding this an involution of the associahedron tableaux recall that the faces of the associahedron an are counted by standard young tableaux of certain shapes while the betti numbers of jn are counted by standard young tableaux of certain subshapes again motivated by discrete morse theory this leads to ask whether we can find a matching on the set of associahedron tableaux such that the unmatched elements correspond to the betti numbers of jn this matching should have the property that two matched tableaux differ in cardinality by one let us emphasize that since we do not have a poset structure on these elements we are not at this pointing searching for a m orse matching let us first fix some notation definition for fixed n and d n we call the collection of standard young tableaux of shape d d the associahedron tableaux denoted by an and the standard s young tableaux the syzygy tableaux denoted by sn s of shape d let a an and s sn note that an element of an has n d boxes whereas an element of sn has n boxes if x an is an associahedron tableux with largest entries in the second row in the positions n n n d then it naturally becomes a syzygy tableau by just removing those boxes in particular we say that these particular associahedron tableau restrict to syzygy tableaux and in this way we have a natural inclusion sn an example the associahedron tableau on the left restricts to a syzygy tableau whereas the associahedron tableau on the right does not here n and d we next describe an involution on the set a such that the fixed elements are precisely the elements that restrict to if x is a standard young tableau we use to denote the number of boxes in the underlying partition proposition there exists an involution on the set a such that the fixed point set of is precisely the set of tableaux in a that restrict to furthermore if x a such that x x we have x proof suppose x an is an associahedron tableau if x restricts to a syzygy tableau we set x x otherwise some element of n n n d is not in the second row of x let i be the largest element with this property then i must be the last element of the first row or else the bottom element in the first column in the latter case i is the bottom most element of first column we bring that element i to the first row and add the element n d to the end of the second row this defines x in the former case i is the last element of the first row we obtain x by bringing that element down to the bottom of the first column and deleting the last element of the second row which must be n d it is clear that x x 
example an example of the involution matching an associahedron tableau of shape with one of shape is given by the following further questions we end with a number of questions that arise from our study as we have seen in section the number f n d of dissections of an using d diagonals is well understood and is given by the number of standard young tableaux of shape d d in the context of enumerating the betti numbers of the ideal jn we were interested in subdivisions that involved a fixed number of vertices define f n d j to be the number of ways to choose d diagonals in a convex such that the set of endpoints consists of precisely j vertices of the question is there a nice formula for f n d j can it be related to the standard young tableaux of shape d d we note that if we take d n then varying j gives a refinement of the catalan numbers which as far as we know has not appeared elsewhere the first few refinements are a related question would be to consider those subdivisions for which the collection of diagonals forms a connected tree since this is likely the more relevant property in the context of syzygies for n it so happens that the proper subdivisions correspond to those collections of diagonals that form a tree however for n there exist proper subdivisions that are not trees for example if d we can take diagonals to form a triangle with vertices along with one disconnected diagonal in total using vertices of the question how many dissections of an with d diagonals have the property that the set of diagonals forms a tree in our quest for a morse matching on the monomial labeled face poset of the associahedron an we were unable to employ stanley s bijection between faces of an and standard young tableaux as mentioned above the difficulty arises as the bijection given in is recursively defined and involves certain choices however the fact that the face poset of an is labeled by standard young tableaux suggests that there might be a meaningful poset structure on the set of all standard young tableaux or at least the set of associahedron tableaux the hope would be that this poset structure extends the partial order given by the involution on a described in the proof of proposition hence the poset should be graded by the number of boxes in the underlying partition but will not restrict to young s lattice if one forgets the fillings we refer to example for a example of a cover relation between two standard young tableaux such that the underlying partitions are not related in young s lattice question does there exist a meaningful poset structure on the set of standard young tableaux consistent with the conditions described above finally we see in figure that a minimal resolution of is supported on a polytope as we mentioned the construction there was a bit ad hoc but it does lead us to following question does the ideal jn have a minimal cellular resolution supported on a necessarily n polytope work in this direction along with some further generalizations is currently being pursued by and linusson acknowledgements we thank ken baker for his assistance with the figures and alex jakob jonsson and michelle wachs for helpful conversations alex and i first realized the potential connection to standard young tableaux after inputting the betti numbers of jn into oeis some years ago thanks also to the anonymous referee for a careful reading references batzies welker discrete morse theory for cellular resolutions reine angew math bayer sturmfels cellular resolutions of monomial modules reine angew math 
biermann cellular structure on the minimal resolution of the edge ideal of the complement of the submitted braun browder and klee cellular resolutions of ideals defined by nondegenerate simplicial homomorphisms israel j math bruns hibi partially ordered sets with pure resolutions european combin ceballos santos ziegler many realizations of the associahedron combinatorica choi kim a combinatorial proof of a formula for betti numbers of a stacked polytope electron j research paper dochtermann cellular resolutions of cointerval ideals math z no dochtermann mohammadi cellular resolutions from mapping cones combin theory ser a linusson personal communication francisco mermin schweig catalan numbers binary trees and pointed pseudotriangulations european combin goodarzi cellular structure for the resolution algebr comb mermin the resolution is cellular commut algebra no miller sturmfels combinatorial commutative algebra graduate texts in mathematics vol springer new york nagel reiner betti numbers of monomial ideals and shifted skew shapes electron combin no special volume in honor of anders research paper pp the encyclopedia of integer sequences published electronically at http sinefakopoulos on borel fixed ideals generated in one degree algebra no morse theory from an algebraic viewpoint trans amer math soc stanley polygon dissections and standard young tableaux combin theory ser a sturgeon personal communication
| 0 |
apr dsp implementation of a direct adaptive feedfoward control algorithm for rejecting repeatable runout in hard disk drives jinwen pan prateek shah roberto horowitz department of mechanical engineering department of mechanical engineering department of mechanical engineering university of california berkeley university of california berkeley university of california berkeley berkeley california berkeley california berkeley california email jinwen email prateekshah email horowitz abstract a direct adaptive feedforward control method for tracking repeatable runout rro in bit patterned media recording bpmr hard disk drives hdd is proposed the technique estimates the system parameters and the residual rro simultaneously and constructs a feedforward signal based on a known regressor an improved version of the proposed algorithm to avoid matrix inversion and reduce computation complexity is given results for both matlab simulation and digital signal processor dsp implementation are provided to verify the effectiveness of the proposed algorithm are briefly listed here rro profile is unknown rro frequency spectrum can spread beyond the bandwidth of servo system therefore it will be amplified by the feedback controller rro spectrum contains many harmonics of the spindle frequency harmonics that should be attenuated which increases the computational burden in the controller rro profile is changing from track to track it is varying on the radial direction hdd servo dynamics changes from drive to drive and by temperature the remainder of this paper is organized as follows section presents our direct adaptive feedforward control algorithm and section shows the real time dsp implementation results introduction data bits are ideally written on concentric circular tracks in conventional hdds that use magnetic disks with continuous media this process is different in bit patterned media recording since data should be written on tracks with predetermined shapes which are created by lithography on the disk as shown in fig the trajectories that are required to be followed by the servo system in bpmr are servo tracks which are characterized by the servo sectors written on the disk deviation of a servo track from an ideal circular shape is called rro therefore the servo controller in bpmr has to follow the rro which is unknown in the time of design and as a result the servo control methodologies used for conventional drives can not be applied to bpmr directly in our prior works we proposed indirect adaptive control methods for mechatronic devices to compensate for unknown disturbances such as rro and dynamics mismatches in this paper we propose a direct adaptive control method to address challenges specific to bpmr which servo tracks conventional media data tracks media figure servo track dotted blue and data track solid red in conventional and media control design the architecture that is considered for the servo control system is shown in fig an feedforward controller is designed for hdd without loss of generality we chose vcm as an example here r is the transfer function from vcm input to pes submitted to asme conference on information storage and processing systems copyright c by asme uf e r ua ua ue r r with the estiwhere k b k q mate of nb since a q r k k with an unknown vector and k a nonzero vector from eq we have e k k k k where k can be formed based on the magnitude and phase of i nr with nr the number of frequencies to cancel the updating law for k is figure control architecture ue is an exogenous 
excitation signal ua is the feedforward signal r is the unknown rro with known frequencies and e is the pes we aim to design an adaptive controller that generates ua in order to fade the frequency contents of the error signal e at selective frequencies which correspond to the harmonics of spindle frequency and its harmonics in our case k k k k the inverse of k involves inverting the estimated magnitudes that might be very small in transition especially when is initialized by zeros in that case any small fluctuation of can cause large transient error a smoothing on magnitude and phase of has to be designed to relax transient errors the basic direct adaptive feedforward control algorithm is summarized in table basic direct adaptive feedforward control from fig the pes can be written as e k r ue k ua k r k where r b and can be expanded as t e k e k b ue k k where a b bnb and the residual error t k b ua k a r k where k is the regressor for rro with known frequencies in regressor form pes is initialize the regressors k k and k apply ue k and ua k to vcm subtract k from pes to determine the estimate error k update the parameters k using eq update the matrix k from k and compute its inverse update k using eq and compute ua k from eq table basic direct adaptive feedforward control t e k k k k and its estimation k k k improved direct adaptive feedforward control as mentioned earlier computational complexity of inverting k grows as the number of frequencies increases which is a crucial burden in dsp implementation in this section we will provide an improved version to avoid matrix inversion by applying swapping lemma to we have t k k k k where ana k bnb and k are regressors for e k and ue k and are the estimates of and and the updating law is k k k k k k t k k t k t where k e k k k k k t k t k t and k k is a decreasing gain k k r eq indicates that both of the system and the residual rro are estimated simultaneously the feedforward control signal is constructed using the same regressor as the rro yielding t k k k k a r k therefore the updating law for k is t k k k k k note that in no matrix inverse is required the improved direct adaptive feedforward control algorithm is summarized in table where the first three steps are the same as those in table to be noted here the proposed direct adaptive feedforward control algorithm and its improved version can be directly extended to the actuator which is ma responsible for high frequency rro ua k k k in eq using k and instead of and b approximately we have t k k a r k k k copyright c by asme acknowledgment am plitude spectrum nm adaptive controller off adaptive controller on financial support for this study was provided by a grant from the advanced storage technology consortium astc references shahsavari keikha zhang and horowitz adaptive repetitive control design with online secondary path modeling and application to media recording magnetics ieee transactions on pp keikha shahsavari and horowitz a probabilistic approach to robust controller design for a servo system with irregular sampling in control and automation icca ieee international conference on ieee pp kempf messner tomizuka and horowitz comparison of four repetitive control algorithms ieee control systems magazine pp shahsavari keikha zhang and horowitz repeatable runout following in bit patterned media recording in asme conference on information storage and processing systems american society of mechanical engineers pp shahsavari keikha zhang and horowitz adaptive repetitive control using a 
modified filteredx lms algorithm in asme dynamic systems and control conference american society of mechanical engineers pp shahsavari pan and horowitz adaptive rejection of periodic disturbances acting on linear systems with unknown dynamics arxiv preprint zhang keikha shahsavari and horowitz adaptive mismatch compensation for vibratory gyroscopes in inertial sensors and systems isiss international symposium on ieee pp zhang keikha shahsavari and horowitz adaptive mismatch compensation for rate integrating vibratory gyroscopes with improved convergence rate in asme dynamic systems and control conference american society of mechanical engineers pp bagherieh shahsavari and horowitz online identification of system uncertainties using coprime factorizations with application to hard disk drives in asme dynamic systems and control conference american society of mechanical engineers pp ha rm o nic vcm inp v figure spectrum comparison ma inp v step step figure feedforward signal for vcm and ma construct the matrix k k and compute residual t k k error r update k using eq and compute ua k from eq table improved dreict adaptive feedforward control experiment results and conclusion we implement both of the two algorithms in matlab simulation and the real time experiment setup on hdd in simulation r and rro together with nrro are modeled from real system measurement data since the simulation and experiment results were very close only experiment results using the improved version are shown in fig where rro is reduced to nrro level in simulation as well as in experiments vcm was responsible for the low frequency rro harmonics up to while ma was responsible for the high frequency rro harmonics from to as a result the feedforward control signal in one disk revolution shown in fig for the vcm consists of low frequency contents while for the ma it has high frequency components copyright c by asme
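As a rough illustration of the adaptive feedforward scheme evaluated above, the sketch below adapts cosine/sine coefficients at the known RRO harmonic frequencies with a decreasing gain, so no matrix inversion is needed. It is a minimal sketch, not the paper's DSP implementation: it ignores the secondary-path phase and the simultaneous plant-parameter estimation, and all names and numbers (harmonics, fs, mu0, the 1 kHz cutoff) are illustrative. The split_harmonics helper only mirrors the dual-stage experiment, in which the VCM handles the low-frequency harmonics and the micro-actuator the high-frequency ones.

```python
import numpy as np

# Minimal sketch (not the paper's DSP code) of feedforward RRO cancellation at
# known harmonic frequencies, using a decreasing-gain gradient update so that
# no matrix inversion is required. The plant phase is ignored here.

def regressor(k, harmonics, fs):
    """phi(k): a cos/sin pair at each known RRO harmonic frequency."""
    t = k / fs
    return np.array([f(2 * np.pi * h * t) for h in harmonics
                     for f in (np.cos, np.sin)])

def adaptive_feedforward(pes, harmonics, fs, mu0=0.05):
    """Adapt theta so that ua(k) = phi(k)^T theta cancels the RRO seen in the PES."""
    theta = np.zeros(2 * len(harmonics))
    ua = np.zeros(len(pes))
    for k, e in enumerate(pes):
        phi = regressor(k, harmonics, fs)
        ua[k] = phi @ theta                 # feedforward injected at the actuator input
        mu = mu0 / (1.0 + 1e-4 * k)         # decreasing adaptation gain
        theta += mu * e * phi               # gradient-type update, no matrix inverse
    return ua, theta

def split_harmonics(harmonics, cutoff_hz=1_000.0):
    """Hypothetical dual-stage split: VCM takes harmonics up to cutoff_hz, MA the rest."""
    vcm = [h for h in harmonics if h <= cutoff_hz]
    ma = [h for h in harmonics if h > cutoff_hz]
    return vcm, ma

# Illustrative usage (spindle frequency and sample rate are made up):
# harmonics = [120.0 * i for i in range(1, 31)]
# vcm_h, ma_h = split_harmonics(harmonics)
# ua_vcm, _ = adaptive_feedforward(measured_pes, vcm_h, fs=52_000)
# ua_ma,  _ = adaptive_feedforward(measured_pes, ma_h, fs=52_000)
```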
| 3 |
c this is the author s version of the work it is posted here by permission of for your personal use not for redistribution the final publication is published in the proceedings of the conference on principles of security and trust post and is available at jan information flow control in webkit s javascript bytecode abhishek vineet deepak and christian saarland university germany germany abstract websites today routinely combine javascript from multiple sources both trusted and untrusted hence javascript security is of paramount importance a specific interesting problem is information flow control ifc for javascript in this paper we develop formalize and implement a dynamic ifc mechanism for the javascript engine of a production web browser specifically safari s webkit engine our ifc mechanism works at the level of javascript bytecode and hence leverages years of industrial effort on optimizing both the source to bytecode compiler and the bytecode interpreter we track both explicit and implicit flows and observe only moderate overhead working with bytecode results in new challenges including the extensive use of unstructured control flow in bytecode which complicates lowering of program context taints unstructured exceptions which complicate the matter further and the need to make ifc analysis permissive we explain how we address these challenges formally model the javascript bytecode semantics and our instrumentation prove the standard property of terminationinsensitive and present experimental results on an optimized prototype keywords dynamic information flow control javascript bytecode taint tracking control flow graphs immediate analysis introduction javascript js is an indispensable part of the modern web more than of all websites use js for computation in web applications aggregator websites news portals integrate content from various mutually untrusted sources online mailboxes display advertisements all these components are glued together with js the dynamic nature of js permits easy inclusion of external libraries and code and encourages a variety of code injection attacks which may lead to integrity violations confidentiality violations like information stealing are possible wherever code is loaded directly into another web page loading code into separate iframes protects the main frame by the policy but hinders interaction that mashup pages crucially rely on and does not guarantee absence of attacks information flow control ifc is an elegant solution for such problems it ensures security even in the presence of untrusted and buggy code ifc for js differs from traditional ifc as js is extremely dynamic which makes sound static analysis difficult therefore research on ifc for js has focused on dynamic techniques these techniques may be grouped into four broad categories first one may build an custom interpreter for js source this turns out to be extremely slow and requires additional code annotations to handle control flow like exceptions break and continue second we could use a technique wherein an js interpreter is wrapped in a monitor this is nontrivial but doable with only moderate overhead and has been implemented in secure sme however because sme is a technique it is not clear how it can be generalized beyond to handle declassification third some variant of inline reference monitoring irm might inline taint tracking with the client code existing security systems for js with irm require subsetting the language in order to prevent dynamic features that can invalidate the 
monitoring process finally it is possible to instrument the runtime system of an existing js engine either an interpreter or a compiler jit to monitor the program while this requires adapting the respective runtime it incurs only moderate overhead because it retains other optimizations within the runtime and is resilient to subversion attacks in this work we opt for the last approach we instrument a production js engine to track taints dynamically and enforce noninterference specifically we instrument the bytecode interpreter in webkit the js engine used in safari and other browsers the major benefit of working in the bytecode interpreter as opposed to source is that we retain the benefits of these years of engineering efforts in optimizing the production interpreter and the source to bytecode compiler we describe the key challenges that arise in dynamic ifc for js bytecode as opposed to js source present our formal model of the bytecode the webkit js interpreter and our instrumentation present our correctness theorem and list experimental results from a preliminary evaluation with an optimized prototype running in safari in doing so our work significantly advances the in ifc for js our main contributions are we formally model webkit s bytecode syntax and semantics our instrumentation for ifc analysis and prove as far as we are aware this is the first formal model of bytecode of an js engine this is a nontrivial task because webkit s bytecode language is large bytecodes and we built the model through a careful and thorough understanding of approximately lines of actual interpreter unlike some prior work we are not interested in modeling semantics of js specified by the ecmascript standard our goal is to remain faithful to the production bytecode interpreter our formalization is based on webkit build which was the last build when we started our work using ideas from prior work we use static analysis of immediate to restrict overtainting even with bytecode s pervasive unstructured conditional jumps we extend the prior work to deal with exceptions our technique covers all unstructured control flow in js including break and continue without requiring additional code annotations of prior work and improves permissiveness to make ifc execution more permissive we propose and implement a variant of the check we implement our complete ifc mechanism in webkit and observe moderate overheads limitations we list some limitations of our work to clarify its scope although our instrumentation covers all webkit bytecodes we have not yet instrumented or modeled native js methods including those that manipulate the document object model dom this is ongoing work beyond the scope of this paper like some prior work our sequential theorem covers only single invocations of the js interpreter in reality js is reactive the interpreter is invoked every time an event like a mouse click with a handler occurs and these invocations share state through the dom we expect that generalizing to reactive will not require any instrumentation beyond what we already plan to do for the dom finally we do not handle as it is considerably more engineering effort jit can be handled by inlining our ifc mechanism through a bytecode transformation due to lack of space several proofs and details of the model have been omitted from this paper they can be found in the technical appendix section related work three classes of research are closely related to our work formalization of js semantics ifc for dynamic languages and formal models of 
web browsers maffeis et al present a formal semantics for the entire specification the foundation for js guha et al present the semantics of a core language which models the essence of js and argue that all of js can be translated to that core extends to include accessors and eval our work goes one step further and formalizes the core language of a production js engine webkit which is generated by the compiler included in webkit recent work by bodin et al presents a coq formalization of ecmascript edition along with an extracted executable interpreter for it this is a formalization of the english ecmascript specification whereas we formalize the js bytecode implemented in a real web browser information flow control is an active area of security research with the widespread use of js research in dynamic techniques for ifc has regained momentum nonetheless static analyses are not completely futile guarnieri et al present a static abstract interpretation for tracking taints in js however the omnipresent eval construct is not supported and this approach does not take implicit flows into account chugh et al propose a staged information flow approach for js they perform static policy checks on statically available code and generate residual that must be applied to dynamically loaded code this approach is limited to certain js constructs excluding dynamic features like dynamic field access or the with construct austin and flanagan propose purely dynamic ifc for languages like js they use the nsu check to handle implicit flows their strategy is more permissive than nsu but retains we build on the strategy just et al present dynamic ifc for js bytecode with static analysis to determine implicit flows precisely even in the presence of control flow like break and continue again nsu is leveraged to prevent implicit flows our overall ideas for dealing with unstructured control flow are based on this work in contrast to this paper there was no formalization of the bytecodes no proof of correctness and implicit flow due to exceptions was ignored hedin and sabelfeld propose a dynamic ifc approach for a language which models the core features of js but they ignore js s constructs for control flow like break and continue their approach leverages a dynamic type system for js source to improve permissiveness their subsequent work uses testing it detects security violations due to branches that have not been executed and injects annotations to prevent these in subsequent runs a further extension introduces annotations to deal with control flow our approach relies on analyzing cfgs and does not require annotations secure sme is another approach to enforcing noninterference at runtime conceptually one executes the same code once for each security level like low and high with the following constraints high inputs are replaced by default values for the low execution and low outputs are permitted only in the low execution this modification of the semantics forces even unsafe scripts to adhere to flowfox demonstrates sme in the context of web browsers executing a script multiple times can be prohibitive for a security lattice with multiple levels further all writes to the dom are considered publicly visible output while tainting allows persisting a security label on dom elements it is also unclear how declassification may be integrated into sme austin and flanagan introduce a notion of faceted values to simulate multiple executions in one run they keep n values for every variable corresponding to n security levels 
all the values are used for computation as the program proceeds but the mechanism enforces by restricting the leak of high values to low observers browsers work reactively input is fed to an event queue that is processed over time input to one event can produce output that influences the input to a subsequent event bohannon et al present a formalization of a reactive system and compare several definitions of reactive bielova et al extend reactive to a browser model based on sme this is currently the only approach that supports reactive for js we will extend our work to the reactive setting as the next step finally featherweight firefox presents a formal model of a browser based on a reactive model that resembles that of bohannon et al it instantiates the consumer and producer states in the model with actual browser objects like window page cookie store mode connection etc our current work entirely focuses on the formalization of the js engine and taint tracking to monitor information leaks we believe these two approaches complement each other and plan to integrate such a model into our future holistic enforcement mechanism spanning js the dom and other browser components background we provide a brief overview of basic concepts in dynamic enforcement of information flow control ifc in dynamic ifc a language runtime is instrumented to carry a security label or taint with every value the taint is an element of a lattice and is an upper bound on the security levels of all entities that have influenced the computation that led to the value for simplicity of exposition we use throughout this paper a lattice l h l low or public h high or secret partially leaked secret with l v h v for now readers may ignore our instrumentation works over a more general powerset lattice whose individual elements are web domains we write r for a value r tagged with label information flows can be categorized as explicit and implicit explicit flows arise as a result of variables being assigned to others or through primitive operations for instance the statement x y z causes an explicit flow from values in both z and y to x explicit flows are handled in the runtime by updating the label of the computed value x in our example with the least upper bound of the labels of the operands in the computation y z in our example implicit flows arise from control dependencies for example in the program l if h l there is an implicit flow from h to the final value of l that value is iff h is to handle implicit flows dynamic ifc systems maintain the pc label label which is an upper bound on the labels of values that have influenced the control flow thus far in our last example if the value in h has label h then pc will be h within the if branch after l is executed the final value of l inherits not only the label of which is l but also of the pc hence that label is also this alone does not prevent information leaks when h l ends with when h l ends with since and can be distinguished by a public attacker this program leaks the value of h despite correct propagation of implicit taints formally the instrumented semantics so far fail the standard property of this problem can be resolved through the nsu check which prohibits assignment to a variable when pc is high this recovers if the adversary can not observe program termination in our example when h the program terminates with l when h the instruction l gets stuck due to nsu these two outcomes are deemed observationally equivalent for the low adversary who can not determine whether or 
not the program has terminated in the second case hence the program is deemed secure roughly a program is if any two terminating runs of the program starting from heaps heaps that look equivalent to the adversary end in heaps like all sound dynamic ifc approaches our instrumentation renders any js program at the cost of modifying semantics of programs that leak information design challenges insights and solutions we implement dynamic ifc for js in the widely used webkit engine by instrumenting webkit s bytecode interpreter in webkit bytecode is generated by a compiler our goal is to not modify the compiler but we are forced to make slight changes to it to make it compliant with our instrumentation the modification is explained in section nonetheless almost all our work is limited to the bytecode interpreter webkit s bytecode interpreter is a rather standard stack machine with several additional data structures for features like scope chains variable environments prototype chains and function objects local variables are held in registers on the call stack our instrumentation adds a label to all data structures including registers object properties and scope chain pointers adds code to propagate explicit and implicit taints and implements a more permissive variant of the nsu check our label is a word size currently bits each bit in the represents taint from a distinct domain like join on labels is simply bitwise or unlike the ecmascript specification of js semantics the actual implementation does not treat scope chains or variable environments like ordinary objects consequently we model and instrument taint propagation on all these data structures separately working at the of the bytecode also leads to several interesting conceptual and implementation issues in taint propagation as well as interesting questions about the threat model all of which we explain in this section some of the issues are quite general and apply beyond js for example we combine our dynamic analysis with a bit of static analysis to handle unstructured control flow and exceptions threat model and compiler assumptions we explain our threat model following standard practice our adversary may observe all values in the heap more generally an adversary at level in a lattice can observe all heap values with labels however we do not allow the adversary to directly observe internal data structures like the call stack or scope chains this is consistent with actual interfaces in a browser that scripts can access in our proofs we must also show of these internal data structures across two runs to get the right induction invariants but assuming that they are inaccessible to the adversary allows more permissive program execution which we explain in section the bytecode interpreter executes in a shared space with other browser components so we assume that those components do not leak information over side channels they do not copy heap data from secret to public locations this also applies to the compiler but we do not assume that the compiler is functionally correct trivial errors in the compiler omitting a bytecode could result in a leaky program even when the source code has no information leaks because our ifc works on the compiler s output such compiler errors are not a concern formally we assume that the compiler is an unspecified deterministic function of the program to compile and of the call stack but not of the heap this assumption also matches how the compiler works within webkit it needs access to the call stack and scope 
chain to optimize generated bytecode however the compiler never needs access to the heap we ignore information leaks due to other side channels like timing challenges and solutions ifc for js is known to be difficult due to js s highly dynamic nature working with bytecode instead of source code makes ifc harder nonetheless solutions to many ifc concerns proposed in earlier work also apply to our instrumentation sometimes in slightly modified form for example in js every object has a fixed parent called a prototype which is looked up when a property does not exist in the child this can lead to implicit flows if an object is created in a high context when the pc is high and a field missing from it but present in the prototype is accessed later in a low context then there is an implicit leak from the high pc this problem is avoided in both and analysis in the same way the prototype pointer from the child to the parent is labeled with the pc where the child is created and the label of any value read from the parent after traversing the pointer is joined with this label other potential information flow problems whose solutions remain unchanged between and analysis include implicit leaks through function pointers and handling of eval working with bytecode both leads to some interesting insights which are in some cases even applicable to source code analysis and other languages and poses new challenges we discuss some of these challenges and insights unstructured control flow and cfgs to avoid overtainting pc labels an important goal in implicit flow tracking is to determine when the influence of a control construct has ended for control flow limited to if and while commands this is straightforward the effect of a control construct ends with its lexical scope in if h l l h influences the control flow at l but not at l this leads to a straightforward pc upgrading and downgrading strategy one maintains a stack of pc labels the effective pc is the top one when entering a control flow construct like if or while a new pc label equal to the join of labels of all values on which the construct s guard depends with the previous effective pc is pushed when exiting the construct the label is popped unfortunately it is unclear how to extend this simple strategy to control flow constructs such as exceptions break continue and for functions all of which occur in js for example consider the program l while if h break l break with h labeled this program leaks the value of h into l but no assignment to l appears in a guarded by indeed the pc upgrading and downgrading strategy just described is ineffective for this program prior work on source code ifc either omits some of these constructs or introduces additional classes of labels to address these problems a label for exceptions a label for each loop containing break or continue and a label for each function these labels are more restrictive than needed the code indicated by dots in the example above is executed irrespective of the condition h in the first iteration and thus there is no need to raise the pc before checking that condition further these labels are programmer annotations which we can not support as we do not wish to modify the compiler importantly unstructured control flow is a very serious concern for us because webkit s bytecode has completely unstructured branches like in fact all control flow except function calls is unstructured in bytecode to solve this problem we adopt a solution based on static analysis of generated bytecode we maintain a 
control flow graph cfg of known bytecodes and for each branch node compute its immediate ipd the ipd of a node is the first instruction that will definitely be executed no matter which branch is taken our pc upgrading and downgrading strategy now extends to arbitrary control flow when executing a branch node we push a new pc label on the stack along with the node s ipd when we actually reach the ipd we pop the pc label in the authors prove that the ipd marks the end of the scope of an operation and hence the security context of the operation so our strategy is sound in our earlier example the ipd of if h is the end of the while loop because of the first break statement so when h the assignment l fails due to the nsu check and the program is secure js requires dynamic code compilation we are forced to extend the cfg and to compute ipds whenever code for either a function or an eval is compiled fortunately the ipd of a node in the cfg lies either in the same function as the node or some function earlier in the the latter may happen due to exceptions so extending the cfg does not affect computation of ipds of earlier nodes this also relies on the fact that code generated from eval can not alter the cfg of earlier functions in the call stack in the actual implementation we optimize the calculation of ipds further by working only as explained below at the end our solution works for all forms of unstructured control flow including unstructured branches in the bytecode and break continue and exceptions in the source code exceptions and synthetic exit nodes maintaining a cfg in the presence of exceptions is expensive an node in a function that does not catch that exception should have an outgoing control flow edge to the next exception handler in the this means that a the cfg is in general and b edges going out of a function depend on its calling context so ipds of nodes in the function must be computed every time the function is called moreover in the case of recursive functions the nodes must be replicated for every call this is rather expensive ideally we would like to build the function s cfg once when the function is compiled and work as we would had there been no exceptions we explain how we attain this goal in the in our design every function that may throw an unhandled exception has a special synthetic exit node sen which is placed after the regular return node s of the function every node whose exception will not be caught within the function has an outgoing edge to the sen which is traversed when the exception is thrown the semantics of sen described below correctly transfer control to the appropriate exception handler by doing this we eliminate all edges and our cfgs become the cfg of a function can be computed when the function is compiled and is never updated in our implementation we build two variants of the cfg depending on whether or not there is an exception handler in the call stack this improves efficiency as we explain later control flows to the sen when the function returns normally or when an exception is thrown but not handled within the function if no unhandled exception occurred within the function then the sen transfers control to the caller we record whether or not an unhandled exception occurred if an unhandled exception occurred then the sen triggers a special mechanism that searches the call stack backward for the first appropriate exception handler and transfers control to it in js exceptions are indistinguishable so we need to find only the first exception handler 
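To make the pc-stack discipline just described concrete, here is a minimal sketch (in Python, not WebKit's C++): entering a branch pushes the join of the guard label and the current pc together with the branch's immediate post-dominator, reaching that IPD pops the entry, and the plain no-sensitive-upgrade check blocks low assignments under a high pc. The two-point lattice, the Monitor class, and all names are illustrative; the paper's deferred variant of NSU, the star label, and the synthetic-exit-node case are discussed separately and are omitted here.

```python
# Minimal sketch of the pc-stack with immediate post-dominators (IPDs).
# Labels form the two-point lattice L < H; ipd values are assumed to be
# precomputed from the CFG when the function is compiled.

L, H = 0, 1
def join(a, b): return a | b

class Monitor:
    def __init__(self):
        self.pc_stack = [(L, None)]          # (label, ipd); the bottom entry is never popped

    def pc(self):
        return self.pc_stack[-1][0]

    def at_node(self, node):
        """Call on every executed node: pop the pc entry whose control scope ends here."""
        if self.pc_stack[-1][1] == node:
            self.pc_stack.pop()

    def branch(self, guard_label, node, ipd):
        """Entering the control context of a branch executed at `node`."""
        new_label = join(self.pc(), guard_label)
        if self.pc_stack[-1][1] == ipd:      # same join point already on top: just join labels
            lbl, i = self.pc_stack.pop()
            self.pc_stack.append((join(lbl, new_label), i))
        else:
            self.pc_stack.append((new_label, ipd))

    def assign(self, var_label, value_label):
        """No-sensitive-upgrade: refuse to overwrite a low variable under a high pc."""
        if self.pc() > var_label:
            raise RuntimeError("IFC violation: assignment under high pc")
        return join(value_label, self.pc())  # label stored with the written value

# Example: the leaky program  l := 1; if h then l := 0  with h labeled H:
# m = Monitor()
# m.branch(guard_label=H, node="if_h", ipd="join_point")   # pc raised to H inside the branch
# m.assign(var_label=L, value_label=L)                     # l := 0 -> blocked by NSU
# m.at_node("join_point")                                  # pc drops back to L
```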
importantly we pop the up to the frame that contains the first exception handler but do not pop the which ensures that all code up to the exception handler s ipd executes with the same pc as the sen which is indeed the semantics one would expect if we had a cfg with edges for exceptions this prevents information leaks if a function does not handle a possible exception but there is an exception handler on the call stack then all bytecodes that could potentially throw an exception have the sen as one successor in the cfg any branching bytecode will thus need to push to the according to the security label of its condition however we do not push a new entry if the ipd of the current node is the same as the ipd on the top of the this is just this problem and our solution are not particular to js they apply to dynamic ifc analysis in all languages with exceptions and functions an optimization or if the ipd of the current node is the sen as in this case the real ipd which is outside of this method is already on the these semantics emulate the effect of having exception edges for illustration consider the following two functions f and the at the end of g denotes its sen note that there is an edge from throw to because throw is not handled within denotes the ipd of the handler catch e l function g function f if h throw l return try g catch e l return l it should be clear that in the absence of instrumentation when f is invoked with pc l the two functions together leak the value of h which is assumed to have label h into the return value of we show how our sen mechanism prevents this leak when invoking g we do not know if there will be an exception in this function depending on the outcome of this method call we will either jump to the exception handler or continue at based on that branch we push the current pc and ipd l on the when executing the condition if h we do not push again but merely update the top element to h if h control reaches without an exception but with pc h because the ipd of if h is at this point returns control to f thus pc h but at pc is lowered to l so f ends with the return value if h control reaches with an unhandled exception at this point following the semantics of sen we find the exception handler catch e l and invoke it with the same pc as the point of exception consequently nsu prevents the assignment l which makes the program because we do not wish to replicate the cfg of a function every time it is called recursively we need a method to distinguish the same node corresponding to two different recursive calls on the for this when pushing an ipd onto the we pair it with a pointer to the current since the pointer is unique for each recursive call the cfg node paired with the identifies a unique merge point in the real control flow graph in practice even the cfg is quite dense because many js bytecodes can potentially throw exceptions and hence have edges to the to avoid overtainting we perform a crucial optimization when there is no exception handler on the call stack we do not create the sen and the corresponding edges from potentially bytecodes at all this is safe as a potentially thrown exception can only terminate the program instantly which satisfies if we ensure that the exception message is not visible to the attacker whether or not an exception handler exists is easily tracked using a stack of booleans that mirrors the in our design we overlay this stack on the by adding an extra boolean field to each entry of the in summary each entry of our is a quadruple 
containing a security label a node in the intraprocedural cfg a pointer and a boolean value in combination with sens this design allows us to work only with intraprocedural cfgs that are computed when a function is compiled this improves efficiency check with changes the standard nsu check halts program execution whenever an attempt is made to assign a variable with a value in a high pc in our earlier example l if h l assuming that h stores a value program execution is halted at the command l as austin and flanagan af in the sequel observe this may be overly restrictive when l will not in fact have observable effects l may be overwritten by a constant immediately after if h l so they propose propagating a special taint called into l at the instruction l and halting a program when it tries to use a value labeled in a way that will be observable af call this special taint p for partially leaked this idea called the check allows more program execution than nsu would so we adopt it in fact this additional permissiveness is absolutely essential for us because the webkit compiler often generates dead assignments within branches so execution would pointlessly halt if standard nsu were used we differ from af in what constitutes a use of a value labeled as expected af treat occurrence of in the guard of a branch as a use thus the program l if h l if l l is halted at the command if l when h because l obtains taint at the assignment l if the program is not halted it leaks h through l however they allow values to flow into the heap consider the program l if h l this program is insecure in our model the heap location which is accessible to the adversary ends with when h and with when h af deem the program secure by assuming that any value with label is to any other value in particular and are however this definition of for dynamic analysis is virtually impossible to enforce if the adversary has access to the heap outside the language after writing to for h a dynamic analysis can not determine that the alternate execution of the program for h would have written a value and hence can not prevent the adversary from seeing consequently in our design we use a modified check which we call the deferred nsu check wherein a program is halted at any construct that may potentially flow a value into the heap this includes all branches whose guard contains a value and any assignments whose target is a heap location and whose source is however we do not constrain flow of values in data structures that are invisible to the adversary in our model local registers and variable environments this design critically relies on treating internal data structures differently from ordinary js objects which is not the case for instance in the ecmascript specification formal model and ifc we formally model webkit s js bytecode and the semantics of its bytecode interpreter with our instrumentation of dynamic ifc we prove ins prim dst r r r mov dst r src r jfalse cond r target offset r r target offset typeof dst r src r instanceof dst r value r cprot r enter ret result r end result r call func r args n res r func r args n dst r dst r func f dst r construct func r args n dst r dst r dst r base r prop id base r prop id value r direct b dst r base r prop id dst r base r i n size n breaktarget offset dst r base r i n size n iter n target offset base r prop id getter r setter r resolve dst r prop id dst r prop id skip n dst r prop id dst r prop id isstrict bool bdst r pdst r prop id dst r index n skip n index n skip n value r scope r 
count n target offset throw ex r catch ex r fig instructions insensitive for programs executed through our instrumented interpreter we do not model the construction of the cfg or computation of ipds these are standard to keep presentation accessible we present our formal model at a somewhat of abstraction details are resolved in our technical appendix bytecode and data structures the version of webkit we model uses a total of bytecodes or instructions of which we model the remaining bytecodes are redundant from the perspective of formal modeling because they are specializations or wrappers on other bytecodes to improve efficiency the syntax of the bytecodes we model is shown in fig the bytecode prim abstractly represents primitive binary and unary with just the first two arguments operations all of which behave similarly for convenience we divide the bytecodes into primitive instructions instructions related to objects and prototype chains instructions related to functions instructions related to scope chains and instructions related to exceptions a bytecode has the form the arguments to the instruction are of the form hvari htypei where var is the variable name and type is one of the following r n bool id prop and offset for register constant integer constant boolean identifier property name and jump offset value respectively in webkit bytecode is organized into code blocks each code block is a sequence of bytecodes with line numbers and corresponds to the instructions for a function or an eval statement a code block is generated when a function is created or an eval is executed in our instrumentation we perform control flow analysis on a code block when it is created and in our formal model we abstractly represent a code block as a cfg written formally a cfg is a directed graph whose nodes are bytecodes and whose edges represent possible control flows there are no edges a cfg also records the ipd of each node ipds are computed using an algorithm by lengauer and tarjan when the cfg is created if the cfg contains uncaught exceptions we also create a for a cfg and a node succ denotes s unique successor for a conditional branching node left and right denote successors when the condition is true and false respectively the bytecode interpreter is a standard stack machine with support for js features like scope chains and prototype chains the state of the machine with our instrumentation is a quadruple where represents the current node that is being executed represents the heap represents the and is the we assume an abstract countable set a a b of heap locations which are references to objects the heap is a partial map from locations to objects an object o may be an ordinary js object n pi vi a p s containing properties named pn that map to labeled values vn a prototype field that points to a parent at heap location a and two labels p and s p records the pc where the object was created s is the structure label which is an upper bound on all pcs that have influenced which fields exist in the a function object f n where n is an ordinary object is a cfg which corresponds to the the function stored in the object and is the scope chain closing context of the function a labeled value v r is a value r paired with a security label a value r in our model may be a heap location a or a js primitive value n which includes integers booleans regular expressions arrays strings and the special js values undefined and null the contains one for each incomplete function call a contains an array of registers for 
local variables a cfg for the function represented by the the return address a node in the cfg of the previous frame and a pointer to a that allows access to variables in outer scopes additionally each has an exception table which maps each potentially bytecode in the function to the exception handler within the function that surrounds the bytecode when no such exception handler exists it points to the sen of the function we conservatively assume that any unknown code may throw an exception so bytecodes call and eval are for this purpose denotes the size of the and its top frame each register contains a labeled value a scope chain is a sequence of scope chain nodes scns denoted s paired with labels in webkit a scope chain node s may either be an object or a variable environment v which is an array of labeled values thus sn n and s o v and v vn the field is the parent of the object it is not the same as the prototype field of a function object which is an ordinary property also in our actual model fields pi map to more general property descriptors that also contain attributes along with labeled values we elide attributes here to keep the presentation simple each entry of the is a triple p where is a security label is a node in a cfg and p is a pointer to some on the call stack for simplicity we ignore a fourth boolean field described in section in this presentation when we enter a new control context we push the new pc together with the ipd of the entry point of the control context and a pointer p to current the pair p uniquely identifies where the control of the context ends p is necessary to distinguish the same branch point in different recursive calls of the function in our semantics we use the isipd to pop the stack it takes the current instruction the current and the call stack and returns a new if isipd otherwise as explained in section as an optimization we push a new node onto only when the ipd differs from the corresponding pair on the top of the stack and to handle exceptions correctly we also require that not be the sen otherwise we just join with the label on the top of the stack this is formalized in the function whose obvious definition we elide if x is a pair of any syntactic entity and a security label we write x for the entity and x for the label in particular for v r v r and v semantics and ifc with cfgs we now present the semantics which faithfully models our implementation using cfgs with sens the semantics is defined as a set of state transition rules that define the judgment i fig shows rules for selected bytecodes for reasons of space we omit rules for other bytecodes and formal descriptions of some like opcall that are used in the rules c a b is shorthand for a if c then a else b prim reads the values from two registers and performs a binary operation generically denoted by on the values and writes the result into the register dst dst is assigned the join of the labels in and the head of the to implement deferred nsu section the existing label in dst is compared with the current pc if the label is lower than the pc then the label of dst is joined with note that the premise isipd pops an entry from the if its ipd matches the new program node this premise occurs in all semantic rules jfalse is a conditional jump it skips offset number of successive nodes in the cfg if the register cond contains false else it to the next node formally the node it branches to is either right or left where is the cfg in in accordance with deferred nsu the operation is performed only if 
cond is not labeled jfalse also starts a new control context so a new node is pushed on the top of the with a label that is the join of cond and the current label on the top of the stack unless the ipd of the branch point is already on top of the stack or it is the sen in which case we join the new dst r r r l t t v dst h l l prim dst dst succ isipd cond r target offset cond l cond t cond false left right l ipd cf isipd jfalse scope r pushscope scope succ isipd func r args n func f opcall func args l f t func t l ipd cf isipd call ret base r prop id value r direct b value direct true putdirect base prop value putindirect base prop value succ isipd res r opret res isipd throw ex r excvalue ex throwexception isipd fig semantics selected rules label with the previous traversed from bottom to top the always has monotonically labels updates the property prop in the object pointed to by register base as explained in section we allow this only if the value to be written is not labeled the flag direct states whether or not to traverse the prototype chain in finding the property it is set by the compiler as an optimization if the flag is true then the chain is not traversed putdirect handles this case if direct is false then the chain is traversed putindirect importantly when the chain is traversed the resulting value is labeled with the join of prototype labels p and structure labels s of all traversed objects this is standard and necessary to prevent implicit leaks through the pointers and structure changes to objects which corresponds to the start of the js construct with obj pushes the object pointed to by the register scope into the scope chain because pushing an object into the scope chain can implicitly leak information from the program context later we also label all nodes in the with the pc s at which they were added to the chain further deferred nsu applies to the scope chain pointer in the as it does to all other registers call invokes a function of the target object stored in the register func due to deferred nsu the call proceeds only if func is not the call creates a new and initializes arguments the scope chain pointer initialized with the function object s field cfg and the return node in the new frame the cfg in the is copied from the function object pointed to by func all this is formalized in the opcall whose details we omit here call is a branch instruction and it pushes a new label on the which is the join of the current pc func and the structure label f of the function object unless the ipd of the current node is the sen or already on the top of the in which case we join the new with the previous call also initializes the new registers labels to the new pc a separate bytecode not shown here and executed first in the called function sets register values to undefined eval is similar to call but the code to be executed is also compiled ret exits a function it returns control to the caller as formalized in the opret the return value is written to an interpreter variable throw throws an exception passing the value in register ex as argument to the exception handler our push semantics ensure that the exception handler if any is present in the pointed to by the top of the the throwexception pops the up to this and transfers control to the exception handler by looking it up in the exception table of the the exception value in the register ex is transferred to the handler through an interpreter variable the semantics of other bytecodes have been described in section correctness of ifc 
we prove that our ifc analysis guarantees terminationinsensitive intuitively this means that if a program is run twice from two states that are observationally equivalent for the adversary and both executions terminate then the two final states are also equivalent for the adversary to state the theorem formally we formalize equivalence for various data structures in our model the only nonstandard data structure we use is the cfg but graph equality suffices for it a complication is that low heap locations allocated in the two runs need not be identical we adopt the standard solution of parametrizing our definitions of equivalence with a partial bijection between heap locations the idea is that two heap locations are related in the partial bijection if they were created by corresponding allocations in the two runs we then define a rather standard relation i i which means that the states on the left and right are equivalent to an observer at level up to the bijection on heap locations the details have been presented in section theorem suppose i i i hend i and i hend i then such that implementation we instrumented webkit s js engine javascriptcore to implement the ifc semantics of the previous section before a function starts executing we generate its cfg and calculate ipds of its nodes by static analysis of its bytecode we modify the compiler to emit a slightly different but functionally equivalent bytecode sequence for finally blocks this is needed for accurate computation of ipds for evaluation purposes we label each source script with the script s domain of origin each seen domain is dynamically allocated a bit in our label in general our instrumentation terminates a script that violates normalized js time interpreter jit basic op mized access bitops crypto date math regexp string sunspider tests fig overheads of basic and optimized ifc in sunspider benchmarks ifc however for the purpose of evaluating overhead of our instrumentation we ignore ifc violations in all experiments described here we also implement and evaluate a variant of sparse labeling which optimizes the common case of computations that mostly use local variables registers in the bytecode until a function reads a value from the heap with a label different from the pc we propagate taints only on but not on computations until that point all registers are assumed to be implicitly tainted with pc this simple optimization reduces the overhead incurred by taint tracking significantly in microbenchmarks for both the basic and optimized version our instrumentation adds approximately lines of code to webkit our baseline for evaluation is the uninstrumented interpreter with jit disabled for comparison we also include measurements with jit enabled our experiments are based on webkit build running in safari the machine has a intel xeon processor with ram and runs mac os x version microbenchmark we executed the standard sunspider js benchmark suite on the uninstrumented interpreter with jit disabled and jit enabled and on the basic and the optimized ifc instrumentations with jit disabled results are shown in figure the ranges over sunspider tests and the shows the average execution time normalized to our baseline uninstrumented interpreter with jit disabled and averaged across runs error bars are standard deviations although the overheads of ifc vary from test to test the average overheads over our baseline are and for basic ifc and optimized ifc respectively the test regexp has almost zero overhead because it spends most time in native 
code which we have not yet instrumented we also note that as expected the configuration performs extremely well on the sunspider benchmarks normalized javascript time interpreter jit basic instrumentagon opgmized instrumentagon google yahoo amazon wikipedia ebay websites bing linkedin live twi er fig overheads of basic and optimized ifc in real websites macrobenchmarks we measured the execution time of the intial js on popular english language websites we load each website in safari and measure the total time taken to execute the js code without user interaction this excludes time for network communication and internal browser events and establishes a very conservative baseline the results normalized to our baseline are shown in fig our overheads are all less than with an average of around in both instrumentations interestingly we observe that our optimization is less effective on real websites indicating that real js accesses the heap more often than the sunspider tests when compared to the amount of time it takes to fetch a page over the network and to render it these overheads are negligible enabling jit worsens performance compared to our baseline indicating that for the code executed here jit is not useful we also experimented with jsbench a sophisticated benchmark derived from js code in the wild the average overhead on all jsbench tests a total iterations is approximately for both instrumentations the average time for running the benchmark tests on the uninstrumented interpreter with jit disabled was about with a standard deviation of about of the mean the average time for running the same benchmark tests on the instrumented interpreter and the optimized version was about and respectively with a standard deviation of about and of the mean in the two cases conclusion and future work we have explored dynamic information flow control for js bytecode in webkit a production js engine we formally model the bytecode its semantics our instrumentation and prove the latter correct we identify challenges largely arising from pervasive use of unstructured control flow in bytecode and resolve them using very limited static analysis our evaluation indicates only moderate overheads in practice in ongoing work we are instrumenting the dom and other native js methods we also plan to generalize our model and theorem to take into account the reactive nature of web browsers going beyond noninterference the design and implementation of a policy language for representing allowed information flows looks necessary acknowledgments this work was funded in part by the deutsche forschungsgemeinschaft dfg grant information flow control for browser clients under the priority program reliably secure software systems and the german federal ministry of education and research bmbf within the centre for privacy and accountability cispa at saarland university references richards hammer burg vitek the eval that men do a study of the use of eval in javascript applications in mezzini ed ecoop volume of lncs jang jhala lerner shacham an empirical study of privacyviolating information flows in javascript web applications in proc acm conference on computer and communications security richards hammer zappa nardelli jagannathan vitek flexible access control for javascript in proc acm sigplan international conference on object oriented programming systems languages applications oopsla hedin sabelfeld security for a core of javascript in proc ieee computer security foundations symposium hedin birgisson bello sabelfeld jsflow 
Hedin, Birgisson, Bello, Sabelfeld. JSFlow: tracking information flow in JavaScript and its APIs. In Proc. ACM Symposium on Applied Computing.
Devriese, Piessens. Noninterference through secure multi-execution. In Proc. IEEE Symposium on Security and Privacy.
De Groef, Devriese, Nikiforakis, Piessens. FlowFox: a web browser with flexible and precise information flow control. In Proc. ACM Conference on Computer and Communications Security.
Goguen, Meseguer. Security policies and security models. In Proc. IEEE Symposium on Security and Privacy.
Myers, Liskov. A decentralized model for information flow control. In Proc. ACM Symposium on Operating Systems Principles.
Zdancewic, Myers. Robust declassification. In Proc. IEEE Computer Security Foundations Workshop.
Volpano, Irvine, Smith. A sound type system for secure flow analysis. J. Comput. Secur., January.
Just, Cleary, Shirley, Hammer. Information flow analysis for JavaScript. In Proc. ACM SIGPLAN International Workshop on Programming Language and Systems Technologies for Internet Clients.
Austin, Flanagan. Permissive dynamic information flow analysis. In Proc. ACM SIGPLAN Workshop on Programming Languages and Analysis for Security.
Bohannon, Pierce, Weirich, Zdancewic. Reactive noninterference. In Proc. ACM Conference on Computer and Communications Security.
Maffeis, Mitchell, Taly. An operational semantics for JavaScript. In Proc. Asian Symposium on Programming Languages and Systems (APLAS).
Guha, Saftoiu, Krishnamurthi. The essence of JavaScript. In Proc. European Conference on Object-Oriented Programming (ECOOP).
Politz, Carroll, Lerner, Pombrio, Krishnamurthi. A tested semantics for getters, setters, and eval in JavaScript. In Proceedings of the Dynamic Languages Symposium.
Bodin, Chargueraud, Filaretti, Gardner, Maffeis, Naudziuniene, Schmitt, Smith. A trusted mechanised JavaScript specification. In Proc. ACM Symposium on Principles of Programming Languages.
Guarnieri, Pistoia, Tripp, Dolby, Teilhet, Berg. Saving the World Wide Web from vulnerable JavaScript. In Proc. International Symposium on Software Testing and Analysis (ISSTA).
Chugh, Meister, Jhala, Lerner. Staged information flow for JavaScript. In Proc. ACM SIGPLAN Conference on Programming Language Design and Implementation.
Austin, Flanagan. Efficient purely-dynamic information flow analysis. In Proc. ACM SIGPLAN Fourth Workshop on Programming Languages and Analysis for Security.
Zdancewic. Programming languages for information security. PhD thesis, Cornell University, August.
Birgisson, Hedin, Sabelfeld. Boosting the permissiveness of dynamic information-flow tracking by testing. In Computer Security – ESORICS, LNCS, Springer Berlin Heidelberg.
Austin, Flanagan. Multiple facets for dynamic information flow. In Proc. Annual ACM Symposium on Principles of Programming Languages.
Bielova, Devriese, Massacci, Piessens. Reactive non-interference for a browser model. In International Conference on Network and System Security (NSS).
Bohannon, Pierce. Featherweight Firefox: formalizing the core of a web browser. In Proc. USENIX Conference on Web Application Development (WebApps).
Denning. A lattice model of secure information flow. Commun. ACM, May.
Dhawan, Ganapathy. Analyzing information flow in browser extensions. In Proc. Annual Computer Security Applications Conference (ACSAC).
Robling Denning. Cryptography and Data Security. Longman Publishing, Boston, MA, USA.
Xin, Zhang. Efficient online detection of dynamic control dependence. In Proc. International Symposium on Software Testing and Analysis.
Masri, Podgurski. Algorithms and tool support for dynamic information flow analysis. Information & Software Technology.
Lengauer, Tarjan. A fast algorithm for finding dominators in a flowgraph. ACM Trans. Program. Lang. Syst., January.
Richards, Gal, Eich, Vitek. Automated construction of
javascript benchmarks in proceedings of the acm international conference on object oriented programming systems languages and applications appendix data structures the formal model described in section was typechecked in the various data structures used for defining the functions used in the semantics of the language are given in figure the of the javascript program is represented as a structure containing the source and a boolean flag indicating the strict mode is set or not the instruction as indicated before is a structure consisting of the opcode and the list of operands the opcode is a string indicating the operation and the operand is a union of registerindex immediatevalue identifier boolean funcindex and offset the immediatevalue denotes the directly supplied value to an opcode registerindex is the index of the register containing the value to be operated upon identifier represents the string name directly used by the opcode boolean is a often a flag indicating the truth value of some parameter and offset represents the offset where the control jumps to similarly functionindex indicates the index of the function object being invoked the function s source code is represented in the form of a control flow graph cfg formally it is defined as a struct with a list of cfg nodes each of which contain the instructions that are to be performed and the edges point to the next instruction in the program multiple outgoing edges indicate a branching instruction it also contains variables indicating the number of variables used by the function code and a reference to the globalobject the labels are interpreted as a structure consisting of long integer label the label represents the value of the label which are interpreted as bit vectors a special label star which represents partially leaked data is used for deferred check the program counter pc is represented as a stack of each of which contains the context label and the ipd of the operation that pushed the node the callframe of the current node and the handler flag indicating the presence of an exception handler different types of values are used as operands for performing the operations they include boolean integer string double and objects or special values like nan or undefined these values are associated with a label each and are wrapped by the jsvalue class all the values used in the data structures have the type jsvalue the objects consist of properties a prototype chain pointer with an associated label and a structure label for the object the properties are represented as a structure of the propertyname and its descriptor the descriptor of the property contains the value some boolean flags and a property label the struct sourcecode string programsrc bool strictmode struct jsvalue valuetemplate data jslabel label typedef char opcode union operand int immediatevalue string identifier int registerindex int funcindex bool flag int offset struct instruction opcode opc operand opr struct cfgnode instruction inst struct cfgnode left struct cfgnode right struct cfgnode succ struct propertydescriptor jsvalue value bool writable bool enumerable bool configurable jslabel structlabel struct property string propertyname propertydescriptor pdesc struct propertyslot property prop propertyslot next struct register jsvalue value struct callframenode register rf cfg cfg cfgnode returnaddress scopechainnode sc jsfunctionobject callee jslabel calleelabel int argcount bool getter int dreg struct callframestack callframenode cfn callframestack previous struct 
pcnode jslabel l struct jsobject cfgnode ipd property property callframenode cfn struct proto bool handler jslabel l struct cfg jsobject struct cfgnode cfgnode prototype struct pcstack jsglobalobject globalobject jslabel structlabel pcnode node int numvars pcstack previous int numfns bool strictmode struct heap unsigned location struct jsactivation jsobject o callframenode callframenode struct jslabel jslabel structlabel label enum functiontype jsfunction hostfunction enum scopechainobjecttype enum specials lexicalobject variableobject nan undefined struct jsfunctionobject jsobject union schainobject union valuetype cfg funccfg jsobject obj bool b scopechainnode scopechain jsactivation actobj int n functiontype ftype string s double d struct scopechainnode jsobject o struct jsglobalobject schainobject object jsobject scopechainobjecttype scobjtype jsfunctionobject evalfunction scopechainnode next union valuetemplate jsobject objectprototype jslabel scopelabel specials s jsobject functionprototype valuetype v fig data structures heap is a collection of objects with an associated memory address it is essentially a map from location to object there are subtypes of jsobject that define the function object and the global object the function object contains a pointer to the associated cfg and the scope chain it also contains a field defining the type of function it represents namely host or the is made up of various nodes each of which contains a set of registers the associated cfg the return address of the function a pointer to the scope chain and an exception table the registers store values and objects and are used as operands for performing the operations the exception table contains the details about the handlers associated with different instructions in the cfg of the the scope chain is a list of nodes containing objects or activation objects along with a label indicating the context in which the object in that node was added the activation object is a structure containing a pointer to the node for which the activation object was created the next section defines the different procedures used in the semantics of the language the statement stop implies that the program execution hangs algorithms the different used in the semantics presented in section are described below procedure isinstanceof jslabel context jsvalue obj jsvalue protoval oproto while oproto do if oproto protoval then ret jsvalue true context return ret end if oproto context end while ret jsvalue false context return ret end procedure procedure opret callframestack callstack int ret jsvalue retvalue ret if hostcallframeflag then return nil callstack retvalue end if return callstack retvalue end procedure procedure opcall callframestack callstack cfgnode ip int func int argcount jsvalue funcvalue func jsfunctionobject fobj callframenode sigmatop new callframenode callframenode prevtop sigmatop calltype calltype getcalldata funcvalue fobj if calltype calltypejs then scopechainnode sc sc argcount argcount for i argcount do i end for ip else if calltype calltypehost then stop end if ip callstack return retstate end procedure not modeled procedure opcalleval jslabel contextlabel callframestack callstack cfgnode ip int func int argcount jsvalue funcvalue func jsfunctionobject fobj jsobject variableobject argument arguments if ishosteval funcvalue then scopechainnode sc ip sc argcount sourcecode progsrc compiler progsrc cfg evalcodeblock compiler progsrc unsigned numvars unsigned numfuncs if numvars numfuncs then if then 
jsactivation variableobject new jsactivation callstack schainobject scobj variableobject scobj variableobject contextlabel else for scopechainnode n sc n do if then variableobject break end if end for end if for i numvars do identifier iden i if iden then iden end if end for for i numfuncs do jsfunctionobject fobj i fobj end for end if evalcodeblock ip ip callstack return retstate else return opcall contextlabel callstack ip func argcount end if end procedure procedure createarguments heap h callframestack callstack jsobject jsargument jsargument h callstack jsargument h jsvalue jsargument return retstate end procedure procedure newfunc callframestack callstack heap heap int funcindex jslabel context cfg cblock sourcecode fccode funcindex cfg fcblock compiler fccode jsfunctionobject fobj jsfunctionobject fcblock context fobj heap jsvalue fobj return retstate end procedure procedure createactivation callframestack callstack jslabel contextlabel jsactivation jsactivation new jsactivation callstack contextlabel schainobject scobj jsactivation jsvalue vactivation jsvalue jsactivation if contextlabel then scobj variableobject contextlabel contextlabel else stop end if return retstate end procedure procedure createthis jslabel contextlabel callframestack callstack heap h jsfunctionobject callee propertyslot p callee string str prototype jsvalue proto str jsobject obj new jsobject contextlabel contextlabel obj h jsvalue obj return retstate end procedure procedure newobject heap h jslabel contextlabel jsobject obj emptyobject contextlabel objectprototype contextlabel obj h jsvalue obj return retstate end procedure procedure getpropertybyid jsvalue v string p int dst jsobject o jslabel label jsvalue ret jsundefined if then label return ret end if while o null do if p then if then jsvalue v jsfunctionobject funcobj jsfunctionobject callframenode sigmatop new callframenode sigmatop scopechainnode sc cfg newcodeblock newcodeblock ip sc true dst ip ip callstack else ret getproperty p label end if return ret else o end if label label end while end procedure procedure putdirect jslabel contextlabel callframestack callstack heap h int base string property int propval jsvalue basevalue base value jsvalue propvalue propval value jsobject obj propertydescriptor datapd propertydescriptor true true true propvalue property datapd contextlabel obj return h end procedure procedure putindirect jslabel contextlabel callframestack callstack heap h int base string property int val jsvalue basevalue base jsvalue propvalue val jsobject obj bool isstrict contextlabel contextlabel if property obj getproperty property isstrict then property propvalue obj return h end if return putdirect contextlabel callstack h base property val end procedure procedure delbyid jslabel contextlabel callframestack callstack heap h int base identifier property jsvalue basevalue base jsobject obj int loc obj property prop property propertydescriptor pd if prop contextlabel then if property then h jsvalue true return retstate end if if property prop isconfigurable then if then jsvalue property pd loc obj h jsvalue true return retstate end if end if h jsvalue false return retstate else stop end if end procedure procedure putgettersetter jslabel contextlabel callframestack callstack heap h int base identifier property jsvalue gettervalue jsvalue settervalue jsvalue basevalue base jsobject obj int loc obj jsfunctionobject getterobj setterobj jsfunctionobject getterfuncobj null setterfuncobj null if then getterfuncobj end if if then setterfuncobj 
end if if getterfuncobj null then property getterobj end if if setterfuncobj null then setterobj end if propertydescriptor accessor propertydescriptor false false false true jsvalue v jsvalue contextlabel v property accessor contextlabel loc obj return h end procedure procedure getpropnames callframestack callstack instruction ip int base int i int size int breakoffset jsvalue baseval base jsobject obj propertyiterator propitr if then jsundefined jsundefined jsundefined ip breakoffset return retstate end if jsvalue propitr jsvalue jsvalue ip return retstate end procedure procedure getnextpropname callframestack cstack instruction ip jsvalue base int i int size int iter int offset int dst jsobject obj propertyiterator propitr iter topropertyiterator int b rfile i int e rfile size while b e do string key b jsvalue b if then jsvalue key ip ip offset break end if end while return retstate end procedure procedure resolveinsc jslabel contextlabel scopechainnode scopehead string property jsvalue v jslabel l scopechainnode scn scopehead while scn null do propertyslot pslot if property then v property contextlabel return v end if scn if variableobject then contextlabel else if lexicalobject then contextlabel end if contextlabel scn scopenextlabel end while v jsundefined contextlabel return v end procedure procedure resolveinscwithskip jslabel contextlabel scopechainnode scopehead string property int skip jsvalue v jslabel l scopechainnode scn scopehead while do scn if variableobject then contextlabel else if lexicalobject then contextlabel end if contextlabel scn scopenextlabel end while while scn null do propertyslot pslot if property then v property contextlabel return v end if scn if variableobject then contextlabel else if lexicalobject then contextlabel end if contextlabel scn scopenextlabel end while v jsundefined contextlabel return v end procedure procedure resolveglobal jslabel contextlabel callframestack cstack string property jsvalue v struct cfg cblock jsglobalobject globalobject cblock getglobalobject propertyslot pslot globalobject if property then v property contextlabel return v end if v jsundefined contextlabel return v end procedure procedure resolvebase jslabel contextlabel callframestack cstack scopechainnode scopehead string property bool strict jsvalue v scopechainnode scn scopehead cfg cblock jsglobalobject gobject while scn null do jsobject obj contextlabel contextlabel propertyslot pslot obj if null strict property then v emptyjsvalue contextlabel return v end if if property then v jsvaluecontainingobject obj contextlabel return v end if scn if scn null then contextlabel scn scopenextlabel end if end while v jsvalue gobject contextlabel return v end procedure procedure resolvebaseandproperty jslabel contextlabel callframestack cstack int bregister int pregister string property jsvalue v scopechainnode scn while scn null do jsobject obj contextlabel contextlabel propertyslot pslot obj if property then v property contextlabel v v jsvaluecontainingobject obj contextlabel v return ret end if scn if scn null then contextlabel scn scopenextlabel end if end while end procedure procedure getscopedvar jslabel contextlabel callframestack callstack heap h int index int skip jsvalue v scopechainnode scn while do if variableobject then contextlabel structlabel else if lexicalobject then contextlabel structlabel end if contextlabel scn scopelabel scn end while v index if variableobject then structlabel else if lexicalobject then structlabel end if return v end procedure procedure 
putscopedvar jslabel contextlabel callframestack callstack heap h int index int skip int value callframestack cstack scopechainnode scn jsvalue val value while do if variableobject then contextlabel else if lexicalobject then contextlabel end if contextlabel scn scopelabel scn end while cstack contextlabel index val return cstack end procedure procedure pushscope jslabel contextlabel callframestack callstack heap h int scope scopechainnode sc jsvalue v scope jsobject o schainobject scobj if contextlabel then o scobj lexicalobject contextlabel sc else if star then o scobj lexicalobject star sc end if return callstack end procedure procedure popscope jslabel contextlabel callframestack callstack heap h scopechainnode sc jslabel l if l contextlabel then sc else stop end if return callstack end procedure procedure jmpscope jslabel contextlabel callframestack callstack heap h int count scopechainnode sc while do jslabel l if l contextlabel then sc else stop end if end while return callstack end procedure procedure throwexception callframestack callstack cfgnode iota cfgnode handler while do end while while do end while handler iota handler callstack end procedure semantics prim dst r r r l t t v dst l l l prim dst dst succ isipd prim reads the values from two registers and performs the binary operation generically denoted by and writes the result into the dst register the label assigned to the value in dst register is the join of the label of value in and the head of the in order to avoid implicit leak of information the label of the existing value in dst is compared with the current context label if the label is lower than the context label the label of the value in dst is set to mov mov dst r src r l src t v src dst l l l dst dst succ isipd mov copies the value from the src register to the dst register the label assigned to the value in dst register is the join of the label of value in src and the head of the in order to avoid implicit leak of information the label of the existing value in dst is compared with the current context label if the label is lower than the context label the label of the value in dst is joined with jfalse jfalse cond r target offset cond cond t ipd cf false cond false left right isipd jfalse is a branching instruction based on the value in the cond register it decides which branch to take the operation is performed only if the value in cond is not labelled with a if it contains a we terminate the execution to prevent possible leak of information the push function defined in the rule does the following a node is pushed on the top of the containing the ipd of the branching instruction and the label of the value in cond joined with the context to define the context of this branch if the ipd of the instruction is sen or the same as the top of the then we just join the label on top of the with the context label determined by the cond register r r target offset l t t left right l ipd cf false isipd is another branching instruction if the value of is less than then it jumps to the target else continues with the next instruction the operation is performed only if the values in and are not labelled with a if any one of them contains a we abort the execution to prevent possible leak of information the push function defined in the rule does the following a node is pushed on the top of the containing the ipd of the branching instruction and the join of the label of the values in and joined with the context to define the context of this branch if the ipd of the instruction is sen 
or the same as the top of the then we just join the label on top of the with the context label determined above typeof typeof dst r src r l src t v determinetype src dst l l l dst succ dst isipd typeof determines the type string for src according to ecmascript rules and puts the result in register dst we do a deferred nsu check on dst before writing the result in it the determinetype function returns the data type of the value passed as the parameter instanceof dst r value r cprot r v isinstanceof value cprot l v dst l l l instanceof dst dst succ v v isipd instanceof tests whether the cprot is in the prototype chain of the object in register value and puts the boolean result in the dst register after deferred nsu check enter enter succ isipd enter marks the beginning of a code block ret ret res r opret res isipd ret is the last instruction to be executed in a function it pops the and returns the control to the callee s the return value of the function is written to a local variable in the interpreter which can be read by the next instruction being executed end end res r opend res end marks the end of a program opend passes the value present in res register to the caller the native function that invoked the interpreter call func r args n func h f opcall func args f t func t ipd cf h isipd call initially checks the function object s label for and if the label contains a the program execution is aborted the reason for termination is the possible leak of information as explained above if not call creates a new copies the arguments initializes the registers pointer codeblock and the return address the registers are initialized to undefined and assigned a label obtained by joining the label of the context in which the function was created and the label of the function object itself we treat call as a branching instruction and hence push a new node on the top of the with the label determined above along with its ipd and the field h in the push function is determined by looking up the exception table if it contains an associated exception handler it sets the field to true else it is set to false if the ipd is the sen then we just join the label on the top of the stack with the currently calculated label it then points the instruction pointer to the first instruction of the new code block res r l t v res l l l res succ res isipd copies the return value to the res register the label assigned to the value in res register is the join of the label of the return value and the head of the in order to avoid implicit leak of information deferred is performed func r args n func h f opcalleval func args f t func t ipd cf h isipd calls a function with the string passed as an argument converted to a code block if func register contains the original global eval function then it is performed in local scope else it is similar to call dst r v createarguments l v v dst l l l dst dst succ isipd creates the arguments object and places its pointer in the local dst register after the deferred nsu check the label of the arguments object is set to the context dst r funcindex f v newfunc funcindex l v t dst l l l dst dst succ v v isipd constructs a new function instance from function at funcindex and the current scope chain and puts the result in dst after deferred nsu check dst r v createactivation l v t dst l l l dst dst succ v v isipd creates the activation object for the current if it has not been already created and writes it to the dst after the deferred nsu check and pushes the object in the if the label of the 
head of the existing is less than the context then the label of the pushed node is set to else it is set to the context construct construct func r args n func h f opcall func args f t func t ipd cf h isipd construct invokes register func as a constructor and is similar to call for javascript functions the this object being passed the first argument in the list of arguments is a new object for host constructors no this is passed dst r v createthis l v t dst l l l dst dst succ v v isipd creates and allocates an object as this used for construction later in the function the object is labelled the context and placed in dst after deferred nsu check the prototype chain pointer is also labelled with the context label dst r v newobject l v t v v dst l l l dst dst succ isipd constructs a new empty object instance and puts it in dst after deferred nsu check the object is labelled with the context label and the prototype chain pointer is also labelled with the context dst r base r prop id vdst r v getpropertybyid base prop vdst l v t dst l l l dst dst succ v v isipd gets the property named by the identifier prop from the object in the base register and puts it into the dst register after the deferred nsu check if the object does not contain the property it looks up the prototype chain to determine if any of the proto objects contain the property when traversing the prototype chain the context is joined with the structure label of all the objects and the prototype chain pointer labels until the property is found or the end of the chain it then joins the property label to the context if the property is not found it returns undefined the joined label of the context is the label of the property put in the dst register if the property is an accessor property it calls the getter function sets the getter flag in the and updates the destination register field with the register where the value is to be inserted it then transfers the control to the first instruction in the getter function base r prop id value r direct b value direct true putdirect base prop value putindirect base prop value succ isipd writes into the heap the property of an object we check for in the label of value register if it contains a the program aborts as this could potentially result in an implicit information flow if not it writes the property into the object the basic functionality is to search for the property in the object and its prototype chain and change it if the property is not found a new property for the current object with the property label as the context is created based on whether the property is in the object itself or needs to be created in the object itself or in the prototype chain of the object it calls putdirect and putindirect respectively dst r base r prop id base v delbyid base prop l v t v dst l l l dst dst succ isipd deletes the property specified by prop in the object contained in base if the structure label of the object is less than the context the deletion does not happen if the property is found the property is deleted and boolean value true is written to dst else it writes false to dst the label of the boolean value is the structure label of the object joined with the property label base r prop id getter r setter r getter setter putgettersetter base prop getter setter succ isipd puts the accessor descriptor to the object in register base it initially checks if the structure label of the object is greater or equal to the context the property for which the accessor properties are added is given in the 
register prop the property label of the accessor functions is set to the context putgettersetter calls putindirect internally and sets the property of the object with the specified value dst r base r i r size r breaktarget offset base getpropnames base i size breaktarget ln base t vn t vn vn n dst i size dst dst i i size size vn undefined l base l base t base t prop base p l ipd cf false isipd creates a property name list for object in register base and puts it in dst initializing i and size for iteration through the list after the deferred nsu check if base is undefined or null it jumps to breaktarget it is a branching instruction and pushes the label with join of all the property labels and the structure label of the object along with the ipd on the if the ipd of the instruction is sen or the same as the top of the then we just join the label on top of the with the context label determined above dst r base r i n size n iter n target offset getnextpropnames base i size iter target ln vn t vn vn n dst i dst dst i i isipd copies the next name from the property name list created by getpnames in iter to dst after deferred nsu check and jumps to target if there are no names left it continues with the next instruction although it behaves as a branching instruction the context pertaining to this opcode is already pushed in also the ipd corresponding to this instruction is the same as the one determined by thus we do not push on the in this instruction l v t resolve resolve dst r prop id v resolveinsc prop v v dst l l l dst dst succ isipd resolve searches for the property in the scope chain and writes it into dst register if found the label of the property written in dst is a join of the context label all the nodes structure label of the object contained in it traversed in the scope chain and the label associated with the pointers in the chain until the node object where the property is found if the initial label of the value contained in dst was lower than the context label then the label of the value in dst is joined with in case the property is not found the instruction throws an exception similar to throw as described later dst r prop id skip n v resolveinscwithskip prop skip l v t v v dst l l l dst dst succ isipd looks up the property named by prop in the scope chain similar to resolve but it skips the top skip levels and writes the result to register dst if the property is not found it also raises an exception and behaves similarly to resolve l v t dst r prop id v resolveglobal prop v v dst l l l dst dst succ isipd looks up the property named by prop in the global object if the structure of the global object matches the one passed here it looks into the global object else it falls back to perform a full resolve dst r prop id isstrict bool v resolvebase prop isstrict l v t v v dst l l l dst dst succ isipd looks up the property named by prop in the scope chain similar to resolve but writes the object to register dst if the property is not found and isstrict is false the global object is stored in dst bdst r pdst r prop id bdst pdst resolvebaseandproperty basedst propdst prop bdst t bdst pdst t pdst bdst pdst bdst bdst succ pdst pdst isipd looks up the property named by prop in the scope chain similar to and writes the object to register bdst it also writes the property to pdst if the property is not found it raises an exception like resolve dst r index n skip n v getscopedvar index skip l v t dst l l l dst dst succ v v isipd loads the contents of the index local from the scope chain skipping 
skip nodes and places it in dst after deferred nsu the label of the value in dst includes the join of the current context along with all the structure label of objects in the skipped nodes index n skip n value r value putscopedvar index skip value succ isipd puts the contents of the value in the index local in the scope chain skipping skip nodes the label of the value includes the join of the current context along with the structure label of all the objects in the skipped nodes scope r pushscope scope succ isipd converts scope to object and pushes it onto the top of the current scope chain the contents of the register scope are replaced by the created object the scope chain pointer label is set to the context popscope succ isipd removes the top item from the current scope chain if the scope chain pointer label is greater than or equal to the context count n target n jmpscope count succ isipd removes the top count items from the current scope chain if the scope chain pointer label is greater than or equal to the context it then jumps to offset specified by target throw throw ex r excvalue ex throwexception isipd throw throws an exception and points to the exception handler as the next instruction to be executed if any the exception handler might be in the same function or in an earlier function if it is not present the program terminates if it has an exception handler it has an edge to the synthetic exit node apart from this throwexception pops the from the until it reaches the containing the exception handler it writes the exception value to a local interpreter variable excvalue which is then read by catch l excvalue t catch catch ex r ex l l l ex excvalue ex succ excvalue empty isipd catch catches the exception thrown by an instruction whose handler corresponds to the catch block it reads the exception value from excvalue and writes into the register ex if the label of the register is less than the context a is joined with the label it then makes the excvalue empty and proceeds to execute the first instruction in the catch block proofs and results the fields in a frame of the are denoted by the following symbols represents the ipd field in the top frame of the returns the label field in the top frame of the and returns the field in the top frame of the in the definitions and proofs that follow we assume that the level of attacker is l in the lattice presented earlier in the equivalence relation the level of the attacker is omitted for clarity purposes from definitions and proofs definition partial bijection a partial bijection is a binary relation on heap locations satisfying the following properties if a b and a then b and if a b and b then a using partial bijections we define equivalence of values labeled values and objects definition value equivalence two values and are equivalent up to written if either a b and a b or v where v is some primitive value definition labeled value equivalence two labeled values and are equivalent up to written if one of the following holds or or h or l and the first clause of the above definition is standard for the check it equates a partially leaked value to every other labeled value objects are formally denoted as n pi vi flags i a p s here pi s correspond to the property name vi s are their respective values and flags i represent the writable enumerable and configurable flags as described in the propertydescriptor structure in the cpp model above as the current model does not allow modification of the flags they are always set to true thus we do not 
need to account for the flags i in the equivalence definition below represents a labelled pointer to the object s prototype definition object equivalence for ordinary objects n pi vi flags i a p s and n flags m p we say n n iff either s h or the following hold s l pn in particular n m vi and a p a p for function objects f n f and f n f we say f f iff either s h or n n f f and the equality f f of nodes f f in cfgs means that the portions of the cfgs reachable from f f are equal modulo renaming of operands to bytecodes under equivalence of scope chains is defined below because we do not allow to flow into heaps we do not need corresponding clauses in the definition of object equivalence definition heap equivalence for two heaps we say that iff a b a b unlike objects we allow to permeate scope chains so our definition of scope chain equivalence must account for it scope chains are denoted as a node contains a label along with an object s either jsactivation or jsobject represented as s definition scope chain equivalence for two scope chain nodes s s we say that s s if one of the following holds s o s and o or s vn s and vi equivalence of two scope chains is defined by the following rules nil nil nil s if h or s nil if h or and s s if one of the following holds a or b h or c l s s and definition equivalence for two call frames we say iff registers registers i i c c h c c l and note that a register is simply a labeled value in our semantics so clause above is definition equivalence for two we say iff the corresponding nodes of and having label l are equal except for the c field in proofs that follow two nodes are equal if their respective fields are equal except for the c field definition equivalence given suppose is the lowest node in is the lowest node in is the node of pointed to by is the node of pointed to by is prefix of up to and including or if l or is empty is prefix of up to and including or if l or is empty then iff and i i definition state equivalence two states i and i are equivalent written as iff and lemma confinement lemma if i and h then and where a a a proof as h the l labelled nodes in the will remain unchanged branching instructions pushing a new node would have label h due to monotonicity of even if is the ipd corresponding to the it would only pop the h labelled node thus the l labelled nodes will remain unchanged hence we assume that the is the first node labelled h in the context stack for other higher labelled nodes above the first node labelled h in the the corresponding to the nodes having l label in the remain the same hence by case analysis on the instruction type prim a if dst then dst by premise of prim dst by definition dst dst b if dst then dst will contain a and by definition dst dst only dst changes in the so by definition also other remain unchanged by definition thus mov similar to prim jfalse and so and similar to jfalse typeof similar to prim instanceof similar to prim enter so so ret if f alse then only is popped the until are unchanged when true then it sets with res now let is the prefix of such that if then changes in does not effect the callframe equivalence and if then when l or and h when h each of the cases give from definition so by definition so end the confinement lemma does not apply call if it pushes on top of is the lowest node in if it joins the label with the l labelled nodes remain unchanged and the all the until remain unchanged so by definition so similar to prim if it is a eval it is similar to call in strict mode it pushes a node on 
with label h if h else labels it in mode it does not push a node on the remains equivalent with corresponding in by definition as other l are unchanged by definition so over the initial by definition if the argument object is created at x then x x after the step is taken similar to prim over the initial by definition if the function object is created at x then x x after the step is taken similar to prim over the initial by definition if the argument objects is created at x then x x after the step is taken it puts the object in dst with label h or depending on dst value s initial label also pushes a node containing the object in the scope chain with a if l or with label h if h or nil thus by definition by definition other are unchanged so by definition construct similar to call similar to over the initial by definition if the new object is created at x then x x after the step is taken similar to prim similar to mov when the property is a data property if the property is an accessor property then getter is invoked and if the invocation of getter pushes an entry on top of remains the lowest node in if it joins the label with the l labelled nodes remain unchanged and the all the until remain unchanged so by definition so sets the property of the object base object to the value with label h if the structure label of the object s thus the object remains lowequivalent by definition thus by definition also so deletes the property if structure label of object s thus the object remains by definition by definition similar to mov sets accessor property of the object base object with getter and setter and label h if the structure label of the object s thus the object remains by definition thus by definition also so similar to mov and jfalse similar to mov resolve if the property exists it is similar to mov if it does not it is similar to throw similar to resolve similar to resolve similar to resolve similar to resolve similar to mov writes the value in the indexth register in skipth node if index h then index else if index l then index other are unchanged thus by definition and so pushes node on with label h if h or nil else assigns a as the label thus registers remain unchanged by definition other are unchanged so by definition so pops the node from the if h registers remain unchanged by definition other are unchanged so by definition so similar to throw pops the until the handler is reached until the property of ipd ensures that either or thus is this and the ones below remain unchanged thus by definition so catch similar to mov n corollary if i i and i n h then and proof to prove proof by induction on basis ih from definition l labelled nodes of and are equal from lemma so l labelled nodes of and are equal thus l labelled nodes of and are equal and by definition to prove basis ih from lemma as i n h the lowest hlabelled node is the same grows monotonically in let the pointed to by lowest node be cn with size until the k from definition size of the prefix is same and by transitivity of equality it is the same for all the three cases until cn respectively with sizes k the following conditions hold i k i registers i registers and i k i registers i registers thus i k i registers i registers as the number of registers is the same given by r i k i r i r and i k i r i r let and vn n represents the values in the registers for and respectively then from definition a h in this case n h and vn n from lemma and definition b l and in this case either i n ii n l and vn in this case the value remains unchanged 
thus from definition vn n c now the following cases arise i vn n ii by lemma ln thus vn n i k i i and i k i i thus i k i i i k i i and i k i i from definition a if and be the two scope chains then due to confinement lemma i nil or i s n where n in either case i k i i from definition b if and sn n be the three then for and one of the following holds i due to confinement lemma and definition n ii h due to confinement lemma and definition ln h iii l due to confinement lemma either one should hold a n by definition b n l sn no additions to the scope chain thus i k i i from definition i k i i and i k i i thus i k i i i k i c i c h i c i c l i i and i k i c i c h i c i c l i i then either i k i c i c h or i k i c i c l i i i k i i and i k i i thus i k i i i k i i and i k i i thus i k i i i k i i and i k i i thus i k i i from definition and definition corollary if i h then proof by induction on basis by definition ih i and i n from ih and definition a b a b from lemma thus b c b c as a b and b c we have a c because is an identity bijection thus if a c a c then if a and b contain an ordinary object then for their respective structure labels s and either s h if h then h by definition where is the structure label of the object in c thus a c s l pn n m vi and a p a p for respective properties in a and b if l then l and m k and a p a p for respective properties in b and c s l pn n k if vi and then either i h or i l and ri also as a p a p and a p a p we have a p a p thus by definition a c if a and b contain a function object then for their respective structure labels s and either s h if h then h by definition where is the structure label of the function object in c thus a c s l l is the structure label of the function object in c thus n n from the above result for objects the cfgs f f f and the scope chains by corollary thus a c thus lemma supporting lemma suppose i i i i l and then and proof every instruction executes isipd at the end of the operation if is the ipd corresponding to the then it pops the first node on the as and would either pop in both the runs or in none thus for instructions that push branch we explain in respective instructions proof by case analysis on the instruction type prim no new object is created so as so and src i src i for i case analysis on the definition of for src i if src src src src then dst dst hence dst dst by definition if h and h dst dst so dst dst by definition if h h then dst dst so dst dst by definition symmetrical reasoning for h dst dst so dst dst by definition only dst changes in the top of both the thus by definition other in and are unchanged by definition and so mov similar reasoning as prim with single source jfalse no new object is created so cond cond cond cond l l is the label to be pushed on cond cond h h is the label to be pushed on the ipd of would be the same as we have same cfg in both the cases if the ipd is sen then we join the label of with the label obtained above which is the same in both the runs thus because if the ipd is not sen then it is some other node in the same thus the ipd field is also the same the h field is false in both the cases thus the pushed node is the same in both the cases and hence as either ipd or and may or may not be equal similar reasoning as jfalse similar to mov no new object is created so the label of the value in the dst is the label of the context joined with the label of all the prototype chain pointers traversed as value value where s and are the structure labels of objects pointed to by value and value 
respectively then by definition if s h then dst h and dst so dst dst from definition if s l then the objects have similar properties and prototype chains if it is not an instance and none of traversed prototype chain and objects are h then dst dst l and false else if it is present it has true so dst dst from definition if any one of traversed prototype chain and objects are h then dst dst so dst dst from definition only dst changes in the top of both the thus by definition other are unchanged and by definition and so enter no new object is created so ret no new object is created so since so only two cases arise for the getter flag f alse is same as with popped similarly is same as with popped as other are changed by definition true only resgister which changes is the and now if h then from defintion and if l then res res and res res end no and call no new object is created so pushes the same node on both similar to jfalse the only difference is the h field as the cfgs are same if it has an associated exception handler we set the h field to true in both the runs else it is false thus is the node pushed on and hence as f unc f unc if h as until and remain unchanged which correspond to the c field in the lowest node and by definition l registers created in the new contain undefined with label l and as so the function objects n n implying and also return addresses are the same and the callee is the same so other are unchanged so similar to move similar to in strict mode it pushes a node on the with label the pushed nodes are thus by definition in mode it does not push anything and is similar to call thus and let the argument object be created at x and y in and then x y dst dst l and as the objects are thus by definition and by definition also by definition as the objects are lowequivalent let the function object be created at x and y in and then x y function objects are as and func func dst dst by definition thus by definition and by definition also by definition as the objects are similar to construct similar to call similar to similar to no new object is created so as base base either the objects have the same properties or are labelled h because of definition in case of data property either dst dst h or dst dst l and value of prop is the same so by definition dst dst in case of an accessor property only dst changes in the top of both the and dst dst since and thus by definition other are unchanged and by definition for reasoning is similar to call and so no new object is created so because if value is labelled h then the properties created or modified will have label h and structure labels of the respective objects will become else if value is labelled l then the properties created or modified will have same value and label thus the objects remain by definition and hence by definition no new object is created so if the deleted property is h or if the structure label of the object is h then dst dst h else if is labelled l then dst dst l and value is true or false depending on whether the property is deleted or not by definition and if structure labels of the objects are l they have same properties by definition if not they have structure label as thus objects remain by definition and by definition reasoning similar to no new object is created so as base base and so are the objects and as thus the structure label of the object is either h in both the runs or l and have the same properties with values definition the ipd in both the cases is the same and so is the c field the mh field is set to 
false thus for it is similar to mov but done for dst i and size similar to mov but done for dst and base resolve no new object is created so if property is found in a l object and the node labels are also l then the property value is the same as if it is in h object or any node labels are h or have a then label of the property is h or thus dst dst by definition thus if property is not found in both runs it is similar to throw if property is not found in second run then in the first run the property is in h context so the exception thrown is also until the of are unchanged so similar to resolve similar to resolve similar to resolve similar to resolve no new object is created so reads indexth register in object in skipth node in the and writes into dst as the value if labelled l is the same else is labelled h or by definition dst dst and by definition thus no new object is created so writes into the scope chain node the same value if value is labelled if it is labelled in any of the runs scope chains remain equivalent if value is h it checks the label of register and puts the value with label h or thus by definition and by definition by definition no new object is created so pushes in the a node containing the object in scope with node label as scope scope and by definition registers and other remain the same so no new object is created so pops a node from the scope chain if h as so other registers remain the same so similar to throw no new object is created so the property of ipd ensures that and the and and the ones below them remain unchanged thus by definition catch similar to mov lemma supporting lemma suppose i i i i i i l l i n h j m h then and proof starting with the same instruction and high context in both the runs we might get two different instructions and this is only possible if was some branching instruction in the first place and this divergence happened in a high context now to prove from the property of the ipds we know that if pushes a h node on top of which was originally l ipd pops that node since we start from the same instrucion ipd where to prove n and m because pushes equal nodes and are not the ipds as and from lemma we get and ipd if ipd and ipd as ipd it pops the and which correspond to and in the nth and mth step ipd is the point where we pop the final h node on the because and from corollary n and m if and ipd then it pops the node pushed by in the other run as h and l by the property of ipd ipd which would pop from the the first frame labelled h on the thus n and m symmetric case of the above to prove a n and m from lemma we get from corollary we get and from lemma we have as ipd we compare all of and as the ipd of an instruction can lie only in the same comparison for all in and suffice r r r let and be represented by vn and vm in and respectively the in and are represented by and respectively we do case analysis on the different cases of definitions for and to show vn vm as i n h j m h if l then either vn vm or vn vm by definition vn vm if h then vn vm by definition vn vm if then vn and if then vm by definition vn vm lets and be the scopechains in and and sn and sm represent the scopechains in and sn and sm are the respective in the nth and mth step of the two runs and n and m are their node labels for scope chain pointers the following cases arise nil in this case sn and sm either remain nil or its head will have a h label because of the rules of the instructions that modify the ii and a in this case n and m will be too b h in this case n and m will be h too 
l in this case n and m or scopechains remain unchanged b n and m in case of jfalse and and and in case of if n and m base undefined and base undefined because base base hence base base thus dst i size h and similarly dst i size other registers remain unchanged and so do the other thus from the case a above we know that if then c n and m symmetric case of the above to prove a n and m from lemma we get from corollary we and from lemma we have assume is get an object at x in and is an object at y in such that x y and on and om are the respective objects in the nth and mth step of the two runs we do case analysis on the different cases of definitions for and to show on om if h then on om by definition on om if l then on om on om by definition on om similarly for function objects the structure labels would remain h if they were originally h or will remain l with the same cfgs and scopechains b n and m in case of jfalse and thus from the case a above we know that if then c n and m symmetric case of the above definition trace a trace is defined as a sequence of configurations or states resulting from a program evaluation for a program evaluation p sn where si i the corresponding trace is given as t p sn definition an e over a trace t sn where si i is defined inductively as e nil nil si e t e si t e t if l else if theorem suppose p and p are two program evaluations then for their respective given by e t p sn e t p if and n m then sn proof proof proceeds by induction on basis by assumption ih sk where to prove let sk i and sk then i from lemma where i or from lemma where corollary suppose i i i hend i i hend i then such that proof and are empty at the end of steps from the semantics we know that in l context both runs would push and pop the same number of nodes thus both take same number of steps in l context let k be the number of states in l context then in theorem n m thus sk where sk hend i and hend i by definition where
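The trace and projection definitions above lost most of their notation in extraction; the following LaTeX restates what we take to be the intended low-projection of a trace, assuming the standard formulation (the symbols Γ for the label of a state and L for the attacker's level are our guesses, not necessarily the paper's own notation):

    \epsilon(\mathit{nil}) = \mathit{nil}
    \qquad
    \epsilon(\langle s_i \rangle :: t) =
      \begin{cases}
        \langle s_i \rangle :: \epsilon(t) & \text{if } \Gamma(s_i) = L \\
        \epsilon(t) & \text{otherwise}
      \end{cases}

Under this reading, the final theorem says that if two program evaluations start from equivalent states and their low projections have equal length n = m, then the last low-observable states of the two runs are again equivalent up to some extended partial bijection.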
energy storage sharing in smart grid a modified auction based approach dec wayes tushar member ieee bo chai chau yuen senior member ieee shisheng huang member ieee david smith member ieee vincent poor fellow ieee and zaiyue yang member ieee paper studies the solution of joint energy storage es ownership sharing between multiple shared facility controllers sfcs and those dwelling in a residential community the main objective is to enable the residential units rus to decide on the fraction of their es capacity that they want to share with the sfcs of the community in order to assist them storing electricity for fulfilling the demand of various shared facilities to this end a modified mechanism is designed that captures the interaction between the sfcs and the rus so as to determine the auction price and the allocation of es shared by the rus that governs the proposed joint es ownership the fraction of the capacity of the storage that each ru decides to put into the market to share with the sfcs and the auction price are determined by a noncooperative stackelberg game formulated between the rus and the auctioneer it is shown that the proposed auction possesses the incentive compatibility and the individual rationality properties which are leveraged via the unique stackelberg equilibrium se solution of the game numerical experiments are provided to confirm the effectiveness of the proposed scheme index grid shared energy storage auction theory stackelberg equilibrium incentive compatibility i ntroduction e nergy storage es devices are expected to play a significant role in the future smart grid due to their capabilities of giving more flexibility and balance to the grid by providing a to the renewable energy es can improve the electricity management in a distribution network reduce the electricity cost through opportunistic demand response and improve the efficient use of energy the distinct features of es make it a perfect candidate to assist in tushar and yuen are with singapore university of technology and design sutd somapah road singapore email wayes tushar yuenchau chai is with the state grid smart grid research institute beijing china email chaibozju huang is with the ministry of home affairs singapore email shisheng smith is with the national ict australia nicta act australia and adjunct with the australian national university email poor is with the school of engineering and applied science at princeton university princeton nj usa email poor yang is with the state key laboratory of industrial control technology at zhejiang university hangzhou china email yangzy this work is supported in part by the singapore university of technology and design sutd through the energy innovation research program eirp singapore and idc grant and in part by the national science foundation under grant smith s work is supported by nicta which is funded by the australian government through the department of communications and the australian research council residential demand response by altering the electricity demand due to the changes in the balance between supply and demand particularly in a residential community setting where each household is equipped with an es the use of es devices can significantly leverage the efficient flows of energy within the community in terms of reducing cost decarbonization of the electricity grid and enabling effective demand response dr however energy storage requires space in particular for large consumers like shared facility controllers sfcs of large apartment buildings 
the energy requirements are very high which consequently necessitates the actual installment of very large energy storage capacity the investment cost of such storage can be substantial whereas due to the random usage of the facilities depending on the usage pattern of different residents some of the storage may remain unused furthermore the use of ess for rus is very limited for two reasons firstly the installation cost of es devices is very high and the costs are entirely borne by the users secondly the ess are mainly used to save electricity costs for the rus rather than offer any support to the local energy authorities which further makes their use economically unattractive hence there is a need for solutions that will capture both the problems related to space and cost constraints of storage for sfcs and the benefit to rus for supporting third parties to this end numerous recent studies have focused on energy management systems with es devices as we will see in the next section however most of these studies overlook the potential benefits that local energy authorities such as sfcs can attain by jointly sharing the es devices belonging to the rus particularly due to recent cost reduction of es devices sharing of es devices installed in the rus by the sfcs has the potential to benefit both the sfcs and the rus of the community as we will see later in this context we propose a scheme that enables joint es ownership in smart grid during the sharing each ru leases the sfcs a fraction of its es device to use and charges and discharges from the rest of its es capacity for its own purposes on the contrary each sfc exclusively uses its portion of es devices leased from the rus this work is motivated by in which the authors discussed the idea of joint ownership of es devices between domestic customers and local network operators and demonstrated the potential benefits that can be obtained through such sharing however no policy has been developed in to determine how the fraction of battery capacity which is shared by the network operators and the domestic users is decided note that as an owner of an es device each ru can decide ieee trans smart grid ii s tate of he a rt whether or not to take part in the joint ownership scheme with the sfcs and what fraction of the es can be shared with the sfcs hence there is a need for solutions that can capture this decision making process of the rus by interacting with the sfcs of the network in this context we propose a joint es ownership scheme in which by participating in storage sharing with the sfcs both the rus and sfcs benefit economically due to the interactive nature of the problem we are motivated to use auction theory to study this problem exploiting the communications aspects auction mechanisms can exchange information between users and electricity providers meet users demands at a lower cost and thus contribute to the economic and environmental benefits of smart in particular we modify the vickrey auction technique by integrating a stackelberg game between the auctioneer and the rus and show that the modified scheme leads to a desirable joint es ownership solution for the rus and the sfcs to do this we modify the auction price derived from the vickrey auction to benefit the owner of the es through the adaptation of the adopted game as well as keep the cost savings to the sfcs at the maximum we study the attributes of the technique and show that the proposed auction scheme possesses both the incentive compatibility and the individual rationality 
properties leveraged by the unique equilibrium solution of the game we propose an algorithm for the stackelberg game that can be executed distributedly by the rus and the auctioneer and the algorithm is shown to be guaranteed to reach the desired solution we also discuss how the proposed scheme can be extended to the time varying case and finally we provide numerical examples to show the effectiveness of the proposed scheme the importance and necessity of the proposed study with respect to actual operation of smart grid lies in assisting the sfcs of large apartment buildings in smart communities to reduce space requirements and investment costs of large energy storage units furthermore by participating in storage sharing with the sfcs the rus can benefit economically which can consequently influence them to efficiently schedule their appliances and thus reduce the excess use of electricity we stress that energy management schemes are not new in the smart grid paradigm and have been discussed in and however the scheme discussed in the paper differs from these existing approaches in terms of the considered system model chosen methodology and analysis and the use of the set of rules to reach the desired solution the remainder of the paper is organized as follows we provide a comprehensive literature review of the related work in section ii followed by the considered system model in section iii our proposed modified mechanism is demonstrated in section iv where we also discuss how the scheme can be adopted in a time varying environment the numerical case studies are discussed in section v and finally we draw some concluding remarks in section vi in the recent years there has been an extensive research effort to understand the potential of es devices for residential energy management this is mainly due to their capabilities in reducing the intermittency of renewable energy generation as well as lowering the cost of electricity the related studies can be divided into two general categories the first category of studies consisting of which assume that the ess are installed within each ru premises and are used solely by the owners in order to perform different energy management tasks such as optimal placement sizing and control of charging and discharging of storage devices the second type of studies deal with es devices that are not installed within the rus but located in a different location such as in electric vehicles evs here the ess of evs are used to provide ancillary services for rus and local energy providers furthermore another important impact of es devices on residential distribution grids is studied in and in particular these studies focus on how the use of es devices can bring benefits for the stakeholders in external energy markets in the authors propose a optimization method for siting and sizing of ess of a distribution grid to capture the between the storage stakeholders and the distribution system operators furthermore in optimal storage profiles for different stakeholders such as distribution grid operators and energy traders are derived based on case studies with real data studies of other aspects of smart grid can be found in as can be seen from the above discussion the use of es devices in smart grid is not only limited to address the intermittency of renewable generation and assisting users to take part in energy management to reduce their cost of electricity but also extends to assisting the grid or other similar energy entities such as an sfc and generating revenues for 
stakeholders however one similarity between most of the above mentioned literature is that only one entity owns the es and uses it according to its requirements nonetheless this might not always be the case if there are large number of in a community in this regard considering the potential benefits of es sharing as discussed in this paper investigates the case in which the sfcs in a smart community are allowed to share some fraction of the ess owned by the rus through a third party such as an auctioneer or a community representative the proposed modified auction scheme differs from the existing techniques for energy management such as those in in a number of ways particularly in contrast to these studies the proposed auction scheme captures the interaction between the sfcs and the rus whereby the decision on the auction price is determined via a stackelberg game by exploiting auction rules including the determination rule payment rule and allocation rule the interaction between the sfcs and rus is greatly simplified for instance the determination rule can easily identify the number of rus that are participating in the auction process which further leverage please note that such a technique can be applied in the real distribution network such as in electric vehicle charging stations by using the information and power flow infrastructure of smart grids each ru may participate as a single entity or as a group where rus connected via an aggregator ieee trans smart grid sicap either due to the fact that some sfcs do not have their own ess or that the ess of the sfcs are not large enough to store all the excess energy at that time it is important to note that the es requirement of the sfcs can stem from any type of intermittent generation profile that the sfcs or rus can adopt for example one can consider that the proposed scheme is based on a hybrid generation profile comprising both solar and wind generation however the proposed technique is equally suitable for other types of intermittent generation as well we assume that there are n rus where n is the set of all rus in the system that are willing to share some parts of their es with the sfcs of the network the battery cap capacity of each ru i n is si and each ru i wants to put xi fraction of its es in the market to share with the sfcs where xi bi scap i di sicap total capacity of es device of ru di the amount that ru i does not sell bi the maximum es space the ru i might sell to the sfcs bi sicap di di sharing price pt each i decides ri ri pt yes no x n bn leaves the et sharing market yes each m decides am qm take ke part in es sharing am pt no sharing price pt here bi is the maximum amount of battery space that the ru can share with the sfcs if the tradeoff for the sharing is attractive for it di is the amount of es that the ru does not want to share rather uses for its own needs to run the essential loads in the future if there is any electricity disruption within the ru or if the price of electricity is very high fig the fraction of the es capacity that an ru i is willing to share with the sfcs of the community the determination of the auction price via the stackelberg game in the payment rule furthermore on the one hand the work here complements the existing works focusing on the potential of es for energy management in smart grid on the other hand the proposed work has the potential to open new research opportunities in terms of control of energy dispatch from es the size of es and exploring other interactive techniques such as 
cooperative games and optimization for es sharing to this end to offer an es space xi on the one hand each ru i decides on an reservation price ri per unit of energy hereinafter we will use es space and energy interchangeably to refer to the es space that each ru might share with the sfcs however if the price pt which each ru received for sharing its es is lower than ri the ru i removes its es space xi from the market as the expected benefit from the joint sharing of es is not economically attractive for it on the other hand each sfc m m that needs to share es space with the rus to store their energy decides a reservation bid am which represents the maximum unit price the sfc m is willing to pay for sharing per unit of es with the rus in the smart community to enter into the sharing market and if am pt the sfc removes its commitment of joint es ownership with rus from the market due to the same reason as mentioned for the ru a graphical representation of the concept of es sharing and their decision making process of sharing the es space of each ru i with the sfcs are shown in fig please note that to keep the formulation simple we do not include any specific storage model in the scheme however by suitably modeling some related parameters such as the storage capacity scap i and parameters like di and bi the proposed scheme can be adopted for specific es devices iii s ystem m odel let us consider a smart community that consists of a large number of rus each ru can be an individual home a single unit of a large apartment complex or a large number of units connected via an aggregator that acts as a single entity each ru is equipped with an es device that the ru can use to store electricity from the main grid or its renewable energy sources if there are any or can perform dr management according to the price offered by the grid the es device can be a storage device installed within each ru premises or can be the es used for the ru s electric vehicles the entire community is considered to be divided into a number of blocks where each block consists of a number of rus and an sfc each sfc m m where m is the set of all sfcs and m is responsible for controlling the electrical equipment and machines such as lifts parking lot lights and gates water pumps and lights in the corridor area of a particular block of the community which are shared and used by the residents of that block on regular basis each sfc is assumed to have its own renewable energy generation and is also connected to the main electricity grid with appropriate communication protocols considering the fact that the nature of energy generation and consumption is highly sporadic let us assume that the sfcs in the community need some extra ess to store their electricity after meeting the demand of their respected shared facilities at a particular time of the day this can be the interaction that arises from the choice of es sharing price between the sfcs and rus as well as the need of the sfcs to share the es space to store their energy and the profits that the rus can reap from allowing their ess to be shared give rise to a market of es sharing between the rus and the sfcs in the smart grid in this market the involved n rus and m sfcs will interact with each other to decide as to how many of them will take part in sharing the ess between themselves and also to agree on the es sharing parameters such as the trading price pt and the amount of es space to ieee trans smart grid iv auction based es ownership vickrey auction is a type of auction 
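As a concrete reading of the quantities just introduced, the following minimal Python sketch collects what each participant brings to the ES sharing market. The class and attribute names are illustrative rather than taken from the paper; the only relations assumed from the text are b_i = s_i^cap - d_i, x_i in [0, b_i], and the participation thresholds p_t >= r_i for an RU and a_m >= p_t for an SFC.

```python
from dataclasses import dataclass

@dataclass
class ResidentialUnit:
    """Owner of an ES device (RU i). Names are illustrative."""
    s_cap: float   # total ES capacity s_i^cap (kWh)
    d: float       # capacity the RU keeps for its own essential loads
    r: float       # reservation price per unit of shared ES space
    delta: float   # reluctance parameter (design parameter)

    @property
    def b(self) -> float:
        # maximum ES space the RU might share: b_i = s_i^cap - d_i
        return self.s_cap - self.d

    def participates(self, p_t: float) -> bool:
        # the RU withdraws its offer x_i if the auction price falls below r_i
        return p_t >= self.r

@dataclass
class SharedFacilityController:
    """Customer of ES space (SFC m). Names are illustrative."""
    a: float   # reservation bid: maximum unit price the SFC will pay
    q: float   # ES space the SFC needs to store its excess energy (kWh)

    def participates(self, p_t: float) -> bool:
        # the SFC leaves the market if the price exceeds its bid a_m
        return self.a >= p_t
```

An RU that fails its threshold simply withdraws its offer from the market, and an SFC whose bid falls below the auction price withdraws its commitment to joint ownership, exactly as described for the decision process in Fig. 1.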
scheme where the bidders submit their written bids to the auctioneer without knowing the bids of others participating in the auction the highest bidder wins the auction but pays the second highest bid price nevertheless in this paper we modify the classical vickrey auction to model the joint es ownership scheme for a smart community consisting of multiple customers the sfcs and multiple owners of es devices the rus the modification is motivated by the following factors unlike the classical vickrey auction the modified scheme would enable the multiple owners and customers to decide simultaneously and independently whether to take part in the joint es sharing through the determination rule of the proposed auction process as we will see shortly the modification of the auction provides each participating ru i with flexibility of choosing the amount of es space that they may want to share with the sfcs in cases when the auction pt is lower than their expected reservation price ri and finally the proposed auction scheme provides solutions that satisfy both the incentive compatibility and individual rationality properties as we will see later which are desirable in any mechanism that adopts auction theory to this end the proposed auction process as shown in fig consists of three elements fig energy management in a smart community through auction process consisting of multiple rus with es devices an auctioneer and a number of sfcs be shared in the considered model the rus not only decide on the reservation prices ri but also on the amount of es space xi that they are willing to share with the sfcs the amount of xi is determined by the between between the economic benefits that the ru i expects to obtain from giving the sfcs the joint ownership of its es device and the associated reluctance of the ru for such sharing the reluctance to share ess may arise from the rus due to many factors for instance sharing would enable frequent charging and discharging of ess that reduce the of an es device hence an ru i may set its higher so as to increase its reluctance to participate in the es sharing however if the ru is more interested in earning revenue rather than increasing es life time it can reduce its and thus get more net benefits from sharing its storage therefore for a given set of bids am and storage requirement qm by the sfcs the maximum amount of es xi that each ru i will decide to put for sharing is strongly affected by the trading price pt and the reluctance of each ru i n during the sharing process in this context we develop an auction based joint es ownership scheme in the next section we understand that the proposed scheme involves different types of users such as auctioneers sfcs and rus therefore the communication protocol used by them could be asynchronous however in our study we assume that the communication between different entities of the system are synchronous this is mainly due to the fact that we assume our algorithm is executed once in a considered time slot and the duration of this time slot can be one hour therefore synchronization is not a significant issue for the considered case and the communication complexity is affordable for example the auctioneer can wait for five minutes until it receives all the data from sfcs and the rus and then the algorithm which is proposed in section can be executed owner the rus in set n that own the es devices and expect to earn some economic benefits through maximizing a utility function by letting the sfcs to share some fraction of their es 
spaces customer the sfcs in set m that are in need of ess in order to store some excess electricity at a particular time of the day the sfcs offer the rus a price with a view to jointly own some fraction of their es devices auctioneer a third party estate or building manager that controls the auction process between the owners and the customers according to some predefined rules the proposed auction policies consist of a determination rule b payment rule and c storage allocation rule here determination rule allows the auctioneer to determine the maximum limit for the auction price pmax and the number t of sfcs and rus that will actively take part in the es sharing scheme once the auction process is initiated the payment rule enables the auctioneer to decide on the price that the customer needs to pay to the owners for sharing their es devices which allows the rus to decide how much storage space they will be putting into the market to share with the sfcs finally the auctioneer allocates the es spaces for sharing for each sfc following the allocation rule of the proposed auction it is important to note that although both the customers and owners do not have any access to others private information such as the amount of es to be shared by an ru or the required energy space by any sfc the rules of auction are known to all the participants of the joint ownership process please note that the life time degradation due to charging and discharging may not true for all electromechanical systems such as system reluctance parameter refers to the opposite of preference parameter hereinafter p will be used to refer to auction price instead of sharing or t trading price ieee trans smart grid of the sfcs and rus in the network cosumers participating in auction owner customer am ri k n j hence the joint ownership of es would be a detrimental choice for the rus and the sfcs within the set n j and k respectively which consequently remove them from the proposed auction process now one desirable property of any auction mechanism is that no participating agents in the auction mechanism will cheat once the payment and allocation rules are being established to this end we propose that once j and k are determined k sfcs and j rus will be engaged in the joint es sharing process which is a necessary condition for matching total demand and supply while maintaining a truthful auction scheme nevertheless if truthful auction is not a necessity sfc k and ru j can also be allowed to participate in the joint es ownership auction price maximum auction price pmax t vickrey price pmin t owners participating in auction storage amount fig determination of the vickrey price the maximum auction price and the number of participating rus and sfcs in the auction process the proposed scheme initially determines the set of sfcs m and rus n that will effectively take part in the auction mechanism once the upper bound of the auction price pmax is determined eventually the payment and the allocation t rules are executed in the course of the auction plan b payment rule we note that the intersection of the demand and supply curves demonstrates the highest reservation price pmax for the t participating j rus according to the vickrey auction mechanism the auction price for sharing the es devices would be the second highest reservation price the vickrey price which will be indicated as pmin hereinafter however we t note that this second highest price might not be considerably beneficial for all the participating rus in the auction scheme in 
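The four steps of the determination rule can be summarized in the short sketch below, which reuses the illustrative ResidentialUnit and SharedFacilityController types from the earlier sketch. It finds the crossing of the aggregated supply and demand step curves by advancing whichever curve is currently lower; that tie-breaking choice, and the handling of the marginal RU J and SFC K, are assumptions layered on top of the rule as stated.

```python
def determination_rule(rus, sfcs):
    """Determination rule sketch: order reservation prices r_i (ascending) and
    bids a_m (descending), build the aggregated supply and demand step curves,
    and stop at their crossing point where a_K >= r_J no longer holds."""
    rus = sorted(rus, key=lambda u: u.r)                   # r_1 <= ... <= r_N
    sfcs = sorted(sfcs, key=lambda s: s.a, reverse=True)   # a_1 >= ... >= a_M
    J = K = 0
    supply = demand = 0.0
    while J < len(rus) and K < len(sfcs) and sfcs[K].a >= rus[J].r:
        if supply <= demand:        # supply curve is lower: admit the next RU
            supply += rus[J].b
            J += 1
        else:                       # demand curve is lower: admit the next SFC
            demand += sfcs[K].q
            K += 1
    participating_rus, participating_sfcs = rus[:J], sfcs[:K]
    # bounds on the auction price follow from the crossing: p_t^max is the
    # highest participating reservation price and p_t^min the second highest
    # (the Vickrey price), when at least two RUs participate
    prices = sorted(u.r for u in participating_rus)
    p_max = prices[-1] if prices else 0.0
    p_min = prices[-2] if len(prices) > 1 else p_max
    return participating_rus, participating_sfcs, p_min, p_max
```

As the text notes, the marginal pair at the crossing point may be held out of the engaged set when a truthful auction is required.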
contrast if pt is set to pt pmax t the price could be detrimental for some of the sfcs therefore to make the auction scheme attractive and beneficial to all the participating rus and at the same time to be cost effective for all the sfcs we strike a balance between the pmax and pmin t t to do so we propose a scheme for deciding on both the auction price pt and the amount of ess xi that rus will put into the market for sharing according to pt in particular we propose a stackelberg game between the auctioneer that decides on the auction price pt to maximize the average cost savings to the sfcs as well as satisfying their desirable needs of ess and the rus that decide on the vector of the amount of es x that they would like to put into the market for sharing such that their benefits are maximized please note that the solution of the proposed problem formulation can also be solved following other distributed algorithms algorithms designed via the optimization technique stackelberg game stackelberg game is a decision making process in which the leader of the game takes the first step to choose its strategy the followers on the other hand choose their strategy in response to the decision made by the leader in the proposed game we assume the auctioneer as the leader and the rus as the followers hence it can be seen as a stackelberg game slmfsg we propose that the auctioneer as a leader of the slmfsg will take the first step to choose a suitable min auction price pt from the range pmin t pt meanwhile each ru i j as a follower of the game will play its best strategy by choosing a suitable xi bi in response to the price pt offered by the auctioneer the best a determination rule the determination rule of the proposed scheme is executed by the following steps inspired from i the rus of set n the owners of the ess declare their reservation price ri in an increasing order which we can consider without loss of generality as rn the rus submit the reservation price along with the amount xi of es that they are interested to share with the sfcs to the auctioneer ii the sfcs bidding prices am are arranged in a decreasing order am the sfcs submit to the auctioneer along with the quantity qm of es that they require iii once the auctioneer receives the ordered information from the rus and the sfcs it generates the aggregated supply reservation price of the rus versus the amount of es the rus interested to share and demand curves reservation bids am verses the quantity of es qm needed using and respectively iv the auctioneer determines the number of of participating sfcs k and rus j that satisfies ak rj from the intersection of the two curves using any standard numerical method as soon as the sfc k m and ru j n are determined from the intersection point as shown in fig an important aspect of the auction mechanism is to determine the number of sfcs and rus which will take part in the joint ownership of ess we note that once the number of sfcs k and rus j are determined the following relationship holds for the rest ieee trans smart grid response strategy of each ru i will stem from a utility function ui which captures the benefit that an ru i can gain from deciding on the amount of es xi to be shared for the offered price whereas the auctioneer chooses the price pt with a view to maximize the average cost savings z of the sfcs in the network now to capture the interaction between the auctioneer and the rus we formally define the slmfsg as j auctioneer ui xi z pt which consists of i the set of rus j participating in 
the auction scheme and the auctioneer ii the utility ui that each ru i reaps from choosing a suitable strategy xi in response to the price pt announce by the auctioneer iii the strategy set xi of each ru i j iv the average cost savings z that incurred to each sfc m k from the strategy by the chosen max of the auctioneer and v the strategy pt pmin t pt auctioneer in the proposed approach each ru i iteratively responses to the strategy pt chosen by the auctioneer independent of other rus in set j i the response of i is affected by the offered price pt its reluctance parameter and the initial reservation price ri however we note that the auctioneer does not have any control over the decision making process of the rus it only sets the auction price pt with a view to maximize the cost savings z with respect to the cost with the initial bidding price for the sfcs to this end the target of auctioneer is assumed to maximize the average cost savings x a p m t xi k by choosing an appropriate price p to offer to each ru from t min max am is the average savthe range pt pt here ing in auction price that the sfcs pay to the rus for sharing p the ess and i xi is the total amount of es that all the sfcs share from the rus from z we note that the cost savings will be more if pt is lower for all m k however this is conflicted by that fact that a lower pt may lead to the choice of lower xi j by the rus which in turn will affect the cost to the sfcs hence to reach a desirable solution set the auctioneer and the rus continue to interact with each other until the game reaches a stackelberg equilibrium se now the utility function ui which defines the benefits that an ru i can attain from sharing xi amount of its es with the sfcs is proposed to be ui xi pt ri xi xi bi where is the reluctant parameter of ru i and ri is the reservation price set by ru ui mainly consists of two parts the first part pi ri xi is the utility in terms of its revenue that an ru i obtains from sharing its xi portion of es device the second part on other hand is the negative impact in terms of liability on the ru i stemming from sharing its es with the sfc this is mainly due to the fact that once an ru decides to share its xi amount of storage space with an sfc the ru can only use scap i xi amount of storage for its own use the term captures this restriction of the ru on the usage of its own es in the reluctance parameter is introduced as a design parameter to measure the degree of unwillingness of an ru to take part in energy sharing in particular a higher value of refers to the case when an ru i is more reluctant to take part in the es sharing and thus as can be seen from even with the same es sharing attains a lower net benefit thus ui can be seen as a net benefit to ru i for sharing its es the utility function is based on the assumption of a marginal utility which is suitable for modeling the benefits of power consumers as explained in in addition the proposed utility function also possesses the following properties i the utility of any ru increases as the amount of price pt paid to it for sharing per unit of es increases ii as the reluctance parameter increases the ru i becomes more reluctant to share its es and consequently the utility decreases and iii for a particular price pt the more an ru shares with the sfcs the less interested it becomes to share more for the joint ownership to that end for a particular price pt and reluctance parameter the objective of ru i is max pt ri xi xi bi definition let us consider the game as 
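One consistent reading of the payment-rule objectives is the following: the RU utility is revenue minus a reluctance penalty, u_i(x_i) = (p_t - r_i) x_i - delta_i x_i^2 / b_i, and the auctioneer maximizes the average cost saving Z = (a_bar - p_t) * sum_i x_i, where a_bar is the mean bid of the participating SFCs. The quadratic penalty and the exact form of Z are assumptions chosen to match the stated properties (concavity in x_i, utility increasing in p_t and decreasing in the reluctance parameter), not expressions quoted from the paper.

```python
def utility(x_i, p_t, ru):
    """Assumed RU utility: revenue (p_t - r_i) * x_i minus a quadratic
    reluctance penalty delta_i * x_i^2 / b_i (one concave form consistent
    with the properties listed in the text)."""
    return (p_t - ru.r) * x_i - ru.delta * x_i ** 2 / ru.b

def best_response(p_t, ru):
    """Maximizer of the concave utility over the admissible range [0, b_i]:
    setting du/dx = 0 gives x_i* = b_i * (p_t - r_i) / (2 * delta_i)."""
    if ru.delta <= 0:              # no reluctance: share everything profitable
        return ru.b if p_t > ru.r else 0.0
    x_star = ru.b * (p_t - ru.r) / (2.0 * ru.delta)
    return min(max(x_star, 0.0), ru.b)

def avg_cost_saving(p_t, shares, sfcs):
    """Assumed auctioneer objective: (mean bid - auction price) times the
    total ES space put on the market by the RUs."""
    a_bar = sum(s.a for s in sfcs) / len(sfcs)
    return (a_bar - p_t) * sum(shares)
```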
described in where the utility of each ru i and the average utility per sfc are described via ui and z respectively now will reach a se if and only if the solution of the game satisfies the following set of conditions ui ui xi j max xi pmin t pt and x xi k i am pt x xi k i am where hence according to and both the rus and the sfcs achieve their best possible outcomes at the se hence neither the rus nor the auctioneer will have any incentive to change their strategies as soon as the game reaches the se however achieving an equilibrium solution in pure strategies is not always guaranteed in games therefore we need to investigate whether the proposed possesses an se or not theorem there always exists a unique se solution for the proposed slmfsg between the auctioneer and the participating rus in set j proof firstly we note that the strategy set of max eer is and continuous within the range pmin t pt hence there will always be a strategy for the auctioneer that will enable the rus to offer some part of their xi ieee trans smart grid es within their limits to the sfcs secondly for any price pt the utility function ui in is strictly concave with respect of xi j hence for any i min max price pt pt pt each ru will have a unique xi which will be chosen from a bounded range bi and maximize ui therefore it is evident that as soon as the scheme will find a unique such that the average utility z per sfc attains a maximum value the slmfsg will consequently reach its unique se to this end first we note that the amount of es at which the ru i achieves its maximum utility in response to a price pt can be obtained from pt ri algorithm algorithm for slmfsg to reach the se initialization pmin t z for auction price pt from pmin to pmax do t t for each ru i j do ru i adjusts its amount of es xi to share according to arg now replacing the value of in and doing some simple arithmetics the auction price which maximizes the average cost savings to the sfcs can be found as p a ri m pt max pt ri xi end for the auctioneer computes the average cost savings to sfcs j x am pt xi if z z then the auctioneer record the desirable price and maximum average cost savings pt z z end if end for the se is achieved guaranteed to reach se of the proposed slmfsg where am for any m k and for any i j is exclusive therefore is unique for and thus theorem is proved proof in the proposed algorithm we note that the choice of strategies by the rus emanate from the choice pt of the auctioneer which as shown in will always attain a nonempty single value at the se due to its bounded strategy set max pmin t pt on the other hand as the algorithm is designed in response to the each ru i will choose its strategy xi from the bounded range bi in order to maximize its utility function ui to that end due to the bounded strategy set and continuity of ui with respect to xi it is confirmed that each ru i will always reach a fixed point for the given therefore the proposed algorithm is always guaranteed to reach the unique se of the slmfsg algorithm for payment to attain the se the auctioneer which has the information of am m k needs to communicate with each ru it is considered that the auctioneer does not have any knowledge of the private information of the rus such as in this regard in order to decide on a suitable auction price pt that will be beneficial for both the rus and the sfcs the auctioneer and the rus interact with one another to capture this interaction we design an iterative algorithm which can be implemented by the auctioneer and the rus in 
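Algorithm 1 is, in essence, a one-dimensional search by the leader with best responses by the followers. The sketch below sweeps a price grid over [p_t^min, p_t^max], lets every participating RU best-respond, and keeps the price with the largest average cost saving; the grid resolution and the reuse of the assumed utility model from the previous sketch are implementation choices for illustration rather than details fixed by the paper.

```python
def slmfsg_auction_price(rus, sfcs, p_min, p_max, steps=200):
    """Sketch of Algorithm 1 (SLMFSG): the auctioneer (leader) sweeps the
    auction price; each RU (follower) replies with its utility-maximizing
    ES share; the price with the highest average cost saving is kept."""
    a_bar = sum(s.a for s in sfcs) / len(sfcs)

    def best_response(p_t, ru):
        # closed-form maximizer of the assumed utility (p_t - r) x - d x^2 / b
        if ru.delta <= 0:
            return ru.b if p_t > ru.r else 0.0
        return min(max(ru.b * (p_t - ru.r) / (2.0 * ru.delta), 0.0), ru.b)

    best_price, best_saving, best_shares = p_min, float("-inf"), []
    for k in range(steps + 1):
        p_t = p_min + (p_max - p_min) * k / steps
        shares = [best_response(p_t, ru) for ru in rus]   # followers' moves
        saving = (a_bar - p_t) * sum(shares)              # leader's objective
        if saving > best_saving:                          # keep the best price
            best_price, best_saving, best_shares = p_t, saving, shares
    return best_price, best_shares, best_saving
```

Because each follower's reply is single-valued on a bounded interval and the leader's strategy set is compact, the sweep settles on the unique SE up to the grid resolution, which mirrors the intuition behind the two theorems.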
a distributed fashion to reach the unique se of the proposed slmfsg the algorithm initiates with the auctioneer who sets the auction price pt to pmin and the optimal average t cost saving per sfc z to now in each iteration after having the information on the offered auction price by the auctioneer each ru i plays its best response xi bi and submits its choice to the auctioneer the auctioneer on other hand receives the information on x from all the participating rus and determines the average cost savings per sfc z from its knowledge on the reservation bids and using then the auctioneer compares the z with z if z z the auctioneer updates the optimal auction price to the one recently offered and sends a new choice of price to the rus in the next iteration however if z z the auctioneer keeps the same price and offers another new price to the rus in the next iteration the iteration process continues until the conditions in and are satisfied and hence the slmfsg reaches the se we show the process of the proposed algorithm in algorithm allocation rule now once the the amount of es that each ru i j decides to put into the market for sharing in response to the auction price is determined the auctioneer allocates the quantity qi to be jointly shared by each ru i and the sfcs according to following rule if qm qi x p p if xi qm where f max pf and is the allotment of the excess es xi qm that an ru i must endure essentially the rule in emphasizes that if the requirements of the sfcs exceed the available es space from the rus each ru i will allow the sfcs to share all of the es xi that it put into the market however if the available es exceeds the total demand by the sfcs then ru p i will have to share a peach fraction of the oversupply qm nonethless this burden if there is any can be distributed in different ways among the participating rus for instance the burden can be distributed either proportionally to the amount of es that each ru i shared with the sfcs or proportionally to theorem the algorithm proposed in algorithm is always ieee trans smart grid the reservation ri of each ru alternatively the total burden can also be shared equally by the rus in the auction scheme proportional allocation in proportional allocation a fraction of the total burden is allocated to each ru i in to the reservation price ri or such that p proportion i xi qm which can be implemented as follows x i qm ri p i j i ri it is clear that all the participants in the proposed auction scheme are individually rational which leads to the following corollary corollary the proposed auction technique possesses the individual rationality property in which the j rational owners and k rational customers actively participate in the mechanism to gain the higher utility theorem the proposed auction mechanism is incentive compatible truthful auction is the best strategy for any ru i j and sfc m k by replacing ri with in the burden allocation can be determined in proportion to the shared es by each ru equal allocation according to equal allocation each ru bears an equal burden x x x qm i j j i proof to validate theorem first we note that the choice of strategies by the rus always guaranteed to converge to a unique se as proven in theorem and theorem which confirms the stability of their selections now according to once the owners of an auction process the rus in this proposed case decide on a stable amount of commodity j to supply to or to share with the customers the auction process always converges to a auction if the allocation of 
commodity is conducted according to the rules described in and therefore neither any ru nor any sfc will have any intention to falsify their allocation once they adopt and for sharing the storage space of the rus from their se amount therefore the auction process is incentive compatible and thus theorem is proved of the oversupply here it is important to note that although proportional allocation allows the distribution of oversupply according to some properties of the rus equal allocation is more suitable to make the auction scheme strategy proof strategy proofness is important for designing auction mechanisms as it encourages the participating players not to lie about their private information such as reservation price which is essential for the acceptability and sustainability of such mechanisms in energy markets therefore we will use equal allocation of for the rest of the paper adaptation to case to extend the proposed scheme to a case we assume that the es sharing scheme works in a fashion where each time slot has a suitable time duration based on the type of application hour it is considered that in each time slot all the rus and sfcs take part in the proposed es sharing scheme to decide on the parameters such as the auction price and the amount of ess that needs to be shared however in a case the amount of es that an ru shares at time slot t may be affected by the burden that the ru needed to bear in the previous time slot t to this end first we note that once the number of participating rus and sfcs is decided for a particular time slot via the determination rule the rest of the procedures the payment and allocation rules are executed following the descriptions in section and respectively for the respective time slot now if the total number of rus and sfcs is fixed the rus and sfcs that participate in the modified auction scheme in any time slot is determined by their respective reservation and bidding prices for that time slot further the proposed auction process may evolve across different time slots based on the change of the amount of es that each participating ru i may want to share and the change in the total amount of es required for the sfcs in different time slots now before discussing how the proposed modified auction scheme can be extended to a first we define the properties of the auction process we note that once the auction process is executed there is always a possibility that the owners of the es might cheat on the amount of storage that they wanted put into the market during auction in this context we need to investigate whether the proposed scheme is beneficial enough individually rational for the rus such that they are not motivated to cheat incentive compatible once the auction is executed now for the individual rationality property first we note that all the players the rus and the auctioneer on behalf of the sfcs take part in the slmfsg to maximize their benefits in terms of their respected utility from their choice of strategies the choice of the rus is to determine vector of es such that each of the ru can be benefitted at its maximum on the other hand the strategy of the auctioneer is to choose a price pt to maximize the savings of the sfcs accordingly once both the rus and the auctioneer reach such a point of the game when neither the owners nor the customers can be benefitted more from choosing another strategy the slmfsg reaches the se to this end it is already proven in theorem that the proposed in this auction process must possesses a unique se 
therefore as a subsequent outcome of the theorem certain loads such as lifts and water pumps in large apartment buildings are not easy to schedule as they are shared by different users of the buildings hence we focus on the time variation of the storage sharing process by the rus of the considered system please note that the reservation price ri indicates how much each ru i wants to be paid forpsharing its es with the sfcs and thus affects the determination of total and the total burden ieee trans smart grid which households are equipped with a dedicated battery to sell the stored electricity to the grid nonetheless xi t is also affected by the amount of burden that an ru needed to bear due to an oversupply of es spaces if there was any in the previous time slot to this end the amount of es space that an ru i can offer to the sfcs at t can be defined as xi if i xi t max bi t xi otherwise following parameters t index of time slot t total number of time slot ri t the reservation price of ru i n at time slot ri ri t is the reservation price vector for ru i n xi t the fraction of es space that the ru i wants to shares with the sfcs at time slot xi xi t the vector of es space shared by ru i with the sfc during the total considered times bi t maximum available es of ru i for sharing at time slot am t the bidding price of each sfc m m at time slot am am t is the reservation price vector for sfc m n qm t the required es space by each sfc m at time slot pt t the auction price at time slot ui t the benefit that each ru i achieves at time slot zt the average cost saving per sfc at time slot t the burden that is shared by each participating ru at time slot kt number of participating sfcs in the modified auction scheme at time slot jt number of participating rus in the modified auction scheme at time slot to this end the utility function ui t of each ru i and the average cost savings zt per sfc at time slot t can be defined as ui t xi t pt t ri t xi t t the sfc m on the other hand decides on the amount of es qm t that it needs to share from the rus at t based on the random requirement of the shared facilities at t the available shared es space qm from time slot t and the random generation of renewable energy sources where appropriate hence qm t f qm renewables facility requirement now if we assume that the fraction of shared es available from previous time slot is negligible qm the requirement qm t can be assumed to be random for each time slot t considering the random nature of both renewable generation and energy requirement of shared facilities note that this assumption is particularly valid if the sfc uses all its shared ess from the previous time slot for meeting the demand of the shared facilities and can not use them in considered time slot nonetheless please note that this assumption does not imply that the relationship between the auction process across different time slots is the auction process in one time slot still depends on other time slots due to the dependency of xi t via to this end for the modeled xi t n and qm t m the proposed modified auction scheme studied in section iv can be adopted in each time slot t t with a view to maximize and it is important to note that the reservation price vector ri of each ru i n and the bidding price vector am of each sfc m m can be modeled through any existing pricing schemes such as price now t and constitute the solutions of the proposed modified auction scheme in a condition if the comprises the solution vector of all es spaces shared by the 
participating rus in each time slot t t for the auction price vector further all the auction rules adopted in each time slot of the proposed case will be similar to the rules discussed in section iv hence the solution of the proposed modified auction scheme for a timevarying environment also possesses the incentive compatibility and individual rationality properties for each time slot and zt pkt am t pt t kt x xi t now at time slot t the determination rule of the proposed scheme determines the number of participating rus and sfcs based on their reservation and bidding prices for that time slot the number of participation is also motivated by the available es space of each ru and the requirement of each sfc however unlike the static case in a environment the offered es space by an ru at time slot t is influenced by its contribution to the auction process in the previous time slot for instance if an ru i receives a burden in time slot t its willingness to share es space xi t at time slot t may reduce xi t is also affected by the maximum amount of es bi t available to ru i at for simplicity we assume that bi t and t do not change over different time slots therefore an ru i can offer to share the same amount of es space xi t to the sfcs at time slot t if it did not share any amount in time slot t an analogous example of such arrangement can be found in fit scheme with es device in c ase s tudy for numerical case studies we consider a number of rus at different blocks in a smart community that are interested in allowing the sfcs of the community to jointly share their es devices we stress that when there are a large number of ru and sfcs in the system the reservation and bidding prices will vary significantly from one another therefore it will be difficult to find an intersection point to determine the highest please note that in each time slot t and are related with each other in a similar manner as and are related for the static case however unlike the static case the execution of the auction process in each time slot t is affected by the value of parameters such as xi t and pt for that particular time slot es shared by each ru ieee trans smart grid table i change of average utility achieved by each sfc and each ru in the network according to algorithm due to the change of the reluctance of each ru for sharing one kwh es with the sfc reluctance parameter number of iteration average utility per ru net benefit average utility for sfc average cost savings average utility for sfc x ru to put into the market for sharing as can be seen from the figure on the one hand ru ru and ru reach the se much quicker than ru and ru on the other hand no interest for sharing any es is observed for ru and this is due to the fact that as the interaction between the auctioneer and the rus continues the auction price pt is updated in each iteration in this regard once the auction price for any ru becomes larger than its reservation price it put all its reserve es to the market with an intention to be shared by the sfcs due to this reason ru ru and ru put their ess in the market much sooner after the iteration than ru and ru with higher reservation prices whose interest for sharing es reaches the se once the auction price is encouraging enough for them to share their ess after the and iterations unfortunately the utilities of ru and are not convenient enough to take part in the auction process and therefore their shared es fractions are we note that the demonstration of the convergence of the slmfsg to a unique se 
subsequently demonstrates the proofs of theorem theorem theorem and corollary which are strongly related to the se as explained in the previous section now we would like to investigate how the reluctance parameters of the rus may affect their average utility from algorithm and thus affecting their decisions to share es to this end we first determine the average utility that is experienced by each ru and sfc for a reluctance parameter of then considering the outcome as a benchmark we show the effect of different reluctance parameters on the achieved average benefits of each sfc and ru in table i the demonstration of this property is necessary in order to better understand the working principle of the designed technique for es sharing according to table i as the reluctance of each ru increases it becomes more uncomfortable lower utility to put its es in the market to be jointly owned by the sfcs as a consequence it also affects the average utility achieved by each sfc as shown in table i the reduction in average utilities per ru are and respectively compared to the average utility achieved by an ru at for every ten times reduction in the reluctance parameter for similar settings the reduction of average utility for the sfcs are and at and respectively therefore the proposed scheme will enable the rus to put more storage in the auction market if the related reluctance for this sharing is small note that although the current investment cost of batteries is very high compared to their relative short life times it is expected that battery costs will go down in the near future and become very popular for addressing z number of iteration fig convergence of algorithm to the se at se the average utility per sfc reaches its maximum and the es that each ru wants to put into the market for share reaches a steady state level that maximize their benefits reservation price pmax according to the determination rule so t in this paper we limit ourself to around rus however having rus can in fact cover a large community through aggregation such as discussed in here each ru is assumed to be a group of households where each household is equipped with a battery of capacity hour kwh the reluctance parameter of all rus are assumed to be similar which is taken from range of it is important to note that is considered as a design parameter in the proposed scheme which we used to map the reluctance of each ru to share its es with the sfcs such reluctance of sharing can be affected by parameters like es capacity the condition of the environment if applicable and the ru s own requirement now considering the different system parameters in our proposed scheme we capture these two extremes with not reluctant and highly reluctant the required electricity storage for each sfc is assumed to be within the range of kwh nevertheless the required es for sharing could be different if the usage pattern by the users changes since the type of ess and their associated cost used by different rus can vary significantly the choices of reservation price to share their ess with the sfcs can vary considerably as well in this context we consider that the reservation price set by each ru and sfc is taken from a range of it is important to note that all chosen parameter values are particular to this study only and may vary according the availability and number of rus requirements of sfcs trading policy time of the and the country now we first show the convergence of algorithm to the se of the slmfg in fig for this case study we assume that 
there are five sfcs in the smart grid community that are taking part in an auction process with eight rus from fig first we note that the proposed slmfg reaches the se after interations when the average cost savings per sfc reaches its maximum hence the convergence speed which is just few seconds is reasonable nonetheless an interesting property can be observed when we examine the choice of es by each ieee trans smart grid for the sfcs it would put a higher burden on the rus to carry as a consequence the relative utility from auction is lower nevertheless if the requirement of the sfcs is higher the sharing brings significant benefits to the rus as can be seen from fig on the other hand for higher reluctance rus tend to share a lower es amount which then enables them to endure a lower burden in case of lower demands from the sfcs this consequently enhances their achieved utility nonetheless if the requirement is higher from the sfcs their utility reduces subsequently compared to the rus with lower reluctance parameters thus from observing the effects of different s on the average utility per ru in fig we understand that if the total required es is smaller rus with higher reluctance benefit more and vice versa this illustrates the fact that even rus with high unwillingness to share their ess can be beneficial for sfcs of the system if their required ess are small however for a higher requirement sfcs would benefit more from having rus with lower reluctances as they will be interested in sharing more to achieve higher average utilities now we discuss the computational complexity of the proposed scheme which is greatly reduced by the determination rule of the modified auction scheme as this rule determines the actual number of participating rus and sfcs in the auction we also note that after determining the number of participating sfcs and rus the auctioneer iteratively interacts with each of the rus and sets the auction price with a view to increase the average savings for the sfc therefore the main computational complexity of the modified auction scheme stems from the interactions between the auctioneer and the participating rus to decide on the auction price in this context the computational complexity of the problem falls within a category of that of a single leader multiple follower stackelberg game whose computational complexity which can be approximated to increase linearly with the number of followers and is shown to be reasonable in numerous studies such as in and hence the computational complexity is feasible for adopting the proposed scheme having an insight into the properties of the proposed auction scheme we now demonstrate how the technique can benefit the rus of the smart network compared to existing es allocation schemes such as equal distribution ed and fit schemes ed is essentially an allocation scheme that allows the sfcs to meet their total storage requirements by sharing the total requirement equally from each of the participating rus we assume that if the shared es amount exceeds the total amount of reservation storage that an ru puts into the market the ru will share its full reservation amount in fit which is a popular scheme for energy trading between consumers and the grid we assume that each ru prefers to sell the same storage amount of energy to the grid at an fit price rate instead of sharing the same fraction of storage with the sfc to this end the resulting average utilities that each ru can achieve from sharing its es space with the sfcs by adopting the 
proposed ed and fit schemes are shown in table ii from table ii first we note that as the amount of required es by the sfcs increases the average utility achieved per more willing to share average utility achieved by the rus less willing to share supply demand supply demand required battery space by the sfcs kwh fig effect of change of required es amount by the sfcs on the achieved average utility per ru intermittency of renewables we have foreseen such a near future when our proposed scheme will be applicable to gain the benefit of storage sharing and thus motivate the rus to keep their small according to the observation from table i it can further be said that if the reluctance parameters of rus change over either different days or different time slots the performance of the system in terms of average utility per ru and average cost savings per sfc will change accordingly for the given system parameters once all the participating rus put their es amount into the auction market they are distributed according to the allocation rule described in and in this regard we investigate how the average utility of each ru is altered as the total storage amount required by the sfcs changes from in the network for this particular case the considered total es requirement of the sfcs is assumed to be and in general as shown in fig the average utility of each ru initially increases with the increase required by the sfcs and eventually becomes saturated to a stable value this is due to the fact that as the required amount of es increases the ru can share more of its reserved es that it put into the market with the sfcs with the determined auction price from the slmfsg hence its utility increases however each ru has a particular fixed es amount that it puts into the market to share consequently once the shared es amount reaches its maximum even with the increase of requirement by the sfcs the ru can not share more therefore its utility becomes stable without any further increment interestingly the proposed scheme as can be seen in fig favors the rus with higher reluctance more when the es requirement by the sfcs is relatively lower and favors the rus with lower reluctance during higher demands this is due to the way we have designed the proposed allocation scheme which is dictated by the burden in and the allocation of es through we note that according to if is lower the ru i will put a higher amount of es in the market to share however if the total required amount of es is lower ieee trans smart grid table ii comparison of the change of average utility per ru in the smart grid system as the required total amount of energy storage required by the sfcs varies required es space by the sfcs average utility net benefit of ru for equal distribution ed scheme average utility net benefit of ru for fit scheme average utility net benefit of ru for proposed scheme percentage improvement compared to ed scheme percentage improvement compared to fit scheme es shared by each ru kwh es available to share kwh ru also increases for all the cases the reason for this increment is explained in fig also in all the studied cases the proposed scheme shows a considerable performance improvement compared to the ed and fit schemes an interesting trend of performance improvement can be observed if we compare the performance of the proposed scheme with the ed and fit performances for each of the es requirements in particular the performance of the proposed scheme is higher as the requirement of the es increases from to however the 
improvement is relatively less significant as the es requirement switches from to this change in performance can be explained as follows in the proposed scheme as we have seen in fig the amount of es shared by each participating ru is influenced by their reluctance parameters hence even the demand of the sfcs could be larger the rus may choose not to share more of their es spaces once their reluctance is limited in this regard the rus in the current case study increase their share of es as the requirement by the sfcs increases which in turn produces higher revenue for the rus furthermore once the rus choice of ess reach the saturation the increase in demand from to in this case does not affect their share as a consequence their performance improvement is not as noticeable as the previous four cases nonetheless for all the considered cases the auction process performs superior to the ed scheme with an average performance improvement of which clearly shows the value of the proposed methodology to adopt joint es sharing in smart grid the performance improvement with respect to the fit scheme which is on average is due to the difference between the determined auction price and the price per unit of energy for the fit scheme finally we show how the decision making process of each ru in the system is affected by its decision in the previous time slot and the total storage requirement by the sfcs the total number of time slots that are considered to show this performance analysis is four in this context we assume that there are five rus in the system with es of and kwh respectively to share with the sfcs the total es requirements of the sfcs for considered four time slot are and please note that these numbers are considered for this case study only and may have different values for different scenarios now in fig we show the available es to each of the rus at the begining of each time slot and how much they are going to share if the modified auction scheme is adopted in each time slot for a simple analysis we assume that once an ru shares its total available es it can not share its es for the remaining of the time slots number of time slot number of time slot fig demonstration of how the proposed modified auction scheme can be extended to time varying system the reservation es amount varies by the rus varies between different time slots based on their sharing amount in the previous time slot the total required storage by the sfcs is chosen randomly due to the reasons explained in section the reservation prices are considered to change from one time to the next based on a predefined time of use price scheme now as can be seen from fig in time slot and share all their available ess with the sfc whereby other rus do not share their ess due to the reasons explained in fig since the total requirement is therefore neither of and needs to carry any burden in time slot only shares its ess of to meet the requirement as the sfc s requirement is lower than the supply needs to carry a burden of kwh similarly in time slot and all of and take part in the energy auction scheme as they have enough es to share with the sfc however the es to share in time slot stems from the burden of oversupply from time slot the scheme is not shown for more than time slot as the available es from all rus is already shared by the sfcs by the end of time slot thus the proposed modified auction scheme can successfully capture the time variation if the scheme is modified as given in section vi c onclusion in this paper we have 
modeled a modified auction based joint energy storage ownership scheme between a number of residential units rus and shared facility controllers sfcs in smart grid we have designed a system and discussed the determination payment and allocation rule of the auction where the payment rule of this scheme is facilitated by a stackelberg game slmfsg ieee trans smart grid between the auctioneer and the rus the properties of the auction scheme and the slmfsg have been studied and it has been shown that the proposed auction possesses the individual rationality and the incentive compatibility properties leveraged by the unique stackeberg equilibrium of the slmfsg we have proposed an algorithm for the slmfsg which has been shown to be guaranteed to reach the se and that also facilitates the auctioneer and the rus to decide on the auction price as well as the amount of es to be put into the market for joint ownership a compelling extension of the proposed scheme would be to study of the feasibility of scheduling of loads such as lifts and water machines in shared space another interesting research direction would be to determine how a very large number of sfcs or rus with different reservation and bidding prices can take part in such a modified auction scheme one potential way to look at this problem can be from a cooperative in which the sfcs and rus may cooperate to decide on the amount of reservation es and bidding price they would like to put into the market so as to participate in the auction and benefit from sharing another very important yet interesting extension of this work would be to investigate how to quantify the reluctance of each ru to participate in the es sharing such quantification of reluctance or convenience will also enable the practical deployment of many energy management schemes already described in the literature tushar chai yuen smith wood yang and poor energy management with distributed energy resources in smart grid ieee trans ind vol no apr klemperer auction theory a guide to the literature journal of economic surveys vol no pp july ma deng song and han incentive mechanism for demand side management in smart grid using auction ieee trans smart grid vol no pp may vickrey counterspeculation auctions and competitive sealed tenders the journal of finance vol no pp mar chai chen yang and zhang demand response management with multiple utility companies a game approach ieee trans smart grid vol no pp march zhu zhang gjessing and dependable demand response management in the smart grid a stackelberg game approach ieee trans smart grid vol no pp march siano demand response and smart survey elsevier renewable and sustainable energy reviews vol pp feb denholm ela kirby and milligan the role of energy storage with renewable electricity generation national renewable energy laboratory nrel colorado usa technical report jan cao jiang and zhang reducing electricity cost of smart appliances via energy buffering framework in smart grid ieee trans parallel distrib vol no pp sep sechilariu wang and locment building integrated photovoltaic system with energy storage and smart grid communication ieee trans ind vol no pp april carpinelli celli mocci mottola pilo and proto optimal integration of distributed energy storage devices in smart grids ieee trans smart grid vol no pp june kim ren van der schaar and lee bidirectional energy trading and residential load scheduling with electric vehicles in the smart grid ieee sel areas vol no pp july roy leemput geth salenbien buscher and driesen apartment 
building electricity system impact of operational electric vehicle charging strategies ieee trans sustain energy vol no pp jan yu ding zhong liu and xie phev charging and discharging cooperation in networks a coalition game approach ieee internet things vol no pp dec lin leung and li optimal scheduling with regulation service ieee internet things vol no pp dec tan and wang integration of hybrid electric vehicles into residential distribution grid based on intelligent optimization ieee trans smart grid vol no pp july igualada corchero and heredia optimal energy management for a residential microgrid including a system ieee trans smart grid vol no pp july geth tant haesen driesen and belmans integration of energy storage in distribution grids in ieee power and energy society general meeting minneapolis mn july pp nykamp bosman molderink hurink and smit value of storage in distribution grids competition or cooperation of stakeholders ieee trans smart grid vol no pp sep tushar yuen smith and poor price discrimination for energy trading in smart grid a game theoretic approach ieee trans smart grid to appear li yuen hassan tushar wen wood hu and liu demand response management for residential smart grid from theory to practice ieee section on smart grids a hub of interdisciplinary research vol naeem shabbir hassan yuen ahmed and tushar understanding customer behavior in demand response management program ieee section on smart grids a hub of interdisciplinary research vol liu yuen yu zhang and xie energy consumption management for heterogeneous residential demands in smart grid ieee trans smart grid vol june doi r eferences silvestre graditi and sanseverino a generalized framework for optimal sizing of distributed energy resources in microgrids using an swarm approach ieee trans ind vol no pp feb llorens and jurado control of a hybrid system integrating renewable energies hydrogen and batteries ieee trans ind vol no pp may fang misra xue and yang smart grid the new and improved power grid a survey ieee commun surveys vol no pp oct liu yuen huang hassan wang and xie ratio constrained management with consumer s preference in residential smart grid ieee sel topics signal vol pp no pp jun liu yuen hassan huang yu and xie electricity cost minimization for a microgrid with distributed energy resources under different information availability ieee trans ind vol no pp apr hassan khalid yuen and tushar customer engagement plans for peak load reduction in residential smart grids ieee trans smart grid vol no pp hassan khalid yuen huang pasha wood and kerk framework for minimum user participation rate determination to achieve specific demand response management objectives in residential smart grids elsevier international journal of electrical power energy systems vol pp tushar yuen huang smith and poor cost minimization of charging stations with photovoltaics an approach with ev classification ieee trans intell transp vol doi huang tushar yuen and otto quantifying economic benefits in the ancillary electricity market for smart appliances in singapore households elsevier sustainable energy grids and networks vol pp mar wang gu li bale and sun active demand response using shared energy storage for household energy management ieee trans smart grid vol no pp dec ieee trans smart grid wang yuen chen hassan and ouyang demand scheduling for delay tolerant applications elsevier journal of networks and computer applications vol pp july zhang he and chen data gathering optimization by dynamic sensing and routing in 
rechargeable sensor network trans vol june doi zhang cheng shi and chen optimal dos attack scheduling in wireless networked control system ieee trans control syst vol doi wang and wang grid power peak shaving and valley filling using systems ieee trans power vol no pp july tushar saad poor and smith economics of electric vehicle charging a game theoretic approach ieee trans smart grid vol no pp dec gkatzikis koutsopoulos and salonidis the role of aggregators in smart grid demand response markets ieee sel areas vol no pp july tushar j zhang smith poor and prioritizing consumers in smart grid a game theoretic approach ieee trans smart grid vol no pp may saad han poor and a noncooperative game for double energy trading between phevs and distribution grids in proc ieee int l conf smart grid commun smartgridcomm brussels belgium pp bradley and a frank design demonstrations and sustainability impact assessments for hybrid electric vehicles renewable and sustainable energy reviews vol no pp jan derin and ferrante scheduling energy consumption with local renewable and dynamic electricity prices in proc workshop green smart embedded syst technol methods tools stockholm sweden apr pp huang and sycara design of a double auction computational intelligence vol no pp feb oduguwa and roy optimisation using genetic algorithm in proc ieee international conference on artificial intelligence systems geelong australia feb pp samadi schober wong and jatskevich optimal pricing algorithm based on utility maximization for smart grid in proc ieee int l conf smart grid commun smartgridcomm gaithersburg md pp guojun yongsheng xiaoqin xicong qianggang and niancheng study on the proportional allocation of electric vehicles with conventional and fast charge methods when in distribution network in proc china international conference on electricity distribution ciced shanghai china sept pp tsikalakis zoulias caralis panteri and da carvalho tariffs for promotion of energy storage technologies energy policy vol no pp mar breakthrough in electricity storage new large and powerful redox flow battery science daily march retrieved august online available ali and advancing may published in pv magazine online available http garun at reservations tesla s powerwall is already sold out until accessed may online available http lipa proposal concerning modifications to lipa s tariff for electric service accessed on april online available http
| 3 |
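The saturation behaviour described in the preceding energy-storage auction text, where each RU's utility grows with the SFCs' requirement until the RU has shared everything it reserved for the market, can be reproduced qualitatively with a small Python simulation. The allocation rule, clearing price and net-benefit expression below are placeholders chosen for illustration only; they are not the paper's auction-based allocation, burden and SLMFSG-determined price, and all names and numbers are hypothetical.

# toy illustration only: a capped proportional allocation and a simple
# net-benefit expression stand in for the paper's allocation and payment rules
def allocate(required_es, reserved):
    # split the SFCs' requirement in proportion to each RU's reservation,
    # never allocating more than an RU actually reserved for the market
    total = sum(reserved)
    return [min(r, required_es * r / total) for r in reserved]

def net_benefit(shared, price, reluctance):
    # placeholder utility: revenue from sharing minus a reluctance-weighted cost
    return [price * s - k * s * s for s, k in zip(shared, reluctance)]

reserved = [10.0, 15.0, 20.0, 25.0, 30.0]    # kWh each RU puts into the market
reluctance = [0.05, 0.04, 0.03, 0.02, 0.01]  # hypothetical reluctance parameters
price = 2.0                                  # assumed clearing price per kWh

for required in range(20, 160, 20):          # total ES required by the SFCs, kWh
    shared = allocate(float(required), reserved)
    avg_utility = sum(net_benefit(shared, price, reluctance)) / len(reserved)
    print(required, round(avg_utility, 2))

With these toy numbers the average utility per RU rises until the requirement reaches the 100 kWh reserved in total and then stays flat, mirroring the saturation reported for the proposed scheme.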
jan direct sum decomposability of polynomials and factorization of associated forms maksym fedorchuk abstract for a homogeneous polynomial with a discriminant we interpret direct sum decomposability of the polynomial in terms of factorization properties of the macaulay inverse system of its milnor algebra this leads to an criterion for direct sum decomposability of such a polynomial and to an algorithm for computing direct sum decompositions over any either of characteristic or of large positive characteristic for which polynomial factorization algorithms exist we also give simple necessary criteria for direct sum decomposability of arbitrary homogeneous polynomials over arbitrary and apply them to prove that many interesting classes of homogeneous polynomials are not direct sums introduction a homogeneous polynomial f is called a direct sum if after a linear change of variables it can be written as a sum of two or more polynomials in disjoint sets of variables f xa xn when f is a homogeneous polynomial over c an isolated hypersurface singularity in cn the geometric of such decomposition stems from the classical theorem that describes the monodromy operator of the singularity f cn as a tensor product of the monodromy operators of the singularities ca and direct sums are also the subject of a symmetric strassen s additivity conjecture postulating that the waring rank of f in is the sum of the waring ranks of and see for example in this paper we give a new criterion for recognizing when a smooth is a direct sum over a either of characteristic or of large positive characteristic the problem of such a criterion for an arbitrary smooth or singular form has been successfully addressed earlier by kleppe over an arbitrary and over an algebraically closed both works interpret direct sum decomposability we refer to any homogeneous polynomial as a form and we call a form f in n variables smooth if it a smooth hypersurface in see for further terminology and notational conventions maksym fedorchuk of a form f in terms of its apolar ideal f see for more details in particular over an algebraically closed gives an criterion for recognizing when f is a direct sum in terms of the graded betti numbers of f however none of these works seem to give an method for computing a direct sum decomposition when it exists and the criterion of can not be used over see example although our criterion works only for smooth forms it does so over an arbitrary either of characteristic or of large characteristic and it leads to an algorithm for direct sum decompositions over any such for which polynomial factorization algorithms exist this algorithm is given in section recall that to a smooth form f of degree d in n variables one can assign a degree n d form a f in n dual variables called the associated form of f the associated form a f is as a macaulay inverse system of the milnor algebra of f which simply means that the apolar ideal of a f coincides with the jacobian ideal of f a f such leads to an observation that for a smooth form f that is written as a sum of two forms in disjoint sets of variables the associated form a f decomposes as a product of two forms in disjoint sets of dual variables lemma for example up to a scalar a n the main purpose of this paper is to prove the converse statement and thus establish an criterion for direct sum decomposability of a smooth form f in terms of the factorization properties of its associated form a f see theorem in lemma we give a simple necessary condition valid over an arbitrary 
for direct sum decomposability of an arbitrary form in terms of its gradient point it is then applied in section to prove that a wide class of homogeneous forms contains no direct sums in theorem we show that this simple necessary condition is in fact when a form is git stable over an algebraically closed notation and conventions let k be a the span of a subset w of a space will be denoted by hw i if w is a representation of the multiplicative group gm then for every i z we denote by w i the of the action of weight i let v be a vector space over k with n dimk v we set s sym v and d sym v homogeneous elements of s and d will be called forms we have a action of s on d also known as the apolar pairing decomposability of polynomials and associated forms namely if xn is a basis of v and zn is the dual basis of v then the pairing dd is given by g f g f zn given a homogeneous nonzero f dd the apolar ideal of f is f g s g f s and the space of essential variables of f is e f g f g if char k or char k d then the pairing sd k is perfect and for every f dd we have f symd e f dd cf furthermore the graded is a gorenstein artin local ring with socle in degree d and a theorem of macaulay establishes a bijection between graded gorenstein artin quotients of socle degree d and elements of p dd see lemma or exercise let xn be a basis of v the gradient point of f is to be i sd the jacobian ideal of f is jf s and the milnor algebra of f is mf remark even though we allow k to have positive characteristic we do not take d to be the divided power algebra cf appendix a as the reader might have anticipated the reason for this is that at several places we can not avoid but to impose a condition that char k is large enough or zero in this case the divided power algebra is isomorphic to d up to the needed degree for a homogeneous ideal i s we denote by v i the closed subscheme of pv by i we say that a form f is smooth if the hypersurface v f is smooth over k this is of course equivalent to v f being over the algebraic closure of k the locus of smooth forms in will be denoted by direct sums and products recall from that f v is called a direct sum or a form of type if there is a direct sum decomposition v u w and nonzero u and w such that f in other words f is a direct sum if and only if for some choice of a basis xn of v we have that f xa xn maksym fedorchuk where a n and recall also that f v is called degenerate if there exists u v such that f u by analogy with direct sums we will call a nonzero form f d a direct product if there is a direct sum decomposition v u w and f for some sym u and sym w in other words a nonzero homogeneous f sym d is a direct product if and only if for some choice of a basis zn of v we have that f zn za zn where a n furthermore we call a direct product decomposition in balanced if n a deg a deg note that a factorization f is a direct product decomposition if and only if e e remark note that the roles of s and d are interchangeable in and so for f s we can the apolar ideal f d and the space of essential variables e f with this notation if char k or char k d then f is a direct sum if and only if we can write f where and e e furthermore if char k or char k d we have that e f v is dual to f v and dimk dimk e f we say that l grass n symd v is a balanced direct sum if there is a direct sum decomposition v u w and elements grass dimk u symd u and grass dimk w symd w such that l symd u symd w symd associated forms we recall the theory of associated forms as developed in let grass n sd res be the open 
subset in grass n sd parameterizing linear spaces gn i sd such that gn form a regular sequence in note that if char k or char k d then f is smooth if and only if grass n sd res for every u gn i grass n sd res the ideal iu gn is a complete intersection ideal and the is a graded gorenstein artin local ring with socle in degree n d suppose char k or char k n d then by macaulay s theorem there exists a unique up to scaling form a u dn such that a u iu the form a u is called the associated form of gn by alper and isaev who systematically studied it in section in particular they that although given over c their proof applies whenever char k or char k n d decomposability of polynomials and associated forms the assignment u a u gives rise to an sl n associated form morphism a grass n sd res pdn when u for a smooth form f we set a f a and following eastwood and isaev call a f the associated form of f the property of a f is that a f jf this means that a f is a macaulay inverse system of the milnor algebra mf summarizing when char k or char k max n d d we have the following commutative diagram of sl n morphisms p dn grass n sd a res remark in alper and isaev the associated form a gn as an element of dn which they achieve by choosing a canonical generator of the socle of gn given by the jacobian determinant of gn for our purposes it will to consider a gn i up to a scalar main results theorem let d suppose that either char k or char k max n d d let f be a smooth form then the following are equivalent f is a direct sum is a balanced direct sum a f is a balanced direct product a f is a direct product admits a gm defined over a f admits a gm defined over moreover if zn is a basis of v in which a f factors as a f za zn then f decomposes as f xa xn in the dual basis xn of v maksym fedorchuk recall from that for a form f a decomposition f fr is called a maximally fine direct sum decomposition if v e e fr and fi is not a direct sum in e fi for all i for nondegenerate forms of degree d kleppe has established that a maximally direct sum decomposition is unique theorem we use theorem to give an alternate proof of this result for smooth forms deducing it from the fact that a polynomial ring over a is a ufd proposition let d suppose that either char k or char k n d let f be a smooth form then f has a unique maximally fine direct sum decomposition theorem let d suppose k is an algebraically closed field with char k then the following are equivalent for a git stable f f is a direct sum the morphism grass n sd has a positive fiber dimension at hf i has a gm is strictly semistable dimk sl v consequently the locus of direct sums is closed in the stable locus prior works in kleppe and teitler prove that for a form f over an algebraically closed the apolar ideal f has a minimal generator in degree d if and only if either f is a direct sum or f is a limit of direct sums in which case the gl n of f contains an element of the form x xi g xn where h and g are degree forms in and variables respectively since the form given by equation is visibly sl n and in particular singular this translates into a computable and criterion for recognizing whether a smooth form f is a direct sum over an algebraically closed in kleppe uses the quadratic part of the apolar ideal f to an associative algebra m f of dimension over the base m f is from the milnor algebra mf he then proves that over an arbitrary direct sum decompositions of f are in bijection with complete sets of orthogonal idempotents of m f decomposability of polynomials and 
associated forms a key step in the proof of the direct sum criterion in is the jordan normal form decomposition of a certain linear operator which in general requires solving a characteristic equation similarly a complete set of orthogonal idempotents requires solving a system of quadratic equations this makes it challenging to turn or into an algorithm for direct sum decompositions when they exist the case of a linear factor in theorem was proved in proposition using a criterion of smith and stong for indecomposability of gorenstein artin algebras into connected sums our proof of the linear factor case and the statement for higher degree factors appear to be new corollaries and are generalizations of corollary whose proof relies on a theorem of saying that the apolar ideals of the generic determinant and permanent are generated in degree our approach is independent of s results proofs of decomposability criteria some implications in the statements of theorems and are easy observations the main of which is separated in lemma below others are found in recent papers the remaining key ingredient that completes the main circle of implications is separated into proposition below lemma no restrictions on k suppose f is a direct sum such that then the following hold there is a subgroup of sl v that fixes and such that we have the following decompositions v v v and d if dimk n then grass n sd is a balanced direct sum if dimk n then there is a family gt t k of pairwise nonproportional forms in such that for all t k and gt cf for all t k and c proof this is obvious namely suppose f xa xn in some basis xn of v where then the subgroup acting with weight on xi and weight on xi clearly suppose further that dimk from for all t k we see that dimk a and dimk n a thus is a balanced direct sum this proves taking gt for t proves maksym fedorchuk proof of theorem the implications are in lemma next we prove suppose decomposes as a balanced direct sum in a basis xn of v then sa i for some sa k xa d and k xn d it follows that for every i a and a j n we have k xa k xn using the assumption on char k we conclude that f k xa k xn and so is a direct sum in the same basis as the equivalence is proved in proposition below this concludes the proof of equivalence for the three conditions next we prove suppose a f za zn is a direct product decomposition in a basis zn of v let xn be the dual basis of v suppose xdnn is the smallest with respect to the graded reverse lexicographic order monomial of degree n d that does not lie in jf n since zndn must appear with a nonzero in a f we have that da deg on the other hand by lemma we have that da a d it follows that deg a d by symmetry we also have that deg n a d we conclude that both inequalities must be equalities and so a f is a balanced direct product decomposition alternatively we can consider a diagonal action of gm sl v on v that acts on v as follows t zn t t za zn then a f is homogeneous with respect to this action and has weight n a deg a deg however the relevant parts of the proof of theorem go through to show that a f the numerical criterion for semistability this forces n a deg a deg we now turn to the last two conditions first the morphism a is an sl n equivariant locally closed immersion by and so is stabilizer preserving this proves the equivalence the implication follows from the proof of theorem that shows that for a smooth f the gradient point has a gm if and only if f is a direct sum we note that even though stated over c the relevant parts of the proof of 
theorem use only lemma which remains valid over a k with char k or char k d and the fact that a smooth form over any must satisfy the numerical criterion for stability decomposability of polynomials and associated forms proof of theorem by theorem for every git stable f the gradient point is polystable furthermore it admits a gm if and only if f is a direct sum moreover proposition shows that the morphism of the git quotients sl n s sl n grass n sd ss sl n is injective this proves that for every stable f the dimension of at hf i equals to the dimension of the stabilizer of this concludes the proof of all equivalences the fact that the locus of direct sums is closed in s now follows from the upper semicontinuity on the domain of dimensions proposition let d suppose k is a field with char k or char k n d then an element u grass n symd v res is a balanced direct sum if and only if a u is a balanced direct product moreover if zn is a basis of v in which a u factors as a balanced direct product then u decomposes as a balanced direct sum in the dual basis xn of v proof the forward implication is an easy observation consider a balanced direct sum u gn i grass n k xn d res where ga k xa d and gn k xn d then up to a nonzero scalar a u a ga a gn where a ga k za a and a gn k zn see lemma which also follows from the fact that on the level of algebras we have k xa k xn k xn gn ga gn suppose now a u is a balanced direct product in a basis zn of v a u za zn where deg a d and deg n a d let xn be the dual basis of v and let iu k xn be the complete intersection ideal spanned by the elements of u we have that iu a u k xn it is then evident from and the of an apolar ideal that xa a iu and xn iu we also have the following observation maksym fedorchuk claim dimk u xa a dimk u xn n a proof by symmetry it to prove the second statement since u is spanned by a length n regular sequence of degree d forms we have that dimk u xn n a suppose we have a strict inequality let r k xn iu xn k xa then i is generated in degree d and has at least a minimal generators in that degree it follows that the top degree of r is strictly less than a d and so ia k xa a cf lemma but then k xa a xn iu using this gives k xa a k xn iu thus every monomial of k za a k zn appears with in a u which contradicts at this point we can apply prop to conclude that u k xa d contains a regular sequence of length a and that u k xn d contains a regular sequence of length this shows that u decomposes as a balanced direct sum in the basis xn of v however for the sake of we proceed to give a more direct argument by claim there exists a regular sequence sa k xa d such that sa xa gn xa and a regular sequence k xn d such that xn gn xn let w sa i grass n sd res and let iw be the ideal generated by w we are going to prove that u w which will conclude the proof of the proposition since char k or char k n d macaulay s theorem applies and so to prove that u w we need to show that the ideals iu and iw coincide in degree n d for this it to prove that iw n iu n decomposability of polynomials and associated forms since sa is a regular sequence in k xa d we have that k xa a sa similarly we have that k xn together with and this gives xa a xn iu iw set j xa a xn it remains to show that iw n iu n jn to this end consider a x qi si x rj tj iw n where qa sn since sa k xa d and we are working modulo j we can assume that qi xn for all i a similarly we can assume that rj xa a for all j n a by construction we have sa iu xn and iu xa using this and we conclude that a x qi si x rj 
tj iu this the proof of the proposition proof of proposition if char k the case of n d is vacuous since no smooth binary cubic will be a direct sum in all other cases char k n d implies char k max n d d suppose f fs gt are two maximally direct sum decompositions then a a fs a a gt pdn where v e a fi e a gj suppose some a fi shares irreducible factors with more than one a gj then by the uniqueness of factorization in d we must have a factorization a fi such that e e then a fi is a direct product and so fi must be a direct sum by theorem contradicting the maximality assumption therefore no a fi shares an irreducible factor with more than one a gj and by symmetry no a gj shares an irreducible factor with more than one maksym fedorchuk a fi it follows that s t and up to reordering a fi a gi and thus e a fi e a gi for all i we conclude that e fi e gi which using ft gt forces fi gi for all i necessary conditions for direct sum decomposability our next two results give easily necessary conditions for an arbitrary form to be a direct sum they hold over an arbitrary with no restriction on characteristic we keep notation of theorem suppose f is a form in then let b dimk if f has a factor g such that dimk f is not a direct sum if f has a repeated factor then f is not a direct sum corollary suppose f is a form with dimk if f has a linear factor then f is not a direct sum proof of theorem we apply lemma for suppose that in some basis of v we have f gh xa xn s and that dimk b while dimk let be the subgroup of a sl v acting with weight n a on xi and weight on xi then and d is the decomposition into the since we have dimk b dimk b dimk it follows by dimension considerations that some nonzero multiple of g belongs to one of the two of in thus g itself is homogeneous with respect to it follows that either g k xa or g k xn this forces either or respectively a contradiction for suppose f is a direct sum with a repeated factor let be the of sl v as above since g some nonzero multiple of g belongs to a of in and so we obtain a contradiction as in our next result needs the following definition given a basis xn of v and a nonzero f we the state of f to be the set of f dn dn d such that x a dn xdnn where a dn k dn f decomposability of polynomials and associated forms in other words the state of f is the set of monomials appearing with nonzero in f we set theorem suppose k let d suppose f is such that in some basis xn of v the following conditions hold for all i for all i j n f dn char k di for some i n and o xdnn for all i n the graph with the vertices in n and the edges given by ij is connected then f is not a direct sum remark in words says that no two partials of f share a common monomial and says that any monomial all of whose nonzero partials appear in partials of f must appear in f as an immediate corollary of this theorem we show that the n n generic determinant and permanent polynomials and the generic polynomials as well as any other polynomial of the same state are not direct sums when n corollary ppolynomials are not direct sums let n suppose s k xi j ni and f xn n where k then f is not a direct sum corollary polynomials are not direct sums let n suppose s k xi j where we set xj i xi j for j i and f p where k then f is not a direct sum proof of both corollaries it is easy to see that f all conditions of orem proof of theorem if char k p we set dn p divides di for all i n to be the set of all monomials whose gradient point is trivial maksym fedorchuk suppose f is a direct sum note that condition implies 
that dimk then by lemma there exists a form g such that and g cf for all c since by condition we must have g f then since condition implies that in fact ci for some ci comparing the second partials and using condition we conclude that ci cj for all i j we obtain g f which is a contradiction finding a balanced direct product decomposition algorithmically in this section we show how theorem reduces the problem of a direct sum decomposition of a given smooth form f to a polynomial factorization problem to begin suppose that we are given a smooth form f v in some basis of v then the associated form a f is computed in the dual basis of v as the form apolar to jf n to apply theorem we now need to determine if a f decomposes as a balanced direct product and if it does then in what basis of v the following simple lemma explains how to do it cf for notation lemma suppose char k or char k max n d d for a smooth f the associated form a f is a balanced direct product if and only if there is a factorization a f such that v or equivalently e e v moreover in this case we have g v and a f decomposes as a balanced direct product in any basis of v such that its dual basis is compatible with the direct sum decomposition in equation proof the equivalence of the two conditions in follows from the fact that e gi v is dual to i v the claim now follows from and theorem observing that any factorization a f we have g a f jf an algorithm for direct sum decompositions suppose k is a with either char k or char k max n d d for which there exists a polynomial factorization algorithm let f k xn where d step compute jf up to degree n d if jf n k xn n then f is not smooth and we stop otherwise continue decomposability of polynomials and associated forms step compute a f as the dual to jf n ha f i t k zn n g t for all g jf n step compute the irreducible factorization of a f in k zn and check for the existence of balanced direct product factorizations using lemma if any exist then f is a direct sum otherwise f is not a direct sum step for every balanced direct product factorization of a f lemma gives a basis of v in which f decomposes as a direct sum the above algorithm was implemented in a macaulay package written by justin kim and zihao fang its source code is available upon request in what follows we give a few examples of the algorithm in action remark jaroslaw has pointed out that already step in the above algorithm is computationally highly expensive when n and d are large however it is reasonably fast when both n and d are small with example below taking only a few seconds example binary quartics suppose k is an algebraically closed of characteristic then every smooth binary quartic has a standard form ft t up to a scalar the associated form of ft is a ft t clearly a ft is singular if and only if t or t for these values of t a ft is in fact a balanced direct product and so ft is a direct sum namely up to scalars we have a a a note that over r the associated form a is not a balanced direct product hence is not a direct sum over r by theorem since the apolar ideal of is the same over r and over c this example illustrates that the direct sum decomposability criterion of fails over example consider the following element in q f maksym fedorchuk then its associated form is a f one checks that a f where with e and e i is a balanced direct product factorization of a f it follows that f is a direct sum in fact f is projectively equivalent to acknowledgments the author is grateful to jarod alper for an introduction to the 
subject alexander isaev for numerous stimulating discussions that inspired this work and zach teitler for questions that motivated most of the results in section the author was partially supported by the nsa young investigator grant and alfred sloan research fellowship justin kim and zihao fang wrote a macaulay package for computing associated forms while supported by the boston college undergraduate research fellowship grant under the direction of the author references jarod alper and alexander isaev associated forms in classical invariant theory and their applications to hypersurface singularities math jarod alper and alexander isaev associated forms and hypersurface singularities the binary case reine angew to appear doi weronika jaroslaw johannes kleppe and zach teitler apolarity and direct sum decomposability of polynomials michigan math michael eastwood and alexander isaev extracting invariants of isolated hypersurface singularities from their moduli algebras math david eisenbud commutative algebra with a view toward algebraic geometry volume of graduate texts in mathematics new york maksym fedorchuk git semistability of hilbert points of milnor algebras math maksym fedorchuk and alexander isaev stability of associated forms preprint decomposability of polynomials and associated forms anthony iarrobino and vassil kanev power sums gorenstein algebras and determinantal loci volume of lecture notes in mathematics berlin johannes kleppe additive splittings of homogeneous polynomials thesis marcos sebastiani and thom un sur la monodromie invent sepideh masoumeh apolarity for determinants and permanents of generic matrices commut algebra larry smith and stong projective bundle ideals and duality algebras j pure appl algebra zach teitler conditions for strassen s additivity conjecture illinois j fedorchuk department of mathematics boston college commonwealth ave chestnut hill ma usa address
| 0 |
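The four-step decomposition algorithm described in the preceding text, which computes the top graded piece of the Jacobian ideal, recovers the associated form as its Macaulay inverse system, factors it, and looks for a balanced direct product factorization, can be illustrated on the smooth binary quartic f = x1^4 + x2^4 with a short SymPy computation. This is only a sketch over the rationals written for illustration; the implementation referenced in the text is a Macaulay2 package, and the naive apolarity pairing below (x_i acting as d/dz_i) and all variable names are our own simplifications.

# toy SymPy illustration for f = x1^4 + x2^4 (n = 2, d = 4, socle degree n(d-2) = 4)
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')
f = x1**4 + x2**4
n, d = 2, 4
top = n * (d - 2)

# step 1: degree-top graded piece of the Jacobian ideal (partials times monomials)
partials = [sp.diff(f, v) for v in (x1, x2)]
shift = top - (d - 1)
jac_top = [sp.expand(p * x1**a * x2**(shift - a))
           for p in partials for a in range(shift + 1)]

def apply_op(g, h):
    # let g(x1, x2) act on h(z1, z2) as the differential operator g(d/dz1, d/dz2)
    out = 0
    for (a, b), c in sp.Poly(g, x1, x2).terms():
        term = h
        if a:
            term = sp.diff(term, z1, a)
        if b:
            term = sp.diff(term, z2, b)
        out += c * term
    return sp.expand(out)

# step 2: the associated form is, up to scale, the degree-top form in the dual
# variables annihilated by every element of jac_top
monoms = [z1**a * z2**(top - a) for a in range(top + 1)]
coeffs = sp.symbols('c0:%d' % len(monoms))
generic = sum(c * m for c, m in zip(coeffs, monoms))
sol = sp.solve([apply_op(g, generic) for g in jac_top], coeffs, dict=True)[0]

# steps 3 and 4: factor the associated form and inspect the factorization
a_f = sp.factor(generic.subs(sol))
print(a_f)  # a scalar multiple of z1**2 * z2**2

The factorization z1**2 * z2**2 is a balanced direct product in disjoint dual variables, so by the criterion above f is a direct sum in the dual basis, as x1^4 + x2^4 visibly is.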
graphvae towards generation of small graphs using variational autoencoders martin simonovsky nikos komodakis feb abstract deep learning on graphs has become a popular research topic with many applications however past work has concentrated on learning graph embedding tasks which is in contrast with advances in generative models for images and text is it possible to transfer this progress to the domain of graphs we propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fullyconnected graph of a predefined maximum size directly at once our method is formulated as a variational autoencoder we evaluate on the challenging task of molecule generation introduction deep learning on graphs has very recently become a popular research topic bronstein et with useful applications across fields such as chemistry gilmer et medicine ktena et or computer vision simonovsky komodakis past work has concentrated on learning graph embedding tasks so far encoding an input graph into a vector representation this is in stark contrast with advances in generative models for images and text which have seen massive rise in quality of generated samples hence it is an intriguing question how one can transfer this progress to the domain of graphs their decoding from a vector representation moreover the desire for such a method has been mentioned in the past by et al however learning to generate graphs is a difficult problem for methods based on gradient optimization as graphs are discrete structures unlike sequence text generation graphs can have arbitrary connectivity and there is no clear best way how to linearize their construction in a sequence of steps on the other hand learning the order for paris est des ponts paristech champs sur marne france correspondence to martin simonovsky tal construction involves discrete decisions which are not differentiable in this work we propose to sidestep these hurdles by having the decoder output a probabilistic graph of a predefined maximum size directly at once in a probabilistic graph the existence of nodes and edges as well as their attributes are modeled as independent random variables the method is formulated in the framework of variational autoencoders vae by kingma welling we demonstrate our method coined graphvae in cheminformatics on the task of molecule generation molecular datasets are a challenging but convenient testbed for our generative model as they easily allow for both qualitative and quantitative tests of decoded samples while our method is applicable for generating smaller graphs only and its performance leaves space for improvement we believe our work is an important initial step towards powerful and efficient graph decoders related work graph decoders graph generation has been largely unexplored in deep learning the closest work to ours is by johnson who incrementally constructs a probabilistic multi graph as a world representation according to a sequence of input sentences to answer a query while our model also outputs a probabilistic graph we do not assume having a prescribed order of construction transformations available and we formulate the learning problem as an autoencoder xu et al learns to produce a scene graph from an input image they construct a graph from a set of object proposals provide initial embeddings to each node and edge and use message passing to obtain a consistent prediction in contrast our method is a generative model which produces a probabilistic graph from a 
single opaque vector without specifying the number of nodes or the structure explicitly related work deep learning includes random graphs erdos albert stochastic blockmodels snijders nowicki or state transition matrix learning gong xiang graphvae towards generation of small graphs using variational autoencoders figure illustration of the proposed variational graph autoencoder starting from a discrete attributed graph g a e f on n nodes a representation of propylene oxide stochastic graph encoder embeds the graph into continuous representation z given a e a e e e fe on predefined point in the latent space our novel graph decoder outputs a probabilistic graph g k n nodes from which discrete samples may be drawn the process can be conditioned on label y for controlled sampling at test time e reconstruction ability of the autoencoder is facilitated by approximate graph matching for aligning g with discrete data decoders text is the most common discrete representation generative models there are usually trained in a maximum likelihood fashion by teacher forcing williams zipser which avoids the need to backpropagate through output discretization by feeding the ground truth instead of the past sample at each step bengio et al argued this may lead to expose bias possibly reduced ability to recover from own mistakes recently efforts have been made to overcome this problem notably computing a differentiable approximation using gumbel distribution kusner or bypassing the problem by learning a stochastic policy in reinforcement learning yu et our work also circumvents the problem namely by formulating the loss on a probabilistic graph molecule decoders generative models may become promising for de novo design of molecules fulfilling certain criteria by being able to search for them over a continuous embedding space olivecrona et with that in mind we propose a conditional version of our model while molecules have an intuitive representation as graphs the field has had to resort to textual representations with fixed syntax smiles strings to exploit recent progress made in text generation with rnns olivecrona et segler et et as their syntax is brittle many invalid strings tend to be generated which has been recently addressed by kusner et al by incorporating grammar rules into decoding while encouraging their approach does not guarantee semantic chemical validity similarly as our method method we approach the task of graph generation by devising a neural network able to translate vectors in a continuous code space to graphs our main idea is to output a probabilistic graph and use a standard graph matching algorithm to align it to the ground truth the proposed method is formulated in the framework of variational autoencoders vae by kingma welling although other forms of regularized autoencoders would be equally suitable makhzani et li et we briefly recapitulate vae below and continue with introducing our novel graph decoder together with an appropriate objective variational autoencoder let g a e f be a graph specified with its adjacency matrix a edge attribute tensor e and node attribute matrix f we wish to learn an encoder and a decoder to map between the space of graphs g and their continuous embedding z rc see figure in the probabilistic setting of a vae the encoder is defined by a variational posterior and the decoder by a generative distribution where and are learned parameters furthermore there is a prior distribution p z imposed on the latent code representation as a regularization we use a 
simplistic isotropic gaussian prior p z n i the whole model is trained by minimizing the upper bound on negative log g kingma welling l g log kl z graphvae towards generation of small graphs using variational autoencoders the first term of l the reconstruction loss enforces high similarity of sampled generated graphs to the input graph the second term regularizes the code space to allow for sampling of z directly from p z instead from later the dimensionality of z is usually fairly small so that the autoencoder is encouraged to learn a compression of the input instead of learning to simply copy any given input while the regularization is independent on the input space the reconstruction loss must be specifically designed for each input modality in the following we introduce our graph decoder together with an appropriate reconstruction loss probabilistic graph decoder graphs are discrete objects ultimately while this does not pose a challenge for encoding demonstrated by the recent developments in graph convolution networks gilmer et graph generation has been an open problem so far in a related task of text sequence generation the currently dominant approach is or prediction bowman et however graphs can have arbitrary connectivity and there is no clear way how to linearize their construction in a sequence of on the other hand iterative construction of discrete structures during training without supervision involves discrete decisions which are not differentiable and therefore problematic for fortunately the task can become much simpler if we restrict the domain to the set of all graphs on maximum k nodes where k is fairly small in practice up to the order of tens under this assumption handling dense graph representations is still computationally tractable we propose to make the decoder output a probabilistic e a e e e fe on k nodes at once this effectively graph g sidesteps both problems mentioned above in probabilistic graphs the existence of nodes and edges is modeled as bernoulli variables whereas node and edge attributes are multinomial variables while not discussed in this work continuous attributes could be easily modeled as gaussian variables represented by their mean and variance we assume all variables to be independent e has thus a probaeach tensor of the representation of g bilistic interpretation specifically the predicted adjacency e contains both node probabilities a ea a matrix a e and edge probabilities aa b for nodes a b the edge ate indicates class probabilities for tribute tensor e edges and similarly the node attribute matrix fe contains class probabilities for nodes the decoder itself is deterministic its architecture is a simple perceptron mlp with three outputs in its last layer sigmoid activation function is used to compute e whereas and softmax is applied to obtain a e e and fe respectively at test time we are often interested e which can be obtained by in a discrete point estimate of g e e e and fe note taking and argmax in a that this can result in a discrete graph on less than k nodes reconstruction loss given a particular of a discrete input graph g on n k e on k nodes nodes and its probabilistic reconstruction g evaluation of equation requires computation of likelihood e p since no particular ordering of nodes is imposed in either e or g and matrix representation of graphs is not invariant g to permutations of nodes comparison of two graphs is hard however approximate graph matching described further in subsection can obtain a binary assignment matrix e is x 
where xa i only if node a g assigned to i g and xa i otherwise knowledge of x allows to map information between both graphs specifically input adjacency matrix is mapped to the predicted graph as xax t whereas the predicted node attribute matrix and slices of edge attribute matrix are transferred to the input graph as x t fe and e l t e x l x the maximum likelihood estimates crossentropy of respective variables are as follows log p x ea a log a ea a a log a a a a k x ea b b log a ea b b log a log p f x t log fi fi i log p n x t log ei j ei j where we assumed that f and e are encoded in notation the formulation considers existence of both matched and unmatched nodes and edges but attributes of only the matched ones furthermore averaging over nodes and edges separately has shown beneficial in training as otherwise the edges dominate the likelihood the overall reconstruction loss is a weighed sum of the previous terms while algorithms for canonical graph orderings are available mckay piperno vinyals et al empirically found out that the linearization order matters when learning on sets log p log p log p f log p graphvae towards generation of small graphs using variational autoencoders graph matching computing a differentiable loss the goal of graph matching is to find correspondences x between nodes of graphs e based on the similarities of their node pairs g and g e it can be s i j a b for i j g and a b expressed as integer quadratic programming problem of similarity maximization over x and is typically approximated by relaxation of x into continuous domain x cho et for our use case the similarity function is defined as follows s i j a b t e ea b a ea a a eb b i j a b ei j ea b ai j a t e ea a i j a b fi fa a the first term evaluates similarity between edge pairs and the second term between node pairs being the iverson bracket note that the scores consider both feature compate and existential compatibility a e which ibility fe and e has empirically led to more stable assignments during training to summarize the motivation behind both equations and our method aims to find the best graph matching and then further improve on it by gradient descent on the loss given the stochastic way of training deep network we argue that solving the matching step only approximately is sufficient this is conceptually similar to the approach for learning to output unordered sets by vinyals et where the closest ordering of the training data is sought in practice we are looking for a graph matching algorithm robust to noisy correspondences which can be easily implemented on gpu in batch mode matching mpm by cho et al is a simple but effective algorithm following the iterative scheme of power methods see appendix a for details it can be used in batch mode if similarity tensors are s i j a b for n i j k and the amount of iterations is fixed matching outputs continuous assignment matrix x unfortunately attempts to directly use x instead of x in equation performed badly as did experiments with direct maximization of x or soft discretization with softmax or gumbel softmax jang et we therefore discretize x to x using hungarian algorithm to obtain a strict while this operation is gradient can still flow to the decoder directly through the loss function and training convergence proceeds without problems note that this approach is often taken in works on object detection stewart et where a set of detections need to be matched to a set of ground truth bounding boxes and treated as fixed before some predicted nodes are not 
assigned for n our current implementation performs this step on cpu although a gpu version has been published date nagi further details encoder a feed forward network with graph convolutions ecc simonovsky komodakis is used as encoder although any other graph embedding method is applicable as our edge attributes are categorical a single linear layer for the filter generating network in ecc is sufficient due to smaller graph sizes no pooling is used in encoder except for the global one for which we employ gated pooling by li et al as usual in vae we formulate encoder as probabilistic and enforce gaussian distribution of by having the last encoder layer outputs features interpreted as mean and variance allowing to sample zl n g g for l c using the trick kingma welling disentangled embedding in practice rather than random drawing of graphs one often desires more control over the properties of generated graphs in such case we follow sohn et al and condition both encoder and decoder on label vector y associated with each input graph decoder y is fed a concatenation of z and y while in encoder y y is concatenated to every node s features just before the graph pooling layer if the size of latent space c is small the decoder is encouraged to exploit information in the label limitations the proposed model is expected to be useful only for generating small graphs this is due to growth of gpu memory requirements and number of parameters o k as well as matching complexity o k with small decrease in quality for high values of in section we demonstrate results for up to k nevertheless for many applications even generation of small graphs is still very useful evaluation we demonstrate our method for the task of molecule generation by evaluating on two large public datasets of organic molecules and zinc application in cheminformatics quantitative evaluation of generative models of images and texts has been troublesome theis et as it very difficult to measure realness of generated samples in an automated and objective way thus researchers frequently resort there to qualitative evaluation and embedding plots however qualitative evaluation of graphs can be very unintuitive for humans to judge unless the graphs are planar and fairly simple graphvae towards generation of small graphs using variational autoencoders fortunately we found graph representation of molecules as undirected graphs with atoms as nodes and bonds as edges to be a convenient testbed for generative models on one hand generated graphs can be easily visualized in standardized structural diagrams on the other hand chemical validity of graphs as well as many further properties a molecule can fulfill can be checked using software packages sanitizemol in rdkit or simulations this makes both qualitative and quantitative tests possible chemical constraints on compatible types of bonds and atom valences make the space of valid graphs complicated and molecule generation challenging in fact a single addition or removal of edge or change in atom or bond type can make a molecule chemically invalid comparably flipping a single pixel in number generation problem is of no issue to help the network in this application we introduce three e remedies first we make the decoder output symmetric a e and e by predicting their upper triangular parts only as undirected graphs are sufficient representation for molecules second we use prior knowledge that molecules are connected and at test time only construct maximum spanning ea a in tree on the set of probable nodes 
a a order to include its edges a b in the discrete pointwise ea b originally third estimate of the graph even if a we do not generate hydrogen explicitly and let it be added as padding during chemical validity check dataset dataset ramakrishnan et contains about organic molecules of up to heavy non hydrogen atoms with distinct atomic numbers and bond types we set k de and dn we set aside samples for testing and for validation model selection we compare our unconditional model to the characterbased generator of et al cvae and the generator of kusner et al gvae we used the code and architecture in kusner et al for both baselines adapting the maximum input length to the smallest possible in addition we demonstrate a conditional generative model for an artificial task of generating molecules given a histogram of heavy atoms as label y the success of which can be easily validated setup the encoder has two graph convolutional layers and channels with identity connection batchnorm and relu followed by the output formulation in equation of li et al with auxiliary networks being a single fully connected layer fcl with output channels finalized by a fcl outputting the decoder has fcls and channels with batchnorm and relu followed by parallel triplet of fcls to output graph tensors we set c batch size mpm iterations and train for epochs with adam with learning rate and embedding visualization to visually judge the quality and smoothness of the learned embedding z of our model we may traverse it in two ways along a slice and along a line for the former we randomly choose two orthonormal vectors and sample z in regular grid pattern over the induced plane for the latter we randomly choose two molecules g g of the same label from test set and interpolate between their embeddings g g this also evaluates the encoder and therefore benefits from low reconstruction error we plot two planes in figure for a frequent label left and a less frequent label in right both images show a varied and fairly smooth mix of molecules the left image has many valid samples broadly distributed across the plane as presumably the autoencoder had to fit a large portion of database into this space the right exhibits stronger effect of regularization as valid molecules tend to be only around center an example of several interpolations is shown in figure we can find both meaningful and row and less meaningful transitions though many samples on the lines do not form chemically valid compounds decoder quality metrics the quality of a conditional decoder can be evaluated by the validity and variety of generated graphs for a given label y l we draw ns samples z l s p z and compute the discrete point estimate of their decodings l s arg max l s y l let v l be the list of chemically valid molecules from l s and c l be the list of chemically valid molecules with atom histograms equal to y l we are interested in ratios valid l l and accurate l l furthermore let unique l c l l be the fraction of unique correct graphs and novel l c l c l the fraction of novel graphs we define unique l and novel l if l finally the introduced metrics are aggregated by frequencies of labels in valid p l l l valid freq y unconditional decoders are evaluated by assuming there is just a single label therefore valid accurate in table we can see that on average of generated molecules are chemically valid and in the case of conditional models about have the correct label which the decoder was conditioned on larger embedding sizes c are less regularized demonstrated by 
a higher number of unique samples and by lower accuracy of the conditional graphvae towards generation of small graphs using variational autoencoders n o n n n n nh n n ho oh n o o n oh n o n ho n o o oh n oh oh oh n ho n n n oh n n n o nh o o oh nh nh nh oh nh oh nh o o oh nh nh o ho oh n oh oh oh oh n o ho oh o ho o n oh oh nh o o oh oh o oh n oh oh o n oh nh o ho o n oh oh o o o o o oh o oh oh nh n ho o n oh oh n o n ho oh n n n oh n o ho o oh ho n n n o oh n oh o o o o o oh oh oh n n n n n n oh oh hn oh n n oh oh oh n n oh n oh oh ho o n n oh n o o nh n o o nh oh o o o nh n o n o n oh n n oh o oh o n nh o n o n o n o nh o n o n o oh oh o nh oh nh nh nh o o oh o oh hn o nh n n o o n o nh nh o o o o nh o o o nh o nh o n o oh o oh o nh n n o n oh n o n oh o o o o oh n oh o o o o o o o n o n o oh nh nh nh o o o oh o oh oh o nh n o oh o o o oh nh nh o o o n o nh oh o o ho o oh nh o o n n oh oh n nh nh o oh o o oh oh n nh o oh oh oh o o n o n oh nh nh o n oh n oh o o n oh n o o n o oh n oh oh n n hn oh oh nh nh nh o o nh o o figure decodings of latent space points of a conditional model sampled over a random plane in of c within units from center of coordinates left samples conditioned on carbon nitrogen oxygen right samples conditioned on carbon nitrogen oxygen color legend as in figure model as the decoder is forced less to rely on actual labels the ratio of valid samples shows less clear behavior likely because the discrete performance is not directly optimized for for all models it is remarkable that about of generated molecules are out of the dataset the network has never seen them during training looking at the baselines cvae can output only very few valid samples as expected while gvae generates the highest number of valid samples but of very low variance less than additionally we investigate the importance of graph matching by using identity assignment x instead and thus learning to reproduce particular node permutations in the training set which correspond to the canonical ordering of smiles strings from rdkit this ablated model denoted as nogm in table produces many valid samples of lower variety and surprisingly outperforms gvae in this regard in comparison our model can achieve good performance in both metrics at the same time likelihood besides the metric introduced above we also report evidence lower bound elbo commonly used in vae literature which corresponds to g in our notation in table we state mean bounds over test set using a single z sample per graph we observe both reconstruction loss and decrease due to larger c providing more freedom however there seems to be no strong correlation between elbo and valid which makes model selection somewhat difficult implicit node probabilities our decoder assumes independence of node and edge probabilities which allows for isolated nodes or edges making further use of the fact that molecules are connected graphs here we investigate the effect of making node probabilities a function of edge probabilities specifically we consider the probability for node ea a maxb a ea b a as that of its most probable edge a the evaluation on in table shows a clear improvement in valid accurate and novel metrics in both the conditional and unconditional setting however this is paid for by lower variability and higher reconstruction loss this indicates that while the new constraint is useful the model can not fully cope with it zinc dataset zinc dataset irwin et contains about druglike organic molecules of up to heavy atoms with distinct atomic numbers 
and bond types we set k de and dn and use the same split strategy as with we investigate the degree of scalability of an unconditional generative model setup the setup is equivalent as for but with a wider encoder channels graphvae towards generation of small graphs using variational autoencoders oh f f n f n f n f hn n f o f n n f n n nh oh n nh nh n o o n o o o n hn o o nh oh o nh nh o o oh o nh o n o o ho o oh o o o o o o o o o o o o oh oh o o o oh oh o oh o oh o n oh o oh ho n o ho o ho o n n n oh ho n o n oh o n oh o oh oh oh figure linear interpolation between pairs of randomly chosen molecules in of c in a conditional model color legend encoder inputs green chemically invalid graphs red valid graphs with wrong label blue valid and correct white decoder quality metrics our best model with c has archived valid which is clearly worse than for using implicit node probabilities brought no improvement for comparison cvae failed to generated any valid sample while gvae achieved valid models provided by kusner et al c we attribute such a low performance to a generally much higher chance of producing a inconsistency number of possible edges growing quadratically to confirm the relationship between performance and graph size k we kept only graphs not larger than k nodes corresponding to of zinc and obtained valid and valid for k nodes of zinc to verify that the problem is likely not caused by our proposed graph matching loss we synthetically evaluate it in the following matching robustness robust behavior of graph matching using our similarity function s is important for good performance of graphvae here we study graph matching in isolation to investigate its scalability to that end we add gaussian noise n n n to each tensor of input graph g truncating and renormalizing to keep their probabilistic interpretation to create its noisy version gn we are interested in the quality of matching between self p g g using noisy assignment matrix x between g and gn the advantage to naive checking x for identity is the invariance to permutation of equivalent nodes in table we vary k and for each tensor separately and report mean accuracies computed in the same fashion as losses in equation over random samples from zinc with size up to k nodes while we observe an expected fall of accuracy with stronger noise the behavior is fairly robust with respect to increasing k at a fixed noise level the most sensitive being the adjacency matrix note that accuracies are not comparable across tables due to different dimensionalities of random variables we may conclude that the quality of the matching process is not a major hurdle to scalability conclusion in this work we addressed the problem of generating graphs from a continuous embedding in the context of variational autoencoders we evaluated our method on two molecular datasets of different maximum graph size while we achieved to learn embedding of reasonable quality on small molecules our decoder had a hard time capturing complex chemical interactions for larger molecules nevertheless we believe our method is an important initial step towards more powerful decoders and will spark interesting in the community there are many avenues to follow for future work besides the obvious desire to improve the current method for example by incorporating a more powerful prior distribution or adding a recurrent mechanism for correcting mistakes graphvae towards generation of small graphs using variational autoencoders log elbo valid accurate unique novel cond ours c ours c ours 
c ours c unconditional table performance on conditional and unconditional models evaluated by mean reconstruction log mean evidence lower bound elbo and decoding quality metrics section baselines cvae et and gvae kusner et are listed only for the embedding size with the highest valid ours c ours c ours c ours c nogm c cvae c gvae c log elbo valid accurate unique novel cond c c c c uncond table performance on conditional and unconditional models with implicit node probabilities improvement with respect to table is emphasized in italics c c c c table mean accuracy of matching zinc graphs to their noisy counterparts in a synthetic benchmark as a function of maximum graph size noise k k k k k k e f we would like to extend it beyond a proof of concept by applying it to real problems in chemistry such as optimization of certain properties or predicting chemical reactions an advantage of a decoder compared to smilesbased decoder is the possibility to predict detailed attributes of atoms and bonds in addition to the base structure which might be useful in these tasks our autoencoder might also be used to graph encoders for on small datasets goh et acknowledgments we thank shell xu hu for discussions on variational methods shinjae yoo for project motivation and anonymous reviewers for their comments references and albert emergence of scaling in random networks science bengio samy vinyals oriol jaitly navdeep and shazeer noam scheduled sampling for sequence prediction with recurrent neural networks in nips pp bowman samuel vilnis luke vinyals oriol dai andrew rafal and bengio samy generating sentences from a continuous space in conll pp bronstein michael m bruna joan lecun yann szlam arthur and vandergheynst pierre geometric deep graphvae towards generation of small graphs using variational autoencoders ing going beyond euclidean data ieee signal processing magazine cho minsu sun jian duchenne olivier and ponce jean finding matches in a haystack a strategy for graph matching in the presence of outliers in cvpr pp date ketan and nagi rakesh hungarian algorithms for the linear assignment problem parallel computing erdos paul and on the evolution of random graphs publ math inst hung acad sci gilmer justin schoenholz samuel riley patrick vinyals oriol and dahl george neural message passing for quantum chemistry in icml pp goh garrett siegel charles vishnu abhinav and hodas nathan chemnet a transferable and generalizable deep neural network for property prediction arxiv preprint rafael duvenaud david miguel jorge hirzel timothy adams ryan and automatic chemical design using a continuous representation of molecules corr gong shaogang and xiang tao recognition of group activities using dynamic probabilistic networks in iccv pp irwin john sterling teague mysinger michael bolstad erin and coleman ryan zinc a free tool to discover chemistry for biology journal of chemical information and modeling jang eric gu shixiang and poole ben categorical reparameterization with corr johnson daniel learning graphical state transitions in iclr kingma diederik and welling max variational bayes corr ktena sofia ira parisot sarah ferrante enzo rajchl martin lee matthew glocker ben and rueckert daniel distance metric learning using graph convolutional networks application to functional brain networks in miccai kusner matt and miguel gans for sequences of discrete elements with the gumbelsoftmax distribution corr kusner matt paige brooks and miguel grammar variational autoencoder in icml pp landrum greg rdkit cheminformatics 
url http li yujia swersky kevin and zemel richard generative moment matching networks in icml pp li yujia tarlow daniel brockschmidt marc and zemel richard gated graph sequence neural networks corr makhzani alireza shlens jonathon jaitly navdeep and goodfellow ian adversarial autoencoders corr mckay brendan and piperno adolfo practical graph isomorphism ii journal of symbolic computation issn olivecrona marcus blaschke thomas engkvist ola and chen hongming molecular de novo design through deep reinforcement learning corr ramakrishnan raghunathan dral pavlo o rupp matthias and von lilienfeld o anatole quantum chemistry structures and properties of kilo molecules scientific data segler marwin kogej thierry tyrchan christian and waller mark generating focussed molecule libraries for drug discovery with recurrent neural networks corr simonovsky martin and komodakis nikos dynamic edgeconditioned filters in convolutional neural networks on graphs in cvpr snijders tom and nowicki krzysztof estimation and prediction for stochastic blockmodels for graphs with latent block structure journal of classification jan sohn kihyuk lee honglak and yan xinchen learning structured output representation using deep conditional generative models in nips pp stewart russell andriluka mykhaylo and ng andrew people detection in crowded scenes in cvpr pp graphvae towards generation of small graphs using variational autoencoders theis lucas van den oord and bethge matthias a note on the evaluation of generative models corr architecture we train it as unregularized in this section with a deterministic encoder and without term in equation vinyals oriol bengio samy and kudlur manjunath order matters sequence to sequence for sets arxiv preprint unconditional models for achieve mean test loglikelihood log of roughly about for the implicit node probability model for all c while these are significantly higher than in tables and our architecture can not achieve perfect reconstruction of inputs we were successful to increase training to zero only on fixed small training sets of hundreds of examples where the network could overfit this indicates that the network has problems finding generally valid rules for assembly of output tensors williams ronald and zipser david a learning algorithm for continually running fully recurrent neural networks neural computation xu danfei zhu yuke choy christopher bongsoo and li scene graph generation by iterative message passing in cvpr yu lantao zhang weinan wang jun and yu yong seqgan sequence generative adversarial nets with policy gradient in aaai appendix matching in this section we briefly review matching algorithm of cho et al in its relaxed form a continuous correspondence matrix x between nodes of e is determined based on similarities of node graphs g and g e represented as matrix elements pairs i j g and a b g sia jb r let denote the replica of x the relaxed graph matching problem is expressed as quadratic pn programming task arg maxx xt sx such that xia pk kn xia and x the optimization strategy of choice is derived to be equivalent to the power method with iterative update rule x sx t t the starting correspondences x are initialized as uniform and the rule is iterated until convergence in our use case we run for a fixed amount of iterations in the context of graph matching the product sx can be interpreted as p p over match candidates xia xia sia ia xjb sia jb where ni and na denote the set of neighbors of node i and a the authors argue that this formulation is strongly influenced by 
uninformative or irrelevant elements, and propose a more robust max-pooling version which considers only the best pairwise similarity from each neighbor: $x_{ia} \leftarrow x_{ia}\, s_{ia;ia} + \sum_{j \in N_i} \max_{b \in N_a} x_{jb}\, s_{ia;jb}$. Unregularized autoencoder. The regularization in a VAE works against achieving perfect reconstruction of training data, especially for small embedding sizes. To understand the reconstruction ability of our
| 9 |
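As an illustration of the max-pooling matching update reviewed in the appendix above, the following minimal NumPy sketch performs the relaxed power iteration with max-pooling over neighbours. The tensor layout `S[i, a, j, b]`, the neighbour-list representation and the iteration count are illustrative assumptions and do not reproduce the authors' implementation.

```python
import numpy as np

def max_pooling_match(S, nbr_g, nbr_h, n_iter=40):
    """Relaxed max-pooling graph matching (power-iteration style).

    S[i, a, j, b] : similarity between candidate matches (i -> a) and (j -> b);
                    S[i, a, i, a] holds the unary similarity of matching i to a.
    nbr_g[i], nbr_h[a] : neighbour lists of node i in graph G and node a in graph H.
    Returns a soft correspondence matrix X of shape (n, m).
    """
    n, m = S.shape[0], S.shape[1]
    X = np.full((n, m), 1.0 / (n * m))            # uniform initial correspondences
    for _ in range(n_iter):
        X_new = np.empty_like(X)
        for i in range(n):
            for a in range(m):
                val = X[i, a] * S[i, a, i, a]     # unary term
                for j in nbr_g[i]:
                    # keep only the best pairwise similarity from each neighbour j
                    cand = [X[j, b] * S[i, a, j, b] for b in nbr_h[a]]
                    val += max(cand) if cand else 0.0
                X_new[i, a] = val
        X = X_new / (np.linalg.norm(X_new) + 1e-12)   # renormalise, as in the power method
    return X
```

In the matching-robustness experiment described above, the resulting soft correspondences between a graph and its noisy copy would still have to be discretized (for instance with a Hungarian assignment, as the reference list suggests) before the matching accuracy is computed.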
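Similarly, the decoder quality metrics defined earlier (valid, accurate, unique and novel) reduce to a few set operations once each point-estimate decoding has been turned into a molecule. The sketch below assumes, for simplicity, that decodings are already available as SMILES strings and uses RDKit only for the validity check; the helper names and the histogram representation are hypothetical, not the authors' code.

```python
from collections import Counter
from rdkit import Chem  # used here only to check chemical validity

def heavy_atom_histogram(mol):
    """Histogram of heavy (non-hydrogen) atoms, i.e. the conditioning label y."""
    return Counter(atom.GetSymbol() for atom in mol.GetAtoms())

def decoder_metrics(decoded_smiles, label_hist, training_smiles):
    """valid / accurate / unique / novel ratios for a single label y_l."""
    n_s = len(decoded_smiles)
    mols = [Chem.MolFromSmiles(s) for s in decoded_smiles]
    valid = [m for m in mols if m is not None]                             # V(l)
    correct = [m for m in valid if heavy_atom_histogram(m) == label_hist]  # C(l)
    correct_canonical = {Chem.MolToSmiles(m) for m in correct}
    novel = {s for s in correct_canonical if s not in training_smiles}
    return {
        "valid": len(valid) / n_s,
        "accurate": len(correct) / n_s,
        # set to zero here when no correct molecule was generated
        "unique": len(correct_canonical) / len(correct) if correct else 0.0,
        "novel": len(novel) / len(correct_canonical) if correct_canonical else 0.0,
    }
```

For the unconditional models a single dummy label would be assumed, so that the valid and accurate ratios coincide, as stated above.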
universal quantum computing and feb michel raymond marcelo and klee abstract a single qubit may be represented on the bloch sphere or similarly on the s our goal is to dress this correspondence by converting the language of universal quantum computing uqc to that of a magic state and the pauli group acting on it define a model of uqc as a povm that one recognizes to be a m the povms defined from subgroups of finite index of the modular group p sl z correspond to m coverings over the trefoil knot in this paper one also investigates quantum information on a few universal knots and links such as the knot the whitehead link and borromean rings making use of the catalog of platonic manifolds available on snappy further connections between povms based uqc and m s obtained from dehn fillings are explored pacs msc codes keywords quantum computation knot theory branch coverings dehn surgeries manifolds are around us in many guises as observers in a world we are most familiar with twomanifolds the surface of a ball or a doughnut or a pretzel the surface of a house or a tree or a volleyball net may be harder to understand at first but as actors and movers in a world we can learn to imagine them as alternate universes introduction mathematical concepts pave the way for improvements in technology as far as topological quantum computation is concerned non abelian anyons have been proposed as an attractive fault tolerant alternative to standard quantum computing which is based on a universal set of quantum gates anyons are quasiparticles with world lines forming braids in whether non abelian anyons do exist in the real world would be easy to create artificially is still open to discussion topological quantum computing beyond anyons still is not well developed although as it will be shown in this essay it is a straightforward consequence of a set of ideas belonging to standard universal quantum computation uqc and simultaneously to topology for quantum computation we have in mind the concepts of magic states and the related valued measures povms that were michel raymond marcelo and klee investigated in detail in for topology the starting point consists of the and thurston conjectures now theorems topological quantum computing would federate the foundations of quantum mechanics and cosmology a recurrent dream of many physicists topology was already investigated by several groups in the context of quantum information high energy physics biology and consciousness studies from conjecture to uqc conjecture is the elementary but deep statement that every simply connected closed is homeomorphic to the s having in mind the correspondence between s and the bloch sphere that houses the qubits a b a b c one would desire a quantum translation of this statement for doing this one may use the picture of the riemann sphere c in parallel to that of the bloch sphere and follow klein lectures on the icosahedron to perceive the platonic solids within the landscape this picture fits well the hopf fibrations their entanglements described in and quasicrystals but we can be more ambitious and dress s in an alternative way that reproduces the historic thread of the proof of conjecture thurston s geometrization conjecture from which conjecture follows dresses s as a not homeomorphic to s the wardrobe of m is huge but almost every dress is hyperbolic and thurston found the recipes for them every dress is identified thanks to a signature in terms of invariants for our purpose the fundamental group of m does the job the space 
surrounding a knot k knot complement s is an example of a we will be especially interested by the trefoil knot that underlies work of the first author as well as the knot the whitehead link and the borromean rings because they are universal in a sense described below hyperbolic and allows to build from platonic manifolds such manifolds carry a quantum geometry corresponding to quantum computing and possibly informationally complete ic povms identified in our earlier work according to the knot k and the fundamental group g s k are universal if every closed and oriented m is homeomorphic to a quotient of the hyperbolic h by a subgroup h of finite index d of the knot and the whitehead link are universal the catalog of the finite index subgroups of their fundamental group g and of the corresponding defined from the coverings can easily been established up to degree using the software snappy in paper of the first author it has been found that may be built from finite index subgroups of the modular group p sl z to an ic is associated a subgroup of index d of a fundamental domain in the plane and a signature in terms of genus elliptic points and cusps as summarized in fig there exists a relationship between the modular group and the trefoil knot since the fundamental group s of the knot complement is the braid group the central extension of but the trefoil knot and the corresponding universal quantum computing and figure a the knot b the whitehead link c borromean rings braid group are not universal which forbids the relation of the finite index subgroups of to all it is known that two coverings of a manifold m with fundamental group g m are equivalent if there exists a homeomorphism between them besides a covering is uniquely determined by a subgroup of index d of the group g and the inequivalent coverings of m correspond to conjugacy classes of subgroups of g in this paper we will fuse the concepts of a m attached to a subgroup h of index d and the povm possibly informationally complete ic found from h thanks to the appropriate magic state and related pauli group factory figure a the trefoil knot b the link associated to the hesse sic c the link associated to the ic minimal informationally complete povms and uqc in our approach minimal informationally complete ic povms are derived michel raymond marcelo and klee from appropriate fiducial states under the action of the generalized pauli group the fiducial states also allow to perform universal quantum computation a povm is a collection of positive operators em that sum to the identity in the measurement of a state the outcome is obtained with a probability given by the born rule p i tr for a minimal one needs projectors i with dei such that the rank of the gram matrix with elements tr is precisely a the s means symmetric obeys that allows the explicit recovery the tr of the density matrix as in eq new minimal whose rank of the gram matrix is and with hermitian angles a al have been discovered the states en a sic is equiangular with and countered are considered to live in a cyclotomic field f q exp n with n gcd d r the greatest common divisor of d and r for some the hermitian angle is defined as k k deg where means the field norm of the pair in f and deg is the degree of the extension f over the rational field q the fiducial states for are quite difficult to derive and seem to follow from algebraic number theory except for d the icpovms derived from permutation groups are not symmetric and most of them can be recovered thanks to subgroups of 
index d of the modular group table for instance for d the action of the pauli group on the state of type with exp results into an ic whose geometry of triple products of projectors arising from the the congruence subgroup of turns out to correspond to the commutation graph of pauli operators for d all five congruence subgroups or point out the geometry of borromean rings see and table below while serves as a motivation for investigating the trefoil knot manifold in relation to uqc and the corresponding ics it is important to put the uqc problem in the wider frame of conjecture the thurston s geometrization conjecture and the related ics may also follow from hyperbolic or seifert as shown in tables to of this paper organization of the paper the paper runs as follows sec deals about the relationship between quantum information seen from the modular group and from the trefoil knot sec deals about the platonic related to coverings over the knot whitehead link and borromean rings and how they relate to a few known sec describes the important role played by dehn fillings for describing the many types of that may relate to topological quantum computing universal quantum computing and quantum information from the modular group and the related trefoil knot in this section we describe the results established in in terms of the corresponding to coverings of the trefoil knot complement s d ty hom cp gens cs link type in cyc irr hesse sic cyc irr ic irr cyc cyc irr ic reg ic cyc ic irr irr ic irr ic irr ic irr irr cyc irr nc ic irr ic irr cyc cyc cyc ic table coverings of degree d over the trefoil knot found from snappy the related subgroup of modular group and the corresponding when applicable is in the right column the covering is characterized by its type ty homology group hom where means z the number of cusps cp the number of generators gens of the fundamental group the invariant cs and the type of link it represents as identified in snappy the case of cyclic coverings corresponds to brieskorn as explained in the text the spherical groups for these manifolds is given at the right hand side column let us introduce to the group representation of a knot complement s k a wirtinger representation is a finite representation of where the relations are the form wgi gj where w is a word in the k generators gk for the trefoil knot sown in fig a michel raymond marcelo and klee wirtinger representation is s hx xyxi or equivalently x in the rest of the paper the number of coverings of the manifold corresponding to the knot t will be displayed as the ordered list t d for it is details about the corresponding coverings are in table as expected the coverings correspond to subgroups of index d of the fundamental group associated to the trefoil knot cyclic branched coverings over the trefoil knot let p q r be three positive integers with p q r the brieskorn p q r is the intersection in of the s with the surface of equation in it is shown that a cyclic covering over s branched along a torus knot or link of type p q is a brieskorn p q r see also sec for the spherical case q r the group associated to a brieskorn manifold is either dihedral that is the group dr for the triples r tetrahedral that is for octahedral that is for or icosahedral that is for the euclidean case q r corresponds to or the remaining cases are hyperbolic the cyclic branched coverings with spherical groups for the trefoil knot which is of type are identified in right hand side column of table irregular branched coverings over the trefoil knot the 
right hand side column of table shows the subgroups of identified in table as corresponding to an in particular the hesse sic already found associated to the congruence subgroup corresponds to the link while the ic already found associated to the congruence subgroup corresponds to the crossing link the trefoil knot and the former two links are pictured in fig five coverings of degree allow the construction of the whose geometry contain the picture of borromean rings fig the corresponding congruence subgroups of are identified in table the first two viz and define whose fundamental group is the same than the one of the link alias the borromean rings with surgeries of slope on two cusps see sect for more on this topic the other three coverings leading to the ic are the congruence subgroups and quantum information from universal knots and links pertaining to the knot the fundamental group for the knot is s x x y xy yx and the number of coverings is in the list universal quantum computing and table establishes the list of corresponding to subgroups of index d of the universal group g s the manifolds are labeled otetnn in because they are oriented and built from n tetrahedra with n an index in the table the identification of of finite index subgroups of g was first obtained by comparing the cardinality list h of the corresponding subgroup h to that of a fundamental group of a tetrahedral manifold in snappy table but of course there is more straightforward way to perform this task by identifying a subgroup h to a degree d covering of the full list of coverings over the figure eight knot up to degree is available in snappy extra invariants of the corresponding m may be found there in addition the lattice of branched coverings over was investigated in ty cp cyc cyc irr cyc cyc irr irr cyc irr irr irr irr cyc irr irr d rk pp comment ic ic ic ic ic ic table table of m found from subgroups of finite index d of the fundamental group s alias the coverings of the terminology in column is that of snappy the identified m is made of tetrahedra and has cp cusps when the rank rk of the povm gram matrix is the corresponding shows pp distinct values of pairwise products as shown let us give more details about the results summarized in table using magma the conjugacy class of subgroups of index in the fundamental group g is represented by the subgroup on three generators and two relations as follows h x y zy z yxz yz xy from which the sequence of subgroups of finite index can be found as m the manifold m corresponding to this sequence is found in snappy as alias the conjugacy class of subgroups of index in g is represented as h x y yz zy z xz xz y zy michel raymond marcelo and klee with m corresponding to the manifold alias as shown in table there are two conjugacy classes of subgroups of index in g corresponding to tetrahedral manifolds the permutation group p organizing the cosets is and the permutation group organizing the cosets is the alternating group the latter has fundamental group figure two platonic leading to the construction of the details are given in tables and h x y y z xy y y z xy with cardinality sequences of subgroups as m to h is associated an which follows from the action of the pauli group on a state of type with exp a of unity for index there are three types of corresponding to the subgroups the tetrahedral manifold of sequence m is associated to a equianguler as in table for index the coverings define six classes of and two of them and are related to the construction of ics for index one 
finds three classes of with two of them alias and are related to ics finally for index types of exist two of them relying on the construction of the ic for index there exists distinct not shown none of them leading to an ic a tetrahedral manifold the tetrahedral is remarkable in the sense that it corresponds to the subgroup of index of g that allows the construction of the the corresponding hyperbolic polyhedron taken from snappy is shown in fig of universal quantum computing d ty cp cyc cyc cyc cyc irr irr irr irr irr irr cyc irr irr and rk pp comment qutrit hesse sic ic ic ic ic ic ic ic ic table a few m found from subgroups of the fundamental group associated to the whitehead link for d only the m s leading to an ic are listed figure a the link associated to the qutrit hesse sic b the octahedral manifold associated to the ic the orientable tetrahedral manifolds with at most tetrahedra are and each of those has at most cusps the with at most tetrahedra identified in table belong to the s and the tetrahedral manifold is one with just one cusp table pertaining to the whitehead link one could also identify the substructure of another universal object viz the whitehead link michel raymond marcelo and klee the cardinality list corresponding to the whitehead link group is table shows that the identified for index d subgroups of are aggregates of d octahedra in particular one finds that the qutrit hesse sic can be built from and that the may be buid from the hyperbolic polyhedron for the latter octahedral manifold taken from snappy is shown on fig the former octahedral manifold follows from the link shown in fig and the corresponding polyhedron taken from snappy is shown in fig d ty hom cyc cyc irr cp comment hesse sic hesse sic hesse sic table coverings of degrees and over the m branched along the borromean rings the identification of the corresponding hyperbolic is at the column it is seen at the right hand side column that only three types of allow to build the hesse sic a few pertaining to borromean rings corresponding to coverings of degree and of the branched along the borromean rings that is a not a link but an hyperbolic link see fig are given in table the identified manifolds are hyperbolic octahedral manifolds of volume for the degree and for the degree a few dehn fillings and their povms to summarize our findings of the previous section we started from a building block a knot viz the trefoil knot or a link viz the knot whose complement in s is a m then a covering of m was used to build a povm possibly an ic now we apply a kind of phase surgery on the knot or link that transforms m and the related coverings while preserving some of the povms in a way to be determined we will start with our friend and arrive at a few standard of historic importance the homology sphere alias the brieskorn sphere the brieskorn sphere and a seifert fibered toroidal manifold then we introduce the resulting from on the knot later in this section we will show how to use the coxeter lattice and surgery to arrive at a hyperbolic of maximal symmetry whose several coverings and related povms are close to the ones of the trefoil knot let us start with a lens space l p q that is obtained by gluing the boundaries of two solid tori together so that the meridian of the first solid torus universal quantum computing and t name t trefoil table a few surgeries column their name column and the cardinality list of alias conjugacy classes of subgroups plain characters are used to point out the possible construction of an in 
at least one the corresponding see and sec for the ics corresponding to goes to a p q on the second solid torus where a p q wraps around the longitude p times and around the meridian q times then we generalize this concept to a knot exterior the complement of an open solid torus knotted like the knot one glues a solid torus so that its meridian curve goes to a p q on the torus boundary of the knot exterior an operation called dehn surgery p according to lickorish s theorem every closed orientable connected is obtained by performing dehn surgery on a link in the a few surgeries on the trefoil knot the homology sphere the dodecahedral space alias the homology sphere was the first example of a not the it can be obtained from surgery on the trefoil knot let p q r be three positive integers and mutually coprime the brieskorn sphere p q r is the intersection in of the s with the surface of equation the homology of a brieskorn sphere is that of the sphere s a brieskorn sphere is homeomorphic but not diffeomorphic to s the sphere may be identified to the homology sphere the sphere may be obtained from surgery on table provides the sequences for the corresponding surgeries on plain digits in these sequences point out the possibility of building ics of the corresponding degree this corresponds to a considerable filtering of the ics coming from for instance the smallest ic from has dimension five and is precisely the one coming from the congruence subgroup in table but it is built from a non modular fundamental group whose permutation representation of the cosets is the alternating group h i compare sec the smallest dimensional ic derived from is and twovalued the same than the one arising from the congruence subgroup given in table but it arises from a non modular fundamental group with the permutation representation of cosets as p sl h i the seifert fibered toroidal manifold an hyperbolic knot or link in s is one whose complement is m endowed with a complete riemannian metric of constant negative curvature it has a hyperbolic geometry and finite volume a dehn surgery on a hyperbolic knot is exceptional if it is reducible toroidal or seifert fibered comprising a closed together with a michel raymond marcelo and klee decomposition into a disjoint union of circles called fibers all other surgeries are hyperbolic these categories are exclusive for a hyperbolic knot in contrast a non hyperbolic knot such as the trefoil knot admits a toroidal seifert fiber surgery obtained by dehn filling on the smallest dimensional ics built from are the hesse sic that is obtained from the congruence subgroup as for the trefoil knot and the ic that comes from a non modular fundamental group with cosets organized as the alternating group h i akbulut s manifold exceptional dehn surgery at slope on the knot leads to a remarkable manifold found in in the context of integral homology spheres smoothly bounding integral homology balls apart from its topological importance we find that some of its coverings are associated to already discovered ics and those coverings have the same fundamental group the smallest covering of degree occurs with integral homology z and the congruence subgroup also found from the trefoil knot see table next the covering of degree and homology z leads to the ic of type also found from the trefoil knot the next case corresponds to the ic the hyperbolic manifold the hyperbolic manifold closest to the trefoil knot manifold known to us was found in the goal in is the search of fundamental groups of in 
two dimensions maximal symmetry groups are called hurwitz groups and arise as quotients of the groups in three dimensions the quotients of the minimal lattice of hyperbolic isometries and of its orientation preserving subgroup min play the role of hurwitz groups let c be the coxeter group the split extension c and min one of the index two subgroups of of presentation min x y y z xyz xzyz xy according to corollary all subgroups of finite index in min have index divisible by there are two of them of index called and obtained as fundamental groups of surgeries and subgroups of index in min are given in table it is remarkable that these groups are fundamental groups of oriented built with a single icosahedron except for manifold t subgroup t table the index torsion free subgroups of min and their relation to the single isosahedron the icosahedral symmetry is broken for see the text for details universal quantum computing and is also special in the sense that many small dimensional ics may be built from it in contrast to the other groups in table the smallest ics that may be build from are the hesse sic coming from the congruence subgroup the ic coming the congruence subgroup and the ics coming from the congruence subgroups or see sec and table higher dimensional ics found from does not come from congruence subgroups conclusion the relationship between and universality in quantum computing has been explored in this work earlier work of the first author already pointed out the importance of hyperbolic geometry and the modular group for deriving the basic small dimensional in sec the move from to the trefoil knot and the braid group to non hyperbolic could be investigated by making use of the coverings of that correspond to povms some of them being ic then in sec we passed to universal links such as the knot whitehead link and borromean rings and the related hyperbolic platonic manifolds as new models for quantum computing based povms finally in sec dehn fillings on were used to explore the connection of quantum computing to important exotic and to the toroidal seifert fibered to akbulut s manifold and to a maximum symmetry hyperbolic manifold slightly breaking the icosahedral symmetry it is expected that our work will have importance for new ways of implementing quantum computing and for the understanding of the link between quantum information theory and cosmology funding the first author acknowledges the support by the french investissements d avenir program project contract the other ressources came from quantum gravity research references thurston geometry and topology vol princeton university press princeton planat the for informationally complete povms entropy hilden lozano montesinos and whitten on universal groups and inventiones mathematicae fominikh garoufalidis goerner tarkaev and vesnin a census of tethahedral hyperbolic manifolds exp math yu kitaev quantum computation by anyons annals phys nayak simon stern freedman and das sarma anyons and topological quantum computation rev mod phys wang topological quantum computation american mathematical rhode island number j pachos introduction to topological quantum computation cambridge university press cambridge vijay and fu a generalization of anyons in three dimensions arxiv bravyi and kitaev universal quantum computation with ideal clifford gates and noisy ancillas phys rev planat and ul haq the magic of universal quantum computing with permutations advances in mathematical physics id pp michel raymond marcelo and klee planat and gedik 
magic informationally complete povms with permutations soc open sci kauffman and baadhio quantum topology series on knots and everything world scientific kauffman knot logic and topological quantum computing with majorana fermions in linear and algebraic structures in quantum computing chubb eskandarian and harizanov eds lecture notes in logic cambridge univ press seiberg senthil wang and witten a duality web in dimensions and condensed matter physics ann phys gang tachikawa and yonekura smallest hyperbolic manifolds via simple theories phys rev d r lim and jackson molecular knots in biology and chemistry phys condens matter irwin toward a unification of physics and number theory https toward the unification of physics and number theory milnor the conjecture years later a progress report the clay mathematics institute annual report http retrieved planat on the geometry and invariants of qubits quartits and octits int geom methods in mod phys manton connections on discrete fiber bundles commun math phys mosseri and dandoloff geometry of entangled states bloch spheres and hopf fibrations it phys a math j nieto correspondence mod phys scientific research sen aschheim and irwin emergence of an aperiodic dirichlet space from the tetrahedral units of an icosahedral internal space mathematics fang hammock and irwin methods for calculating empires in quasicrystals crystals adams the knot book an elementary introduction to the mathematical theory of knots freeman and co new york mednykh a new method for counting coverings over manifold with fintely generated fundamental group dokl math culler dunfield goerner and weeks snappy a computer program for studying the geometry and topology of http hilden lozano and montesinoos on knots that are universal topology chris fuchs on the quantumness of a hibert space quant inf comp appleby chien flammia and waldron constructing exact symmetric informationally complete measurements from numerical solutions preprint rolfsen knots and links mathematics lecture series houston gabai the whitehead manifold is a union of two euclidean spaces topol milnor on the brieskorn manifolds m p q r in knots groups and ed neuwirth annals of math study princeton univ press princeton hempel the lattice of branched covers over the knot topol appl haraway determining hyperbolicity of compact orientable with torus boundary arxiv universal quantum computing and ballas danciger and lee convex projective structures on nonhyperbolic arxiv conder martin and torstensson maximal symmetry groups of hyperbolic new zealand j math gordon dehn filling a survey knot theory banach center publ polish acad warsaw kirby and scharlemann eight faces of the homology in geometric topology acad press new york pp wu seifert fibered surgery on montesinos knots arxiv to appear in comm anal geom akbulut and larson brieskorn spheres bounding rational balls arxiv chan zainuddin atan and a siddig computing quantum bound states on triply punctured surface chin phys lett aurich steiner and then numerical computation of maass waveforms and an application to cosmology in hyperbolic geometry and applications in quantum chaos and cosmology jens bolte and frank steiner eds cambridge univ press preprint smooth quantum gravity exotic smoothness and quantum gravity in at the frontier of spacetime theory bells inequality machs principle exotic smoothness fundamental theories of physics book series ftp ed pp de institut cnrs umr b avenue des montboucons france address quantum gravity research los angeles ca usa address 
| 4 |
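As a concrete illustration of the POVM construction underlying the discussion above, the following NumPy sketch acts with the generalized Pauli (Weyl-Heisenberg) group on a fiducial vector, forms the d^2 rank-one projectors and checks informational completeness through the rank of the Gram matrix of their pairwise traces. The qutrit fiducial (0, 1, -1)/sqrt(2) used in the example is an assumption (it is the standard Hesse-SIC fiducial); the function name and layout are illustrative only.

```python
import itertools
import numpy as np

def weyl_heisenberg_povm(fiducial):
    """Projectors obtained from the generalized Pauli group acting on a fiducial state.

    The d**2 operators Pi_i / d always sum to the identity and hence define a POVM;
    informational completeness additionally requires the Gram matrix tr(Pi_i Pi_j)
    to have full rank d**2.
    """
    d = len(fiducial)
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)              # shift matrix  X|j> = |j+1 mod d>
    Z = np.diag(omega ** np.arange(d))             # clock matrix  Z|j> = omega**j |j>
    psi = np.asarray(fiducial, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    projectors = []
    for p, q in itertools.product(range(d), repeat=2):
        phi = np.linalg.matrix_power(X, p) @ np.linalg.matrix_power(Z, q) @ psi
        projectors.append(np.outer(phi, phi.conj()))   # rank-one projector |phi><phi|
    gram = np.array([[np.trace(Pi @ Pj).real for Pj in projectors] for Pi in projectors])
    return projectors, np.linalg.matrix_rank(gram)

# Qutrit example with the assumed fiducial: the 9 projectors should give a full-rank
# 9 x 9 Gram matrix, i.e. a minimal informationally complete POVM (Hesse SIC geometry).
projectors, rank = weyl_heisenberg_povm([0.0, 1.0, -1.0])
print(rank)   # expected: 9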
may an accurate and efficient numerical framework for adaptive numerical weather prediction giovanni tumolo luca bonaventura january earth system physics section the abdus salam international center for theoretical physics strada costiera trieste italy gtumolo mox modelling and scientific computing dipartimento di matematica f brioschi politecnico di milano via bonardi milano italy keywords discontinuous galerkin methods adaptive finite elements semiimplicit discretizations discretizations shallow water equations euler equations ams subject classification abstract we present an accurate and efficient discretization approach for the adaptive discretization of typical model equations employed in numerical weather prediction a approach is combined with the time discretization method and with a spatial discretization based on adaptive discontinuous finite elements the resulting method has full second order accuracy in time and can employ polynomial bases of arbitrarily high degree in space is unconditionally stable and can effectively adapt the number of degrees of freedom employed in each element in order to balance accuracy and computational cost the approach employed does not require remeshing therefore it is especially suitable for applications such as numerical weather prediction in which a large number of physical quantities are associated with a given mesh furthermore although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes even its application on simple cartesian meshes in spherical coordinates can cure effectively the pole problem by reducing the polynomial degree used in the polar elements numerical simulations of classical benchmarks for the shallow water and for the fully compressible euler equations validate the method and demonstrate its capability to achieve accurate results also at large courant numbers with time steps up to times larger than those of typical explicit discretizations of the same problems while reducing the computational cost thanks to the adaptivity algorithm introduction the discontinuous galerkin dg spatial discretization approach is currently being employed by an increasing number of environmental fluid dynamics models see and a more complete overview in this is motivated by the many attractive features of dg discretizations such as high order accuracy local mass conservation and ease of massively parallel implementation on the other hand dg methods imply severe stability restrictions when coupled with explicit time discretizations one traditional approach to overcome stability restrictions in low mach number problems is the combination of semi implicit si and semi lagrangian sl techniques in a series of papers it has been shown that most of the computational gains traditionally achieved in finite difference models by the application of si sl and sisl discretization methods are also attainable in the framework of dg approaches in particular in we have introduced a dynamically discretization approach for low mach number problems that is quite effective in achieving high order spatial accuracy while reducing substantially the computational cost in this paper we apply the technique of to the shallow water equations in spherical geometry and to the the fully compressible euler equations in order to show its effectiveness for model problems typical of global and regional weather forecasting the advective form of the equations of motion is employed and the time discretization is based on the method see this combination of 
two robust ode solvers yields a second order accurate and method see that is effective in damping selectively high frequency modes at the same time it achieves full second order accuracy while the in the trapezoidal rule typically necessary for realistic applications to nonlinear problems see limits the accuracy in time to first order numerical results presented in this paper show that the total computational cost of one step is analogous to that of one step of the trapezoidal rule as well as the structure of the linear problems to be solved at each time step thus allowing to extend naturally to this more accurate method any implementation based on the trapezoidal rule numerical simulations of the shallow water benchmarks proposed in and of the benchmarks proposed in have been employed to validate the method and to demonstrate its capabilities in particular it will be shown that the present approach enables the use of time steps even times larger than those allowed for dg models by standard explicit schemes see the results in the method presented in this paper just as its previous version in can be applied in principle on arbitrarily unstructured and even nonconforming meshes for example a model based on this method could run on a non conforming mesh of rectangular elements built around the nodes of a reduced gaussian grid for simplicity however no such implementation has been developed so far here only a simple cartesian mesh has been used if no degree adaptivity is employed this results in very high courant numbers in the polar regions these do not result in any special stability problems for the present sisl discretization approach as it will be shown by the numerical results reported below on the other hand even with an implementation based on a simple cartesian mesh in spherical coordinates the flexibility of the dg space discretization allows to reduce the degree of the basis and test functions employed close to the poles thus making the effective model resolution more uniform and solving the efficiency issues related to the pole problem by static this is especially advantageous because the conditioning of the linear system to be solved at each time step is greatly improved and as a consequence the number of iterations necessary for the linear solver is reduced by approximately while at the same time no spurious reflections nor artificial error increases are observed beyond these computational advantages we believe that the present approach based on is especially suitable for applications to numerical weather prediction in contrast to approaches that is local mesh coarsening or refinement in which the size of some elements changes in time indeed in numerical weather prediction information that is necessary to carry out realistic simulations such as orography profiles data on land use and soil type masks needs to be reconstructed on the computational mesh and has to be each time that the mesh is changed furthermore many physical parameterizations are highly sensitive to the mesh size although devising better parameterizations that require less tuning is an important research goal more conventional parameterizations will still be in use for quite some time as a consequence it is useful to improve the accuracy locally by adding supplementary degrees of freedom where necessary as done in a framework without having to change the underlying computational mesh in conclusion the resulting modeling framework seems to be able to combine the efficiency and high order accuracy of traditional 
sisl methods with the locality and flexibility of more standard dg approaches in section two examples of governing equations are introduced in section the method is reviewed in section the approach employed for the advection of vector fields in spherical geometry is described in detail in section we introduce the discretization approach for the shallow water equations in spherical geometry in section we outline its extension to the fully compressible euler equations in a vertical plane numerical results are presented in section while in section we try to draw some conclusions and outline the path towards application of the concepts introduced here in the context of a non hydrostatic dynamical core governing equations we consider as a basic model problem the shallow water equations on a rotating sphere see these equations are a standard test bed for numerical methods to be applied to the full equations of motion of atmospheric or oceanic circulation models see among their possible solutions they admit rossby and inertial gravity waves as well as the response of such waves to orographic forcing we will use the advective vector form of the shallow water equations dh u dt du f u dt here h represents the fluid depth b the bathymetry elevation f the coriolis parameter the unit vector locally normal to the earth s surface and g the gravity force per unit mass on the earth s surface assuming that x y are orthogonal curvilinear coordinates on the sphere or on a portion of it we denote by mx and my the components of the diagonal metric tensor furthermore we set u u v t where u and v are the contravariant components of the velocity vector in the coordinate direction x and y respectively multiplied by the d the corresponding metric tensor components we also denote by dt lagrangian derivative d u v dt mx my dy so that u mx dx dt v my dt in particular in this paper standard spherical coordinates will be employed as an example of a more complete model we will also consider the fully compressible non hydrostatic equations of motion following they can be written as cp u dt cv du dt dt is the powhere being a reference pressure value t p tential temperature is the exner pressure while cp cv r are the constant pressure and constant volume specific heats and the gas constant of dry air respectively here the coriolis force is omitted for simplicity notice also that by a slight abuse of notation in the case u u v w t denotes the three dimensional d operators are also while velocity field and the dt we will assume u u w t in the description of x z two dimensional vertical slice models it is customary to rewrite such equations in terms of perturbations with respect to a steady hydrostatic reference profile so that assuming x y z t z x y z t x y z t z x y z t with cp dz one obtains for a vertical plane cp u dt cv du dt dw g dt dt dz it can be observed that equations are isomorphic to equations which will allow to extend almost automatically the discretization approach proposed for the former to the more general model review of the method we review here some properties of the so called method which was first introduced in given a cauchy problem f y t y and considering a time discretization employing a constant time step the method is defined by the two following implicit stages tn un un tn un here is an implicitness parameter and it is immediate that the first stage of is simply the application of the trapezoidal rule or method over the interval tn tn it could also be substituted by an off centered cranknicolson 
step without reducing the overall accuracy of the method. The outcome of this stage is then used to turn the two-step BDF2 method into a single-step, two-stage method. This combination of two robust stiff solvers (the so-called TR-BDF2 method) yields a method with several interesting accuracy and stability properties that have been analyzed in detail in the literature. This analysis is most easily carried out by rewriting the method as
$$u^{n+\gamma} = u^{n} + \frac{\gamma \Delta t}{2}\left[ f(u^{n+\gamma},t^{n+\gamma}) + f(u^{n},t^{n}) \right],$$
$$u^{n+1} = u^{n} + \Delta t\left[ \frac{1-\gamma}{2-\gamma}\, f(u^{n+1},t^{n+1}) + \frac{1}{2(2-\gamma)}\, f(u^{n+\gamma},t^{n+\gamma}) + \frac{1}{2(2-\gamma)}\, f(u^{n},t^{n}) \right].$$
In this formulation the method is clearly a singly diagonally implicit Runge-Kutta (SDIRK) method, so that one can rely on the theory for this class of methods to derive stability and accuracy results. Notice that the same method has been rediscovered independently and has also been analyzed and applied to treat the implicit terms in the framework of an additive Runge-Kutta approach. The method is second order accurate for any value of the implicitness parameter. Written as above, it can also be proven to constitute an embedded pair with suitable companion coefficients, provided that no off-centering is employed in the first stage; this equips the method with an extremely cheap estimate of the time discretization error. Furthermore, for $\gamma = 2 - \sqrt{2}$ it is also L-stable; therefore with this coefficient value it can be safely applied to problems with eigenvalues whose imaginary part is large, such as typically arise from the discretization of hyperbolic problems. This is not the case for the standard trapezoidal rule (Crank-Nicolson) method, whose linear stability region is exactly bounded by the imaginary axis; as a consequence it is common to apply the trapezoidal rule with off-centering, which results in a first order time discretization. TR-BDF2 appears therefore to be an interesting one-step alternative to maintain full second order accuracy, especially considering that, in the above formulation, it is equivalent to performing two implicit steps with slightly modified coefficients.
In order to highlight the advantages of the proposed method in terms of accuracy with respect to other common robust stiff solvers, we plot the contour levels of the absolute value of its linear stability function (without off-centering in the first stage), compared to the analogous contours of the off-centered Crank-Nicolson method for two values of the averaging parameter and to those of a further reference method. It is immediate to see that the proposed method introduces less damping around the imaginary axis for moderate values of the time step; on the other hand, it is more selective in damping very large eigenvalues, as clearly displayed by the plot of the absolute values of the linear stability functions of the same methods along the imaginary axis (with the exception of the scheme for which an explicit representation of the stability function is not available). A minimal numerical sketch of a single step of this time discretization for a linear problem is given further below, after the review of the evolution operators.
[Figure: contour levels of the absolute value of the stability function of the proposed method without off-centering in the first stage.]
[Figure: contour levels of the absolute value of the stability function of the off-centered Crank-Nicolson method, for two values of the averaging parameter.]
[Figure: contour levels of the absolute value of the stability function of a reference implicit method.]
[Figure: absolute value of the stability functions of several methods along the imaginary axis.]
Review of evolution operators for vector fields on the
The method can be described by introducing the concept of an evolution operator. Indeed, let g = g(x, t) denote a generic function of space and time that is a solution of the advection equation dg/dt = 0, where dg/dt = ∂g/∂t + (u/m_x) ∂g/∂x + (v/m_y) ∂g/∂y is the Lagrangian derivative along the velocity field (u, v) and m_x, m_y are the metric coefficients of the coordinate system. To approximate this solution on the time interval [t^n, t^{n+1}], a numerical evolution operator E is introduced that approximates the exact evolution operator associated with a frozen velocity field; this frozen field may coincide with the velocity field at time level t^n or with an extrapolation derived from previous time levels. More precisely, if X(t) denotes the solution of dX/dt = v(X(t)) with initial datum X(t̄) = x̄ at time t̄, then the expression E(Δt, t^n)g(x) denotes a numerical approximation of g^n(x_d), where x_d = X(t^n) for the trajectory satisfying X(t^{n+1}) = x, and the notation g^n(x) = g(x, t^n) is used. Since x_d is nothing but the position at time t^n of the fluid parcel reaching location x at time t^{n+1}, according to standard terminology it is called the departure point associated with the arrival point x. Different methods can be employed to approximate x_d; in this paper, for simplicity, the trajectory algorithm proposed in the cited reference has been employed in spherical geometry. Furthermore, to guarantee an accuracy compatible with that of the time discretization, an extrapolation of the velocity field at the intermediate time level was used. On the other hand, in the application to Cartesian geometry for the vertical slice discretization, a simple first-order Euler method was employed.

In the case of the advection of a vector field g, governed by dg/dt = 0 as in the momentum equation, the extension of this approach has to take into account the curvature of the spherical manifold. More specifically, unit basis vectors at the departure point are not in general aligned with those at the arrival point: if (i, j, k) represents a unit vector triad, in general i(x) ≠ i(x_d), j(x) ≠ j(x_d), and k(x) ≠ k(x_d). To deal with this issue, two approaches are available. The first, intrinsically Eulerian, consists in introducing the Christoffel symbols in the definition of the covariant derivatives, giving rise to the well-known metric terms before the SISL discretization, and then in approximating those metric terms along the trajectories; this approach has been shown to be a source of instabilities in a semi-Lagrangian framework and is therefore not adopted in this work. The second approach, more suitable for semi-Lagrangian discretizations, takes the curvature of the manifold into account only at the discrete level, after the SISL discretization has been performed. Many variations of this idea have been proposed; they can all be derived in a unified way by introducing a proper rotation matrix that transforms vector components expressed in the unit vector triad (i(x_d), j(x_d), k(x_d)) into vector components expressed in the unit vector triad (i(x), j(x), k(x)). To see how this rotation matrix comes into play, it is sufficient to consider the action of the evolution operator E on a given vector-valued function of space and time g, defined as an approximation E(Δt, t^n)g(x) ≈ g^n(x_d), and to write this equation componentwise. The field g^n(x_d) is known through its components in the departure-point unit vector triad, g^n(x_d) = g_x^n(x_d) i(x_d) + g_y^n(x_d) j(x_d) + g_z^n(x_d) k(x_d); the components of E(Δt, t^n)g(x) in the unit vector triad at the arrival point are then obtained by projecting g^n(x_d) onto i(x), j(x), and k(x). In matrix notation,

[ (E(Δt, t^n)g(x))_x, (E(Δt, t^n)g(x))_y, (E(Δt, t^n)g(x))_z ]^T = R [ g_x^n(x_d), g_y^n(x_d), g_z^n(x_d) ]^T,

where, under the shallow atmosphere approximation, R can be reduced to a rotation matrix depending only on x and x_d. Therefore, in the following, the evolution operator for vector fields will be defined componentwise by applying this reduced rotation matrix R to (g_x^n(x_d), g_y^n(x_d))^T.
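To make the action of the evolution operator concrete, the following minimal sketch performs one semi-Lagrangian step for a scalar field on a periodic one-dimensional grid: departure points are obtained by integrating the trajectory equation backwards with a simple fixed-point iteration, and the field at the previous time level is interpolated at those points. The grid, the trajectory iteration, and the linear interpolation are assumptions of this sketch; they stand in for, but are not, the spherical trajectory algorithm and the finite element interpolation used in the paper, and in one dimension no rotation of vector components is required.

```python
import numpy as np

def sl_advect_1d(g, u, dt, dx, n_iter=2):
    """One semi-Lagrangian step for dg/dt + u dg/dx = 0 on a periodic 1-D grid.

    The departure point of every grid node is found by integrating dx/dt = u
    backwards in time with a fixed-point iteration (n_iter = 0 reduces to a
    single explicit Euler step); the field at time level n is then
    interpolated at the departure points, the discrete analogue of
    E(dt, t_n) g (x) ~ g^n(x_d).
    """
    n = len(g)
    length = n * dx
    x = dx * np.arange(n)
    xd = x - dt * u                                  # first guess: Euler step
    for _ in range(n_iter):
        u_d = np.interp(xd, x, u, period=length)     # frozen velocity at the guess
        xd = x - 0.5 * dt * (u + u_d)                # midpoint-type correction
    return np.interp(xd, x, g, period=length)        # g^n evaluated at departure points

# Example: advect a Gaussian bump with a uniform velocity field
nx, dx, dt = 200, 1.0, 0.5
x = dx * np.arange(nx)
g = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)
u = np.full(nx, 3.0)
for _ in range(100):
    g = sl_advect_1d(g, u, dt, dx)
```

A novel SISL time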
integration approach for the shallow water equations on the sphere the sisl discretization of equations based on is then obtained by performing the two stages in after reinterpretation of the intermediate values in a fashion furthermore in order to avoid the solution of a nonlinear system the dependency on h in u is linearized in time as common in discretizations based on the trapezoidal rule see numerical experiments reported in the following show that this does not prevent to achieve second order accuracy in the regimes of interest for numerical weather prediction the tr stage of the sisl time of the equations in vector form is given by hn e tn h h u i h f h io tn u g f u the tr stage is then followed by the stage e tn h e tn h i h f e tn u e tn u for each of the two stages the spatial discretization can be performed along the lines described in allowing for variable polynomial order to locally represent the solution in each element the spatial discretization approach considered is independent of the nature of the mesh and could also be implemented for fully unstructured and even non conforming meshes for simplicity however in this paper only an implementation on a structured mesh in coordinates has been developed in principle either lagrangian or hierarchical legendre bases could be employed we will work almost exclusively with hierarchical bases because they provide a natural environment for the implementation of a algorithm see for example a central issue in finite element formulations for fluid problems is the choice of appropriate approximation spaces for the velocity and pressure variables in the context of swe the role of the pressure is played by the free surface elevation an inconsistent choice of the two approximation spaces indeed may result in a solution that is polluted by spurious modes for the specific case of swe see for example as well as the more recent and comprehensive analysis in here we have not investigated this issue in depth but the model implementation allows for approximations of higher polynomial degree pu for the velocity fields than ph for the height field even though no systematic study was performed no significant differences were noticed between results obtained with equal or unequal degrees in the following only results with unequal degrees pu ph are reported with the exception of an empirical convergence test for a steady geostrophic flow all the integrals appearing in the elemental equations are evaluated by means of gaussian numerical quadrature formulae with a number of quadrature nodes consistent with the local polynomial degree being used in particular notice that integrals of terms in the image of the evolution operator e of functions evaluated at the departure points of the trajectories arriving at the quadrature nodes can not be computed exactly see since such functions are not polynomials therefore a sufficiently accurate approximation of these integrals is needed which may entail the need to employ numerical quadrature formulae with more nodes than the minimal requirement implied by the local polynomial degree this overhead is actually compensated by the fact that for each gauss node the computation of the departure point is only to be executed once for all the quantities to be interpolated after spatial discretization has been performed the discrete degrees of freedom representing velocity unknowns can be replaced in the respective discrete height equations yielding in each case a linear system whose structure is entirely analogous to that 
obtained in the linear systems obtained from the stages are solved in our implementation by the gmres method a classical stopping criterion based on a relative error tolerance of was employed see for the gmres solver so far only a block diagonal preconditioning was employed as it will be shown in section the condition number of the systems to be solved can be greatly reduced if lower degree elements are employed close to the poles in any case the total computational cost of one step is entirely analogous to that of one step of the standard off centered trapezoidal rule employed in since the structure of the systems is the same but for each stage only a fraction of the time step is being computed once has been computed by solving this linear system then can be recovered by back substituting into the momentum equation extension of the time integration approach to the euler equations in this section we show that the previously proposed method can be extended seamlessly to the fully compressible euler equations as formulated in equations for simplicity only the application to the x z two dimensional vertical slice case is presented but the extension to three dimensions is straightforward again in order to avoid the solution of a nonlinear system the dependency on in u and the dependency on in are linearized in time as common in discretizations based on the trapezoidal rule see the counterpart of the tr substep of is first applied to to so as to obtain cp tn cp u cp n e t u cp n w cp n e t w cp n w e t w dz dz following the time energy equation can be inserted into the time vertical momentum equation in order to decouple the momentum and the energy equations as follows g dz n e t w cp g n w e t dz equations and are a set of three equations in three unknowns only namely u and w that can be compared with equations with f and mx my cartesian geometry from the comparison it is clear that the two formulations are isomorphic under correspondence h u u w we can then consider the counterpart of the substep of applied to to obtain cp e tn e tn e tn u cp e tn u w cp n e t w e tn w w dz e tn e tn again following the time energy equation can be inserted into the time vertical momentum equation in order to decouple the momentum and the energy equations g cp dz e tn w e tn w g e tn e tn now equations and are a set of three equations in three unknowns only namely u and w that can be compared with equations with f and mx my cartesian geometry again it is easy to see that also in this case exactly the same structure results as in equations with the correspondence h u u w v so that the approach and code proposed for the shallow water equations can be extended to the fully compressible euler equation in a straightforward way numerical experiments the numerical method introduced in section has been implemented and tested on a number of relevant test cases using different initial conditions and bathymetry profiles in order to assess its accuracy and stability properties and to analyze the impact of the strategy whenever a reference solution was available the relative errors were computed in the and norms at the final time tf of the simulation according to as i tf href tf i tf n h i h tf href tf h i href tf max tf href tf h max tf h where href denotes the reference solution for a model variable h and i is a discrete approximation of the global integral r h mx my dx i h mx my dx computed by an appropriate numerical quadrature rule consistent with the numerical approximation being tested and the maximum is computed 
over all nodal values the test cases considered for the shallow water equations in spherical geometry are a geostrophic flow in particular we have analyzed results in test case of in the configuration least favorable for methods employing meshes the unsteady flow with exact analytical solution described in the polar rotating introduced in aimed at showing that no problems arise even in the case of strong cross polar flows zonal flow over an isolated mountain and wave of wavenumber corresponding respectively to test cases and in for the first two tests analytic solutions are available and empirical convergence tests can be performed the test cases considered for the discretization of equations are inertia gravity waves involving the evolution of a potential temperature perturbation in a channel with periodic boundary conditions and uniformly stratified environment with constant frequency as described in a rising thermal bubble given by the evolution of a warm bubble in a constant potential temperature environment as described in in all the numerical experiments performed for this paper neither spectral filtering nor explicit diffusion of any kind were employed the only numerical diffusion being implicit in the time discretization approach we have not yet investigated to which extent the quality of the solutions is affected by this choice but this should be taken into account when comparing quantitatively the results of the present method to those of reference models such as the one described in in which explicit numerical diffusion is added sensitivity of the comparison results to the amount of numerical diffusion has been highlighted in several model validation exercises see since methods are most efficient for low froude number flows where the typical velocity is much smaller than that of the fastest propagating waves all the tests considered fall in this hydrodynamical regime therefore in order to assess the method efficiency a distinction has been made between the maximum courant number based on the velocity on one hand and on the other hand the maximum courant number based on the celerity or the maximum courant number based on the sound speed defined respectively as cvel max p cp csnd max ccel max where is to be interpreted as generic value of the meshsize in either coordinate direction for the tests in which was employed if pni denotes the local polynomial degree used at timestep tn to represent a model variable inside the i th element of the mesh while pmax is the maximum local polynomial degree considered the efficiency of the method in reducing the computational effort has been measured by monitoring the evolution of the quantities pn itnnadapt pn n n i iter n pmax itnnmax where n is the total number of elements itnnadapt denotes the total number of gmres iterations at time step n for the adapted local degrees configuration and itnnmax the total number of gmres iterations at time step n for the configuration with maximum degree in all elements respectively average values of these indicators over the simulations performed are reported in the following denoted by respectively the error between the adaptive and iter dof solution and the corresponding one obtained with uniform maximum polynomial degree everywhere has been measured in terms of finally in some cases conservation of global invariants has been monitored by evaluating at each time step the following global integral quantities j q n i q tn i q i q where i q has been defined in and q n q tn is the density associated to each 
global invariant according to the choice of q following invariants are considered mass q qmass h total energy q qenerg and potential enstrophy u f q qenstr geostrophic flow we first consider the test case of where the solution is a steady state flow with velocity field corresponding to a zonal solid body rotation and h field obtained from the velocity ones through geostrophic balance all the parameter values are taken as in the flow orientation parameter has been chosen here as making the test more challenging on a mesh error norms associated to the solution obtained on a mesh of elements for different polynomial degrees are shown in tables and for h u and v respectively all the results have been computed at tf days at fixed maximum courant numbers ccel cvel so that different values of have been employed for different polynomial order we remark that the resulting time steps are significantly larger than those allowed by typical explicit time discretizations for analogous dg space discretizations see the results in the spectral decay in the error norms can be clearly observed until the time error becomes dominant for better comparison with the results in we consider again the configuration with ph pu on elements which corresponds to the same resolution in space as for the grid used in while s is used in giving a h the proposed sisldg formulation can be run with s in which case h and the average number of iterations required by the linear solver is for the tr substep and for the substep ph pu s h h h table relative errors on h for different polynomial degrees swe test case with at time tf days ph pu s u u u table relative errors on u for different polynomial degrees swe test case with at time tf days another convergence test was performed for ph pu increasing the number of elements and correspondingly decreasing the value of the time step in this case the maximum courant numbers vary because of the mesh inhomogeneity so that ccel cvel the results are reported in tables and for h u and v respectively the empirical convergence order based on the norm errors has also been estimated showing that in this stationary test convergence rates above the second order of the time discretization can be achieved ph pu s v v v table relative errors on v for different polynomial degrees swe test case with at time tf days nx ny s h h h table relative errors on h for different number of elements ph pu swe test case with at time tf days nx ny s u u u table relative errors on u for different number of elements ph pu swe test case with at time tf days nx ny s v v v table relative errors on v for different number of elements ph pu swe test case with at time tf days unsteady flow with analytic solution in a second time dependent test the analytic solution of derived in has been employed to assess the performance of the proposed discretization more specifically the analytic solution defined in formula of was used since the exact solution is periodic the initial profiles also correspond to the exact solution an integer number of days later the proposed sisldg scheme has been integrated up to tf days with ph and pu on meshes with increasing number of elements while the time step has been decreased accordingly in this case the maximum courant numbers vary because of the mesh dishomogeneity so that ccel cvel error norms for h u v of the integrations have been computed at tf days and displayed in tables an empirical order estimation shows that full second order accuracy in time is attained nx ny s h h h table relative 
errors on h at different resolutions test case nx ny s u u u table relative errors on u at different resolutions test case for comparison analogous errors have been computed with the same discretization parameters but employing the off centered crank nicolson method of with the resulting improvement in the errors between the scheme and the crank nicolson is achieved at an essentially equivalent computational cost in terms of total cpu time employed nx ny s v v v table relative errors on v at different resolutions test case nx ny s h h h table relative errors on h at different resolutions test case with off centered crank nicolson nx ny s u u u table relative errors on u at different resolutions test case with off centered crank nicolson nx ny s v v v table relative errors on v at different resolutions test case with off centered crank nicolson zonal flow over an isolated mountain we have then performed numerical simulations reproducing the test case of given by a zonal flow impinging on an isolated mountain of conical shape the geostrophic balance here is broken by orographic forcing which results in the development of a planetary wave propagating all around the globe plots of the fluid depth h as well as of the velocity components u and v at days are shown in figures the resolution used corresponds to a mesh of elements with ph pu and s giving a courant number ccel in elements close to the poles it can be observed that all the main features of the flow are correctly reproduced in particular no significant gibbs phenomena are detected in the vicinity of the mountain even in the initial stages of the simulation y x figure h field after days isolated mountain wave test case ccel contour lines spacing is the evolution in time of global invariants during this simulation is shown in figures a b c respectively error norms for h and u at different resolutions corresponding to a ccel and ph pu have been computed at tf days and are displayed in tables with respect to a reference solution given by the national center for atmospheric research ncar spectral model at resolution it is apparent the second order of the proposed sisldg scheme in time since as observed in the national center for atmospheric research ncar spectral model incorporates diffusion terms y x figure u field after days isolated mountain wave test case ccel contour lines spacing is m in the governing equations while the proposed sisldg scheme does not employ any diffusion terms nor filtering nor smoothing of the topography for this test it seemed more appropriate to compute relative errors with respect to ncar spectral model solution at an earlier time tf days when it can be assumed that the effects of diffusion have less impact error norms for h and u have been computed at tf days at different resolutions corresponding to a ccel ph pu and displayed in tables nx ny min h h h table relative errors on h at different resolutions isolated mountain wave test case tf days finally the mountain wave test case has been run on the same mesh of elements s with either static or static plus dynamic adaptivity the tolerance for the dynamic adaptivity has been set to results are reported in terms of error norms with respect to a nonadaptive solution at the maximum uniform x y figure v field after days isolated mountain wave test case ccel contour lines spacing is m nx ny min u u u table relative errors on u at different resolutions isolated mountain wave test case tf days resolution and in terms of efficiency gain measured through the saving of 
number of linear solver iterations per as well as iter through the saving of number of degrees of freedom actually used per timestep these results are summarized in tables dof and the use of static adaptivity only resulted in iter average while the use of both static and dynamic adaptivity led to and the distribution of the iter dof statically and dynamically adapted local polynomial degree used to represent the solution after days is shown in figure it can be noticed how even after days higher polynomial degrees are still automatically concentrated around the location of the mountain nx ny s h h h table relative errors on h at different resolutions isolated mountain wave test case tf days nx ny s u u u table relative errors on u at different resolutions isolated mountain wave test case tf days adaptivity h h h static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for isolated mountain wave test case h field adaptivity u u u static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for isolated mountain wave test case u field x j qmass j qenerg days a days b x i qentsr days c figure integral invariants evolution mass a energy b potential enstrophy c isolated mountain wave test case ccel adaptivity v v v static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for isolated mountain wave test case v field y x figure statically and dynamically adapted local ph distribution at days isolated mountain wave test case wave we have then considered test case of where the initial datum consists of a wave of wave number this case actually concerns a solution of the nondivergent barotropic vorticity equation that is not an exact solution of the system for a discussion about the stability of this profile as a solution of see plots of the fluid depth h as well as of the velocity components u and v at days are shown in figures the resolution used corresponds to a mesh of elements with ph pu and s giving a courant number ccel in elements close to poles it can be observed that all the main features of the flow are correctly reproduced x y figure h field after days wave test case ccel contour lines spacing is the evolution in time of global invariants during this simulation is shown in figures a b c respectively error norms for h and u at different resolutions corresponding to a ccel and ph pu have been computed at tf days and are displayed in tables with respect to a reference solution given by the national center for atmospheric research ncar spectral model at resolution it is apparent the second order of the proposed sisldg scheme in time unlike the ncar spectral model the proposed sisldg scheme does not employ any explicit numerical diffusion finally the wave test case has been run on the y x figure u field after days wave test case ccel contour lines spacing is m nx ny min h h h table relative errors on h at different resolutions wave test case same mesh of elements s with either static or static plus dynamic adaptivity the tolerance for the dynamic adaptivity has been set to results are reported in terms of error norms with respect to a nonadaptive solution at the maximum uniform resolution and in terms of efficiency gain measured through the saving of number of linear solver iterations per iter as well as through the saving of number of degrees of freedom actually used per timestep these 
results are summarized in tables dof the use of static adaptivity only resulted in and iter while the use of both static and dynamic adaptivity dof average average and the distribution of the led to statically and dynamically adapted local polynomial degree used to represent the solution after days is shown in figure it can be noticed how even after days and even if the maximum allowed ph y x figure v field after days wave test case ccel contour lines spacing is m nx ny min u u u table relative errors on u at different resolutions wave test case is the use of the adaptivity criterion with leads to the use of at most cubic polynomials for the local representation of adaptivity h h h static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for wave test case h field adaptivity u u u static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for wave test case u field adaptivity v v v static static dynamic table relative errors between statically and statically plus dynamically adaptive and nonadaptive solution for wave test case v field j qmass j qenerg days a days b x i qentsr days c figure integral invariants evolution mass a energy b potential enstrophy c wave test case ccel figure statically and dynamically adapted local ph distribution at days test case nonhydrostatic inertia gravity waves in this section we consider the test case proposed in it consists in a set of waves propagating in a channel with a uniformly stratified reference atmosphere characterized by a constant frequency n the domain and the initial and boundary conditions are identical to those of the initial perturbation in potential temperature radiates symmetrically to the left and to the right but because of the superimposed mean horizontal flow u does not remain centered around the initial position contours of potential temperature perturbation horizontal velocity and vertical velocity time tf s are shown in figures respectively the computed results compare well with the structure displayed by the analytical solution of the linearized equations proposed in and with numerical results obtained with other numerical methods see it is to be remarked that for this experiment elements pu and a timestep s were used corresponding to a courant number csnd z x x figure contours of perturbation potential temperature in the internal gravity wave test z x x figure contours of horizontal velocity in the internal gravity wave test z x x figure contours of vertical velocity in the internal gravity wave test rising thermal bubble z z x z z as nonlinear nonhydrostatic experiment we consider in this section the test case proposed in it consists in the evolution of a warm bubble placed in an isentropic atmosphere at rest all data are as in contours of potential temperature perturbation at different times are shown in figure these results were obtained using elements pu and a timestep s corresponding to a courant number csnd x x x figure contours every k and the zero contour is omitted of perturbation potential temperature in the rising thermal bubble test at time min min min and min respectively in clockwise sense conclusions and future perspectives we have introduced an accurate and efficient discretization approach for typical model equations of atmospheric flows we have extended to spherical geometry the techniques proposed in combining a approach with the time discretization method and with a 
spatial discretization based on adaptive discontinuous finite elements the resulting method is unconditionally stable and has full second order accuracy in time thus improving standard trapezoidal rule discretizations without any major increase in the computational cost nor loss in stability while allowing the use of time steps up to times larger than those required by stability for explicit methods applied to corresponding dg discretizations the method also has arbitrarily high order accuracy in space and can effectively adapt the number of degrees of freedom employed in each element in order to balance accuracy and computational cost the approach employed does not require remeshing and is especially suitable for applications such as numerical weather prediction in which a large number of physical quantities is associated to a given the mesh furthermore although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes like reduced gaussian grids employed by spectral transform models even in applications on simple cartesian meshes in spherical coordinates the approach can cure effectively the pole problem by reducing the polynomial degree in the polar elements yielding a reduction in the computational cost that is comparable to that achieved with reduced grids numerical simulations of classical shallow water and nonhydrostatic benchmarks have been employed to validate the method and to demonstrate its capability to achieve accurate results even at large courant numbers while reducing the computational cost thanks to the adaptivity approach the proposed numerical framework can thus provide the basis of for an accurate and efficient adaptive weather prediction system acknowledgements this research work has been supported financially by the the abdus salam international center for theoretical physics earth system physics section we are extremely grateful to filippo giorgi of ictp for his strong interest in our work and his continuous support financial support has also been provided by the project sviluppi teorici ed applicativi dei metodi and by politecnico di milano we would also like to acknowledge useful conversations on the topics of this paper with erath giraldo restelli wood references baldauf and brdar an analytic solution for linear gravity waves in a channel as a test for numerical models using the nonhydrostatic compressible euler equations quarterly journal of the royal meteorological society bank coughran fichtner grosse rose and smith transient simulation of silicon devices and circuits ieee transactions on electron bates semazzi higgins and barros integration of the shallow water equations on the sphere using a vector scheme with a multigrid solver monthly weather review bonaventura a scheme using the height coordinate for a nonhydrostatic and fully elastic model of atmospheric flows journal of computational physics bonaventura redler and budich earth system modelling algorithms code infrastructure and optimisation springer verlag new york j butcher and chen a new type of rungekutta method applied numerical mathematics casulli and cattani stability accuracy and efficiency of a method for shallow water flow computational mathematics and applications a lagrange multiplier approach for the metric terms of models on the sphere quarterly journal of the royal meteorological society and staniforth a scheme for spectral models monthly weather review cullen a test of a integration technique for a fully compressible model quarterly journal of the royal 
meteorological society davies cullen malcolm mawson staniforth white and wood a new dynamical core for the met office s global and regional modelling of the atmosphere quarterly journal of the royal meteorological society dawson westerink feyen and pothina continuous discontinuous and coupled galerkin finite element methods for the shallow water equations international journal of numerical methods in fluids desharnais and robert errors near the poles generated by a integration scheme in a global spectral model dumbser and casulli a staggered spectral discontinuous galerkin scheme for the shallow water equations applied mathematics and computation gill dynamics academic press giraldo trajectory computations for spherical geodesic grids in cartesian space monthly weather review giraldo hesthaven and warburton discontinuous galerkin methods for the spherical shallow water equations journal of computational physics giraldo kelly and constantinescu implicitexplicit formulations of a nonhydrostatic unified model of the atmosphere numa siam journal of scientific computing giraldo and restelli timeintegrators for a triangular discontinuous galerkin oceanic shallow water model international journal of numerical methods in fluids hortal and simmons use of reduced gaussian grids in spectral models monthly weather review hosea and shampine analysis and implementation of applied numerical mathematics hack and williamson spectral transform solutions to the shallow water test set journal of computational physics carpenter droegemeier woodward and hane application of the piecewise parabolic method ppm to meteorological modeling monthly weather review kelley iterative methods for linear and nonlinear equations siam philadelphia kelly and giraldo continuous and discontinuous galerkin methods for a scalable nonhydrostatic atmospheric model mode journal of computational physics kennedy and carpenter additive schemes for equations applied numerical mathematics lambert numerical methods for ordinary differential systems wiley giraldo handorf and dethloff a discontinuous galerkin method for the shallow water equations in spherical triangular coordinates journal of computational physics handorf and dethloff unsteady analytical solutions of the spherical shallow water equations journal of computational physics le roux spurious inertial oscillations in models journal of computational physics le roux and carey analysis of the discontinuous galerkin linearized system international journal of numerical methods in fluids leveque finite difference methods for ordinary and partial differential equations and problems society for industrial and applied mathematics mcdonald and bates integration of a gridpoint shallow water model on the sphere monthly weather review mcgregor economical determination of departure points for models monthly weather review morton on the analysis of finite volume methods for evolutionary problems siam journal of numerical analysis morton priestley and stability of the scheme with inexact integration rairo modellisation matemathique et analyse numerique morton and methods and their supraconvergence numerische mathematik nair thomas and loft a discontinuous galerkin global shallow water model monthly weather review nair thomas and loft a discontinuous galerkin transport scheme on the cubed sphere monthly weather review priestley exact projections and the method a realistic alternative to quadrature journal of computational physics restelli bonaventura and sacco a discontinuous galerkin method for 
scalar advection by incompressible flows journal of computational physics restelli and giraldo a conservative discontinuous galerkin formulation for the equations in nonhydrostatic mesoscale modeling siam journal of scientific computing ripodas gassmann majewski giorgetta korn kornblueh wan bonaventura and heinze icosahedral shallow water model icoswm results of shallow water test cases and sensitivity to model parameters geoscientific model development ritchie application of the method to a spectral model of the shallow water equations monthly weather review rosatti bonaventura and cesari semilagrangian environmental modelling on cartesian grids with cut cells journal of computational physics saad and schultz gmres a generalized minimal residual algorithm for solving nonsymmetric linear systems siam journal on scientific and statistical computing skamarock and klemp efficiency and accuracy of the technique monthly weather review staniforth white and wood treatment of vector equations in momentum equation quarterly journal of the royal meteorological society temperton hortal and simmons a global spectral model quarterly journal of the royal meteorological society thuburn and li numerical simulations of waves tellus a thuburn and white a geometrical view of the approximation with application to the semilagrangian departure point calculation quarterly journal of the royal meteorological society tumolo bonaventura and restelli a discontinuous galerkin method for the shallow water equations journal of computational physics january walters numerically induced oscillations in approximations to the equations international journal of numerical methods in fluids walters and carey analysis of spurious oscillation modes for the and equations computers and fluids williamson drake hack jacob and swarztrauber a standard test set for the numerical approximations to the shallow water equations in spherical geometry journal of computational physics zienkiewicz and kelly the hierarchical concept in finite element analysis computers and structures
fundamental diagram of rail transit and its application to dynamic assignment aug toru kentaro daisuke august abstract urban rail transit often operates with high service frequencies to serve heavy passenger demand during rush hours such operations can be delayed by train congestion passenger congestion and the interaction of the two delays are problematic for many transit systems as they become amplified by this interactive feedback however there are no tractable models to describe transit systems with dynamical delays making it difficult to analyze the management strategies of congested transit systems in general solvable ways to fill this gap this article proposes simple yet physical and dynamic models of urban rail transit first a fundamental diagram of a transit system relation among and is analytically derived by considering the physical interactions in delays and congestion based on microscopic operation principles then a macroscopic model of a transit system with demand and supply is developed as a continuous approximation based on the fundamental diagram finally the accuracy of the macroscopic model is investigated using a microscopic simulation and applicable range of the model is confirmed keywords public transport rush hour fundamental diagram kinematic wave theory mfd dynamic traffic assignment introduction urban rail transit such as metro systems plays a significant role in handling the transportation needs of metropolitan areas vuchic its most notable usage is the morning commute in which heavy passenger demand is focused into a short time period to obtain general policy implications for management strategies of transit systems pricing gating scheduling many studies have theoretically analyzed such situations under certain simplifications such as the static travel time of transit operations de cea and tabuchi kraus and yoshida tian et al gonzales and daganzo trozzi et al de palma et al b it is known that urban mass transit often suffers from delays caused by congestion even if no serious incidents or accidents occur kato et al tirachini et al kariyazaki et al this means that the dynamical aspect of transit systems is important during periods of congestion corresponding author tokyo institute of technology meguro tokyo japan institute of industrial science the university of tokyo komaba meguro tokyo japan tokyo institute of technology meguro tokyo japan for instance in the tokyo metropolitan area tma which is one of the most populated regions in the world rail transit systems are essential and operated with high service frequency up to trains per hour per line headway of two minutes to serve the heavy passenger demand during peak hours kariyazaki unfortunately even if there are no accidents chronic delays occur almost daily and passengers experience longer and unreliable travel times due to congestion for example the mean delay of one of the major transit lines in tokyo during the rush hour is about eight minutes whereas the standard deviation of the delay is about two minutes iwakura et al kariyazaki estimated that on a typical weekday three million commuters across the entire tma experience such delays and the social cost caused by the delay corresponds to billion japanese yen approximately billion usd per year appropriate management strategies to solve this issue are therefore desirable in general the following types of congestion are observed in urban rail transit congestion involving consecutive trains using the same tracks also known as delay carey and congestion of 
passengers at station platforms namely bottleneck congestion at the doors of a train while it is stopped at a station wada et al kariyazaki et al these two types of congestion interact with each other and cause delay newell and potts wada et al kato et al tirachini et al kariyazaki et al cuniasse et al for example can prolong the time that a train spends at a station this extended dwell time interrupts the operation of subsequent trains and causes at times of high service frequency as passenger throughput deteriorates when occurs the passenger congestion at stations is a vicious cycle the extreme case is known as bunching newell and potts cuniasse et al reported production loss phenomena which occur almost daily in a railway system due to congestion moreover in the long term such chronic delays could affect passengers departure time choice kato et al observed this phenomenon in tma kim et al also reported that route choice of metro passengers is affected by congestion crowding and delay therefore these congestion dynamics affect both the and dynamics of transit systems for this reason it would be preferable to consider these dynamical aspects of transit systems in order to obtain general policy implications for transit management under heavy demand during rush hours similar to road traffic congestion problems dynamic traffic assignment szeto and lo iryo however to the authors knowledge no study has investigated such problems in transit systems in the aforementioned theoretical studies on transit commuting de cea and and others the travel time of a transit system is assumed to be constant determined by static models meaning that the dynamical aspect is neglected one reason for this might be that we do not have tractable models of transit systems that can consider the dynamics of delay and to fill this gap this article proposes tractable models of the dynamics of urban rail transit considering the physical interaction between and the that differs from which results in discomfort due to standing and crowding but is not necessarily cause any delay directly operation models have been proposed by considering the detailed mechanism of such delay and congestion see vuchic koutsopoulos and wang parbo et al cats et al li et al alonso et al and references therein and these have been used to develop efficient operation schemes however their purposes are optimization and evaluation such as it would be difficult to use them to obtain general policy implications for management strategies as they are essentially complex and intractable remainder of this article is organized as follows in section a simple and tractable operation model of rail transit is formulated by considering and the interaction between them the model describes the theoretical relation among and under ideal is a fundamental diagram fd in section a macroscopic loading model of a transit system in which demand and supply change dynamically is developed based on the proposed fd the model is based on a continuous approximation approach with the fd which is widely used for automobile traffic is an model the model is called macroscopic because it describes the aggregated behavior of trains and passengers in a certain spatial domain in section the approximation accuracy and other properties of the proposed macroscopic model are investigated through a comparison with the results of a microscopic simulation section concludes this article fundamental diagram of rail transit system in this section we analytically derive an fd of a rail transit 
system based on microscopic operation principles. The FD is defined here as the relation among train flow, train density, and passenger flow.

Assumptions of the rail transit system. We assume two principles of rail transit operation, namely the train's dwell behavior at a station for passenger boarding and the cruising behavior on the railroad; note that they are equivalent to those employed by Wada et al. (Both rules are illustrated in the short numerical sketch at the end of this subsection.)

The passenger boarding time is modeled using a bottleneck model: the flow rate of passenger boarding onto a dwelling train is assumed to be a constant μ, and there is a buffer time g_b (the time required for door opening and closing) in the dwell time. The dwell time of a train at a station, t_b, is expressed as

t_b = n_p / μ + g_b,

where n_p is the number of passengers waiting to board the train at the station. (The parameter n_p can also be interpreted as the total number of passengers who are getting on and off the train; in fact, this more general definition is preferable in some sense, but it would complicate the following discussion, so we neglect passengers getting off the train. By carefully distinguishing the two types of passengers, the following discussion remains valid and the final results are not affected.) Passengers waiting for a train at a station are assumed to board the first train that arrives; this means that the passenger storage capacity of a train is assumed to be unlimited.

The cruising behavior of a train is modeled using Newell's simplified car-following model (Newell), which is a special case of the Lighthill-Whitham-Richards road traffic flow model (Lighthill and Whitham; Richards). In this model, a vehicle travels as fast as possible while maintaining the minimum safety clearance. Specifically, let x_m(t) be the position of train m at time t; it is described as

x_m(t) = min{ x_m(t - τ) + v_f τ, x_{m-1}(t - τ) - δ },

where m - 1 indicates the preceding train, τ is the physical minimum headway time (similar to a reaction time) of the vehicle, v_f is the free-flow speed (maximum speed), and δ is the minimum spacing. The first term in the min operation indicates that the traffic is in the free-flow regime, where the train can travel at its maximum speed; the second term indicates that the traffic is in the congested regime, where the train catches up with the preceding one and is required to reduce its speed so as to maintain the safety headway and spacing simultaneously. In the critical regime, a train's speed is v_f and the train just catches the preceding one. Without loss of generality, we introduce a variable buffer headway time h_f ≥ 0 to describe traffic in the free-flow regime.

Steady state of the rail transit system. Here we consider the steady state of rail transit operation under the assumptions stated above. The steady state is an idealized traffic state that does not change over time, and its traffic state variables (typically a combination of flow, density, and speed) are characterized by a certain relation called an FD of the traffic flow (Daganzo). In the case of rail transit operation, the steady state can be defined as a state that satisfies all of the following conditions: the model parameters, namely μ, g_b, τ, δ, and v_f, are constant; the distance between adjacent stations, l, is constant; the headway time between successive trains, h, is constant; the cruising speed v of all trains is the same; and the passenger arrival flow at each station platform, q_p, is the same. Additionally, we assume that all trains stop at every station. In order for the transit system to be operable, q_p < μ is assumed; otherwise, passenger boarding would never end. Under the steady state, the dwell time of a train at a station, given by the boarding model above, can be transformed to t_b = q_p h / μ + g_b, because n_p is equal to q_p h.
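The two operation principles above can be summarized in a few lines of code. The sketch below implements the bottleneck boarding rule for the dwell time and Newell's simplified car-following update; all parameter values are hypothetical placeholders chosen only for illustration and are not the values used later in the numerical examples.

```python
# Minimal sketch of the two microscopic operation rules.
# All parameter values below are illustrative placeholders.
MU    = 1.5     # constant passenger boarding rate [pax/s]
GB    = 15.0    # buffer (door opening/closing) time [s]
VF    = 20.0    # free-flow (maximum) cruising speed [m/s]
TAU   = 60.0    # minimum headway time [s]
DELTA = 150.0   # minimum spacing [m]

def dwell_time(n_p):
    """Dwell time at a station: t_b = n_p / mu + g_b (bottleneck boarding model)."""
    return n_p / MU + GB

def newell_position(x_m_prev, x_lead_prev):
    """Newell's simplified car-following rule over one headway increment tau:
    x_m(t) = min( x_m(t - tau) + v_f * tau,   # free-flow term
                  x_lead(t - tau) - delta )   # congested (car-following) term
    """
    return min(x_m_prev + VF * TAU, x_lead_prev - DELTA)

# Example: a train dwelling for 90 waiting passengers, then following its leader.
print(dwell_time(90))               # 75.0 seconds
print(newell_position(0.0, 800.0))  # 650.0 m: constrained by the leader
print(newell_position(0.0, 5000.0)) # 1200.0 m: free-flow
```

A full microscopic simulation would simply alternate these two rules: a train dwells at a station for dwell_time(n_p) and then advances according to newell_position until it reaches the next station. Note that the control strategy of the transit operation need not be specified here, because reasonable control strategies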
should follow such steady operation if there is no disturbance otherwise train bunching will occur wada et al transit systems under different steady states are illustrated as diagrams in fig where the horizontal axis indicates the time of day the vertical axis indicates space and the curves indicate train trajectories in each train m arrives at and departs from station i then travels to station i at cruising speed v and finally arrives at station i under different conditions in fig the speed v is equal to the speed vf and hf is greater than zero therefore the state is classified into the regime in fig the speed is equal to vf and hf is equal to zero therefore the state is classified into the critical regime in fig the speed is less than vf therefore the state is classified into the congested regime fundamental diagram in general the following can be considered as the traffic state variables of a rail transit system q k qp kp space x train m train m station i hf v l qp gb station i h time t a regime v vf hf space x train m train m station i v l qp gb station i h time t b critical regime v vf hf space x train m station i train m v l qp gb station i h time t c congested regime v vf hf figure diagrams of rail transit system under steady states among these there are three independent variables for example the combination of q k and qp this is because of the identities in continuum flow namely q and qp kp and the identity now suppose that the relation among the independent variables of the traffic state under every steady state can be expressed using a function q as q q k qp the function q can be regarded as an fd of the rail transit system if the rail transit operation principle follows eqs and the fd function can be specified as lk qp if k k qp gb q k qp k k qp q qp if k k qp l gb l with qp gb l gb k qp qp gb l gb l q qp where q qp and k qp represent and respectively at a critical state with qp for the derivation of eqs see appendix a discussions features of fundamental diagram the fd has the following features that can be derived analytically from eq note that they can easily be found in the numerical example in fig the fd can be interpreted as a function that determines q and under a given k supply and qp demand for the given technical parameters of the transit system gb vf l although the fd equation looks complicated it represents a simple relation namely a piecewise linear triangular relation between q and k under fixed qp as mentioned the traffic state of a transit system can be categorized into three regimes critical and congested as in the standard traffic flow theory therefore there is a critical k qp for any given qp train traffic is in the regime if k k qp in the critical regime if k k qp or in the congested regime otherwise the congested regime can be considered as inefficient compared with the regime because the congested regime takes more time to transport the same amount of passengers the critical regime is the most efficient in terms of travel time as well as number of passengers per train qp however the critical regime requires more trains higher density than the regime therefore it may not be the most efficient if the operation cost is taken into account that the mean speed differs from the cruising speed v the former takes the dwelling time at a station and cruising between stations into account whereas the latter only considers the cruising time even in the critical regime the mean speed is inversely proportional to passenger demand qp this means that travel time increases 
as passenger demand increases in addition the size of the feasible area of q k narrows as qp increases thus the operational flexibility of the transit system declines as the passenger demand increases flow and density in the critical regime satisfy the following relations q qp l k qp l l here we have assumed l therefore the critical regime can be represented as a straight line whose slope is either positive or negative in the plane this implies a qualitative difference between transit systems specifically if the slope is positive a transit operation with constant would transition from the regime to the congested regime as passenger demand increases fig on the contrary if the slope is negative such an operation would transition from to congested as passenger demand decreases this seems paradoxical but it is actually reasonable because the operational efficiency can be degraded if the number of trains is excessive compared to passenger demand note that eqs and are consistent with edie s generalized definition edie of traffic states therefore the fd is consistent with the fundamental definition of traffic for transit operation edie s traffic state is derived h qp gb lh l qp gb one can easily confirm that eqs satisfy the fd equation numerical example for ease of understanding a numerical example is shown in fig the parameter values are presented in table in the figure the horizontal axis represents k the vertical axis represents q and the plot color represents qp the slope of the straight line from a traffic state to the origin represents the mean speed of the state the features of the fd described in section can be easily confirmed for example the figure can be read as follows suppose that the passenger demand per station is qp if the number of trains in the transit system is given by the k then the resulting train traffic has a of q and a mean speed of this is the traffic state in the regime there is a congested state corresponding to a state for the aforementioned state with q k the corresponding congested state is the critical state under qp is notice that this state has the fastest average speed under the given relations are derived by applying edie s definition to the minimum component of the diagram of the steady state which is a area in fig whose vertexes are points of i train m departs from station i ii train m arrives at station i iii train m arrives at station i and iv train m departs from station i figure numerical example of the fd table parameters of the numerical example parameter u gb l value h km h km passenger demand the triangular relation mentioned before is clearly shown in the figure the left edge of the triangle corresponds to the regime the top vertex corresponds to the critical regime and the right edge corresponds to the congested regime validity of the assumptions here we discuss the relation between the operation principles the proposed fd and an actual transit system first of all it is worth mentioning that all parameters in the proposed model have an explicit physical meaning therefore the parameter calibration required to approximate an actual transit system is relatively simple in the train cruising model each train is assumed to maintain a headway that is greater than the given minimum headway this is reasonable similar models with minimum headways have been used in existing studies carey and higgins and kozan huisman et al to analyze the effect of train congestion delay additionally the model can be considered as the moving block control which is one of the 
standard operation schemes for trains wada et al under the presence of adaptive control strategies such as and control daganzo wada et al the steady state is likely to be realized this is because the aim of such adaptive control is usually to eliminate other words such control makes the operation steady the passenger boarding model namely the bottleneck model is a coarse approximation of actual phenomena that would be fairly reasonable this is because the capacity of a bottleneck for ordinary pedestrian flow is often considered to be constant lam et al hoogendoorn and daamen however some observational studies have reported that in heavily crowded conditions the boarding time could increase nonlinearly as passenger numbers increase probably due to interference between passengers and a lack of space in carriages harris tirachini et al moreover there is no stock capacity for passengers in the proposed model therefore states with excessively large qp and small q in the fd might correspond to unrealistic situations this is a limitation of the current model nevertheless the scale of can be derived by the model that is qp represents the number of passengers per train relation to the macroscopic fundamental diagram the proposed fd resembles the macroscopic fundamental diagram mfd geroliminis and daganzo daganzo and its extensions geroliminis et al chiabaut they are similar in the following sense first they both consider dynamic traffic second they both describe the relations among macroscopic traffic state variables in which the traffic is not necessarily steady or homogeneous at the local scale they use aggregations based on edie s definition third they both have unimodal relations meaning that there are and congested regimes where the former has higher performance than the latter in addition there is a critical regime where the throughput is maximized therefore it is expected that existing approaches for mfd applications such as modeling control and the optimization of transport systems daganzo geroliminis and levinson geroliminis et al fosgerau are also suitable for the proposed transit fd however there are substantial differences between the proposed fd and the existing concepts in comparison with the original mfd geroliminis and daganzo daganzo and its railway variant cuniasse et al the proposed fd has an additional dimension that is in comparison with the mfd of geroliminis et al which describes the relations among total traffic flow car density and bus density in a traffic network the proposed fd explicitly models the physical interaction among the three variables in comparison with the passenger mfd of chiabaut which describes the relation between passenger flow and passenger density when passengers can choose to travel by car or bus in the proposed fd passenger demand can degrade the performance speed of the vehicles because of the inclusion of the boarding time dynamic model based on fundamental diagram recall that the proposed fd describes the relationship among traffic variables under the steady state therefore the behavior of a dynamical system in which demand and supply change over time is not described by the fd itself this feature is the same as in the road traffic fd and mfds in this section we formulate a model of urban rail transit operation where the demand and supply change dynamically in the proposed model individual train and passenger trajectories are not explicitly described therefore the model is called macroscopic the proposed model is based on an model merchant and 
nemhauser carey and mccartney in which the proposed fd is employed as the function in other words the transit system is considered as an system as illustrated in fig the modeling approach is often employed for traffic approximations and analysis using mfds such as optimal control to avoid congestion daganzo and analyses of user equilibrium and social optimum in morning commute problems geroliminis and levinson fosgerau the advantage of this approach is that it may be possible to conduct mathematically tractable analysis of dynamic and complex transportation systems where the detailed traffic dynamics are difficult to model in a tractable is the case for transit operations train a t its cumulative a t passenger ap t its cumulative ap t railway system internal average q k t ap t dynamics of internal average dk t a t q k t ap t dt travel time t t train q k t ap t its cumulative d t passenger dp t determined by the model its cumulative dp t figure railway system as an system formulation at time t let a t be the inflow of trains to the transit system ap t be the inflow of passengers d t be the outflow of trains from the transit system and dp t be the outflow of passengers we set the initial time to be and therefore t let a t ap t d t and dp t be the cumulative values rt of a t ap t d t and dp t respectively a t a s ds let t t be the travel time of a train that entered the system at time t and let its initial value t be given by the travel time under q a and qp ap to simplify the formulation the trip length of the passengers is assumed to be equal to that of the this means that t is the travel time of both the trains and the passengers these functions can be interpreted as follows a t trains departure rate from their origin station at time ap t passengers arrival rate at the platform of their origin station at time d t trains arrival rate at their final destination station at time dp t passengers arrival rate at their destination station at time t t travel time of a train and passengers from origin departs at time t to destination note that the arrival time at the destination is t t t therefore in reality a and ap will be determined by the transit operation plan and passenger departure time choice respectively d dp and t are endogenously determined through the operational dynamics in accordance with modeling the train traffic is modeled as follows first the exit flow d t is assumed to be d t q k t ap t assumption is reasonable if the average trip length is shared by trains and passengers if they are different a modification such as tp t t t where is the ratio of average trip length of the passengers to that of the trains would be useful where the fd function q is considered to be an this means that the dynamics of the transit system are modeled by taking the conservation of trains into account as follows dlk t a t q k t ap t dt where l represents the length of the transit route this model has been employed in several studies to represent the macroscopic behavior of a transportation system merchant and nemhauser carey and mccartney daganzo note that the average k t can be defined as k t a t d t l which is consistent with eq based on above functions and equations t a t ap t and eqs and d t and d t can be sequentially other words the train traffic can be computed using the initial and boundary conditions and the model based on the fd the passenger traffic can be derived as follows by the definition of the travel time of trains a t d t t t holds as a t and d t have already been obtained the travel 
time t t such that eq holds can be computed then dp t and dp t can be computed from the definition of the travel time of passengers which is also t t ap t dp t t t discussion the proposed macroscopic model computes train d t and passenger d t based on the fd function q the initial and boundary conditions a t ap t and t the notable feature of the model is that it is highly tractable as it is based on an model therefore we expect the proposed model to be useful for analyzing various management strategies for transit systems dynamic pricing during the morning commute the proposed model can accurately approximate the macroscopic behavior of a transit operation with operation small headway time under moderate changes in demand supply this is because models are reasonable when the changes in inflow are moderate compared with the relaxation time of the dynamical system because such situations often occur in busy metropolitan subway systems which suffer from congestion and delay during rush hours because of heavy demand the model may be useful for investigating such congestion problems however the accuracy of the model is expected to decrease if the operation has low steadiness such as in the event of train bunching in the next section the quantitative accuracy of the model is verified using numerical experiments the model can also derive the social cost and benefit of a transit system for example the generalized travel cost of passengers travel time schedule delay crowding disutility can be calculated from ap t and dp t in addition the operation cost of the transit system can be calculated from a t d t and the fd parameters np is considered as the sum of the number of passengers who are boarding and alighting as mentioned in note we can simply define d t to be equal to q k t ap t dp t such a model is also computable using a similar procedure verification of the macroscopic model in this section we verify the quantitative accuracy of the macroscopic model by comparing its results with that of the microscopic model eqs and the validity of the macroscopic model can be investigated by comparing its solutions with those of microscopic models using the same initial and boundary conditions and model parameters simulation setting the parameter values of the transit operation are listed in table for both the microscopic and macroscopic models the railroad is considered to be a corridor the stations are equally spaced at intervals of l and there are a total of stations trains enter the railroad with flow a t in the microscopic model a discrete train enters the railroad from the upstream boundary station if ba t c integer part of a t is incremented in the microscopic model trains leave the railroad from the downstream boundary station without any restrictions other than the passenger boarding and minimum headway clearance passengers arrive at each station with flow ap t the functions a t and ap t are exogenously determined to mimic morning rush hours with each having a peak at t the flow before the peak time increases monotonically whereas the flow after the peak time decreases other words the a t and ap t are considered these functions are specified as if t a a a a a a if t a t a if t if t ap ap ap ap ap ap if t ap t ap if t where the values of a a ap and ap are given as scenario parameters the simulation duration is set to h for the baseline scenario in section and to h for the sensitivity analysis in section the reason will be explained later the microscopic model without any control is asymptotically 
unstable as proven by wada et al this means that demand and supply always cause train bunching making the experiment unrealistic and useless therefore the control scheme proposed by wada et al is implemented in the microscopic model to prevent bunching and stabilize the operation this scheme has two control measures holding extending the dwell time and an increase in speed similar to daganzo the former is activated by a train if its following train is delayed and can be represented as an increase in gb in the microscopic model the latter is activated by a train if it is delayed and can be represented as an increase in vf up to a maximum allowable speed vmax in this experiment vmax is set to and vf is this control scheme can be considered realistic and reasonable as similar operations are executed in practice see appendix b for further details of the control scheme results first to examine how well the proposed model reproduces the behavior of the transit system under conditions the results for the baseline scenario are presented in section a figure result of the microscopic model in the baseline scenario a train b passenger figure result of the macroscopic model in the baseline scenario sensitivity analysis of the conditions is then conduced and applicable ranges of the proposed model are investigated in section baseline scenario the baseline scenario with parameter values a a ap and ap is investigated first a solution of the microscopic model is shown in fig as a diagram the colored curves represent the trajectories of each train traveling in the upward direction while stopping at every station around the peak time period t train congestion occurs namely some of the trains stop occasionally between stations in order to maintain the safety interval the congestion is caused by heavy passenger demand therefore the situation during rush hour is reproduced the result given by the macroscopic model is shown in fig as cumulative plots fig shows the cumulative curves for the trains where the blue curve represents the inflow a and the red curve represents the outflow fig shows those of passengers in the same manner congestion and delay can be observed around the peak period it is more remarkable in the passenger traffic for example during the peak time period dp t is less than ap t and ap where is time such that t t this means that the throughput of the transit system is reduced by the heavy passenger demand consequently t t is greater during peak hours than in periods such as t meaning that delays occur due to the congestion the macroscopic and microscopic models are compared in terms of the cumulative number of figure comparison between the macroscopic and microscopic models in the baseline scenario trains in fig in the figure the solid curves denote the macroscopic model and the dots denote the microscopic model it is clear that d in the macroscopic model follows that of the microscopic model fairly precisely for example the congestion and delay during the peak time period are captured very well however there is a slight bias the macroscopic model gives a slightly shorter travel time this is mainly due to the unsteady state train bunching generated in the microscopic model the delay caused by such bunching can not be recovered by the microscopic model under the implemented control scheme for details see appendix b it means that if the control is the bias could be reduced sensitivity analysis of the conditions the accuracy of the macroscopic model regarding the dynamic patterns of is now 
examined this is worth investigating quantitatively because it is qualitatively clear that the model is valid if the speed of changes is sufficiently small as discussed in section specifically the sensitivity of the peak passenger demand ap and train supply a is evaluated by assigning various values to these parameters the simulation duration is set to h to take the residual delay after t h in some scenarios into account the other parameters are the same as in the baseline scenario the results are summarized in fig fig shows the relative difference in total travel time ttt of trains between the microscopic and macroscopic models for various peak passenger flows negative values indicate that ttt of the macroscopic model is smaller the relative difference can be considered as an error index of the macroscopic model fig compares the absolute value of ttt in each model note that there are some missing values such as the relative error with a and ap this is due to that the macroscopic model does not derive a solution under the given conditions k t exceeds the jam density this corresponds to gridlock in the transportation system according to the results in fig the accuracy of the macroscopic model is high when the peak passenger demand is low these are the expected results as the speed of demand change is slow in these cases ttt given by the macroscopic model is almost always less than that of the microscopic model this might be due to the aforementioned inconsistency between the steady state assumption of the macroscopic model and control of the microscopic model as the peak passenger demand increases the relative error increases gradually when the demand is low and increases suddenly when the demand exceeds a certain value this sudden increase in the a relative error regarding passenger demand b absolute values of ttt figure comparison between the microscopic and macroscopic models under different conditions error is because of extraordinary train bunching in the microscopic model as confirmed by fig the absolute value of ttt in the microscopic model also exhibits a sudden increase when the demand exceeds the certain value this bunching often occurs in cases with excessive passenger demand such as ap such demand can be considered as unrealistically excessive as the dwell time of a train at a station is longer than the cruising time between adjacent stations in such situations this usually does not occur even in rush hours as for the sensitivity of the train supply a there is a weak tendency for faster variations in supply to cause larger errors this is also the expected result from these results we conclude that the proposed model is fairly accurate under ordinary passenger demand although it is not able to reproduce extraordinary and unrealistic situations for daily travel with excessive train bunching this might be acceptable for representing transit systems during usual rush hours conclusion in this paper the following three models of an urban rail transit system have been analyzed microscopic model a model describing the trajectories of individual trains and passengers based on newell s model and passenger boarding model this is represented in eqs and and can be solved using simulations fundamental diagram an exact relationship among and in the microscopic model under a steady state this is represented in eqs it is a equation macroscopic model a model describing train and passenger traffic using an model whose function is the fd this is represented in eqs and and can be solved using simple 
simulations the fd and macroscopic model are the original contributions of this study whereas the microscopic model was proposed by wada et al the microscopic model can be considered as a approximation of an actual transit system the fd represents the exact relation among steady state traffic variables in the microscopic model the macroscopic model can be considered as an macroscopic approximation of the behavior of the microscopic model the fd itself implies several insights on transit system in addition according to the results of the numerical experiment the macroscopic model can reproduce the behavior of the microscopic model accurately except for cases with unrealistically excessive demands because of the simplicity mathematical tractability and good approximation accuracy of the proposed fd and macroscopic model in ordinary situations we expect that they will contribute for obtaining general policy implications on management strategies of rail transit systems such as pricing and control for morning commute problems some improvements to the proposed model can be considerable first the model ignores this could be solved using a nonlinear passenger boarding model instead of eq or by introducing a disutility crowding term into the departure time choice problem in the macroscopic model as in tian et al de palma et al second the variability and reliability of a transit system robustness against unpredictable disturbances travel time reliability issues are not considered by the proposed model a stochastic extension of the model might be useful for this problem third the extension to cases with heterogeneity such as spatially heterogeneous station distributions and passenger demand would make the model considerably more realistic as an application of the proposed model the following morning commute problems are being investigated by the authors user equilibrium departure time choice problem find the equilibrium ap t for a given a t and desired arrival time of passengers zp t optimal demand control problem find ap t such that the total travel cost is minimized for a given a t and zp t and optimal demand and supply control problem find ap t and a t such that total travel cost is minimized for a given zp t the solutions to these problems would provide general insights into both demand and supply management strategies for transit systems dynamic pricing operation planning furthermore multimodal commuting problems combined with travel mode choices trains modeled by the proposed fd and cars and buses modeled by mfds are also considerable acknowledgements part of this research was financially supported by a kakenhi for scientific research b a derivation of fd this appendix describes derivation of the fd expressed in eqs consider a looped rail transit system under steady state operation let l be the length of the railroad s be the number of the stations m be the number of trains h be the headway time of the operation tb be the dwelling time of a train at a station tc be the cruising time of a train between adjacent stations and qp be the passenger demand flow rate per station note that the distance between adjacent stations l is and the number of passengers boarding a train at each station is qp the headway time of the operation is derived as follows the round trip time of a train in the looped railroad is s tb tc and m trains pass the station during that time then the identities n h s tb tc and gb tc qp hold moreover by the definition of headway and newell s rule the headway time h must satisfy hf 
v gb hf qp h tb this reduces to the relation in a regime is derived as follows as the is and is by definition eq can be transformed to q k kl qp gb the and under a critical state q k are derived as follows substituting v vf and hf into eq and using the identity q we obtain qp gb qp gb k gb l by where is the minimum where the is zero namely qp l the relation in a congested regime is derived as follows first the relation in a congested regime can be easily derived from the relation with hf and the identity q k v qp gb gb l now consider which is identical to this can be derived as dq dk l gb l which is constant and negative therefore the relation is linear in a congested regime then recalling that the linear curve passes the point q k with a slope of the relation in a congested regime can be derived as q k with k l gb l dq dk eqs are constructed based on eqs and q b adaptive control scheme in the microscopic model this appendix briefly explains the adaptive control scheme for preventing train bunching proposed by wada et al this scheme consists of two control measures holding at a station and increasing the maximum speed during cruising first the scheme modifies the buffer time for dwelling originally defined as gb in eq of train m at station i to gb max gb em i em i i i i with where i tm i tm i represents the delay tm i represents the time at which train m arrives at station i tm i represents the scheduled time without delay at which train m should arrive at station i and is a weighting parameter this scheme represents a typical holding control strategy similar to the bunching prevention method of daganzo which extends the dwelling time of a vehicle if the headway to the preceding vehicle is too small and vice versa second the scheme modifies the cruising speed vf such that the interstation travel time is reduced by min max em i gb this means that in the event of a delay the train tries to catch up by increasing its cruising speed up to the maximum allowable speed vmax which implies that the speed vf is a buffered maximum speed meanwhile the proposed train operation model in this study does not have a is a operation therefore in this study the scheduled headway in the scheme tm i i is approximated by the planned frequency tm i thus we set and substitute em i with tm i i tm i the stationary state of the operational dynamics under the original scheme is identical to the steady state defined in section in the case of the scheme makes the train operation asymptotically stable meaning that the operation schedule is robust to small disturbances in the case of the scheme prevents the propagation and amplification of delay but does not recover the original schedule the small shift found in fig is due to note that these control measures do not interrupt passenger boarding or violate the safety clearance between trains meaning that most of the fundamental assumptions of the proposed fd are satisfied references alonso munoz ibeas and moura a congested and dwell time dependent transit corridor assignment model journal of advanced transportation carey and stochastic approximation to the effects of headways on delays of trains transportation research part b methodological carey and mccartney an model used in dynamic traffic assignment computers operations research o cats j west and eliasson a dynamic stochastic model for evaluating congestion and crowding effects in transit systems transportation research part b methodological chiabaut evaluation of a multimodal urban arterial the passenger macroscopic 
fundamental diagram transportation research part b methodological cuniasse buisson rodriguez teboul and de almeida analyzing railroad congestion in a dense urban network through the use of a road traffic network fundamental diagram concept public transport daganzo fundamentals of transportation and traffic operations pergamon oxford daganzo urban gridlock macroscopic modeling and mitigation approaches transportation research part b methodological daganzo a approach to eliminate bus bunching systematic analysis and comparisons transportation research part b methodological de cea and transit assignment for congested public transport systems an equilibrium model transportation science de palma kilani and proost discomfort in mass transit and its implication for scheduling and pricing transportation research part b methodological de palma lindsey and monchambert the economics of crowding in public transport working paper edie discussion of traffic stream measurements and definitions in almond editor proceedings of the international symposium on the theory of traffic flow pages fosgerau congestion in the bathtub economics of transportation geroliminis and daganzo macroscopic modeling of traffic in cities in transportation research board annual meeting geroliminis and levinson cordon pricing consistent with the physics of overcrowding in lam wong and lo editors transportation and traffic theory pages springer geroliminis haddad and ramezani optimal perimeter control for two urban regions with macroscopic fundamental diagrams a model predictive approach ieee transactions on intelligent transportation systems geroliminis zheng and ampountolas a macroscopic fundamental diagram for mixed urban networks transportation research part c emerging technologies gonzales and daganzo morning commute with competing modes and distributed demand user equilibrium system optimum and pricing transportation research part b methodological harris train boarding and alighting rates at high passenger loads journal of advanced transportation higgins and kozan modeling train delays in urban networks transportation science hoogendoorn and daamen pedestrian behavior at bottlenecks transportation science huisman kroon lentink and vromans operations research in passenger railway transportation statistica neerlandica iryo properties of dynamic user equilibrium solution existence uniqueness stability and robust solution methodology transportmetrica b transport dynamics iwakura takahashi and morichi a multi agent simulation model for estimating train delays under urban rail operation in transport policy studies s review volume institute for transport policy studies in japanese kariyazaki investigation of train delay recovery mechanism and delay prevention schemes in urban railway phd thesis national graduate institute for policy studies in japanese kariyazaki hibino and morichi simulation analysis of train operation to recover delay under intervals case studies on transport policy kato kaneko and soyama choices of urban rail passengers facing unreliable service evidence from tokyo in proceedings of the international conference on advanced systems for public transport kim hong ko and kim does crowding affect the path choice of metro passengers transportation research part a policy and practice koutsopoulos and wang simulation of urban rail operations application framework transportation research record journal of the transportation research board kraus and yoshida the commuter s decision and optimal pricing and service in urban 
mass transit journal of urban economics lam cheung and poon a study of train dwelling time at the hong kong mass transit railway system journal of advanced transportation li dessouky yang and gao joint optimal train regulation and passenger flow control strategy for metro lines transportation research part b methodological lighthill and whitham on kinematic waves ii a theory of traffic flow on long crowded roads proceedings of the royal society of london series a mathematical and physical sciences merchant and nemhauser a model and an algorithm for the dynamic traffic assignment problems transportation science newell a simplified theory a lower order model transportation research part b methodological newell and potts maintaining a bus schedule in proceedings of the australian road research board volume parbo o nielsen and prato passenger perspectives in railway timetabling a literature review transport reviews richards shock waves on the highway operations research szeto and lo dynamic traffic assignment properties and extensions transportmetrica tabuchi bottleneck congestion and modal split journal of urban economics tian huang and yang equilibrium properties of the morning commuting in a mass transit system transportation research part b methodological tirachini hensher and rose crowding in public transport systems effects on users operation and implications for the estimation of demand transportation research part a policy and practice trozzi gentile bell and kaparias dynamic user equilibrium in public transport networks with passenger congestion and hyperpaths transportation research part b methodological vuchic urban transit operations planning and economics john wiley sons wada kil akamatsu and osawa a control strategy to prevent delay propagation in railway systems journal of japan society of civil engineers ser infrastructure planning and management in japanese extended abstract in english was presented at the european symposium on quantitative methods in transportation systems and available at https
| 3 |
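The transit paper above reduces the operational dynamics to an exit-flow (input-output) model: conservation of trains gives d(L*k(t))/dt = a(t) - Q(k(t), a_p(t)), with the fundamental diagram Q playing the role of the exit-flow function. The sketch below is a minimal numerical illustration of that scheme and is not the authors' code; because the exact FD expressions are not recoverable from this extraction, Q_fd is a hypothetical unimodal stand-in (a free-flow branch degraded by passenger boarding demand and a congested branch falling toward a jam density), and all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): Euler integration of the exit-flow
# model d(L*k)/dt = a(t) - Q(k(t), a_p(t)) described in the paper above.
# Q_fd is a *hypothetical* stand-in for the paper's transit FD, since the
# exact FD formulas are not recoverable from this extraction.

def Q_fd(k, qp, vf=1.0, k_jam=0.1, w=0.25, alpha=2.0):
    """Train flow as an assumed unimodal function of train density k and
    passenger demand qp (illustrative defaults): min of a free-flow branch,
    slowed by boarding demand, and a congested branch toward jam density."""
    free_flow = k * vf / (1.0 + alpha * qp)   # boarding demand degrades speed
    congested = w * (k_jam - k)               # flow drops toward the jam density
    return max(0.0, min(free_flow, congested))

def simulate(a, ap, L=20.0, k0=0.02, dt=0.1, T=100.0):
    """Euler scheme for d(L*k)/dt = a(t) - Q(k, ap(t)); returns samples of
    (time, density k(t), exit flow d(t) = Q(k(t), ap(t)))."""
    k, t, out = k0, 0.0, []
    while t < T:
        q_out = Q_fd(k, ap(t))
        k = max(0.0, k + dt * (a(t) - q_out) / L)
        out.append((t, k, q_out))
        t += dt
    return out

# Example: peaked demand profiles mimicking a morning rush (numbers illustrative).
peak = lambda t, base, amp: base + amp * max(0.0, 1.0 - abs(t - 50.0) / 30.0)
series = simulate(a=lambda t: peak(t, 0.02, 0.01), ap=lambda t: peak(t, 0.1, 0.4))
print(series[-1])
```

Replacing the toy Q_fd with a calibrated FD of the kind derived in the paper would make this scheme compute the train and passenger cumulative curves compared against the microscopic model in the experiments above.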
jul waring s problem for unipotent algebraic groups michael larsen and dong quan ngoc nguyen abstract in this paper we formulate an analogue of waring s problem for an algebraic group at the field level we consider a morphism of varieties f g and ask whether every element of g k is the product of a bounded number of elements f k f k we give an affirmative answer when g is unipotent and k is a characteristic zero field which is not formally real the idea is the same at the integral level except one must work with schemes and the question is whether every element in a finite index subgroup of g o can be written as a product of a bounded number of elements of f o we prove this is the case when g is unipotent and o is the ring of integers of a totally imaginary number field introduction the original version of waring s problem asks whether for every positive integer n there exists m mn such that every integer is of the form anm ai n and if so what is the minimum value for mn since when hilbert proved that such a bound exists an enormous literature has developed largely devoted to determining mn there is also a substantial literature devoted to variants of waring s problem kamke proved ka a generalization of the theorem in which nth powers are replaced by general polynomials in a series of papers wooley solved waring s problem for polynomials siegel si treated the case of rings of integers in number fields and since then many papers have analyzed waring s problem for a wide variety of rings for instance bi ca vo gv lw ch el also there has been a flurry of recent activity on waring s problem for groups the typical problem here is to prove that every element in g is a product of a small number of nth powers of elements of g see for instance sh lst agks gt and the references therein this paper explores the view that algebraic groups are the natural setting for waring s problem to this extent it resembles the work on waring s problem for groups of lie type the work on the and variants of waring s problem also fit naturally in this framework we will consider morphisms of varieties resp schemes f g defined over a field resp a number ring and look at bounded generation of the groups generated by the images ml was partially supported by nsf grant michael larsen and dong quan ngoc nguyen the strategy is developed in for unipotent algebraic groups over fields of characteristic which are not formally real some justification for concentrating on the unipotent case is given in lemma below and the following remarks in we solve the unipotent version of waring s problem for totally imaginary number rings in we work over general characteristic fields and general number rings but consider only the easier waring s problem in which one is allowed to use inverses our methods throughout are elementary the only input from analytic number theory is siegel s solution of waring s problem over number rings unfortunately in the original situation of waring s problem namely the ring z the additive group ga and the morphism f ga given by f x xn our results fall short of hilbert s theorem we can prove only the easier waring s problem in this case rather than the statement that every positive integer can be represented as a bounded sum of nth powers the difficulty of course is the ordering on z it seems natural to ask whether for unipotent groups over general number rings one can characterize the set which ought to be expressible as a bounded product of images in proving the easier waring s problem we simply avoid this issue 
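As a small concrete illustration of the classical Waring problem recalled in the introduction above, the brute-force check below computes, for an exponent n, the least number of positive n-th powers summing to a given integer. It is purely illustrative and unrelated to the algebraic-group machinery developed in the following sections; the function name and search bounds are our own choices.

```python
# Illustrative brute force for the classical Waring problem: smallest number
# of positive n-th powers summing to a target integer (our own helper, not
# taken from the paper).
from itertools import combinations_with_replacement

def min_terms(target, n, max_terms=9):
    """Least m such that target is a sum of m positive n-th powers (None if > max_terms)."""
    powers = [i ** n for i in range(1, int(round(target ** (1.0 / n))) + 2)
              if i ** n <= target]
    for m in range(1, max_terms + 1):
        for combo in combinations_with_replacement(powers, m):
            if sum(combo) == target:
                return m
    return None

assert all(min_terms(t, 2) <= 4 for t in range(1, 200))  # Lagrange: four squares suffice
print(max(min_terms(t, 3) for t in range(1, 100)))       # prints 9, attained at 23
```

For n = 2 the assertion reflects Lagrange's four-square theorem; for n = 3 the printed maximum is 9, the classical value attained at 23.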
generating subvarieties throughout this paper k will always be a field of characteristic and g will be an algebraic group over a variety over k will be a reduced separated scheme of finite type and in particular need not be connected a subvariety closed subgroup will always be understood to be defined over definition let g be an algebraic group over a field a subvariety x of g is generating if there exists n such that every generic point of g lies in the image of the product map from x x x to a finite collectionsfi xi g of morphisms is generating if the union of zariski closures i f xi is generating we have the following necessary and sufficient condition for a subvariety to be generating proposition let g be an algebraic group over k and z g a closed subvariety then z is generating if and only if it satisfies the following two properties i z is not contained in any proper closed subgroup of ii for every proper closed normal subgroup h of g the image of z has positive dimension we first prove the following technical lemma lemma let k be algebraically closed let x y be irreducible closed subvarieties of assume dim xx dim x waring s problem for unipotent algebraic groups dim xy x dim x then there exists a closed subgroup h of g such that the following statements are true i x xh hx for all x x k ii y yh for some y ng h k proof as x is irreducible x is irreducible with generic point the closure x xx of its image in g is therefore the closure of the image of in g and thus irreducible if x x k then xx and xx are closed subvarieties of x of dimension dim x dim x so xx x xx thus x it follows that for x k x x x x x x defining h x for x x k we see that h does not depend on the choice of x and moreover that h as every h h k can be written x h k thus h k is x k it follows that h a subgroup which since k is algebraically closed implies that h is a closed subgroup of g which implies i for y y k xy x is connected and contains xyx which has dimension dim x dim xy x thus xy x xyx hxyxh is connected and has dimension dim it follows that the double coset hxyxh consists of a single left coset so xyx ng h k by i x also normalizes h and it follows that y normalizes finally y xy hx y hx hyh yh using this we can prove proposition proof clearly if z h g then the same is true for z n and if the image of z in is finite the same is true for z n this proves necessity of conditions i and ii for the sufficiency we may assume without loss of generality that k is algebraically closed for all z z k zz n z so dim z n is a bounded sequence of integers it therefore stabilizes for some let x denote an irreducible component of z n of dimension dim z n and y any irreducible component of z then dim x dim x dim z n and dim x dim xy x dim z n so conditions and of lemma are satisfied let h be the closed subgroup of g satisfying i and ii as h is a translate of x it is irreducible if h then x is a connected component of g which means z m is a union of components of g whenever m applying condition i to subgroups of g containing it follows that every generic point of g lies michael larsen and dong quan ngoc nguyen in z m for some m n so z is generated as claimed if h then dim h dim if h is normal in g then the image of z in is finite contrary to condition ii if h has normalizer n g then y is contained in n since n does not depend on the choice of component y z n contrary to condition i henceforth we assume g is connected we are interested in generating collections of morphisms by a theorem of chevalley barsotti and rosenlicht ro every 
connected algebraic group g has a closed normal subgroup h which is a linear algebraic group and such that is an abelian variety every map from a rational curve to an abelian variety is trivial thus unless g is a linear algebraic group it is impossible for any collection of morphisms g to be generating let r and u denote respectively the radical and the unipotent radical of lemma if u r then there does not exist a generating set of morphisms proof it suffices to prove that there is no generating set of morphisms from to the connected reductive group thus we may assume without loss of generality that g is connected reductive if the radical r is then the inclusion map r g induces an isogeny of tori r g g so it suffices to prove that there no generating set of morphisms from to a torus t without loss of generality we may assume that k thus we may replace t by a quotient isomorphic to the multiplicative group and it suffices to prove there is no morphism of curves from to at the level of coordinate rings this is the obvious statement that every from k t to k x maps t to an element of k or equivalently the fact that k x k we need only consider then the case that g is the extension of a semisimple group by a unipotent group both connected the semisimple case is perhaps even more interesting but we know that at least for we can not always expect bounded generation since for example z and z i do not have bounded generation by elementary matrices gs ta since the characteristic of k is if g is unipotent it is necessarily connected dg iv prop the derived group is then likewise unipotent dg iv prop and therefore connected the quotient is unipotent dg iv prop and commutative and is therefore a vector group dg iv prop the galois cohomology group h k vanishes se iii prop so the cohomology sequence for the short exact sequence g se i prop implies k g k k we identify these groups we do not waring s problem for unipotent algebraic groups distinguish between closed vector subgroups of at the level of algebraic groups over k and the corresponding of the vector space k if v is a subspace of we denote by v the inverse image of v in g regarded as an algebraic group lemma let g be a connected unipotent algebraic group and let h be a proper closed subgroup of then the normalizer of h in g is strictly larger than proof we use induction on dim the case dim g is trivial since this implies g is commutative so the normalizer of every subgroup is all of for general unipotent g the fact that the lower central series goes to implies that the center z of g is of positive dimension if z is not contained in h then zh ng h is strictly larger than otherwise replacing g and h by and respectively we see that is normal in for some n g strictly larger than h so h is normal in n proposition if g is a unipotent group over k then every proper closed subgroup h of g is contained in a normal subgroup n of codimension in g which contains the derived group of proof as k is of characteristic zero g is connected if h e the proposition asserts that g contains a codimension normal subgroup containing as h is a proper subgroup g is so is and the proposition amounts to the obvious statement that every vector group contains a normal subgroup of codimension for the general case applying the previous lemma we can replace h by a strictly larger group ng h unless h is normal in this operation can be repeated only finitely many times since ng h being strictly larger than h must be of strictly higher dimension since every closed subgroup of a 
unipotent group is unipotent and therefore connected thus we may assume h is normal in then is unipotent replacing g and h by and e respectively we are done from proposition we deduce that for unipotent groups we have the following simple criterion lemma let g be a unipotent group over a subvariety x of g resp a set fn of morphisms g is generating if and only if for each proper subspace v such that the projection of x to is of positive dimension resp the composition of some fi with the projection g is note that the question of whether a set of morphisms fi is generating depends only on the set of compositions of fi with the quotient map g it is also invariant under left or right translation of the fi by any element of g k michael larsen and dong quan ngoc nguyen lemma if fn is not generating then for all positive integers n k fn k n g k proof the image of k fn k n in k is the same as the image of fn n and is therefore a finite subgroup of an infinite group we record the following lemma which will be needed later lemma let g be a unipotent group over k its derived group and the derived group of if v is a proper subspace of then there exists a dense open subvariety g and for all k a dense open subvariety of the form g w such that for all k does not lie in v k proof without loss of generality we assume k is algebraically closed as the characteristic of k is g and are connected so is connected the composition g g of the commutator map and the quotient map has the property that its image generates and is therefore not contained in v it follows that the inverse image u of the complement of v is dense and open in g by chevalley s theorem the projection of u g g onto the first factor g is a constructible set containing the generic point it therefore contains an open dense the fiber over any point k is the condition on that v is linear on the image of in and is satisfied for at least one so defined by the condition v satisfies the properties claimed the unipotent waring problem over nonreal fields definition we say a field k is nonreal if it is of characteristic zero but not formally real is a sum of squares in k the main theorem of the section is the following theorem if g is a unipotent algebraic group over a nonreal field k and fn is a generating set of g then for some positive integer m k fn k m g k the proof occupies the rest of this section it depends on the following two propositions proposition theorem holds when g is a vector group proposition under the hypotheses of theorem there exists an integer m a sequence of elements gm g k a sequence of positive integers km for each i m a sequence of integers waring s problem for unipotent algebraic groups ki n and of ai j bi j k such that for each i m the hm g defined by hi x gi x ki ai ki x bi ki map and as morphisms to are generating assuming both propositions hold we can prove theorem by induction on dimension if g is commutative then proposition applies otherwise we apply proposition to construct hm letting denote the composition of fi with g proposition asserts that every element of g k k k is represented by a bounded product of elements of k k for each gi there exists which is a bounded product of elements of k fn k and lies in the same k of g k defining x x ki ai ki x bi ki it suffices to prove that every element of k is a bounded product of elements of k k as the hi are generating for the same is true for and the theorem follows by induction thus we need only prove propositions and to prove proposition we begin with a special case 
proposition if k is a characteristic zero field which is not formally real and d is a positive integer there exists an integer n such that every vector in k d is a sum of elements of x xd x k proof for each integer k let xkd z k s thus xid x k the projection map onto the last coordinate in particular xkd for all positive integers it is a theorem el theorem that for each positive integer d there exists m such that every element in k is the sum of m dth powers of elements of we proceed by induction on the theorem is trivial for d assume d x d k d and every it holds for d and choose m large enough that xm element of k is the sum of m d st powers in particular and we denote by x d the limit i xid clearly d d taking unions x d is a semiring let xjd and xid xjd xij x x d denote the projection map onto the first d coordinates and xm xm if xm then choosing w xm xm there exists an is chosen with such that v w if u xm ment v xm michael larsen and dong quan ngoc nguyen u v then either u v or u w is and either way t by there exists an element t with p x x k xm and we are done so by the induction hypothesis xm k x xm we may therefore assume xm which implies xm moreover if fails to be injective the same argument applies so we may assume that is an isomorphism of semirings and therefore an isomorphism of rings since the target k d is a ring thus we can regard as a ring homomorphism k d if ei then ei maps to an idempotent of k which can only be or since ed there exists i such that ei and it follows that factors through projection onto the ith coordinate thus there exists a ring endomorphism k k such that for all x xi as x which is absurd we now prove proposition proof let fj x x pmj x where d is the maximum of the degrees of the pij for i m j let n be chosen as in proposition we write pij x d x aijk xk for i m j given cn our goal is to find k j n n that satisfy the system of equations x aijk ci i j k by proposition by choosing suitably we can choose the values yjk n x independently for j n and k d while n by definition thus we can rewrite the system of equations as d x n x aijk yjk ci n n x i this is always solvable unless there is a relation among the linear forms on the left hand side in this system a sequence bm waring s problem for unipotent algebraic groups such that m x bi aijk for all j and k if this is true then m x x bi pij bi j i in other words defining tn bm tm fj is constant for all j contrary to assumption finally we prove proposition proof suppose we have already constructed hr let denote the composition of hi with the projection let w denote the vector space spanned by the set t i r t k suppose w then for all proper subspaces w w there exist i and t such that and t represent different classes in it follows that the composition of hi with the projection is and therefore hr is a generating set of morphisms to thus we may assume that w is a proper subspace of we apply lemma to deduce the existence of g k and a proper closed subspace v of such that for all g k v k the commutator of and is not in w k let x denote the composition of fi x with the quotient map g as the fi are generating there exists i such that for all but finitely many values x k x is not in v without loss of generality we assume i by proposition there exists a bounded product g fsm bm si n bi k such that bm we write x xd vd with vi k m by proposition there exist an k not all zero such that n x aki k thus x x x an x n which means that x x x fn an x goes to a constant coset without loss of generality we may assume let be an 
element of g k which can be realized as a product of at most n values of fi at elements of k and such that fbm bm x x fn an x michael larsen and dong quan ngoc nguyen belongs to k for x by proposition such a exists we choose x to be either or x fsm bm x fn an x either way k by x k for all x because the commutator x fsm bm lies in w for at most finitely many at least one of and is mod w by induction on the codimension of w the proposition follows the unipotent waring problem over totally imaginary number rings in this section k denotes a totally imaginary number field o its ring of integers and g a closed of the group scheme uk of unitriangular k k matrices thus the generic fiber gk will be a closed subgroup of uk over k and therefore unipotent moreover there is a filtration of g o by normal subgroups such that the successive quotients are finitely generated free abelian groups in particular it is by definition a finitely generated nilpotent group whose hirsch number is the sum of the ranks of these successive quotients a set fn of g is said to be generating if it is so over the main theorem in this section is the following integral version of theorem theorem if fn is a generating set of g then for some positive integer m o fn o m is a subgroup of finite index in g o we begin by proving results that allow us to establish that some power of a subset of a group gives a finite index subgroup of lemma let be a group a finite index subgroup of a subset of and m a positive integer if then there exists n such that is a finite index subgroup of proof without loss of generality we assume that is normal in consider the finite set z m n for each element choose a pair m m n representing it choose a to be greater than all values m appearing in such pairs let n be a multiple of m which is greater than for all positive integers k is a union of cosets of in and does not depend on the image of in is therefore a subset of a finite group and closed under multiplication it is therefore a subgroup and the lemma follows waring s problem for unipotent algebraic groups lemma let be a finitely generated nilpotent group and a normal subgroup of then every finite index subgroup of contains a finite index subgroup which is normal in proof we prove there exists a function f n n depending only on such that for any normal subgroup of every subgroup of of index n contains a normal subgroup of of index f n in replacing with the kernel of the left action of on we may assume without loss of generality that is normal in we prove the claim by induction on the total number of prime factors of if n p is prime it suffices to prove that there is an upper bound independent of on the number of normal subgroups of of index this is true because intersecting a fixed central series of with gives a central series of and every index p normal subgroup of is the inverse image in of an index p subgroup of the finitely generated abelian group if n has prime factors then for some prime factor p of n is a normal subgroup of index of a normal subgroup of index p in by the induction hypothesis contains a normal subgroup of of index f p in the index of in divides and applying the induction hypothesis we deduce that the existence of a normal subgroup of of index f i in proposition let be a finitely generated nilpotent group a normal subgroup of a subset of and positive integers such that contains a finite index subgroup of and the image of in contains a finite index subgroup of then there exists such that is a finite index subgroup of proof let 
denote the subgroup of generated by the intersection is of finite index in and the image is of finite index in so is of finite index in as a subgroup of a finitely generated nilpotent group it is also finitely generated and nilpotent replacing and by and respectively we assume without loss of generality that generates replacing with we may assume contains a finite index subgroup of by lemma we may assume that is a normal subgroup of let denote the image of in if is a finite index subgroup of then is the inverse image of this subgroup in and the and respectively proposition holds replacing by we reduce to the case that is finite we need only show that if contains a finite index subgroup of then is a finite index subgroup of for some n replacing by we may assume that and meets every fiber of in particular contains an element of michael larsen and dong quan ngoc nguyen so replacing with we may assume contains the identity so for i a positive integer let mi denote the maximum over all fibers of of the cardinality of the intersection of the fiber with thus the intersection of every fiber of with is at least mi since fiber size is bounded above by the sequence must eventually stabilize replacing with a suitable power we have thus is closed under multiplication as meets every fiber of in the same number of points implies which implies thus is a subgroup of of bounded index next we prove a criterion for a subgroup of g o to be of finite index proposition let g o g k then the hirsch number of satisfies k q dim gk if equality holds in then is of finite index in g o proof hirsch number is additive in short exact sequences let gk and let gk be a central series then we have a decreasing filtration of by gi k and each quotient is a free abelian subgroup of gi k k k dim gi every free r abelian subgroup of k has rank r k q with equality if and only if it is commensurable with or this implies applying the same argument to g o we get hg o k q dim gk if equality holds in then g o i o and its subgroup are commensurable and this implies that is of index y g o i o in we prove theorem by showing that h o fn o m contains a subset which is a group of hirsch number k q dim gk we first treat the commutative case proposition theorem holds if g is commutative proof first we claim that for all d there exist integers l m such that lod xm xdm xi o waring s problem for unipotent algebraic groups since this is of finite index in od replacing m by a larger integer also denoted m we can guarantee that every element in the group generated by x xd x o can be written as a sum of m elements to prove the claim we use proposition to show that each basis vector ei is a sum of m elements of x xd x k replacing each x in the representation of ei by dx for some sufficiently divisible positive integer d it follows that each ki ei can be written as a sum of m elements of x xd x o for suitable positive integers ki for each o we see from the i th difference of see wr theorem that x m i m i i i m thus every element of i o is in the subring o i of o generated by ith powers of elements of o a theorem of siegel see theorem vi implies that there exist such that every element of o i is a sum of ni ith powers of elements of o thus every element of i o is a sum of ni ith powers of elements of o and therefore every element of i ki oei is a sum of m ni elements of x xd x o letting l denote a positive integer divisible by d kd and replacing m by m nd we can write every element of lod as a sum of m elements of x xd x o restricting fj to the generic 
fiber we can write it as a vector of polynomials with the pij given by we can solve the system of equations in o whenever we can solve in yjk lo this p system is always solvable in k so it is solvable in lo whenever the ci j is sufficiently divisible thus there exists an integer d such that if n and the ci are divisible by d and n is sufficiently large then cm is a sum of n terms each of which belongs to o fn o let dom now g o g k k m as g o has a finite filtration whose quotients are finitely generated free abelian groups it must contain as a subgroup of finite index defining xi o fn o o fn o z in we have xi g o for all i and xi it follows that contains every in g o represented by any element of xi and therefore the sequence stabilizes to a subgroup of g o of rank m k q and of finite index in g o now we prove theorem proof we first observe that proposition remains true over o more precisely assuming that the morphisms fi are defined over o the elements gi can be taken to be in g o and ai j bi j o so the morphisms hi are defined michael larsen and dong quan ngoc nguyen over o instead of using proposition we use proposition the image of the element guaranteed by lemma may not lie in the lattice g k k k m but some positive integer multiple of will do so and the property of with respect to v is unchanged when it is replaced by a power the elements ai guaranteed by proposition may not lie in o but again we can clear denominators by multiplying by a suitable positive integer the element will exist as long as n this can be guaranteed by replacing n with a suitable positive integral multiple induction on dim now we proceed as in the proof of theorem usings by the induction hypothesis there exists n such that i hi o n contains a subgroup of k of hirsch number k q dim on the other hand by proposition there exists a bounded power of o o which contains a subgroup of k of hirsch number k q dim here for each i n x denotes the composition of fi x with the quotient map g the theorem follows from proposition and the additivity of hirsch numbers the easier unipotent waring problem we recall that the classical easier waring problem wr is to prove that for every positive integer n there exists m such that every integer can be written in the form anm ai z and to determine the minimum value of m for each in this section we prove unipotent analogues of the easier waring problem for arbitrary fields of characteristic zero and rings of integers of arbitrary number fields theorem if g is a unipotent algebraic group over a field k of characteristic zero and fn is a generating set of g then for some positive integer m k fn k em g k en theorem let k be a number field o its ring of integers and g a closed of the group scheme uk of unitriangular k k matrices if fn is a generating set of g then for some positive integer m o fn o en en is a subgroup of bounded index in g o the proof of theorem depends on variants of propositions and waring s problem for unipotent algebraic groups proposition if k is a field of characteristic zero and d is a positive integer there exists an integer n such that k d can be represented as k d z z n where n x xd x k proof this is il theorem proposition theorem holds when g is a vector group proof let fj x x pmj x where d is the maximum of the degrees of the pij for i m and j let n be chosen as in proposition writing pij x d x aijk xk for i m and j given cm our goal is to find suitable and k such that cm x n x fj in light of proposition for each j n one can let if n and let if n thus the above 
system is equivalent to the system of equations n d n x x x x i aijk ci by proposition by choosing k suitably we can choose the values yjk n x x independently for j n and k d while by definition thus we can rewrite the system of equations as d n x x aijk yjk ci i arguing as in the proof of proposition we see that the above system of equations is always solvable unless fj is constant modulo some proper subspace v of am for all j n each fj is constant where michael larsen and dong quan ngoc nguyen am am is the canonical projection this is impossible since the set of morphisms fn is generating proposition under the hypotheses of theorem there exists an integer m a sequence of elements gm g k a sequence of positive integers km for each i m a sequence of integers ki n a sequence of integers ei ki and sequences of elements ai ki bi ki k such that for each i m the hm g defined by hi x gi x ki ai ki x bi ki ei ki map and as morphisms to are generating proof using proposition and the same arguments as in proposition proposition follows immediately proof of theorem the proof of theorem is the same as that of theorem using propositions and we proceed as in the proof of theorem using induction on dim g theorem follows immediately next we prove an integral variant of proposition in greater generality than we need for theorem proposition let o be any integral domain whose quotient field k is of characteristic zero for all positive integers d there exist o and n z such that z z n where n x xd x o proof for each integer k set d yk k z z k k choose n as in proposition for each m d the basis vector em can be written in the form em n n x x d yj yjd xj xj xj for some xj yj replacing each xj yj in the above representation by for some o it follows that there exists a o such that d em yn n waring s problem for unipotent algebraic groups for each o we apply to prove that d m oem nm nm let d qd replacing n by n nd we deduce that d yn n the next result is a variant of proposition proposition theorem holds if g is commutative proof let n be chosen as in proposition restricting fj to the generic fiber we can write it as a vector of polynomials with the pij given by we can solve the system of equations whenever we can solve the system in yjk this system is always solvable in k so it is solvable in whenever the ci are sufficiently divisible thus there exists an integer d such that if the ci are divisible by d and n is sufficiently s large then cm is a sum of n terms each of which belongs to o en fn o let dom set o en fn o en for each i define xi z u in copies of u we have xi g o g k k m for all i and using the same arguments as in the proof of proposition is a subgroup of finite index in g o and therefore the sequence stabilizes to a subgroup of g o of rank m k q and of finite index in g o we now prove theorem proof of theorem we first observe that proposition remains true over o more precisely assuming that the morphisms fi are defined over o the elements gi can be taken to be in g o and ai j bi j o so the morphisms hi are defined over o now we proceed as in the proof of theorem using induction on dim by the induction hypothesis there exists an integer n such that o hn o en n is a subgroup of o of hirsch number k q dims on the other hand by proposition there exists a bounded power of en o o en which is a subgroup of o of hirsch number k q dim here for each i n x denotes the composition of fi x with the quotient map michael larsen and dong quan ngoc nguyen g the theorem follows by proposition and the additivity of 
hirsch numbers references agks avni nir gelander tsachik kassabov martin shalev aner word values in and adelic groups bull lond math soc no bi birch waring s problem for number fields acta arith ca car mireille le de waring pour les corps de fonctions luminy no carter david keller gordon bounded elementary generation of sln o amer j math carter david keller gordon elementary expressions for unimodular matrices comm algebra no ch chinburg ted infinite easier waring constants for commutative rings topology appl no dg demazure michel gabriel pierre groupes tome i groupes commutatifs avec un appendice corps de classes local par michiel hazewinkel masson cie paris publishing amsterdam el ellison william waring s problem for fields acta arith no gv gallardo luis vaserstein leonid the strict waring problem for polynomial rings j number theory no gs grunewald fritz schwermer joachim free nonabelian quotients of over orders of imaginary quadratic numberfields algebra no gt guralnick robert tiep pham huu effective results on the waring problem for finite simple groups amer j math no il im larsen michael waring s problem for rational functions in one variable preprint ka kamke verallgemeinerungen des satzes math ann no lst larsen michael shalev aner tiep pham huu the waring problem for finite simple groups annals of math no lw liu wooley trevor waring s problem in function fields reine angew math ro rosenlicht maxwell some basic theorems on algebraic groups amer j math se serre cohomologie galoisienne with a contribution by verdier lecture notes in mathematics no springerverlag york sh shalev aner word maps conjugacy classes and a noncommutative waringtype theorem annals of math no si siegel carl ludwig generalization of waring s problem to algebraic number fields amer j math siegel carl ludwig sums of powers of algebraic integers ann of math ta tavgen bounded generation of normal and twisted chevalley groups over the rings of proceedings of the international conference on algebra part novosibirsk contemp part amer math providence ri waring s problem for unipotent algebraic groups vo wr voloch felipe on the waring s problem acta arith no wooley trevor on simultaneous additive equations proc london math soc no wooley trevor on simultaneous additive equations ii reine angew math wooley trevor on simultaneous additive equations iii mathematika no wright edward maitland an easier waring problem london math soc department of mathematics indiana university bloomington indiana usa address mjlarsen department of applied and computational mathematics and statistics university of notre dame notre dame indiana usa address
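For reference, the classical easier Waring problem that the unipotent analogues above generalize can be stated explicitly as follows; this is the standard formulation (going back to Wright), and the notation is ours rather than the authors'.

```latex
% Easier Waring problem: for every positive integer $n$ there exists an
% integer $m$ (depending only on $n$) such that every $N \in \mathbb{Z}$
% admits a representation with signs,
\[
  N \;=\; \pm a_1^{\,n} \pm a_2^{\,n} \pm \cdots \pm a_m^{\,n},
  \qquad a_1,\dots,a_m \in \mathbb{Z},
\]
% and one asks for the minimal such $m$ as a function of $n$.
```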
| 4 |
nov fair till piotr mervin and kai institut softwaretechnik und theoretische informatik tu berlin germany wilker abstract we study the following multiagent variant of the knapsack problem we are given a set of items a set of voters and a value of the budget each item is endowed with a cost and each voter assigns to each item a certain value the goal is to select a subset of items with the total cost not exceeding the budget in a way that is consistent with the voters preferences since the preferences of the voters over the items can vary significantly we need a way of aggregating these preferences in order to select the socially most preferred valid knapsack we study three approaches to aggregating voters preferences which are motivated by the literature on multiwinner elections and fair allocation this way we introduce the concepts of individually best diverse and fair knapsack we study computational complexity including parameterized complexity and complexity under restricted domains of computing the aforementioned concepts of multiagent knapsacks introduction in the classic knapsack problem we are given a set of items each having a cost and a value and a budget the goal is to find a subset of items with the maximal sum of the values subject to the constraint that the total cost of the selected items must not exceed the budget in this paper we are studying the following variant of the knapsack problem instead of having a single objective value for each item we assume that there is a set of agents also referred to as voters who have potentially different valuations of the items when choosing a subset of items we want to take into account possibly conflicting preferences of the voters with respect to which items should be selected in this paper we discuss three different approaches to how the voters valuations can be aggregated multiagent knapsack forms an abstract model for a number of scenarios first observe that it is a natural generalization of the model for multiwinner elections to the this research was initiated within the student project research in teams organized by the research group algorithmics and computational complexity of tu berlin berlin germany supported by the dfg project damm ni case where the items come with different costs in the literature on multiwinner elections items are often called candidates multiwinner voting rules are applicable in a broad class of scenarios ranging from selecting a representative committee of experts through recommendation systems to resource allocation and facility location problems in each of these settings it is quite natural to consider that different can incur different costs further algorithms for multiagent knapsack can be viewed as tools for the participatory budgeting problem where the authorities aggregate citizens preferences in order to decide which of the potential local projects should obtain funding perhaps the most straightforward way to aggregate voters preferences is to select a subset a knapsack that maximizes the sum of the utilities of all the voters over all the selected items this we call selecting an individually best knapsack subject to differences in methods used for elicitating voters preferences has been taken by benabbou and perny and in the context of participatory budgeting by goel et al and benade et al however by selecting an individually best knapsack we can discriminate even large minorities of voters which is illustrated by the following simple example assume that the set of items can be divided into two 
subsets and that all items have the same unit cost and that of the voters like items from assigning the utility of to them and the utility of to the other items and the remaining of voters like only items from an individually best knapsack would contain only items from that is of the voters would be effectively disregarded in this paper we introduce two other approaches to aggregating voters preferences for selecting a collective knapsack one such we call selecting a diverse knapsack inspired by the rule from the literature on multiwinner voting informally speaking in this approach we aim at maximizing the number of voters who have at least one preferred item in the selected knapsack for the second which is the main focus of the paper and which we call selecting a fair knapsack use the concept of nash welfare from the literature on fair allocation nash welfare is a solution concept that implements a tradeoff between having an objectively efficient resource allocation knapsack in our case and having an allocation which is acceptable for a large population of agents indeed the properties of nash welfare have been recently extensively studied in the literature on fair allocation and this solution concept has been considered in the context of public decision making online resource allocation or transmission congestion control where it is referred to as proportional fairness thus our work introduces a new application the goal is to select a set of shared the concept of nash welfare in particular as a side note we will explain that our approach leads to a new class of multiwinner rules which can be viewed as generalizations of the proportional approval voting rule beyond the approval setting apart from introducing the new class of multiagent knapsack problems our contribution is the following an example often described in the literature is when an enterprise considers which set of products should be pushed to is natural to view such a problem as an instance of multiwinner elections with products corresponding to the and potential customers corresponding to the voters table overview of our results herein sp and sc abbreviate and preferences respectively and voters refers to when parameterized by the number of voters unary general ib knapsack diverse knapsack fair knapsack sp sc voters p thm p p fpt thm prop thm w thm thm thm we study the complexity of computing an optimal individually best diverse and fair knapsack this problem is in general hard except for the case of individually best knapsack with the utilities of the voters represented in unary encoding we study the parameterized complexity of the problem focusing on the number of voters considering this parameter is relevant for the case when the set of voters is in fact a relatively small group of experts acting on behalf of a larger population of agents redelegating the task of evaluating the items to the committee of experts is reasonable for several reasons for instance coming up with accurate valuations of items may require a specialized knowledge and a significant cognitive effort and so it would often be impossible to evaluate items efficiently and accurately among a large group of common people we show that for utilities of the voters computing a diverse knapsack is fpt when parameterized by the number of voters on the other hand the problem of computing a fair knapsack is w for the same parameter we study the complexity of the considered problems for and singlecrossing preferences we show that under unary encoding of voters 
utilities a diverse knapsack can be computed efficiently when the preferences are or interestingly computing fair knapsack stays even when the preferences are both and our results are summarized in table we additionally show that all three problems are in the case theorems and and prove intractability for the parameter being the budget proposition and corollary the model for any pair of natural numbers i j n i j by i j we denote the set i i j further by j we denote the set j let v vn be the set of n voters and a am be the set of m items the voters have preferences over the items which are represented as a utility profile u ui a i n a a for each i n and a a we use ui a to denote the utility that vi assigns to a this utility quantifies the extent to which vi enjoys a we assume that all utilities are nonnegative integers each item a a comes with a cost c a n and we are given a global budget b we call a knapsack a subset s of items whose total cost does not exceed b that is c s p c a b our goal is to select a knapsack that would be in some sense most preferred by the voters below we describe three representative rules which extend the preferences of the individual voters over individual items to their aggregated preferences over all knapsacks each such a rule induces a corresponding method for selecting the best knapsack our rules are motivated with concepts from the literature on fair division and on multiwinner elections individually best knapsack this is the knapsack which maximizes the total utility p sp of the voters from the selected items uib s vi ui a this defines perhaps the most straightforward way to select the knapsack we call it individually best because the formula uib s treats the items separately and does not take into account fairnessrelated issues indeed such a knapsack can be very unfair which is illustrated by the following example example let b be an integer and consider a set of n b voters and m b items all having a unit cost c a for each a a let us rename the items so that a ax y x y b and consider the following utility profile if i x ui ax y l if i x otherwise for some large l in this case the individually best knapsack is sib y y b that is it consists only of the items liked by a single voter at the same time there exists a much more fair knapsack sfair x b that for each voter v v contains an item liked by diverse knapsack this is the knapsack s that maximizes the utility udiv s defined as p udiv s vi ui a in words in the definition of udiv we assume that each voter cares only about his or her most preferred item in the knapsack this approach is inspired by the rule from the literature on multiwinner elections and by classic models from the literature on facility location we call such a knapsack diverse following the convention from the multiwinner literature intuitively such a knapsack represents the diversity of the opinions among the population of voters in particular if the preferences of the voters are very diverse such a knapsack tries to incorporate the preferences of as many groups of voters as possible at the cost of containing only one representative item for each similar group fair knapsack we use nash welfare as a solution concept q for formally call a knapsack s fair if it maximizes the product ufair s vi ui a alternatively logarithm of ufair we can represent fair knapsack as the one pby taking thep maximizing vi log ui a in section we referred the reader to the literature supporting the use of nash welfare in various settings let us complement these 
arguments with one additional observation when the utilities of the voters come from the binary set and costs of all items are equal to one then our multiagent knapsack framework boils down to the standard multiwinner elections model with approval preferences in such a case a very appealing rule proportional p p approval voting can be expressed as finding a knapsack maximizing h vi ui a where h i is the harmonic number this is almost equivalent to finding a fair knapsack maximizing the nash welfare since the harmonic function can be viewed as a discrete version of the logarithm thus fair knapsack can be considered a generalization of pav to the model with cardinal utilities and costs in particular as a side note observe that the notion of fair knapsack combined with positional scoring rules induces rules that can be viewed as an adaptations of pav to the ordinal model related work our work extends the literature on the mo knapsack problem that is on the variant of the classic knapsack problem with multiple independent functions valuating the items typically in the mo knapsack problem the goal is to find a the set of pareto optimal solution s according to multiple objectives defined through given functions valuating items our approach is different since we consider specific forms of aggregating the objectives in particular each of the concepts we individually best diverse and fair a pareto optimal solution for an overview of the literature on the mo knapsack problem with the focus on the analysis of heuristic algorithms we refer the reader to the survey by lust and teghem multidimensional md knapsack is yet another generalization of the original knapsack problem in the md knapsack we have multiple cost constraints each item comes with different costs for different constraints and the goal is to maximize a single objective while respecting all the constraints approximation algorithms for the problem with submodular objective functions have been considered by kulik et al sviridenko and lee et al further and puchinger et al provide an overview of heuristic algorithms for the problem finally florios et al consider algorithms for the multidimensional variant of the knapsack problem lu and boutilier studied a variant of the rule which includes knapsack constraints and so which is very similar to our diverse knapsack problem the p q typically nash welfare would be defined as vi ui a in our definition we add one to the p sum ui a in order to avoid pathological situations when the sum is equal to zero for some voters this also allows us to represent the expression we optimize as a sum of logarithms and thus to expose the close relation between the fair knapsack and the proportional approval voting rule difference is that i they consider utilities which are extracted from the voters preference rankings thus these utilities have a specific structure and ii in their model the items are not shared instead the selected items can be copied and distributed among the voters lu and boutilier consider a model with additional costs related to copying a selected item and sending it to a voter consequently their general model is more complex than our diverse knapsack they also considered a more specific variant of this model equivalent to winner determination under the rule the computational complexity of winner determination under the rule the variant of the diverse knapsack where the costs of all items are equal to one has been extensively studied in the computational social choice comsoc literature procaccia 
et al showed that the problem is the parameterized complexity of the problem was investigated by betzler et al and its computational complexity under restricted domains by betzler et al yu et al elkind and lackner skowron et al and peters and lackner lu and boutilier and skowron et al investigated approximation algorithms for the problem superpolynomial fpt approximation algorithms have been considered by skowron and faliszewski a variant of the diverse knapsack problem with the utilities satisfying a form of the triangle inequality is known under the name of the knapsack median problem see the work of byrka et al for a discussion on the approximability of the problem the method is a multiwinner election rule in short the multiagent knapsack model extends the multiwinner model by allowing the items to have different costs there is a broad class of multiwinner rules aggregating voter preferences in various ways in particular there exists a number of spectra of rules between the individually best and the objectives for an overview of other multiwinner rules which can be adapted to our setting see as we discussed in the introduction the multiagent variant of the knapsack problem has been often considered in the context of participatory budgeting yet to the best of our knowledge this literature focused on the simplest aggregation rule corresponding to our individually best knapsack approach another avenue has been explored by fain et al who studied rules that determine the level of funding provided to different projects items in our nomenclature rather than rules selecting subsets of projects with predefined funding requirements as we mentioned before nash welfare is an established solution concept used in the literature on fair allocation nguyen et al provided a thorough survey on the complexity of computing nash welfare in the context of allocating indivisible goods in the multiagent setting to the best of our knowledge our paper is the first work studying fairness solution concepts for the problem of selecting a collective knapsack computing collective knapsacks in this section we investigate the computational complexity of finding individually best diverse and fair knapsack formally we define the computational problem for individually best knapsack as individually best knapsack input an instance v a u c and a budget b task p compute p a knapsack s a such that c s vi ui a is maximum b and uib s we define the computational problems diverse knapsack and fair knapsack difference is only in the expression to maximize which for the two problems is udiv and ufair respectively we will use the same names when referring to the decision variants of these problems in such cases we will assume that one additional integer x is given in the input and that the decision question is whether there exists s with value uib s respectively udiv s or ufair s greater or equal to x and c s b we observe that the functions uib udiv and ufair when represented as a sum of logarithms are submodular thus we can use the algorithm of sviridenko with the following guarantees theorem there exists a algorithm for individually best knapsack diverse knapsack and fair knapsack in the remaining part of the paper we will focus on computing an exact solution to the three problems in particular we study the complexity under the following two restricted domains preferences let topi denote vi s most preferred item and let be an order of the items we say that a utility profile u is with respect to if for each a b a and each vi v such 
that a b topi or topi b a we have that ui b ui a preferences let be an order of the voters we say that a utility profile u is with respect to if for each two items a b a the set vi v ui b ui a forms a consecutive block according to we say that a profile u is if there exists an order of the items of the voters such that u is with respect to note that an order witnessing or can be computed in polynomial time see we will also study the parameterized complexity of the three problems for a given parameter p we say that an algorithm for a is fpt with respect to p if it solves each instance i of the problem in o f p poly time where f is some computable function in the parameterized complexity theory fpt algorithms are considered efficient there is a whole hierarchy of complexity classes but informally speaking a problem that is w or w is assumed not to be fpt and hence hard from the parameterized point of view see for more details on parameterized complexity individually best knapsack we first look at the simplest case of individually best knapsack theorem individually best knapsack is solvable in polynomial time when the utilities of voters are p p proof consider an instance v a u c b and let vi ui a we apply dynamic programming with table t where t i x denotes the minimal cost of s ai with value uib s at least equal to x we initialize t i for i m and t x for each x for i m we have t i x h i t i x min p c ai t i max x vj uj ai by precomputing p vj uj a for each a a we get a running time of o nm note that if the utilities are not encoded in unary then the problem is even for one voter see theorem diverse knapsack we now turn our attention to the problem of computing a diverse knapsack through a straightforward reduction from the standard knapsack problem we get that the problem is computationally hard even for profiles which are both and unless the utilities are provided in unary encoding theorem diverse knapsack is even for and utility profiles proof we present a reduction from knapsack let x xn x y be an instance of knapsack where each xi comes with p value xi and p weight xi the question is whether there exists s x with xi xi x and xi xi y we set our set of items a an with c ai xi for each i n we add n voters vn with aj i j ui aj j i j j i p p it is immediate that for each s we have that c a further i a xi xi i p p x if and only if max u a x x which proves the correctness a i j j j vi xj it is immediate to check that the utility profile is and note that computing a diverse knapsack is also for unary encoding as it generalizes the rule which is computationally hard for singlepeaked and profiles the rule is computable in polynomial time these known algorithms can be extended by considering dynamic programs with induction running over other dimensions to the case of the diverse knapsack theorem diverse knapsack is solvable in polynomial time when the utility profile is and encoded in unary proof consider an input instance v vn a am u c b where a is enumerated such that the order is note that such an ordering can be p p computed in polynomial time let vi ui a we apply dynamic programming with table t where t i x denotes the minimal cost of a subset s ai containing ai ai s with value at least equal to x udiv s x we define the helper function p c ai vj uj ai x f i x otherwise we initialize t x f x for all x then we set t i x min f i x c ai i t j x d i j p where d i j max ai aj let m i x t i x b then we can derive the value of the best diverse knapsack from max x i x m we inductively argue over i 
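To make the value-indexed dynamic program for Individually Best Knapsack described above concrete, here is a minimal Python sketch of the table t(i, x) (the minimal cost of a subset of the first i items achieving total summed utility at least x), implemented with a rolling one-dimensional array; the function and variable names are ours, and this is an illustrative sketch rather than the authors' implementation.

```python
def individually_best_knapsack(costs, utilities, budget):
    """Value-indexed DP sketch for Individually Best Knapsack.
    costs[i] is c(a_i); utilities[v][i] is u_v(a_i); all nonnegative integers.
    Returns the maximum u_IB(S) over knapsacks S with c(S) <= budget."""
    m = len(costs)
    # total utility that all voters together assign to item a_i
    item_value = [sum(u[i] for u in utilities) for i in range(m)]
    max_value = sum(item_value)
    INF = float("inf")

    # t[x]: minimal cost of a subset of the items processed so far
    #       whose summed utility is at least x
    t = [0] + [INF] * max_value
    for i in range(m):
        new_t = list(t)                          # option: skip item a_i
        for x in range(1, max_value + 1):
            prev = max(0, x - item_value[i])     # value still needed without a_i
            if t[prev] + costs[i] < new_t[x]:
                new_t[x] = t[prev] + costs[i]    # option: take item a_i
        t = new_t

    return max(x for x in range(max_value + 1) if t[x] <= budget)
```

For example, individually_best_knapsack([1, 1, 2], [[3, 0, 1], [0, 2, 1]], budget=2) returns 5 (taking the two unit-cost items). The table is indexed by achievable value rather than by cost, which is exactly why the polynomial bound requires the utilities to be encoded in unary.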
clearly the best diverse knapsack over item set has cost c consider t i x with i let ai ai be a set of items with value at least x of minimal cost containing ai then either ai ai or ai aj ai where j i with aj ai and there is no j i such that aj ai and j j i in the first case ai c ai f i x consider the second case clearly c aj c ai c ai let v aj ai and let v from the we have that ai aj ai aj if and clearly ai aj if hence the value of ai is greater than the value of aj by max ai aj d i j before we provide an analogous result for let us define a set of useful tools we will also use these tools later on when analyzing the parameterized complexity of the problem vn and a subset s a of items we define given a tuple of voters v an assignment as a surjection n an assignment is called connected if for every s s it holds that s i n s i x y for some x y n for s v our first tool we introduce the following auxiliary problem ordered diverse knapsack a u c where v vn is ordered and a budget b input an instance v task compute a knapsack s a such that c s b and uord pn maxconnected ui i is maximum if s a is a solution to diverse knapsack on v a u c then let vi vj v si arg uj a consider an ordering v where for each i the voters in vi are arbitrarily ordered then it is not difficult to see that the assignment i arg ui a is connected hence we obtain the following connection between diverse knapsack and ordered diverse knapsack on the voters v such that there is an s a observation there is an ordering v that forms a solution for ordered diverse knapsack and for diverse knapsack next we give a dynamic program for computing knapsacks that qualitatively lie between optimal knapsacks for ordered diverse knapsack and diverse knapsack we will specify what we mean by lying in between later on vn of the voters we let us an input v a u c b and an ordering v pfix n p set ui a we give a dynamic program with table t where t i x denotes some cost of a knapsack with a value assigned by voters from vi at least equal to x we set t x min c a a a a x if there is an a a such that a x and t x otherwise we define the helper function pi c a uj a x f i a x otherwise we set t i x min f i a x c a min t j max x pi a observation when the utilities are we can compute all entries in t in polynomial time lemma let s be a solution to diverse knapsack on v a u c and let x udiv s then t n x c s proof suppose that this is not the case that is t n x c s then we construct a knapsack s from t n x as follows let a a be an item that minimizes for t n x then make a s if t n x f n a x then t n x c a c s contradicting the fact that s is otherwise t n x c a t j max x n x a then we proceed towards a contradiction as before let a be an item that minimizes for t j then make s and continue the same reasoning we next give the relation to ordered diverse knapsack lemma let s be a solution to ordered diverse knapsack a u c where v vn is ordered and let x uord s then t n x c s on v proof assume s being enumerated let be an connected assignment p such that vi ui i x let n be such that ij p ij ui i for j by our definition of t we have moreover let xj s v that t c moreover we have t t c it follows inductively that t c s we have all ingredients at hand to prove our main results proposition diverse knapsack is solvable in polynomial time when utility profiles are and encoded in unary is a ordering on the voters v then there is an s a that forms proof if v a solution for ordered diverse knapsack and for diverse knapsack by lemmas and we are guaranteed that our 
algorithm will find it further we can use our tools to obtain an fpt algorithm with respect to the number of voters for unrestricted domains theorem diverse knapsack is in fpt when parameterized by the number of voters when the utilities are on the voters such that there proof by observation we know that there is an ordering v is an s a that forms a solution for ordered diverse knapsack and our dynamic for diverse knapsack together with lemmas and we obtain that for v program will find such hence for each ordering on the voters in v we compute t n x then we take the minimum over all observed values note that x is the largest value such that t n x b for some ordering on the voters in v altogether this yields a running time of o n poly n m o log n poly n m finally we complement theorem by proving a lower bound on the running time assuming eth proposition diverse knapsack with binary utilities and unary costs is w when parameterized by the budget b and unless the eth breaks there is no poly n m algorithm proof we give a reduction from dominating set an instance of dominating set consists of a graph g w e and an integer k the question is whether there exists a subset s of at most k vertices such that for each vertex w w there is an s s such that w ng s where ng s v w v s e s denotes the closed neighborhood of s in for each vertex w w we introduce a voter vw to v and an item aw to a of cost one we set uvw if ng w and uvw otherwise furthermore we set the budget b it is not difficult to see that there is a diverse knapsack s with c s b and udiv s n if and only if g k is a as b k and n the lower bounds follow fair knapsack let us now turn to the problem of computing a fair knapsack we first prove that the problem is even for restricted cases and then we study its parameterized complexity theorem fair knapsack is even for one voter for two voters and when all costs are equal to one if all utilities are in and all costs are equal to one proof we provide a reduction from the partition problem given a set s sn of nppositive integers the question is to decide whether there exists a p subset s s such that s given an instance s of partition where all integers p are divisible by two we construct an instance of fair knapsack as follows let t for each si s we introduce an item ai with cost si further we introduce one voter with utility ai si for each i n we set the budget b t and we ask if there exists a knapsack with a nash welfare w of at least t let s be a and let s s be p a solution then p the subset of items a ai a si s forms apfair knapsack as p s t b and c a the nash welfare is at least a s t conversely let the constructed instance of fair knapsack be a and let a be a fair knapsack denote to p p subset of integers in s corresponding p by s the s moreover the p items in a then it holds that s c a t p a t together both inequalities yield t and hence s forms a solution to s we provide a reduction from the exact partition problem given a set s sn of n positive integers k decide whether there is pand an integer a subset s s with k such that s given an instance s k of exact partition where all integers are divisible by two and byp k we construct an instance of fair knapsack as follows similarly as before we set t for each si s we introduce an item ai with cost further we introduce two voters and with utility functions ai t si and ai t tk si respectively we set the budget b k and ask for a knapsack with a nash welfare w at least equal to kt t let s k be a and let s s be a solution we claim that the subset 
ofpitems ai a si s forms an appropriate fair knapsack it holds that c a k b the nash welfare is at least x x a a kt x s kt x kt t kt t x s k kt t conversely let the constructed instance of fair knapsack be a and let a be a corresponding fair knapsack let s denote p the subset of integers in s corresponding to the items in then it holds that c a moreover for each item a a it holds that a a t and p hence a a p t the product a a is maximal t together it follows if a a leading to a t that k and hence s forms a solution for s k we provide a reduction from the exact regular set packing ersp problem there is a parameterized reduction from exact regular independent set given a set x set f fm of subsets of x with d for all i m and an integer k decide whether there exists a subset f f with k such that for each distinct f f f it holds f f let x xn f fm k be an instance of ersp where d for all i m we construct an instance of fair knapsack as follows let a ai fi f be the set of items each with cost equal to one further we introduce n voters with ui aj if xi fj and ui aj otherwise for all i n j m we set b k and the desired nash welfare to w this finishes the construction assume that x f k admits a solution f we claim that ai a fi f is a fair knapsack with the desired value of the nash welfare note that k by the construction each item a a contributes one to exactly d voters moreover each distinct a contribute to disjoint sets of voters hence y x ui a conversely let a be a fair knapsack and let f p fi f ai we claim that f forms a solution to x f k first observe that ui a let m i n xi then s y f be the set of elements covered by f note that x ui a for the second inequality observe that the function x is increasing on the interval y for every y hence we have that k and k thus f is a set of exactly k pairwise disjoint sets given that exact regular set packing ersp is w with respect to the size of the solution the proof of theorem implies the following corollary fair knapsack is w when parameterized by the budget even if all utilities are in and all costs are equal to one using a more clever construction we can show that for the combination of the two number of voters and the still get intractability theorem fair knapsack is w when parameterized by the number of voters and the budget even if the utilities and the budget are represented in unary encoding and the costs of all items are equal to one proof we provide a parameterized reduction from the clique problem which is known to be w with respect to the number of colors let i be an instance of clique in i we are given a graph g with the set of vertices v g and the set of edges e g a natural number k n and a coloring function f v g k that assigns one of k colors to each vertex we ask if g contains k pairwise connected vertices each having a different color without loss of generality we assume that k from i we construct an instance if of fair knapsack as follows we refer to figure for an illustration let t g we set the set of items to v g e g that is we associate one item with each vertex and with each edge we construct the set of voters as follows unless specified otherwise by default we assume that a voter assigns utility of zero to an item for each color we introduce one voter who assigns utility of t to each vertex with this color clearly there are k such voters for each pair of two different colors we introduce k voters each assigning utility t to each edge that connects two vertices with these two colors there are k such voters v g t t t t e g t t v t t 
v a v b v t i t i a v b v t t i t j t j t t j figure illustration of the instance obtained in the proof of theorem herein ncb denotes vertex b in color class c where each color class contains vertices in the presented example the vertices and are adjacent blocks containing a zero indicate that the corresponding entries are zero for each ordered pair of colors and with we introduce two vertices call them a and b with the following utilities consider the set of vertices with color and rename them in an arbitrary way so that they can be put in a sequence for each i voter a assigns utility i to vertex ni and utility t i to each edge that connects ni with a vertex with color voter b assigns utility t i to ni and utility i to each edge that connects ni with a vertex with color there are k such voters we set the cost of each item to one and the total budget to b k by a simple calculation one can check that the total number of voters is equal to k k k kb this completes our construction first observe that in total each item is assigned utility of kt from all the voters indeed each item corresponding to a vertex gets utility of t from exactly one voter from the first group and total utility of k t from k voters from the third group similarly each item corresponding to an edge gets utility of t from k voters from the second group and total utility of t from four voters from the third group thus independently of how we select b items the sum of the utilities they are assigned from the voters will always be the same that is bkt thus clearly the nash welfare would be maximized if the total utility assigned to the selected items by each voter is the same and equal to t only in such case the nash welfare would be equal to t kb we will show however that each voter assigns to the set of b items utility t if and only if k out of such items are k vertices with k different colors the remaining of such items are edges and each selected edge connects two selected vertices indeed it is easy to see that if the selected set of items has the structure as described above then each voter assigns to this set the utility of t we will now prove the other implication assume that for the set of b items s each voter assigns total utility of t by looking at the first group of voters we infer that k items from s correspond to the vertices and that these k vertices have different colors by looking at the second group of voters we infer that for each pair of two different colors s contains exactly one edge connecting vertices with such colors finally by looking at the third group of voters we infer that each edge from s that connects colors and is adjacent to the vertices from s with colors and this completes the proof on the other hand each instance i of fair knapsack with utilities represented in unary encoding is solvable in o n time it is in xp when parameterized by n where n is the number of voters and f is some computable function only depending on theorem for utilities represented in unary encoding fair knapsack is in xp when parameterized by the number of voters proof we provide an algorithm based on dynamic programing we construct a table t where for each sequence of n integers zn and i entry t zn i represents the lowest possible value of the budget x such that there exists a knapsack s with p the following properties i the total cost of all items in the knapsack is equal to x x c a ii the last index of an p item in the knapsack s is i i maxaj j and iii for each voter vj we have that uj a zj this table can be 
constructed recursively t zn i c ai min t ai ai zn un ai j we handle the corner cases by setting t i for each i and t zn i whenever zi for some i n clearly if n is fixed and if the utilities are represented in unary encoding then the table can be filled in polynomial time now it is sufficient to traverse the table and to find the q entry t zn i b which maximizes zj on the positive side with stronger requirements on the voters utilities that is if the number of different values over the utility functions is small we can strengthen theorem and prove the membership in fpt theorem fair knapsack is fpt when parameterized by the combination of the number of voters and the number of different values that a utility function can take proof we will use the classic result of lenstra which says that an integer linear program ilp can be solved in fpt time with respect to the number of integer variables we will also use a recent result of bredereck et al who proved that one can apply transformations of certain variables in an ilp and that such a modified program can be still solved in an fpt time we construct an ilp as follows let u be the set of values that a utility function can take for each vector z zn with zi u for each i we define az as the set of items a such that for each voter vi we have ui a zi intuitively az describes a subcollection of the items with the same type such items are indistinguishable when we look only at the utilities assigned by the voters they may vary only with their costs for each such a set az we introduce an integer variable xz which intuitively denotes the number of items from the optimal solution that belong to az further we construct a function fz such that fz x is the cost of the x cheapest items from az clearly fz is convex we formulate the following program maximize x log x zi xz n vi subject to x fz xz b n xz z z un the above program uses concave transformations logarithms for the maximized expression and convex transformations functions fz in the sides of the constraints so we can use the result of bredereck et al and claim that this program can be solved in an fpt time with respect to the number of integer variables this completes the proof fair knapsack under restricted domains in contrast to individually best knapsack and diverse knapsack both being solvable in polynomial time on restricted domains computing a fair knapsack remains on utility profiles that are even both and theorem fair knapsack is even on domains when the costs of all items are equal to one and the utilities of each voter come from the set proof we give a reduction from the problem given a universe u with elements and a set f of subsets of u the question is to decide whether there exist exactly k subsets in f that cover u without loss of generality we can additionally assume that each element in u appears in exactly three sets from f given an instance u en f fm of note ai am i i figure visualization of the utilities of the voters used in the proof of theorem the solid lines can be interpreted as plots depicting the utilities of the voters from different items for instance agent assigns utility of to the items am and utility of to the items note that agents and depicted in the figure correspond to the element such that fi fm that n m we compute an instance of the problem of computing a fair knapsack as follows the utilities of the voters are depicted in figure first for each i m we introduce two items ai and that correspond to set fi each with the cost of one further we introduce three different 
types of voters we add two voters and with ai and ai for all i m i i for each i m we add two voters and with uy i aj j i i j i i j and uy i aj uy i i i for each i n we add two voters and with uz i aj fi aj and uz i aj uz i where fi aj j ei for j m fi j m ei for j m we set the budget to b and the required nash welfare to w it is apparent that with the order this is profile is for single i i crossingness note that the utilities of agents xi yj zj are increasing over if i and decreasing if i hence the order of voters ym zm y y m zm witnesses we will prove that u f fm is a for if and only if the constructed instance of fair knapsack is a let f fbk f be an exact cover of u we claim that s abi i k is a fair knapsack first observe that c s b we consider the welfare for each of p the three typesp of voters separately for and we have a a i next consider the voters of type consider i m k x uy i abj uy i x uy i abj uy i x uy i abj uy i i bj j bj i j i bj m by symmetry pk uy i abj uy i i finally consider the voters of type consider a voter i m let j be the index such that ei recall exact cover we have x uz i a k x fi abj fi fi fi x fi abj fi k j x k j x bj ei bj m ei m ei k j x k j by symmetry p x k j x k j uz i a ei fbj hence we get in total that the nash welfare is equal to w let s be a fair knapsack with c s and with the nash welfare at least equal to w we will now show that the total utility that all voters assign to each item aj a is equal to indeed the two voters from assign to aj i i the total utility of similarly any pair of voters and assigns utility of to aj i i finally observe that whenever ei fj then voters and assign utility of to aj otherwise they assign utility of to aj since each set fj contains exactly elements we get that aj gets total utility of n from the voters from hence items contribute to the total utility and so for the nash welfare to be equal to w this total utility must be distributed as equally as possible among the voters specifically voters need to get the total utility of and voters must get the total utility of now we claim that for each i m ai s suppose this is not the case and let i m be the smallest index such that either i ai s s or ii ai s consider the first case i let j i aj s and i j aj s it holds that and it follows for i voter that x uy i aj aj case p ii works analogously and hence our claim follows p from this we infer that uy a for each i and m and that uxi a for i p each i thus for each voter z from it must be the case that uz a finally we will prove that f fbk fi f ai s i m forms a cover of u towards a contradiction suppose that there is an element ei u such that ei i is not covered by f we consider voter observe that since ei fbj for each j k we have x x uz i abj uz i k k thus we reached a contradiction and consequently we get that every element in u is covered by this completes the proof as we discussed in section if the voters utilities come from the binary set and if the costs of the items are equal to one then the problem of computing a fair knapsack is equivalent to computing winners according to proportional approval voting for this case with preferences peters showed that the problem can be formulated as an integer linear program with total unimodular constraints and thus it is solvable in polynomial time this makes our result interesting as it shows that by allowing slightly more general utilities coming from the set instead of the problem becomes already even if we additionally assume of the preferences this draws quite an accurate line 
separating instances which are computationally easy from those which are intractable conclusion in this paper we study three variants of the knapsack problem in multiagent settings one of these variants selecting an individually best knapsack has been considered in the literature before and this work introduces the other two concepts diverse and fair knapsack our paper establishes a relation between the knapsack problem and a broad literature including a literature on multiwinner voting and on fair allocation this way we expose a variety of ways in which the preferences of the voters can be aggregated in a number of scenarios that are captured by the abstract model of the multiagent knapsack our computational results are outlined in table in summary our results show that the problem of computing an or a diverse knapsack can be handled efficiently under some simplifying assumptions on the other hand we give multiple evidences that computing a fair knapsack is a hard problem thus this research provides theoretical foundations motivating and calls for studying approximation and heuristic algorithms for the problem of computing a fair knapsack references arrow social choice and individual values john wiley and sons revised editon ausiello d atri and protasi structure preserving reductions among convex optimization problems journal of computer and system sciences benabbou and perny solving knapsack problems using incremental approval voting in proceedings of the european conference on artificial intelligence pages benade nath procaccia and shah preference elicitation for participatory budgeting in proceedings of the aaai conference on artificial intelligence pages betzler slinko and uhlmann on the computation of fully proportional representation journal of artificial intelligence research black on the rationale of group journal of political economy bredereck faliszewski niedermeier skowron and talmon mixed integer programming with constraints tractability and applications to multicovering and voting technical report byrka pensyl rybicki spoerhase srinivasan and trinh an improved approximation algorithm for knapsack median using sparsification in proceedings of annual european symposium on algorithms pages cabannes participatory budgeting a significant contribution to participatory democracy environment and urbanization caragiannis kurokawa moulin procaccia shah and wang the unreasonable fairness of maximum nash welfare in proceedings of the acm conference on economics and computation pages chamberlin and courant representative deliberations and representative decisions proportional representation and the borda rule american political science review conitzer freeman and shah fair public decision making in proceedings of the acm conference on economics and computation pages cygan fomin kowalik lokshtanov marx pilipczuk pilipczuk and saurabh parameterized algorithms springer darmann and schauer maximizing nash product social welfare in allocating indivisible goods european journal of operational research downey and fellows fundamentals of parameterized complexity texts in computer science springer elkind and lackner structure in dichotomous preferences in proceedings of the international joint conference on artificial intelligence pages elkind lackner and peters structured preferences in endriss editor trends in computational social choice ai access fain goel and munagala the core of the participatory budgeting problem in proceedings of the conference on web and internet economics pages 
faliszewski skowron slinko and talmon multiwinner voting a new challenge for social choice theory in endriss editor trends in computational social choice ai access faliszewski skowron slinko and talmon multiwinner rules on paths from to pages zanjirani farahani and hekmatfar editors facility location concepts models and case studies springer fellows hermelin rosamond and vialette on the parameterized complexity of graph problems theoretical computer science florios mavrotas and diakoulaki solving multiobjective multiconstraint knapsack problems using mathematical programming and evolutionary algorithms european journal of operational research flum and grohe parameterized complexity theory freeman zahedi and conitzer fair and efficient social choice in dynamic settings pages the multidimensional knapsack problem an overview european journal of operational research garey and johnson computers and intractability a guide to the theory of freeman and company goel krishnaswamy sakshuwong and aitamurto knapsack voting voting mechanisms for participatory budgeting manuscript frank kelly charging and rate control for elastic traffic european transactions on telecommunications kulik shachnai and tamir approximations for monotone and nonmonotone submodular maximization with knapsack constraints mathematics of operations research lackner and skowron consistent rules technical report april lee mirrokni nagarajan and sviridenko submodular maximization under matroid and knapsack constraints in proceedings of the fortyfirst annual acm symposium on theory of computing pages lenstra integer programming with a fixed number of variables mathematics of operations research lu and boutilier budgeted social choice from consensus to personalized decision making in proceedings of the international joint conference on artificial intelligence pages lust and teghem the multiobjective multidimensional knapsack problem a survey and a new approach international transactions in operational research mirrlees an exploration in the theory of optimal income taxation review of economic studies monroe fully proportional representation american political science review moulin fair division and collective welfare the mit press nash the bargaining problem econometrica nguyen roos and rothe a survey of approximability and inapproximability results for social welfare optimization in multiagent resource allocation annals of mathematics and artificial intelligence niedermeier invitation to algorithms oxford university press peters and total unimodularity efficiently solve voting problems without even trying technical report peters and lackner preferences on a circle in proceedings of the aaai conference on artificial intelligence pages procaccia rosenschein and zohar on the complexity of achieving proportional representation social choice and welfare puchinger raidl and pferschy the multidimensional knapsack problem structure and algorithms informs journal on computing ramezani and endriss nash social welfare in multiagent resource allocation pages springer berlin heidelberg roberts voting over income tax schedules journal of public economics skowron and faliszewski fully proportional representation with approval ballots approximating the maxcover problem with bounded frequencies in fpt time in proceedings of the aaai conference on artificial intelligence pages skowron faliszewski and lang finding a collective set of items from proportional multirepresentation to group recommendation in proceedings of the aaai conference on artificial 
intelligence skowron faliszewski and slinko achieving fully proportional representation approximability result artificial intelligence skowron yu faliszewski and elkind the complexity of fully proportional representation for electorates theoretical computer science skowron faliszewski and lang finding a collective set of items from proportional multirepresentation to group recommendation artificial intelligence sviridenko a note on maximizing a submodular set function subject to a knapsack constraint operations research letters thiele om flerfoldsvalg in oversigt over det kongelige danske videnskabernes selskabs forhandlinger pages yu chan and elkind multiwinner elections under preferences that are on a tree in proceedings of the international joint conference on artificial intelligence pages
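Referring back to the three aggregation objectives defined in the model section, the following brute-force sketch (our own illustrative code, exponential in the number of items and intended only for tiny instances) spells out u_IB, u_div and u_fair, including the convention of adding one inside each factor of the Nash-welfare product.

```python
from itertools import combinations
from math import prod

def knapsack_objectives(S, costs, utilities, budget):
    """Evaluate a candidate knapsack S (a set of item indices).
    Returns (u_IB, u_div, u_fair), or None if S exceeds the budget."""
    if sum(costs[i] for i in S) > budget:
        return None
    u_ib   = sum(u[i] for u in utilities for i in S)
    u_div  = sum(max((u[i] for i in S), default=0) for u in utilities)
    u_fair = prod(1 + sum(u[i] for i in S) for u in utilities)
    return u_ib, u_div, u_fair

def best_knapsack(costs, utilities, budget, objective=2):
    """Enumerate all feasible knapsacks and keep the best one under the
    chosen objective (0 = individually best, 1 = diverse, 2 = fair)."""
    best_set, best_val = None, None
    for r in range(len(costs) + 1):
        for S in combinations(range(len(costs)), r):
            vals = knapsack_objectives(S, costs, utilities, budget)
            if vals is not None and (best_val is None or vals[objective] > best_val):
                best_set, best_val = set(S), vals[objective]
    return best_set, best_val
```

Since computing a fair knapsack exactly is NP-hard even in quite restricted cases, as shown above, this enumeration should be read as a specification of the objectives rather than as an algorithm for realistic instances.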
| 8 |
an informal overview of triples and systems nov louis rowen abstract we describe triples and systems expounded as an axiomatic algebraic umbrella theory for classical algebra tropical algebra hyperfields and fuzzy rings introduction the goal of this overview is to present an axiomatic algebraic theory which unifies simplifies and explains aspects of tropical algebra hyperfields and fuzzy rings in terms of familiar algebraic concepts it was motivated by an attempt to understand whether or not it is coincidental that basic algebraic theorems are mirrored in supertropical algebra and was spurred by the realization that some of the same results are obtained in parallel research on hyperfields and fuzzy rings our objective is to hone in on the precise axioms that include these various examples formulate the axiomatic structure describe its uses and review five papers in which the theory is developed the bulk of this survey concerns in which the axiomatic framework is laid out since the other papers build on it other treatments can be found in although we deal with general categorical issues ours is largely a hands on approach emphasizing a negation map which exists in all of the abovementioned examples and which often is obtained by means of a symmetrization functor although the investigation has centered on semirings having grown out of tropical considerations it also could be used to develop a parallel lie theory and more generally hopf theory acquaintance with basic notions one starts with a set t that we want to study called the set of tangible elements endowed with a partial additive algebraic structure which however is not defined on all of t this is resolved by embedding t in a larger set a with a fuller algebraic structure often t is a multiplicative monoid a situation developed by lorscheid when a is a semiring however there also are examples such as lie algebras lacking associative multiplication we usually denote a typical element of t as a and a typical element of a as b definition a t over a set t is an additive monoid a together with scalar multiplication t a a satisfying distributivity over t in the sense that a for a t bi a also stipulating that a t module over a multiplicative monoid t is a t asatisfying the extra conditions b b b t b a date november mathematics subject classification primary secondary key words and phrases bipotent category congruence dual basis homology hyperfield linear algebra matrix metatangible morphism negation map module polynomial prime projective tensor product semifield semigroup semiring split supertropical algebra superalgebra surpassing relation symmetrization system triple tropical file name over monoids has been a subject of recent interest cf generally when t is not a monoid hopf theory can play an interesting role examined in turns out that distributivity over elements of t is enough to run the theory since one can re define multiplication on a to make it distributive as seen in theorem this rather easy result applies for instance to hyperfields such as the phase hyperfield rowen for the sake of this exposition we assume that a is a t and t a next we introduce a formal negation map which we describe in after some introductory examples such that t t generates a additively creating a t a t when a formal negation map is not available at the outset we can introduce it in two ways to be elaborated shortly declare the negation map to be the identity as in the supertropical case cf apply symmetrization to get the switch map of second kind cf often is 
applicable where t could take the role of the thin the element b b is called a we write for b a and usually require that t a can not be tangible in classical algebra the only is itself and for all accordingly we call a triple t definition when for some in t examples from classical mathematics might provide some general intuition about employing a to study t a rather trivial example t is the multiplicative subgroup of a field a or a could be a graded associative algebra with t its multiplicative submonoid of homogeneous elements a deeper example tied to bases is given in example but we are more interested in the situation involving semirings which are not rings some motivating examples the supertropical semiring where t is the set of tangible elements the symmetrized semiring and the power set of a hyperfield where t is the hyperfield itself since hyperfields are so varied they provide a good test for this theory semirings in general without negation maps are too broad to yield as decisive results as we would like which is the reason that negation maps and triples are introduced in the first place since we need to correlate two structures t and a as well as the negation map which could be viewed as a unary operator it is convenient to work in the context of universal algebra which was designed precisely for the purpose of discussing diverse structures together more recently these have been generalized to lawvere theories and operads but we do not delve into these aspects to round things out given a triple we introduce a surpassing relation to replace equality in our theorems in classical mathematics is just equality the quadruple a t is called a t motivating examples a satisfies all the axioms of ring except the existence of a element and of negatives a semiring is a with we elaborate the main examples motivating this theory idempotent semirings tropical geometry has assumed a prominent position in mathematics because of its ability to simplify algebraic geometry while not changing certain invariants often involving intersection numbers of varieties thereby simplifying difficult computations outstanding applications abound including the main original idea as expounded in was to take the limit of the logarithm of the absolute values of the coordinates of an affine variety as the base of the logarithm goes to the underlying algebraic structure reverted from c to the algebra rmax an ordered multiplicative monoid in which one defines a b to be max a b this is a and is clearly additively bipotent in the sense that a b a b such algebras have been studied extensively some time ago cf definition a semigroup a has characteristic k if k a a for all a a with k minimal a has characteristic if a does not have characteristic k for any k idempotent in particular bipotent semirings have characteristic and their geometry has been studied intensively as cf but logarithms can not be taken over the complex numbers and the algebraic structure of bipotent semirings is often without direct interpretation in tropical geometry so attention of tropicalists passed to the field of puisseux series which in characteristic also is an algebraically closed field but now with a natural valuation thereby making available tools of valuation theory cf the collection presents such a valuation theoretic approach thus one looks for an alternative to the algebra an informal overview of triples and systems properties of the characteristic are described in most of our major examples have characteristic but some interesting examples 
have characteristic or more supertropical semirings izhakian overcame many of the structural deficiencies of a algebra t by adjoining an extra copy of t called the ghost copy g in definition and modifying addition more generally a supertropical semiring is a semiring with ghosts r g t g together with a projection r g satisfying the extra properties a a b a b b r such that a b b supertropicality a b a if a b the supertropical semiring is standard if is mysteriously although lacking negation the supertropical semiring provides affine geometry and linear algebra quite parallel to the classical theory by taking the negation map to be the identity where the ghost ideal g takes the place of the element in every instance the classical theorem involving equality f g is replaced by an assertion that f g ghost called ghost surpassing in particular when g this means that f itself is a ghost for example an irreducible affine variety should be the set of points which when evaluated at a given set of polynomials is ghost not necessarily leading to a version of the nullstellensatz in theorem a link between decomposition of affine varieties and factorization of polynomials illustrated in one indeterminate in remark and theorem a version of the resultant of polynomials that can be computed by the classical sylvester matrix and theorem and theorem matrix theory also can be developed along supertropical lines the supertropical theorem theorem says that the characteristic polynomial evaluated on a matrix is a ghost a matrix is called singular when its permanent the tropical replacement of determinant is a ghost in theorem the row rank column rank and submatrix rank of a matrix in this sense are seen to be equal solution of tropical equations is given in supertropical singularity also gives rise to semigroup versions of the classical algebraic group sl as illustrated in also cf valuation theory is handled in a series of papers starting with also cf generalized further in and note that supertropical semirings almost are bipotent in the sense that for any in t this turns out to be an important feature in triples hyperfields and other related constructions another algebraic construction is hyperfields which are multiplicative groups in which sets replace elements when one takes sums hyperfields have received considerable attention recently in part because of their diversity and in fact viro s tropical hyperfield matches izhakian s construction but there are important nontropical hyperfields such as the hyperfield of signs the phase hyperfield and the triangle hyperfield whose theories we also want to understand along similar lines in hyperfield theory one can replace zero by the property that a given set contains an intriguing phenomenon is that linear algebra over some classes of hyperfields follows classical lines as in the supertropical case but the hyperfield of signs provides easy counterexamples to others as discussed in symmetrization this construction uses gaubert s symmetrized algebras which he designed for linear algebra as a prototype we start with t take a tb t t and define the switch map by the reader might already recognize this as the first step in constructing the integers from the natural numbers where one identifies with if but the trick here is to recognize the equivalence relation without modding it out since everything could degenerate in the nonclassical applications equality often is replaced by the assertion rowen c c for some c t the symmetrized t also can be viewed as a tb via the twist 
action utilized in to define and study the prime spectrum fuzzy rings dress introduced fuzzy rings a while ago in connection with matroids and these also have been seen recently to be related to hypergroups in negation maps triples and systems these varied examples and their theories which often mimic classical algebra lead one to wonder whether the parallels among them are happenstance or whether there is some straightforward axiomatic framework within which they can all be gathered and simplified unfortunately semirings may lack negation so we also implement a formal negation map to serve as a partial replacement for negation definition a negation map on a t a is a map t t together with a semigroup isomorphism a a both of order written a a satisfying ab a b a b t b a obvious examples of negation maps are the identity map which might seem trivial but in fact is the one used in supertropical algebra the switch map in the symmetrized algebra the usual negation map a in classical algebra and the hypernegation in the definition of hypergroups accordingly we say that the negation map is of the first kind if a a for all a t and of the second kind if a a for all a t as indicated earlier the take the role customarily assigned to the zero element in the supertropical theory the are the ghost elements in definition the are called balanced elements and have the form a a when t the element determines the negation map since b b when t a several important elements of a then are e e e e e e the most important for us is for fuzzy rings e but e need not absorb in multiplication rather in any with negation definition implies ae a a definition a is a collection a t where a is a t with t a and is a negation map a triple is a a t where t generates a our structure of choice a system is a quadruple a t where a t is a triple and is a t relation definition the main t relations are defined by a b if b a for some on sets is the relation has an important theoretical role replacing and enabling us to define a broader category than one would obtain directly from universal algebra cf one major reason why can formally replace equality in much of the theory is found in the transfer principle of given in the context of systems in theorem example the four main examples are the standard supertropical triple a t where a t g as before and is the identity map we get the system by taking to be an informal overview of triples and systems the symmetrized triple tb where a a with componentwise addition and tb t t with multiplication tb ab ab given by here we take to be the switch map which is of second kind again is the fuzzy triple appendix a for any t module a with an element t satisfying we can define a negation map on t and a given by a a in particular this enables us to view fuzzy rings as systems again is the same argument shows that tracts are systems the hyperfield p t t where t is the original hyperfield p t is its power set with componentwise operations and on the power set is induced from the hypernegation here is although we introduced since t need not generate a for example taking a p t for the phase hypergroup we are more concerned with triples and furthermore in a one can take the generated by t more triples and systems related to tropical algebra are presented in structures other than monoids also are amenable to such an approach this can all be formulated axiomatically in the context of universal algebra as treated for example in once the natural categorical setting is established it provides the context in which 
tropicalization becomes a functor thereby providing guidance to understand tropical versions of an assortment of mathematical structures ground triples versus module triples classical structure theory involves the investigation of an algebraic structure as a small category for example viewing a monoid as a category with a single object whose morphisms are its elements and homomorphisms then are functors between two of these small categories on the other hand one obtains classical representation theory via an abelian category such as the class of modules over a given ring analogously there are two aspects of triples we call a triple resp system a ground triple resp ground system when we study it as a small category with a single object in its own right usually a semidomain ground triples have the same flavor as lorscheid s blueprints albeit slightly more general and with a negation map whereas representation theory leads us to module systems described below in and this situation leads to a fork in the road the first path takes us to a structure theory based on functors of ground systems translating into homomorphic images of systems via congruences in especially prime systems in which the product of congruences is nontrivial ground systems often are designated in terms of the structure of a or t such as semiring systems or hopf systems or hyperfield the paper has a different flavor dealing with matrices and linear algebra over ground systems and focusing on subtleties concerning cramer s rule and the equality of row rank column rank and submatrix rank the second path takes us to categories of module systems in we also bring in tensor products and hom functors this is applied to geometry in in we develop the homological theory relying on work done already by grandis under the name of n and homological category without the negation map and there is a parallel approach of connes and consani in contents of systems the emphasis in is on ground systems one can apply the familiar constructions and concepts of classical algebra direct sums definition matrices involutions polynomials localization and tensor products remark to produce new triples and systems the simple tensors a b where a b t comprise the tangible elements of the tensor product the properties of tensors and hom are treated in much greater depth in and localization is analyzed for module systems in rowen basic properties of triples and systems let us turn to important properties which could hold in triples one crucial axiom for this theory holding in all tropical situations and most related theories is definition the t a t is uniquely negated if a b for a b t implies b a by definition hyperfield triples are uniquely negated we can hone in further to obtain two of the principal concepts of definition a uniquely negated t a t is if the sum of two tangible elements is tangible unless they are of each other a special case a t is if a b a b whenever a b t with b a in other words a b a b for all a b t the stipulation in the definition that b a is of utmost importance since otherwise all of our main examples would fail proposition shows how to view a triple as a hypergroup thereby enhancing the motivation of transferring hyperfield notions to triples and systems in general any triple satisfying t is uniquely negated the supertropical triple is bipotent as is the modification of the symmetrized triple described in example the krasner hyperfield triple which is just the supertropicalization of the boolean semifield b and the triple arising 
from the hyperfield of signs which is just the symmetrization of b are but the phase hyperfield triple and the triangle hyperfield triple are not even metatangible although the latter is idempotent but as seen in theorem hyperfield triples satisfy another different property of independent interest definition definition a surpassing relation in a system is called t if a b c implies b a c for a b t the category of hyperfields as given in can be embedded into the category of uniquely negated t systems theorem reversibility enables one to apply systems to matroid theory although we have not yet embarked on that endeavor in earnest the height of an element c a sometimes called width in the literature is the pt minimal t such that c ai with each ai t we say that has height by definition every element of a triple has finite height the height of a is the maximal height of its elements when these heights are bounded for example the supertropical semiring has height as does the symmetrized semiring of an idempotent semifield t some unexpected examples of systems sneak in when the triple has height as described in examples in the case is presented that could be the major axiom in the theory of ground systems over a group t leading to a bevy of structure theorems on systems starting with the observation lemma that for a b t either a b a b a and thus b or b b in proposition the following assertions are seen to be equivalent for a triple a t containing i t t a ii a is of height iii a is with e we obtain the following results for a system a t theorem if a is not then with a of characteristic when is of the first kind theorem every element has the form or mc for c t and m the extent to which this presentation is unique is described in theorem theorem distributivity follows from the other axioms theorem the surpassing relation must almost be theorem a key property of fuzzy rings holds theorem reversibility holds except in one pathological situation an informal overview of triples and systems theorem a criterion is given in terms of sums of squares for a t to be isomorphic to a symmetrized triple one would want a classification theorem of systems that reduces to classical algebras the standard supertropical semiring the symmetrized semiring layered semirings power sets of various hyperfields or fuzzy rings but there are several exceptional cases nonetheless theorem comes close namely if is of the first kind then either a has characteristic and height with bipotent or a t is isomorphic to a layered system if is of the second kind then either t is with a of height except for an exceptional case a is isometric to a symmetrized semiring when a is real or a is classical more information about the exceptions are given in remark continues with some rudiments of linear algebra over a ground triple to be discussed shortly in tropicalization is cast in terms of a functor on systems from the classical to the nonclassical this principle enables one in to define the right tropical versions of classical algebraic structures including exterior algebras lie algebras lie superalgebras and poisson algebras contents of linear algebra over systems the paper was written with the objective of understanding some of the diverse theorems in linear algebra over semiring systems we define a set of vectors vi a n i i to be t if p n for some nonempty subset i i and t and the row rank of a matrix to be vi a the maximal number of t rows a tangible vector is a vector all of whose elements are in a tangible matrix is a matrix all of whose 
rows are tangible vectors the of an n n matrix a ai j is x qn where ai i a matrix a is nonsingular if t the submatrix rank of a is the largest size of a nonsingular square submatrix of a in corollary we see that the submatrix rank of a matrix over a t triple is less than or equal to both the row rank and the column rank this is improved in theorem theorem i let a t be a system for any vector v the vector y adj a v satisfies ay in particular if is invertible in t then x adj a v satisfies v ax the existence of a tangible such x is subtler one considers valuations of systems a g and their fibers a t a g for g g we call the system t if any ascending chain of fibers stabilizes theorem corollary in a t system a if is invertible then for any vector v there is a tangible vector x with adj a such that ax v one obtains uniqueness of x theorem ii using a property called strong balance after translating some more of the concepts of into the language of systems we turn to the question raised privately for hyperfields by baker question a when does the submatrix rank equal the row rank our initial hope was that this would always be the case in analogy to the supertropical situation however gaubert observed that a nonsquare counterexample to question a already can be found in and the underlying system even is here the kind of negation map is critical a rather general counterexample for triples of the second kind is given in proposition the essence of the example already exists in the sign although the counterexample as given is a nonsquare matrix it can be modified to an n n matrix for any n this counterexample is minimal in the sense that question a has a positive answer for n and for matrices under a mild assumption cf theorem nevertheless positive results are available a positive answer for question a along the lines of theorems is given in theorem for systems satisfying certain technical conditions in rowen theorem we show that question a has a positive answer for square matrices over triples of first kind of height and this seems to be the correct framework in which we can lift theorems from classical algebra a positive answer for all rectangular matrices is given in theorem but with restrictive hypotheses that essentially reduce to the supertropical situation contents of basic categorical considerations the paper elaborates on the categorical aspects of systems with emphasis on important functors the functor triple c s definition embraces important constructions including the symmetrized triple and polynomial triples via the convolution product given before definition as in the emphasis on is for t to be a cancellative multiplicative monoid even a group which encompasses many major applications this slights the lie theory and indeed one could consider hopf systems as discussed briefly in and in more depth in motivation can be found in an issue that must be confronted is the proper definition of morphism cf definitions in categories arising from universal algebra one s intuition would be to take the homomorphisms those maps which preserve equality in the operators we call these morphisms however this approach loses some major examples of hypergroups applications in tropical mathematics and hypergroups cf definition tend to depend on the surpassing relation definition so we are led to a broader definition called in definition definition often provide the correct venue for studying ground systems on the other hand proposition gives a way of verifying that some morphisms automatically are strict the 
situation is stricter for module systems over ground triples the sticky point here is that the semigroups of morphisms mor a b in our t categories are not necessarily groups so the traditional notion of abelian category has to be replaced by definition and these lack some of the fundamental properties of abelian categories the tensor product is only functorial when we restrict our attention to strict morphisms proposition module systems in both cases in the theory of semirings and their modules homomorphisms are described in terms of congruences so congruences should be a focus of the theory the null congruences contain the diagonal and not necessarily zero and lead us to null morphisms definition an alternate way of viewing c is given in in hom is studied congruences in terms of transitive modules of m together with its dual and again one only gets all the desired categorical properties such as the adjoint isomorphism lemma when considering strict morphisms in this way the categories comprised of strict morphisms should be amenable to a categorical view to be carried out in for geometry and and in for homology again at times with a hopfian flavor the functors between the various categories arising in this theory are described in also with an eye towards valuations of triples contents of geometry the paper focuses on module systems leading to a geometric theory over ground systems and is comprised of the following parts group and lie semialgebra systems the symmetrization functor from lorscheid s blueprints to symmetrized triples localization theory for module triples over semiring triples a geometrical category theory including a representation theorem for n with negation schemes of semiringed spaces sheaves of module systems a hopf semialgebra approach to ground systems and module systems as in classical algebra the prime systems definitions play an important role in affine geometry via the zariski topology so it is significant that we have a version of the fundamental theorem of algebra in theorem which implies that a polynomial system over a prime system is prime corollary an informal overview of triples and systems contents of homology this is work in progress we start with a version of split epics weaker than the classical definition definition an epic m n if there is an n n m such that in this case we also say that and n is a of a module system m m tm is the t sum of subsystems and if and every a m can be written a for ai mi this leads to projective module systems definition a t system p is projective if for any strict epic h m of t systems every morphism f p lifts to a morphism p m in the sense that p is if for any t system strict epic h m every morphism f p to a morphism p m in the sense that f their fundamental properties are then obtained including the basis lemma proposition leading to resolutions and dimension then one obtains a homology theory in the context of homological categories and derived functors in connection to the recent work of connes and consani references adiprasito huh and katz hodge theory of matroids notices of the ams akian gaubert and guterman linear independence over tropical semirings and beyond in tropical and idempotent mathematics litvinov and sergeev eds contemp math akian gaubert and guterman tropical cramer determinants revisited contemp math amer math soc akian gaubert and rowen linear algebra over systems preprint baker and bowler matroids over hyperfields aug baker and bowler matroids over partial hyperstructures preprint baker and payne ed 
nonarchimedean and tropical geometry simons symposia berkovich analytic geometry first steps in geometry lectures from the arizona winter school university lecture series vol american mathematical society providence berkovich algebraic and analytic geometry over the field with one element bertram and easton the tropical nullstellensatz for congruences advances in mathematics bourbaki commutative algebra paris and reading butkovic the linear algebra of combinatorics lin alg and appl connes and consani homological algebra in characteristic one mar cortinas haesemeyer walker and weibel toric varieties monoid schemes and reine angew math costa sur la des publ math decebren a dress duality theory for finite and infinite matroids with coefficients advances in mathematics a dress and wenzel algebraic tropical and fuzzy geometry beitrage zur algebra und contributions to algebra und geometry etingof gelaki nikshych and ostrik tensor categories mathematical surveys and monographs volume american mathematical society gaubert des dans les diodes des mines de paris gaubert and plus methods and applications of max linear algebra in reischuk and morvan editors number in lncs lubeck march springer giansiracusa jun and lorscheid on the relation between hyperrings and fuzzy rings golan the theory of semirings with applications in mathematics and theoretical computer science volume longman sci grandis homological algebra in strongly settings world scientific grothendieck produits tensoriels topologiques et espaces nucleaires memoirs of the amer math no henry symmetrization of as hypergroups arxiv preprint itenberg kharlamov and shustin welschinger invariants of real del pezzo surfaces of degree math annalen no itenberg mikhalkin and shustin tropical algebraic geometry oberwolfach seminars verlag basel izhakian tropical arithmetic algebra of tropical matrices preprint at arxiv izhakian knebusch and rowen supertropical semirings and supervaluations j pure and appl rowen izhakian knebusch and rowen layered tropical mathematics journal of algebra izhakian knebusch and rowen categories of layered semirings commun in algebra izhakian knebusch and rowen algebraic structures of tropical mathematics in tropical and idempotent mathematics litvinov and sergeev eds contemporary mathematics ams preprint at izhakian niv and rowen supertropical sl linear and multilinear algebra to appear izhakian and rowen supertropical algebra advances in mathematics izhakian and rowen supertropical matrix algebra israel j izhakian and rowen supertropical matrix algebra ii solving tropical equations israel j izhakian and rowen supertropical polynomials and resultants of algebra jacobson basic algebra ii freeman jensen and payne combinatorial and inductive methods for the tropical maximal rank conjecture combin theory ser a joo and mincheva prime congruences of idempotent semirings and a nullstellensatz for tropical polynomials to appear in selecta mathematica jun algebraic geometry over hyperrings arxiv jun cech cohomology of semiring schemes arxiv preprint jun valuations of semirings arxiv preprint to appear in journal of pure and applied algebra jun mincheva and homology of systems in preparation jun and categories with negation arxiv jun and rowen geometry of systems katsov tensor products of functors siberian j math trans from sirbiskii mathematischekii zhurnal no katsov toward homological characterization of semirings conjecture and perfectness in a semiring context algebra universalis lorscheid the geometry of blueprints part i 
algebraic background and scheme theory adv math pp lorscheid a blueprinted view on absolute arithmetic and european mathematical society maclagan and sturmfels introduction to tropical geometry american mathematical society graduate studies in mathematics mckenzie mcnulty and taylor algebras lattices and varieties vol wadsworth and brooks mikhalkin enumerative tropical algebraic geometry in amer math soc patchkoria on exactness of long sequences of homology semimodules journal of homotopy and related structures vol plus akian cohen gaubert nikoukhah and quadrat max l max et sa ou l des french max algebra and its symmetrization or the algebra of balances acad sci paris ii phys chim sci univers sci terre no ren shaw and sturmfels tropicalization of del pezzo surfaces advances in mathematics rowen symmetries in tropical algebra rowen algebras with a negation map pages hopf algebras and associated group actions slides of lecture at acc conference combinatorics of group actions saint john s newfoundland august et de spec z journal pp viro hyperfields for tropical geometry hyperfields and dequantization arxiv department of mathematics university israel address rowen
| 0 |
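As a concrete illustration of the symmetrization construction and the switch negation map discussed above, the following minimal Python sketch builds pairs over the max-plus semiring; the pair encoding, helper names, and sample values are ours and are meant only to exhibit the switch map (a negation map of the second kind) and the balanced quasi-zero elements of the form a (-) a.

NEG_INF = float("-inf")          # additive identity of the max-plus semiring

def t_add(x, y):                 # tropical addition on scalars: max
    return max(x, y)

def t_mul(x, y):                 # tropical multiplication on scalars: ordinary +
    return x + y

def s_add(p, q):                 # componentwise addition of pairs
    return (t_add(p[0], q[0]), t_add(p[1], q[1]))

def s_mul(p, q):                 # "twist" multiplication of pairs
    a1, b1 = p
    a2, b2 = q
    return (t_add(t_mul(a1, a2), t_mul(b1, b2)),
            t_add(t_mul(a1, b2), t_mul(b1, a2)))

def negate(p):                   # the switch map, a negation map of the second kind
    return (p[1], p[0])

def is_balanced(p):              # quasi-zeros a (-) a are the pairs (a, a)
    return p[0] == p[1]

x, y = (3.0, NEG_INF), (5.0, NEG_INF)               # tangible copies of 3 and 5
assert is_balanced(s_add(x, negate(x)))             # x (-) x is a quasi-zero
assert negate(s_mul(x, y)) == s_mul(negate(x), y)   # (-)(xy) = ((-)x) y

In this encoding the tangible elements are the pairs of the form (a, NEG_INF), and adding an element to its negation always lands on a balanced pair, which plays the role that zero plays in classical algebra.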
jun random forests for industrial device functioning diagnostics using wireless sensor networks elghazel guyeux farhat hakem medjaher zerhouni and bahi abstract in this paper random forests are proposed for operating devices diagnostics in the presence of a variable number of features in various contexts like large or monitored areas wired sensor networks providing features to achieve diagnostics are either very costly to use or totally impossible to spread out using a wireless sensor network can solve this problem but this latter is more subjected to flaws furthermore the networks topology often changes leading to a variability in quality of coverage in the targeted area diagnostics at the sink level must take into consideration that both the number and the quality of the provided features are not constant and that some politics like scheduling or data aggregation may be developed across the network the aim of this article is to show that random forests are relevant in this context due to their flexibility and robustness and to provide first examples of use of this method for diagnostics based on data provided by a wireless sensor network introduction in machine learning classification refers to identifying the class to which a new observation belongs on the basis of a training set and quantifiable observations known as properties in ensemble learning the classifiers are combined to solve a particular computational intelligence problem many research papers encourage adapting this solution to improve the performance of a model or reduce the likelihood of selecting a weak classifier for instance dietterich argued that averaging the classifiers outputs guarantees a better performance than the worst classifier this claim was theoretically proven correct by fumera and roli in addition to this and under particular hypotheses the fusion of multiple classifiers can improve the performance of the best individual classifier two of the early examples of ensemble classifiers are boosting and bagging in boosting algorithm the distribution of the training set changes adaptively based on the errors generated by the previous classifiers in fact at each step a higher degree of importance is accorded to the misclassified instances at the end of the training a weight is accorded to each classifier regarding its individual performance indicating its importance in the voting process as for bagging the distribution of the training set changes stochastically and equal votes are accorded to the classifiers for both classifiers the error rate decreases when the size of the committee increases in a comparison made by tsymbal and puuronen it is shown that bagging is more consistent but unable to take into account the heterogeneity of the instance space in the highlight of this conclusion the authors emphasize the importance of classifiers integration combining various techniques can provide more accurate results as different classifiers will not behave in the same manner faced to some particularities in the training set nevertheless if the classifiers give different results a confusion may be induced it is not easy to ensure reasonable results while combining the classifiers in this context the use of random methods could be beneficial instead of combining different classifiers a random method uses the same classifier over different distributions of the training set a majority vote is then employed to identify the class in this article the use of random forests rf is proposed for industrial functioning diagnostics 
particularly in the context of devices being monitored using a wireless sensor network wsn a prerequisite in diagnostics is to consider that data provided by sensors are either flawless or simply noisy however deploying a wired sensor network on the monitored device is costly in some situations specifically in large scale moving or hardly accessible areas to monitor such situations encompass nuclear power plants or any structure spread in deep water or in the desert wireless sensors can be considered in these cases due to their low cost and easy deployment wsns monitoring is somehow unique in the sense that sensors too are subjected to failures or energy exhaustion leading to a change in the network topology thus monitoring quality is variable too and it depends on both time and location on the device various strategies can be deployed on the network to achieve fault tolerance or to extend the wsn s lifetime like nodes scheduling or data aggregation however the diagnostic processes must be compatible with these strategies and with a device coverage of a changing quality the objective of this research work is to show that rf achieve a good compromise in that situation being compatible with a number of sensors which may be variable over time some of them being susceptible to errors more precisely we will explain why random methods are relevant to achieve accurate diagnostics of an industrial device being monitored using a wsn the functioning of rf will then be recalled and applied in the monitoring context algorithms will be provided and an illustration on a simulated wsn will finally be detailed the remainder of this article is organized as follows section summarizes the related work in section we overview the research works in industrial diagnostics we present the random forest algorithm in section and give simulation results in section this research work ends with a conclusion section where the contribution is summarized and intended future work is provided related work many research works have contributed in improving the classification s accuracy for instance tree ensembles use majority voting to identify the most popular class they have the advantage of transforming weak classifiers into strong ones by combining their knowledge to reduce the error rate usually the growth of each tree is governed by random vectors sampled from the training set and bagging is one of the early examples of this in this method each tree is grown by randomly selecting individuals from the training set without replacing them the use of bagging can be motivated by three main reasons it enhances accuracy with the use of random features it gives ongoing estimates of the generalization error strength and correlation of combined trees and it is also good for unstable classifiers with large variance meanwhile freund introduced the adaptive boosting algorithm adaboost which he defined as a deterministic algorithm that selects the weights on the training set for input to the next classifier based on the wrong classifications in the previous classifiers the fact that the classifier focuses on correcting the errors at each new step remarkably improved the accuracy of classifications shortly after in randomness was again used to grow the trees the split was defined at each node by searching for the best random selection of features in the training set ho introduced the random subspace in which he randomly selects a subset of vectors of features to grow each tree diettrich introduced the random split selection where at 
each node a split is randomly selected among k best splits for these methods and like bagging a random vector sampled to grow a tree is completely independent from the previous vectors but is generated with the same distribution random split selection and introducing random noise into the outputs both gave better results than bagging nevertheless the algorithms implementing ways of the training set such as adaboost outperform these two methods therefore breiman combined the strengths of the methods detailed above into the random forest algorithm in this method individuals are randomly selected from the training set with replacement at each node a split is selected by reducing the dispersion generated by the previous step and consequently lowering the error rate this algorithm is further detailed in section overview of diagnostics with their constantly growing complexity current industrial systems witness costly downtime and failures therefore an efficient health ment technique is mandatory in fact in order to avoid expensive shutdowns maintenance activities are scheduled to prevent interruptions in system operation in early frameworks maintenance takes place either after a failure occurs corrective maintenance or according to predefined time intervals periodic maintenance nevertheless this still generates extra costs due to too soon or too late maintenances accordingly considering the actual health state of the operating devices is important in the decision making process maintenance here becomes and is only performed after the system being diagnosed in a certain health state diagnostics is an understanding of the relationship between what we observe in the present and what happened in the past by relating the cause to the effect after a fault takes place and once detected an anomaly is reported in the system behavior the fault is then isolated by determining and locating the cause or source of the problem doing so the component responsible for the failure is identified and the extent of the current failure is measured this activity should meet several requirements in order to be efficient these requirements are enumerated in the following early detection in order to improve industrial systems reliability fault detection needs to be quick and accurate nevertheless diagnostic systems need to find a reasonable between quick response and fault tolerance in other words an efficient diagnostic system should differentiate between normal and erroneous performances in the presence of a fault isolability fault isolation is a very important step in the diagnostic process it refers to the ability of a diagnostic system to determine the source of the fault and identify the responsible component with the isolability attribute the system should discriminate between different failures when an anomaly is detected a set of possible faults is generated while the completeness aspect requires the actual faults to be a subset of the proposed set resolution optimization necessitates that the set is as small as possible a tradeoff then needs to be found while respecting the accuracy of diagnostics robustness and resources it is highly desirable that the diagnostic system would degrade gracefully rather than fail suddenly for this finality the system needs to be robust to noise and uncertainties in addition to this a between system performance and computational complexity is to be considered for example diagnostics require low complexity and higher storage capacities faults identifiability a diagnostics system is 
of no interest if it can not distinguish between normal and abnormal behaviors it is also crucial that not only the cause of every fault is identified but also that new observations of malfunctioning would not be misclassified as a known fault or as normal behavior while it is very common that a present fault leads to the generation of other faults combining the effects of these faults is not that easy to achieve due to a possible on the other hand modeling the faults separately may exhaust the resources in case of large processes clarity when diagnostic models and human expertise are combined together the decision making support is more reliable therefore it is appreciated that the system explains how the fault was triggered and how it propagated and keeps track on the relationship this can help the operator use their experience to evaluate the system and understand the decision making process adaptability operating conditions external inputs and environmental conditions change all the time thus to ensure relevant diagnostics at all levels the system should adapt to changes and evolve in the presence of new information existent diagnostic models have several limitations some of which are summarized in table diagnostic model markovian process bayesian networks neural networks fuzzy systems drawbacks is not considered stages of degradation process can not be accounted for volume of data is required for the training assumptions are not always practical transitions are not considered reliance on accurate thresholds state transitions are needed for efficient results to predict unanticipated states amount of data for the training is necessary with every change of conditions is needed to reduce inputs complexity with every new entry experts are required are as good as the developers understanding table limitations of diagnostic models the degradation process can be considered as a stochastic process the evolution of the degradation is a random variable that describes the different levels of the system s health state from good condition to complete deterioration the deterioration process is multistate and can be divided into two main categories space the device is considered failed when the predefined threshold is reached space the degradation process is divided into a finite number of discrete levels as maintenance relies on reliable scheduling of maintenance activities an understanding of the degradation process is required for this finality in this paper we consider the space deterioration process random forests the rf algorithm is mainly the combination of bagging and random subspace algorithms and was defined by leo breiman as a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest this method resulted from a number of improvements in tree classifiers accuracy this classifier maximizes the variance by injecting randomness in variable selection and minimizes the bias by growing the tree to a maximum depth no pruning the steps of constructing the forest are detailed in algorithm algorithm random forest algorithm input labeled training set s number of trees t number of features f output learned random forest rf initialize rf as empty for i in t do bootstrap s initialize the root of tree i repeat if current node is terminal then affect a class go to the next unvisited node if any else select the best feature f among f split f add leftchild rightchild to tree i end if until all 
nodes are visited add tree i to the forest end for in a rf the root of a tree i contains the instances from the training subset sorted by their corresponding classes a node is terminal if it contains instances of one single class or if the number of instances representing each class is equal in the alternative case it needs to be further developed no pruning for this purpose at each node the feature that guarantees the best split is selected as follows the information acquired by choosing a feature can be computed through a the entropy of shannon which measures the quantity of information entropy p c x p log p where p is the number of examples associated to a position in the tree c is the total number of classes denotes the fraction of examples associated to a position in the tree and labelled class k p is the proportion of elements labelled class k at a position b the gini index which measures the dispersion in a population gini x c x p where x is a random sample c is the number of classes denotes the fraction of examples associated to a position in the tree and labelled class k p is the proportion of elements labelled class k at a position the best split is then chosen by computing the gain of information from growing the tree at given position corresponding to each feature as follows gain p t f p n x pj f pj where p corresponds to the position in the tree t denotes the test at branch n pj is the proportion of elements at position p and that go to position pj f p corresponds to either entropy p or gini p the feature that provides the higher gain is selected to split the node the optimal training of a classification problem can be tree ensembles have the advantage of running the algorithm from different starting points and this can better approximate the classifier in his paper leo breiman discusses the accuracy of random forests in particular he gave proof that the generalized error although different from one application to another always has an upper bound and so random forests converge the injected randomness can improve accuracy if it minimizes correlation while maintaining strength the tree ensembles investigated by breiman use either randomly selected inputs or a combination of inputs at each node to grow the tree these methods have interesting characteristics as their accuracy is at least as good as adaboost they are relatively robust to outliers and noise they are faster than bagging or boosting they give internal estimates of error strength correlation and variable importance they are simple and the trees can be grown in parallel there are four different levels of diversity which were defined in level being the best and level the worst level no more than one classifier is wrong for each pattern level the majority voting is always correct level at least one classifier is correct for each pattern level all classifiers are wrong for some pattern rf can guarantee that at least level two is reached in fact a trained tree is only selected to contribute in the voting if it does better than random the error rate generated by the corresponding tree has to be less than or the tree will be dropped from the forest in verikas et al argue that the most popular classifiers support vector machine svm multilayer perceptron mlp and relevance vector machine rvm provide too little insight about the variable importance to the derived algorithm they compared each of these methodologies to the random forest algorithm to find that in most cases rf outperform other techniques by a large margin 
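To make the forest-growing loop and the split criteria above concrete, here is a minimal Python sketch; it assumes integer class labels, uses scikit-learn decision trees as the base learners, and the function names, number of trees, and choice of max_features are illustrative rather than taken verbatim from the algorithm.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def gini(labels):
    # Gini index: 1 - sum_k p_k^2, where p_k is the proportion of class k.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    # Shannon entropy: -sum_k p_k * log2(p_k).
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain(parent, children, impurity=gini):
    # Information gain of a split: f(parent) - sum_j (n_j / n) * f(child_j).
    n = len(parent)
    return impurity(parent) - sum(len(c) / n * impurity(c) for c in children)

def grow_forest(X, y, n_trees=50, seed=0):
    # Each tree is grown on a bootstrap sample (drawn with replacement), with a
    # random subset of features considered at each split and no pruning.
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))
        tree = DecisionTreeClassifier(max_features="sqrt")
        tree.fit(X[idx], y[idx])
        forest.append(tree)
    return forest

def predict(forest, X):
    # Majority vote over the trees; assumes non-negative integer labels.
    votes = np.stack([t.predict(X) for t in forest]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

With this setup, a tree whose held-out error is no better than chance could simply be dropped from the forest before voting, in the spirit of the selection rule described above.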
experimental study data collection in this paper we consider two sets of experiments the sensor network is constituted by nodes sensing respectively the levels of temperature sensors pressure and humidity on the industrial device under consideration set of experiment in this set of experiments we consider that no level of correlation is introduced betweent the different features moreover we suppose that at time t under normal conditions temperature sensors follow a gaussian law of parameter while these parameters are mapped to in case of a malfunction of the industrial device finally these sensors return the value when they break down the gaussian parameters are when both the industrial device and the pressure sensors are in normal conditions the parameters are changed to in case of industrial failure while the pressure sensors return when they are themselves broken down finally the humidity sensors produce data following a gaussian law of parameter when they are sensing a device these parameters are set to in case of device failure while malfunctioning humidity sensors produce the value set of experiment for this set a linear correlation is injected between the studied features under normal conditions temperature sensors follow a gaussian law of parameter while these parameters are mapped to in case of a malfunction of the industrial device finally these sensors return the value when they break down when both the industrial device and the pressure sensors are in normal conditions the value of pressure is computed as x where x is the value of temperature the parameters are changed to in case of industrial failure while the pressure sensors return when they are themselves broken down for a device the humidity sensors produce data in the form of x these parameters are set to in case of device failure while malfunctioning humidity sensors produce the value for both data sets the probability that a failure occurs at time t follows a bernoulli distribution of parameter t five levels of functioning are attributed to each category of sensors depending on the abnormality of the sensed data these levels are defined thanks to thresholds which are and degrees for the temperature a temperature lower than is normal while a sensed value larger than is highly related to a malfunctioning and bars for the pressure parameter and finally and percents for the humidity data is generated as follows for each time unit t during the industrial device monitoring for each category c temperature pressure humidity of sensors for each sensor s belonging to category c if s has not yet detected a device failure s picks a new data according to the gaussian law corresponding to a device which depends on both t and c a random draw from the exponential law detailed previously is realized to determine if a breakdown occurs on the location where s is placed else s picks a new datum according to the bernoulli distribution of a category c sensor observing a malfunctioning device the global failure level f t of a set of sensed data produced by the wireless sensor network at a given time t is defined as follows for each sensed datum dti i let fit be the functioning level related to its category pressure temperature or humidity then f t max fit i random forest design figure example of a tree in the random forest the random forest constituted in this set of experiments by trees is defined as follows for each tree ti i a sample of of dates is extracted the root of the tree ti is the tuple j f n j where x is the cardinality of the 
finite set x thus its coordinate corresponds to the number of times the device has been in the global failure n in this sample of observation dates the category c having the largest gain for the dates in the root node is selected the dates are divided into five sets depending on thresholds related to then edges labeled by both c and failure levels are added to ti as depicted in figure they are directed to at most new vertices containing the tuples j f n and di j has a c level equal to li in other words we only consider in this node a of dates having their functioning level for category c equal to and we divide the into subsets depending on their global functioning levels the tuple is constituted by each cardinality of these subsets see fig the process is continued with this vertex as a new root the reduced set of observed dates and the categories minus it is stopped when either all the categories have been regarded or when tuple of the node has at least components equal to providing a diagnostic on a new set of observations finally given a new set of observations at a given time the diagnostics of the industrial device is obtained as follows let t be a tree in the forest t will be visited starting from its root until reaching a leaf as described below all the edges connected to the root of t are labeled with the same category c but with various failure levels the selected edge e is the one whose labeled level of failure regarding c corresponds to the of failure of the observations if the obtained node n following edge e is a leaf then the global level of failure of the observations according to t is the coordinate of the unique non zero component of the tuple if not the tree walk is continued at item with node n as new root the global diagnostics for the given observation is a majority consensus of all the responses of all the trees in the forest numerical simulations the training set is obtained by simulating observations for successive times which results in instances the resulting data base is then used to train trees that will constitute the trained random forest figure presents the delay between the time the system enters a failure mode and the time of its detection this is done in the absence of correlations between the different features the time value of delay the negative values and positive value refer to in time predictions early predictions and late predictions of failures respectively the plotted values are the average result per number of simulations which varies from to with time sensor nodes start to fail in order to simulate missing data packets as a result the rf algorithm was able to detect of the failures either in time or before their occurrence for each of the performed simulations we calculated the average number of errors in fault detection produced by the trees in the forest figure shows that this error rate remained below through the simulation this error rate includes both too early and too late detections when certain sensor nodes stop functioning this leads to a lack on information which has an impact on the quality of predictions this explains a sudden increase in the error rate with time we can conclude from the low error rate in the absence of some data packets that increasing the number of trees in the rf helps improve the quality and accuracy of predictions as described in section a correlation was introduced between the features figure shows the number of successful diagnostics when the number of tree estimators in the forest changes as shown in this 
figure the rf method guarantees a success rate when the number of trees is limited to as this number grows the accuracy of the method increases to reach when the number of trees is around figure delay in failure detection with respect to the number of simulations figure error rate in diagnostics with respect to the number of simulations conclusion instead of using wired sensor networks for a diagnostics and health management method it is possible to use wireless sensors such a use can be motivated by cost reasons or due to specific particularities of the monitored device in the context of a changing number and quality of provided features the use of random forests may be of interest these random classifiers were recalled with details in this article and the reason behind their use in the context of a wireless sensors network monitoring was explained finally algorithms and first examples of use of these random forests for diagnostics using a wireless sensor network were provided the simulation results showed that the algorithm guarantees a certain level of accuracy figure number of successful diagnostics with respect to the number of trees even when some data packets are missing in future work the authors intention is to compare various tools for diagnostics to the random forests either when considering wireless sensor networks or wired ones comparisons will be carried out both theoretical and practical aspects the algorithm of random forests for its part will be extended to achieve prognostics and health management too finally the method for diagnosing an industrial device will be tested on a life size model to illustrate the effectiveness of the proposed approach references yali amit and donald geman shape quantization and recognition with randomized trees neural computation leo breiman bagging predictors machine learning leo breiman using adaptive bagging to debias regressions technical report statics department ucb leo breiman random forests machine learning sourabh dash and venkat venkatasubramanian challenges in the industrial applications of fault diagnostic systems computers and chemical engineering thomas dietterich an experimental comparison of three methods for constructing ensembles of decision trees bagging boosting and randomization machine learning freund and schapire experiments with a new boosting algorithm in proceedings of the thirteenth international conference on machine learning pages giorgio fumera and fabio roli a theoretical and experimental analysis of linear combiners for multiple classifier systems ieee transactions on pattern analysis and machine intelligence tin kam ho the random subspace method for constructing decision forests ieee transactions on pattern analysis and machine intelligence shigeru kanemoto norihiro yokotsuka noritaka yusa and masahiko kawabata diversity and integration of rotating machine health monitoring methods in chemical engineering transactions number pages milan italy ramin moghaddass and ming zuo an integrated framework for online diagnostic and prognostic health monitoring using a multistate deterioration process reliability engieneering and system safety robert schapire a brief introduction to boosting in proceedings of the sixteenth international joint conference on artificial intelligence sharkey and sharkey combining diverse neural nets the knowledge egineering review alexey tsymbal and seppo puuronen bagging and boosting with dynamic integration of classifiers in the european conference on principles and practice of knowledge 
discovery in data bases pkdd pages tumer and ghosh error correlation and error reduction in ensemble classifiers connection science verikas gelzinis and bacauskiene mining data with random forests a survey and results of new tests pattern recognition
| 2 |
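To make the simulated data-collection scheme described above concrete, the following minimal Python sketch generates readings for the three sensor categories and computes the max-based global failure level; all Gaussian parameters, thresholds, and the broken-sensor sentinel are made-up placeholders, not the values used in the experiments.

import numpy as np

rng = np.random.default_rng(1)
BROKEN_VALUE = -1.0                       # placeholder sentinel for a broken sensor

# (mean, std) under normal operation and under device failure; placeholder values.
PARAMS = {
    "temperature": {"normal": (20.0, 2.0), "failure": (35.0, 4.0)},
    "pressure":    {"normal": (1.0, 0.1),  "failure": (2.5, 0.3)},
    "humidity":    {"normal": (40.0, 5.0), "failure": (70.0, 8.0)},
}

# Four increasing thresholds per category define five functioning levels (0..4).
THRESHOLDS = {
    "temperature": [22.0, 26.0, 30.0, 34.0],
    "pressure":    [1.2, 1.6, 2.0, 2.4],
    "humidity":    [45.0, 55.0, 65.0, 75.0],
}

def sense(category, device_failed, sensor_broken):
    if sensor_broken:
        return BROKEN_VALUE
    mean, std = PARAMS[category]["failure" if device_failed else "normal"]
    return rng.normal(mean, std)

def level(category, value):
    # Functioning level of one reading, from its category's thresholds.
    return int(np.searchsorted(THRESHOLDS[category], value))

def global_failure_level(readings):
    # f(t) = max over the sensed data of the per-category functioning level;
    # broken sensors are skipped here (one possible convention).
    return max(level(cat, val) for cat, val in readings if val != BROKEN_VALUE)

readings = [("temperature", sense("temperature", True, False)),
            ("pressure", sense("pressure", False, False)),
            ("humidity", sense("humidity", False, True))]
print(global_failure_level(readings))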
improvements to deep convolutional neural networks for lvcsr tara brian george george hagen tomas aleksandr bhuvana dec ibm watson research center yorktown heights ny department of computer science university of toronto tsainath bedk gsaon hsoltau tberan saravkin bhuvana asamir gdahl abstract deep convolutional neural networks cnns are more powerful than deep neural networks dnn as they are able to better reduce spectral variation in the input signal this has also been confirmed experimentally with cnns showing improvements in word error rate wer between relative compared to dnns across a variety of lvcsr tasks in this paper we describe different methods to further improve cnn performance first we conduct a deep analysis comparing limited weight sharing and full weight sharing with features second we apply various pooling strategies that have shown improvements in computer vision to an lvcsr speech task third we introduce a method to effectively incorporate speaker adaptation namely fmllr into features fourth we introduce an effective strategy to use dropout during sequence training we find that with these improvements particularly with fmllr and dropout we are able to achieve an additional relative improvement in wer on a broadcast news task over our previous best cnn baseline on a larger bn task we find an additional relative improvement over our previous best cnn baseline introduction deep neural networks dnns are now the in acoustic modeling for speech recognition showing tremendous improvements on the order of relative across a variety of small and large vocabulary tasks recently deep convolutional neural networks cnns have been explored as an alternative type of neural network which can reduce translational variance in the input signal for example in deep cnns were shown to offer a relative improvement over dnns across different lvcsr tasks the cnn architecture proposed in was a somewhat vanilla architecture that had been used in computer vision for many years the goal of this paper is to analyze and justify what is an appropriate cnn architecture for speech and to investigate various strategies to improve cnn results further first the architecture proposed in used multiple convolutional layers with full weight sharing fws which was found to be beneficial compared to a single fws convolutional layer because the locality of speech is known ahead of time proposed the use of limited weight sharing lws for cnns in speech while lws has the benefit that it allows each local weight to focus on parts of the signal which are most confusable previous work with lws had just focused on a single lws layer in this work we do a detailed analysis and compare multiple layers of fws and lws second there have been numerous improvements to cnns in computer vision particularly for small tasks for example using lp or stochastic pooling provides better generalization than max pooling used in second using overlapping pooling and pooling in time also improves generalization to test data furthermore cnns that is combining outputs from different layers of the neural network has also been successful in computer vision we explore the effectiveness of these strategies for larger scale speech tasks third we investigate using better features for cnns features for cnns must exhibit locality in time and frequency in it was found that features were best for cnns however speaker adapted features such as feature space maximum likelihood linear regression fmllr features typically give the best performance for dnns in the 
fmllr transformation was applied directly to a correlated space however no improvement was observed as fmllr transformations typically assume uncorrelated features in this paper we propose a methodology to effectively use fmllr with features this involves transforming into an uncorrelated space applying fmllr in this space and then transforming the new features back to a correlated space finally we investigate the role of rectified linear units relu and dropout for hf sequence training of cnns in was shown to give good performance for ce trained dnns but was not employed during hf however is critical for speech recognition performance providing an additional relative gain of over a dnn during ce training the dropout mask changes for each utterance however during hf training we are not guaranteed to get conjugate directions if the dropout mask changes for each utterance therefore in order to make dropout usable during hf we keep the dropout mask fixed per utterance for all iterations of conjugate gradient cg within a single hf iteration results with the proposed strategies are first explored on a english broadcast news bn task we find that there is no difference between lws and fws with multiple layers for an lvcsr task second we find that various pooling strategies that gave improvements in computer vision tasks do not help much in speech third we observe that improving the cnn input features by including fmllr gives improvements in wer finally fixing the dropout mask during the cg iterations of hf lets us use dropout during hf sequence training and avoids destroying the gains from dropout accrued during ce training putting together improvements from fmllr and dropout we find that we are able to obtain a relative reduction in wer compared to the cnn system proposed in in addition on a larger bn task we can also achieve a relative improvement in wer the rest of this paper is organized as follows section describes the basic cnn architecture in that serves as a starting point to the proposed modifications in section we discuss experiments with pooling fmllr and for hf section presents results with the proposed improvements on a and bn task finally section concludes the paper and discusses future work basic cnn architecture in this section we describe the basic cnn architecture that was introduced in as this will serve as the baseline system which we improve upon in it was found that having two convolutional layers and four fully connected layers was optimal for lvcsr tasks we found that a pooling size of was appropriate for the first convolutional layer while no pooling was used in the second layer furthermore the convolutional layers had and feature maps respectively while the fully connected layers had hidden units the optimal feature set used was filterbank coefficients including delta double delta using this architecture for cnns we were able to achieve between relative improvement over dnns across many different lvcsr tasks in this paper we explore feature architecture and optimization strategies to improve the cnn results further preliminary experiments are performed on a english broadcast news task the acoustic models are trained on hours from the and english broadcast news speech corpora results are reported on the ears set unless otherwise noted all cnns are trained with and results are reported in a hybrid setup analysis of various strategies for lvcsr optimal feature set convolutional neural networks require features which are locally correlated in time and frequency this implies 
that linear discriminant analysis lda features which are very commonly used in speech can not be used with cnns as they remove locality in frequency mel fb features are one type of speech feature which exhibit this locality property we explore if any additional transformations can be applied to these features to further improve wer table shows the wer as a function of input feature for cnns the following can be observed using to help map features into a canonical space offers improvements using fmllr to further the input does not help one reason could be that fmllr assumes the data is well modeled by a diagonal model which would work best with decorrelated features however the mel fb features are highly correlated using delta and d dd to capture further timedynamic information in the feature helps using energy does not provide improvements in conclusion it appears mel fb is the optimal input feature set to use this feature set is used for the remainder of the experiments unless otherwise noted feature mel fb mel fb mel fb fmllr mel fb d dd mel fb d dd energy wer table wer as a function of input feature number of convolutional vs fully connected layers most cnn work in image recognition makes use of a few convolutional layers before having fully connected layers the convolutional layers are meant to reduce spectral variation and model spectral correlation while the fully connected layers aggregate the local information learned in the convolutional layers to do class discrimination however the cnn work done thus far in speech introduced a novel framework for modeling spectral correlations but this framework only allowed for a single convolutional layer we adopt a spatial modeling approach similar to the image recognition work and explore the benefit of including multiple convolutional layers table shows the wer as a function of the number of convolutional and fully connected layers in the network note that for each experiment the number of parameters in the network is kept the same the table shows that increasing the number of convolutional layers up to helps and then performance starts to deteriorate furthermore we can see from the table that cnns offer improvements over dnns for the same input feature set of convolutional vs fully connected layers no conv full dnn conv full conv full conv full wer table wer as a function of of convolutional layers number of hidden units cnns explored for image recognition tasks perform weight sharing across all pixels unlike images the local behavior of speech features in low frequency is very different than features in high frequency regions addresses this issue by limiting weight sharing to frequency components that are close to each other in other words low and high frequency components have different weights filters however this type of approach limits adding additional convolutional layers as filter outputs in different pooling bands are not related we argue that we can apply weight sharing across all time and frequency components by using a large number of hidden units compared to vision tasks in the convolutional layers to capture the differences between low and high frequency components this type of approach allows for multiple convolutional layers something that has thus far not been explored before in speech table shows the wer as a function of number of hidden units for the convolutional layers again the total number of parameters in the network is kept constant for all experiments we can observe that as we increase the number of hidden units up 
to the wer steadily decreases we do not increase the number of hidden units past as this would require us to reduce the number of hidden units in the fully connected layers to be less than in order to keep the total number of network parameters constant we have observed that reducing the number of hidden units from results in an increase in wer we were able to obtain a slight improvement by using hidden units for the first convolutional layer and for the second layer this is more hidden units in the convolutional layers than are typically used for vision tasks as many hidden units are needed to capture the locality differences between different frequency regions in speech number of hidden units wer table wer as a function of of hidden units limited full weight sharing in speech recognition tasks the characteristics of the signal in lowfrequency regions are very different than in high frequency regions this allows a limited weight sharing lws approach to be used for convolutional layers where weights only span a small local region in frequency lws has the benefit that it allows each local weight to focus on parts of the signal which are most confusable and perform discrimination within just that small local region however one of the drawbacks is that it requires setting by hand the frequency region each filter spans furthermore when many lws layers are used this limits adding additional sharing convolutional layers as filter outputs in different bands are not related and thus the locality constraint required for convolutional layers is not preserved thus most work with lws up to this point has looked at lws with one layer alternatively in a full weight sharing fws idea in convolutional layers was explored similar to what was done in the image recognition community with that approach multiple convolutional layers were allowed and it was shown that adding additional convolutional layers was beneficial in addition using a large number of hidden units in the convolutional layers better captures the differences between low and high frequency components since multiple convolutional layers are critical for good performance in wer in this paper we explore doing lws with multiple layers specifically the activations from one lws layer have locality preserving information and can be fed into another lws layer results comparing lws and fws are shown in table note these results are with stronger features as opposed to previous lws work which used simpler for both lws and fws we used convolutional layers as this was found in to be optimal first notice that as we increase the number of hidden units for fws there is an improvement in wer confirming our belief that having more hidden units with fws is important to help explain variations in frequency in the input signal second we find that if we use lws but match the number of parameters to fws we get very slight improvements in wer it seems that both lws and fws offer similar performance because fws is simpler to implement as we do not have to choose filter locations for each limited weight ahead of time we prefer to use fws because fws with parameters hidden units per convolution layer gives the best tradeoff between wer and number of parameters we use this setting for subsequent experiments pooling experiments pooling is an important concept in cnns which helps to reduce spectral variance in the input features similar to we explore method fws fws fws fws lws lws hidden units in conv layers params wer table limited full weight sharing pooling in frequency 
only and not time, as this was shown to be optimal for speech. because pooling can be dependent on the input sampling rate and speaking style, we compare the best pooling size for two tasks with different characteristics, namely telephone conversation speech (switchboard, swb) and broadcast news speech (bn). the table indicates not only that pooling is essential for cnns, but also that the same pooling size is optimal for all tasks. note that we did not run the experiment with no pooling for bn as it was already shown to not help for swb. table: wer as a function of pooling size for swb and bn, with and without pooling. type of pooling: pooling is an important concept in cnns which helps to reduce spectral variance in the input features. earlier work explored using max pooling as the pooling strategy: given a pooling region $R_j$ and a set of activations $\{a_i : i \in R_j\}$, the max pooling operation is $s_j = \max_{i \in R_j} a_i$. one of the problems with max pooling is that it can overfit the training data and does not necessarily generalize to test data. two pooling alternatives have been proposed to address some of the problems with max pooling: lp pooling and stochastic pooling. lp pooling takes a weighted average of the activations $a_i$ in pooling region $R_j$, namely $s_j = \big( \sum_{i \in R_j} a_i^p \big)^{1/p}$; $p = 1$ can be seen as a simple form of averaging, while $p = \infty$ corresponds to max pooling. one of the problems with average pooling is that all elements in the pooling region are considered, so areas of low activation may downweight areas of high activation; lp pooling for $1 < p < \infty$ is seen as a tradeoff between average and max pooling, and has been shown to give large improvements in error rate in computer vision tasks compared to max pooling. stochastic pooling is another pooling strategy that addresses the issues of max and average pooling. in stochastic pooling, a set of probabilities is first formed for each region $j$ by normalizing the activations across that region, $p_i = a_i / \sum_{k \in R_j} a_k$; a multinomial distribution is created from these probabilities and sampled to pick a location $l$, and the pooled activation is the one at that location, $s_j = a_l$ with $l \sim P(p_1, \dots, p_{|R_j|})$. stochastic pooling has the advantages of max pooling but helps prevent overfitting due to the stochastic component, and has also shown huge improvements in error rate in computer vision. given the success of lp and stochastic pooling, we compare both of these strategies to max pooling on an lvcsr task. results for the three pooling strategies are shown in the table: stochastic pooling seems to provide improvements over max and lp pooling, though the gains are slight. unlike vision tasks, it appears that in tasks such as speech recognition, which have a lot more data and thus better model estimates, generalization methods such as lp and stochastic pooling do not offer great improvements over max pooling (a short illustrative sketch of the three operators is given below). table: results with different pooling types (max pooling, stochastic pooling, lp pooling). overlapping pooling: the work presented earlier did not explore overlapping pooling in frequency; however, work in computer vision has shown that overlapping pooling can improve error rate compared to non-overlapping pooling. one of the motivations of overlapping pooling is to prevent overfitting. the table compares overlapping and non-overlapping pooling on an lvcsr speech task; one thing to point out is that because overlapping pooling has many more activations, in order to keep the experiment fair the number of parameters between non-overlapping and overlapping pooling was matched. the table shows that there is no difference in wer between overlapping and non-overlapping pooling.
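to make the three operators concrete, here is a minimal numpy sketch (not the paper's code) of max, lp and stochastic pooling applied over non-overlapping frequency regions of a single feature map; the region size, the toy activations and the helper name `pool` are illustrative assumptions.

```python
# Illustrative sketch of the three pooling operators compared above, applied to
# non-overlapping frequency regions of a 1-D vector of (non-negative, post-ReLU)
# activations. Not taken from the paper's implementation.
import numpy as np

def pool(activations, region_size, kind="max", p=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    n_regions = len(activations) // region_size
    pooled = np.empty(n_regions)
    for j in range(n_regions):
        region = activations[j * region_size:(j + 1) * region_size]
        if kind == "max":
            pooled[j] = region.max()                     # s_j = max_i a_i
        elif kind == "lp":
            pooled[j] = (region ** p).sum() ** (1.0 / p) # s_j = (sum_i a_i^p)^(1/p)
        elif kind == "stochastic":
            probs = region / region.sum()                # p_i = a_i / sum_k a_k
            pooled[j] = rng.choice(region, p=probs)      # s_j = a_l, l ~ P(p_1..p_|Rj|)
        else:
            raise ValueError(kind)
    return pooled

# toy example: 12 filterbank activations pooled with region size 3 in frequency
a = np.abs(np.random.default_rng(1).normal(size=12))
for kind in ("max", "lp", "stochastic"):
    print(kind, pool(a, region_size=3, kind=kind))
```

in practice these operators are applied per feature map inside the convolutional layers, and the stochastic variant is only sampled during training. again, on tasks with a lot of data such as speech, regularization mechanisms such as overlapping pooling do not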
seem to help compared to smaller computer vision tasks method pooling no overlap pooling with overlap wer table pooling with and without overlap pooling in time most previous cnn work in speech explored pooling in frequency only though did investigate cnns with pooling in time but not frequency however most cnn work in vision performs pooling in both space and time in this paper we do a deeper analysis of pooling in time for speech one thing we must ensure with pooling in time in speech is that there is overlap between the pooling windows otherwise pooling in time without overlap can be seen as subsampling the signal in time which degrades performance pooling in time with overlap can thought of as a way to smooth out the signal in time another form of regularization table compares pooling in time for both max stochastic and lp pooling we see that pooling in time helps slightly with stochastic and lp pooling however the gains are not large and are likely to be diminished after sequence training it appears that for large tasks with more data regularizations such as pooling in time are not helpful similar to other regularization schemes such as lp pooling and pooling with overlap in frequency method baseline pooling in time max pooling in time stochastic pooling in time lp wer table pooling in time incorporating into cnns in this section we describe various techniques to incorporate speaker adapted features into cnns fmllr features since cnns model correlation in time and frequency they require the input feature space to have this property this implies that commonly used feature spaces such as linear discriminant analysis can not be used with cnns in it was shown that a good feature set for cnns was filter bank coefficients maximum likelihood linear regression fmllr is a popular technique used to reduce variability of speech due to different speakers the fmllr transformation applied to features assumes that either features are uncorrelated and can be modeled by diagonal covariance gaussians or features are correlated and can be modeled by a full covariance gaussians while correlated features are better modeled by gaussians matrices dramatically increase the number of parameters per gaussian component oftentimes leading to parameter estimates which are not robust thus fmllr is most commonly applied to a decorrelated space when fmllr was applied to the correlated feature space with a diagonal covariance assumption little improvement in wer was observed covariance matrices stcs have been used to decorrelate the feature space so that it can be modeled by diagonal gaussians stc offers the added benefit in that it allows a few full covariance matrices to be shared over many distributions while each distribution has its own diagonal covariance matrix in this paper we explore applying fmllr to correlated features such as by first decorrelating them such that we can appropriately use a diagonal gaussian approximation with fmllr we then transform the fmllr features back to the correlated space so that they can be used with cnns the algorithm to do this is described as follows first starting from correlated feature space f we estimate an stc matrix s to map the features into an uncorrelated space this mapping is given by transformation sf next in the uncorrelated space an fmllr m matrix is estimated and is applied to the stc transformed features this is shown by transformation msf thus far transformations and demonstrate standard transformations in speech with stc and fmllr matrices however in speech 
recognition tasks once features are decorrelated with stc further transformation fmllr fbmmi are applied in this decorrelated space as shown in transformation the features are never transformed back into the correlated space however for cnns using correlated features is critical by multiplying the fmllr transformed features by an inverse stc matrix we can map the decorrelated fmllr features back to the correlated space so that they can be used with a cnn the transformation we propose is given in transformation msf the information captured in each layer of a neural network varies from more general to more specific concepts for example in speech lower layers focus more on speaker adaptation and higher layers focus more on discrimination in this section we look to combine inputs from different layers of a neural network to explore if complementarity between different layers could potentially improve results further this idea known as neural networks has been explored before for computer vision specifically we look at combining the output from fullyconnected and convolutional layers this output is fed into more layers and the entire network is trained jointly this can be thought of as combining features generated from a and network note for this experiment the same input feature features were used for both dnn and cnn streams results are shown in table a small gain is observed by combining dnn and cnn features again much smaller than gains observed in computer vision however given that a small improvement comes at the cost of such a large parameter increase and the same gains can be achieved by increasing feature maps in the cnn alone see table we do not see huge value in this idea it is possible however that combining cnns and dnns with different types of input features which are complimentary could potentially show more improvements order hf optimization method is critical for performance gains with sequence training compared to optimization though not as important for rectified linear units relu and dropout have recently been proposed as a way to regularize large neural networks in fact was shown to provide a relative reduction in wer for dnns on a english broadcast news lvcsr task however subsequent hf sequence training that used no dropout erased some of these gains and performance was similar to a dnn trained with a sigmoid and no dropout given the importance of for neural networks in this paper we propose a strategy to make dropout effective during hf sequence training results are presented in the context of cnns though this algorithm can also be used with dnns training one popular order technique for dnns is hf optimization let denote the network parameters l denote a loss function denote the gradient of the loss with respect to the parameters d denote a search direction and b denote a hessian approximation matrix characterizing the curvature of the loss around the central idea in hf optimization is to iteratively form a quadratic approximation to the loss and to minimize this approximation using conjugate gradient cg l d l t d t d b d during each iteration of the hf algorithm first the gradient is computed using all training examples second since the hessian can not be computed exactly the curvature matrix b is approximated by a damped version of the matrix g where is set via then conjugate gradient cg is run for until the relative progress made in minimizing the cg objective function falls below a certain tolerance during each cg iteration products are computed over a sample of the 
training data. results: results with the proposed fmllr idea are shown in the table. notice that by applying fmllr in a decorrelated space we can achieve an improvement over the baseline system; this gain was not possible when fmllr was applied directly to correlated features. dropout: dropout is a popular technique to prevent overfitting during neural network training. specifically, during training dropout omits each hidden unit randomly with some probability; this prevents complex co-adaptation between hidden units, forcing hidden units to not depend on other units. specifically, using dropout the activation $y_l$ at layer $l$ is given by $y_l = f\big(\tfrac{1}{1-p}\, W_l (r \ast y_{l-1}) + b_l\big)$, where $y_{l-1}$ is the input into layer $l$, $W_l$ is the weight for layer $l$, $b_l$ is the bias, $f$ is the activation function (relu), $p$ is the dropout probability and $r$ is a binary mask whose entries are drawn from a bernoulli distribution. since dropout is not used during decoding, the $\tfrac{1}{1-p}$ factor used during training ensures that at test time, when no units are dropped out, the correct total input will reach each layer. table: wer with improved fmllr features (baseline features vs the proposed fmllr features). rectified linear units and dropout: at ibm, two stages of neural network training are performed. first, dnns are trained with a stochastic gradient descent (sgd) cross-entropy (ce) criterion; second, the dnn weights are retrained using a sequence-level objective function, and since speech is a sequence-level task, this objective is more appropriate for the speech recognition problem. numerous studies have shown that sequence training provides an additional relative improvement over a ce-trained dnn. combining hf and dropout: conjugate gradient tries to minimize the quadratic objective function given above. for each cg iteration, the damped gauss-newton matrix g is estimated using a subset of the training data, and this subset is fixed for all iterations of cg; this is because if the data used to estimate g changes, we are no longer guaranteed to have conjugate search directions from iteration to iteration. recall that dropout produces a random binary mask for each presentation of each training instance; however, in order to guarantee good conjugate search directions, for a given utterance the dropout mask per layer cannot change during cg. the appropriate way to incorporate dropout into hf is to allow the dropout mask to change for different layers and different utterances, but to fix it for all cg iterations while working with a specific layer and specific utterance, although the masks can be refreshed between hf iterations. as the number of network parameters is large, saving out the dropout mask per utterance and layer is infeasible; therefore we randomly choose a seed for each utterance and layer and save this out, and calling the randomization function with the same seed guarantees that the same dropout mask is reproduced per utterance. results: we experimentally confirm that using a dropout probability of p in a subset of the layers is reasonable, with the dropout in all other layers set to zero. for these experiments we use a larger number of hidden units for the fully connected layers, as this was found to be more beneficial with dropout. results with different dropout techniques are shown in the table. notice that if no dropout is used, the wer is the same as with a sigmoid network, a result which was also found for dnns. by using dropout but fixing the dropout mask per utterance across all cg iterations, we can achieve an improvement in wer. finally, if we compare this to varying the dropout mask per cg training iteration, the wer increases.
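the fixed-mask trick only needs a small amount of bookkeeping; the following sketch (an illustrative assumption of one way to implement it, not the paper's code) regenerates the same dropout mask per utterance and layer from a stored seed, so every cg iteration within one hf iteration sees identical masks.

```python
# Sketch: keep the dropout mask fixed for all CG iterations within one HF iteration
# by storing only a seed per (utterance, layer) and regenerating the mask on demand.
import numpy as np

class SeededDropout:
    def __init__(self, drop_prob):
        self.p = drop_prob
        self.seeds = {}                                  # (utterance_id, layer_id) -> seed

    def new_hf_iteration(self):
        self.seeds.clear()                               # masks may be refreshed between HF iterations

    def mask(self, utt_id, layer_id, shape):
        key = (utt_id, layer_id)
        if key not in self.seeds:                        # first CG pass for this utterance/layer
            self.seeds[key] = np.random.randint(2**31 - 1)
        rng = np.random.default_rng(self.seeds[key])     # same seed -> same mask on every CG pass
        return (rng.random(shape) > self.p).astype(np.float32)

    def apply(self, activations, utt_id, layer_id, training=True):
        if not training:
            return activations                           # no scaling needed at test time
        m = self.mask(utt_id, layer_id, activations.shape)
        return activations * m / (1.0 - self.p)          # inverted-dropout scaling

drop = SeededDropout(drop_prob=0.5)
h = np.ones((4, 8), dtype=np.float32)
a1 = drop.apply(h, utt_id="utt_001", layer_id=3)
a2 = drop.apply(h, utt_id="utt_001", layer_id=3)
assert np.array_equal(a1, a2)   # identical mask across CG iterations for the same utterance/layer
```

fixing the mask this way is what preserves conjugate search directions across cg iterations. further investigation in the figure shows that if we vary the dropout mask there is slow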
convergence of the loss during training particularly when the number of cg iterations increases during the later part of hf training this shows experimental evidence that if the dropout mask is not fixed we can not guarantee that cg iterations produce conjugate search directions for the loss function sigmoid relu no dropout relu dropout fixed for cg iterations relu dropout per cg iteration wer table wer of hf sequence training dropout training is that it is more closely linked to the speech recognition objective function compared to using this fact we explore how many iterations of ce are actually necessary before moving to hf training table shows the wer for different ce iterations and the corresponding wer after hf training note that hf training is started and lattices are dumped using the ce weight that is stopped at notice that just by annealing two times we can achieve the same wer after hf training compared to having the ce weights converge this points to the fact that spending too much time in ce is unnecessary once the weights are in a relatively decent space it is better to just jump to hf sequence training which is more closely matched to the speech objective function ce iter times annealed ce wer hf wer table hf seq training wer per ce iteration results in this section we analyze cnn performance with the additions proposed in section namely fmllr and relu dropout results are shown on both a and hr english broadcast news task english broadcast news experimental setup following the setup in the hybrid dnn is trained using speakeradapted features as input with a context of frames a dnn with hidden units per layer and a sixth softmax layer with output targets is used all dnns are followed by ce training and then hf the feature system is also trained with the same architecture but uses output targets a pca is applied on top of the dnn before softmax to reduce the dimensionality from to using these features we apply gmm training followed by feature and discriminative training using the bmmi criterion in order to fairly compare results to the dnn hybrid system no mllr is applied to the dnn featurebased system the old cnn systems are trained with features and a sigmoid the proposed systems are trained with the fmllr features described in section and discussed in section dropout fixed per cg dropout varied per cg results loss hf iteration fig loss with dropout techniques finally we explore if we can reduce the number of ce iterations before moving to sequence training a main advantage of sequence table shows the performance of proposed feature and hybrid systems and compares this to dnn and old cnn systems the proposed cnn hybrid system offers between a relative improvement over the dnn hybrid and a relative improvement over the old cnn hybrid system while the proposed cnnbased feature system offers a modest improvement over the old feature system this slight improvements with featurebased system is not surprising all we have observed huge relative improvements in wer on a hybrid sequence trained dnn with output targets compared to a hybrid dnn however after features are extracted from both systems the gains diminish down to relative systems use the neural network to learn a feature transformation and seem to saturate in performance even when the hybrid system used to extract the features improves thus as the table shows there is more potential to improve a hybrid system as opposed to a system model hybrid dnn old hybrid cnn proposed hybrid cnn features old features proposed features 
table wer on broadcast news hours hr english broadcast news hinton deng yu dahl mohamed jaitly a senior vanhoucke nguyen sainath and kingsbury deep neural networks for acoustic modeling in speech recognition ieee signal processing magazine vol no pp lecun and bengio convolutional networks for images speech and in the handbook of brain theory and neural networks mit press mohamed jiang and penn applying convolutional neural network concepts to hybrid model for speech recognition in proc icassp sainath mohamed kingsbury and ramabhadran deep convolutional neural networks for lvcsr in proc icassp deng and yu a deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion in proc icassp sermanet chintala and lecun convolutional neural networks applied to house numbers digit classification in pattern recognition icpr international conference on experimental setup we explore scalability of the proposed techniques on hours of english broadcast news development is done on the darpa ears set testing is done on the darpa ears evaluation set the dnn hybrid system uses fmllr features with a context and use five hidden layers each containing sigmoidal units the feature system is trained with output targets while the hybrid system has output targets results are reported after hf sequence training again the proposed systems are trained with the fmllr features described in section and discussed in section results table shows the performance of the proposed cnn system compared to dnns and the old cnn system while the proposed feature system did improve wer over the old cnn wer performance slightly deteriorates after cnnbased features are extracted from the network however the cnn offers between a relative improvement over the dnn hybrid system and between a relative improvement over the old features systems this helps to strengthen the hypothesis that hybrid cnns have more potential for improvement and the proposed fmllr and techniques provide substantial improvements over dnns and cnns with a sigmoid and features model hybrid dnn features old features proposed features proposed hybrid cnn references table wer on broadcast news hrs conclusions in this paper we explored various strategies to improve cnn performance we incorporated fmllr into cnn features and also made dropout effective after hf sequence training we also explored various pooling and weight sharing techniques popular in computer vision but found they did not offer improvements for lvcsr tasks overall with the proposed ideas we were able to improve our previous best cnn results by relative zeiler and fergus stochastic pooling for regularization of deep convolutional neural networks in proc of the international conference on representaiton learning iclr krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in advances in neural information processing systems lecun huang and bottou learning methods for generic object recognition with invariance to pose and lighting in proc cvpr gales maximum likelihood linear transformations for hmmbased speech recognition computer speech and language vol no pp kingsbury sainath and soltau scalable minimum bayes risk training of deep neural network acoustic models using distributed optimization in proc interspeech dahl sainath and hinton improving deep neural networks for lvcsr using rectified linear units and dropout in proc icassp waibel hanazawa hinton shikano and lang phoneme recognition using neural networks 
ieee transactions on acoustics speech and signal processing vol no pp gales covariance matrices for hidden markov models ieee transactions on speech and audio processing vol pp kingsbury optimization of sequence classification criteria for acoustic modeling in proc icassp hinton srivastava krizhevsky sutskever and salakhutdinov improving neural networks by preventing coadaptation of feature detectors the computing research repository corr vol martens deep learning via optimization in proc intl conf on machine learning icml sainath kingsbury and ramabhadran bottleneck features using deep belief networks in proc icassp
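as a closing illustration of the fmllr idea described earlier (estimate an stc-like decorrelating transform, apply fmllr in the decorrelated space, then map back so the features remain correlated for the cnn), here is a small numpy sketch; the whitening matrix stands in for stc and the random affine matrix stands in for a real fmllr estimate, both illustrative assumptions rather than the gmm-based estimation actually used.

```python
# Sketch of the proposed fMLLR-on-correlated-features pipeline:
#   f  -> S f          (decorrelate with an STC-like matrix S)
#      -> M S f        (apply an fMLLR-style affine transform M in the decorrelated space)
#      -> S^{-1} M S f (map back so the features are again correlated, as CNNs require)
import numpy as np

rng = np.random.default_rng(0)
feats = rng.multivariate_normal(mean=np.zeros(3),
                                cov=[[2.0, 0.8, 0.3],
                                     [0.8, 1.5, 0.5],
                                     [0.3, 0.5, 1.0]],
                                size=500)                 # toy "correlated log-mel" features

# STC stand-in: a whitening transform S such that cov(S f) is approximately the identity
cov = np.cov(feats, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
S = np.diag(eigval ** -0.5) @ eigvec.T

M = np.eye(3) + 0.1 * rng.standard_normal((3, 3))         # placeholder "fMLLR" transform

decorrelated = feats @ S.T                                 # S f
adapted = decorrelated @ M.T                               # M S f
back_to_correlated = adapted @ np.linalg.inv(S).T          # S^{-1} M S f  -> CNN input

print(np.round(np.cov(decorrelated, rowvar=False), 2))         # close to identity
print(np.round(np.cov(back_to_correlated, rowvar=False), 2))   # correlated again
```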
| 9 |
model identification via physics engines for improved policy search oct shaojun zhu andrew kimmel kostas bekris and abdeslam boularias this paper presents a practical approach for identifying unknown mechanical parameters such as mass and friction models of manipulated rigid objects or actuated robotic links in a succinct manner that aims to improve the performance of policy search algorithms key features of this approach are the use of physics engines and the adaptation of a bayesian optimization framework for this purpose the physics engine is used to reproduce in simulation experiments that are performed on a real robot and the mechanical parameters of the simulated system are automatically so that the simulated trajectories match with the real ones the optimized model is then used for learning a policy in simulation before safely deploying it on the real robot given the limitations of physics engines in modeling objects it is generally not possible to find a mechanical model that reproduces in simulation the real trajectories exactly moreover there are many scenarios where a policy can be found without having a perfect knowledge of the system therefore searching for a perfect model may not be worth the computational effort in practice the proposed approach aims then to identify a model that is good enough to approximate the value of a locally optimal policy with a certain confidence instead of spending all the computational resources on searching for the most accurate model empirical evaluations performed in simulation and on a real robotic manipulation task show that model identification via physics engines can significantly boost the performance of policy search algorithms that are popular in robotics such as trpo power and pilco with no additional data i introduction this paper presents an approach for model identification by exploiting the availability of physics engines that are used for simulating dynamics of robots and objects they interact with there are many examples of popular physics engines that are becoming increasingly efficient physics engines take as inputs mechanical and mesh models of objects in a particular environment in addition to forces and torques applied to them at different and return predictions of how the objects would move the accuracy of the predicted motions depends on several factors the first one is the limitation of the mathematical model used by the engine coulomb s law of friction the second factor is the accuracy of the numerical algorithm used for solving the differential equations of motion finally the prediction depends on the accuracy of the mechanical parameters of the robot and the objects models such as mass friction and elasticity in this work we focus on this the puter authors science are with the department of rutgers university new jersey comusa the baxter robot needs to pick up the bottle but it can not reach it while the motoman robot can the motoman gently pushes the object locally without risking to lose it from observed motions mechanical properties of the object are identified via a physics engine the object is then pushed into baxter s workspace using a policy learned in simulation with the identified property parameters fig last factor and propose a method for improving the accuracy of mechanical parameters used in physical simulations for motivation consider the setup illustrated in figure where a static robot motoman assists another one baxter to reach and pick up a desired object a bottle the object is known and both robots 
have the capability to pick it up however the object can be reached only by motoman and not by baxter and due to the considerable distance between the two static robots the intersection of their reachable workspace is empty which restricts the execution of a direct in this case the motoman robot must learn an action such as rolling or sliding that would move the bottle to a distant target zone if the robot simply executes the maximum velocity push on the object the result causes the object to fall off the table similarly if the object is rolled too slowly it could end up stuck in the region between the two robot s workspaces and neither of them could reach it both outcomes are undesirable as they would ruin the autonomy of the system and require a human intervention to reset the scene or perform the action this example highlights the need for identifying an object s mechanical model to predict where the object would end up on the table given different initial velocities an optimal velocity could be derived accordingly using simulations the technique presented in this paper aims at improving the accuracy of the mechanical parameters used in physics engines in order to perform a given robotic task given recorded real trajectories of the object in question we search for the best model parameters so that the simulated trajectories are as close as possible to the observed real trajectories this search is performed through an anytime bayesian optimization where a probability distribution belief on the optimal model is repeatedly updated when the time consumed by the optimization exceeds a certain preallocated time budget the optimization is halted and the model with the highest probability is returned a policy search subroutine takes over the returned model and finds a policy that aims to perform the task the policy search subroutine could be a control method such as lqr or a reinforcement learning rl algorithm that runs on the physics engine with the identified model instead of the real world for the sack of and also for safety the obtained policy is then deployed on the robot run in the real world and the new observed trajectories are handed back again to the model identification module to repeat the same process the question that arises here is how accurate should the identified model be in order to find the optimal policy instead of spending a significant amount of time searching for the most accurate model it would be useful to stop the search whenever a model that is sufficiently accurate for the task at hand is found answering this question exactly is difficult because that would require knowing in advance the optimal policy for each model in which case the model identification process can be stopped simply when there is a consensus among the most likely models on which policy is optimal our solution to this problem is motivated by a key quality that is desired in robot rl algorithms to ensure safety most robot rl algorithms constrain the changes in the policy between two iterations to be minimal and gradual for instance both policy search reps and trust region policy optimization trpo algorithms guarantee that the kl distance between an updated policy and a previous one in a learning loop is bounded by a predefined constant therefore one can in practice use the previous best policy as a proxy to verify if there is a consensus among the most likely models on the best policy in the next iteration of the policy search this is justified by the fact that the new policies are not too different 
from the previous one in a policy search the model identification process is stopped whenever the most likely models predict almost the same value for the previous policy in other terms if all models that have reached a high probability in the anytime optimization predict the same value for the previous policy then any of these models could be used for searching for the next policy while the current paper does not provide theoretical guarantees of the proposed method our empirical evaluations show that it can indeed improve the performance of several rl algorithms the first part of the experiments is performed on systems in the mujoco simulator the second part is performed on the robotic task shown in figure ii r elated w ork two approaches exist for learning to perform tasks in systems with unknown parameters and ones methods search for a policy that best solves the task without explicitly learning the system s dynamics methods are accredited with the recent success stories of rl in video games for example in robot learning the reps algorithm was used to successfully train a robot to play table tennis the power algorithm is another policy search approach widely used for learning motor skills the trust region policy optimization trpo algorithm is arguably the rl technique policy search can also be achieved through bayesian optimization and has been used for gait optimization where central pattern generators are a popular policy parameterization methods however tend to require a lot of training data and can also jeopardize the safety of a robot approaches are alternatives that explicitly learn the unknown parameters of the system and search for an optimal policy accordingly there are many examples of approaches for robotic manipulation some of which have used simulation to predict the effects of pushing flat objects on a smooth surface a nonparametric approach was employed for learning the outcome of pushing large objects for rl the pilco algorithm has been proven efficient in utilizing a small amount of data to learn dynamical models and optimal policies several cognitive models that combine such bayesian inference with approximate knowledge of newtonian physics have been proposed recently a common characteristic of many methods is the fact that they learn a transition function using a purely statistical approach without taking advantage of the known equations of motion of narmax is an example of popular model identification techniques that are not specifically designed for dynamics in contrast to these methods we use a physics engine and concentrate on identifying only the mechanical properties of the objects instead of learning laws of motion from scratch there is also work on identifying sliding models of objects using optimization it is not clear however how these methods would perform since they are tailored to specific tasks such as pushing planar objects unlike the proposed general approach an increasingly popular alternative addresses these challenges through learning this involves the demonstration of successful examples of physical interaction and learning a direct mapping of the sensing input to controls while a desirable result these approaches usually require many physical experiments to effectively learn some recent works also proposed to use physics engines in combination with experiments to boost policy search algorithms although these methods do not explicitly identify mechanical models of objects a key contribution of the current work is linking the modelidentification 
process to the policy search process: instead of searching for the most accurate model, we search for a model that is accurate enough to predict the value function of a policy that is not too different from the searched policy; therefore, the proposed approach can be used in combination with any policy search algorithm that guarantees smooth changes in the learned policy. fig: learning a mechanical model of an object (bottle) through physical simulations (diagram: physical interaction, simulation with candidate models, simulation errors, updated model distribution, force selection in the new state). the key idea is to search for a model that closes the gap between simulation with a physics engine and reality by using an anytime bayesian optimization; the search stops when the models with the highest probabilities predict similar values for a given policy. this process is repeated, in this figure, after each time step, i.e., after every action; in practice it is more efficient to do the model identification only after a certain number of actions. iii proposed approach. we start with an overview of the model identification and policy search system; we then present the main algorithm and explain the model identification part in more detail. a system overview and notations: the figure above shows an overview of the proposed approach. the example is focused on the manipulation application, but the same approach is used to identify physical properties of actuated robotic links. in object manipulation problems, the targeted mechanical properties correspond to the object's mass and the static and kinetic friction coefficients of different regions of the object. the surface of an object is divided into a regular grid, which allows identifying the friction parameters of each part of the grid. these physical properties are all concatenated and represented as a single vector $\theta \in \Theta$, where $\Theta$ is the space of all possible values of the physical properties; $\Theta$ is discretized with a regular grid resolution. the proposed approach returns a distribution $P$ on the discretized $\Theta$ instead of a single point estimate, since model identification is generally an ill-posed problem; in other terms, there are multiple models that can explain an observed movement of an object with equal accuracy, and the objective is to preserve all possible explanations and their probabilities. the online model identification algorithm takes as input a prior distribution $P_t$, for time step $t$, on the discretized space of physical properties; $P_t$ is calculated based on the initial distribution and a sequence of observations. for instance, in the case of object manipulation, $x_t$ is the pose (position and orientation) of the manipulated object at time $t$, and the applied control is a vector describing a force exerted by the robot's fingertip on the object at that time; applying such a force results in changing the object's pose from $x_t$ to $x_{t+1}$. the algorithm returns a distribution on the models. the robot's task is specified by a reward function $R$ that maps states and actions into real numbers, and a policy $\pi$ returns an action $\pi(x)$ for state $x$. the value $V_\pi(\theta)$ of policy $\pi$ given model $\theta$ is defined as $V_\pi(\theta) = \sum_{t=0}^{H} R\big(x_t, \pi(x_t)\big)$, where $H$ is a fixed horizon, $x_0$ is a given starting state, and $x_{t+1} = f\big(x_t, \pi(x_t), \theta\big)$ is the predicted state at time $t+1$ after
simulating force in state xt using physical parameters for simplicity we focus here only on systems with deterministic dynamics b main algorithm given a reward function r and a simulator with model parameters there are many techniques that can be used for searching for a policy that maximizes value v for example one can use differential dynamic programming ddp monte carlo mc methods or simply run a modelfree rl algorithm on the simulator if the system is highly nonlinear and a good policy can not be found with former methods the choice of a particular policy search method is open and depends on the task the main loop of our system is presented in algorithm this consists in repeating three main steps data collection using the real robot model identification using a simulator and policy search in simulation using the best identified model model identification the process explained in algorithm consists of simulating the effects of forces on the object in states xi under t initialize distribution p over to a uniform distribution initialize policy repeat execute policy for h iterations on the real robot and collect new data xi for i t t h t t h run algorithm with collected data and reference policy for updating distribution p initialize a policy search algorithm trpo with and run the algorithm in the simulator with the model arg p to find an improved policy until timeout algorithm main loop various values of parameters and observing the resulting states for i the accompanying implementation is using the bullet and mujoco physics engines for this purpose the goal is to identify the model parameters that make the outcomes of the simulation as close as possible to the real observed outcomes in other terms the following optimization problem is solved de f arg min e t f xi wherein xi and are the observed states of the object at times i and i is the force that moved the object from xi to and f xi the predicted state at time t after simulating force in state xi using simulations are computationally expensive it is therefore important to minimize the number of simulations evaluations of function e while searching for the optimal parameters that solve equation we solve this problem by using the entropy search technique this method is wellsuited for our purpose because it explicitly maintains a belief on the optimal parameters unlike other bayesian optimization methods such as expected improvement that only maintain a belief on the objective function in the following we explain how this technique is adapted to our purpose and show why keeping a distribution on all models is needed for deciding when to stop the optimization the error function e does not have an analytical form it is gradually learned from a sequence of simulations with a small number of parameters to choose these parameters efficiently in a way that quickly leads to accurate parameter estimation a belief about the actual error function is maintained this belief is a probability measure over the space of all functions e rd r and is represented by a gaussian process gp with mean vector m and covariance matrix the mean m and covariance k of the gp are learned from data points e e where is a selected vector of physical properties of the object and e is the accumulated distance between actual observed states and states that are obtained from simulation using input data xi for i t a discretized space of possible values of physical properties a reference policy minimum and maximum number of evaluated models kmin kmax model confidence threshold 
value error threshold output probability distribution p over sample uniform l k stop f alse repeat calculating the accuracy of model lk for i to t do simulate xi using a physics engine with physical parameters and get the predicted next state f xi lk lk end l l lk calculate gp m k on error function e where e l using data l l monte carlo sampling sample en gp m k in foreach do n p e j n end selecting the next model to evaluate checking the stopping condition arg p log p k k if k kmin then arg p calculate the values v with all models that have a probability p by using the physics engine for simulating trajectories with models if then stop true end end if k kmax then stop true end until stop true algorithm model identification the probability distribution p on the identity of the best physical model returned by the algorithm is computed from the learned gp as de f p p arg min e z pm k e h e e de e rd h is the heaviside step function h e e if e e and h e e otherwise and pm k e is the probability of a function e according to the learned gp mean m and covariance intuitively p is the expected number of times that happens to be the minimizer of e when e is a function distributed according to gp density pm k distribution p from equation does not have a closedform expression therefore a monte carlo mc sampling is employed for estimating the process samples vectors e containing values that e could take according to the learned gaussian process in the discretized space then p is estimated by counting the ratio of sampled vectors of the values of simulation error e where happens to make the lowest error as indicated in equation in algorithm finally the computed distribution p is used to select the next vector to use as a physical model in the simulator this process is repeated until the entropy of p drops below a certain threshold or until the algorithm runs out of the allocated time budget the entropy of p is given as log pmin when the entropy of p is close to zero the mass of distribution p is concentrated around a single vector corresponding to the physical model that best explains the observations hence next should be selected so that the entropy would decrease after adding the data point e to train the gp and p using the new mean m and covariance k in equation entropy search methods follow this reasoning and use mc again to sample for each potential choice of a number of values that e could take according to the gp in order to estimate the expected change in the entropy of p and choose the parameter vector that is expected to decrease the entropy of p the most the existence of a secondary nested process of mc sampling makes this method unpractical for our online optimization instead we present a simple heuristic for choosing the next in this method that we call greedy entropy search the next is chosen as the point that contributes the most to the entropy of p arg max log p this selection criterion is greedy because it does not anticipate how the output of the simulation using would affect the entropy of nevertheless this criterion selects the point that is causing the entropy of p to be high that is a point with a good chance p of being the but also with a high uncertainty p log p we found out from our first experiments that this heuristic version of entropy search is more practical than the original entropy search method because of the computationally expensive nested mc sampling loops used in the original method the stopping condition of algorithm depends on the predicted value of a reference 
policy. the reference policy is one that will be used in the main algorithm as a starting point in the policy search with the identified model; it is also the policy executed in the previous round of the main algorithm. many policy search algorithms such as reps and trpo guarantee that the kl divergence between consecutive policies remains small. therefore, if the difference in the value of the current policy under two given models is smaller than a threshold, then the difference in the value of the updated policy under the same two models should also be smaller than a threshold that is a function of the first threshold and the kl bound; a full proof of this conjecture is the subject of an upcoming work. in practice this means that if two models both have a high probability and predict nearly the same value for the reference policy, then there is no point in continuing the bayesian optimization to find out which one of the two models is actually the most accurate, because both models will result in similar policies; the same argument can be used when there are more than two models with high probabilities. in some tasks, such as the one in the motivation example, the policy used for data collection is significantly different from the policy used for actually performing the task. the policy used to collect data consists in moving the object slowly, without risking pushing it away from the reachable workspace of the motoman; otherwise a human intervention would be needed. the optimal policy, on the other hand, consists in striking the object with a certain high velocity. therefore the data-collection policy cannot be used as a proxy for the optimal policy in the model identification algorithm; instead, we use the actual optimal policy with respect to the most likely model, i.e., the policy maximizing the simulated value under that model. it turns out that finding the optimal policy for a given model in this specific task can be performed quickly in simulation by searching in the space of discretized striking velocities; this is not the case in more complex systems, where searching for an optimal policy is computationally expensive, which is the reason we use the previous best policy as a surrogate for the next best policy when checking the stopping condition. iv experimental results. the proposed model identification approach (vgmi) is validated both in simulation and on a real robotic manipulation task, and compared to other rl methods. a. experiments on rl benchmarks in simulation. setup: the simulation experiments are done in openai gym (see the figure below) with the mujoco physics simulator; the space of unknown physical models is described below. inverted pendulum: a pendulum is connected to a cart which moves linearly; the dimensionality of the parameter space is two, one for the mass of the pendulum and one for the cart. swimmer: the swimmer is a planar robot; the parameter space has three dimensions, one for the mass of each link. hopper: the hopper is a planar robot, thus the dimensionality of the parameter space is four. walker: the walker is a planar biped robot, thus the dimensionality of the parameter space is seven. for each of the environments we use the simulator with default mass as the real system and increase or decrease the masses by ten to fifty percent randomly to create inaccurate simulators to use as prior models. in this section all the policies are trained with trust region policy optimization (trpo) implemented in rllab; the policy network has two hidden layers. fig: openai gym systems used in the experiments (inverted pendulum, swimmer, hopper, walker). fig: model identification in the inverted pendulum environment using two variants of entropy search (entropy search vs greedy entropy search; trajectory error in meters as a function of time in seconds).
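to make the selection step concrete, here is a small sketch of the monte carlo estimate of $P(\theta)$ and the greedy entropy-search rule described above; the one-dimensional model grid, the stand-in error function (used instead of physics-engine rollouts) and scikit-learn's gp regressor are all illustrative assumptions.

```python
# Sketch: sample error functions from a GP fitted to simulation errors, count how often
# each discretized model is the minimizer, then evaluate next the model contributing
# most to the entropy of P (greedy entropy search).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def sim_error(theta):                      # stand-in for replaying pushes in a physics engine
    return (theta - 0.37) ** 2 + 0.01 * np.sin(25 * theta)

theta_grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)    # discretized model space
evaluated_idx = [0, 50, 100]                               # models simulated so far
X = theta_grid[evaluated_idx]
y = sim_error(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X, y)

# Monte Carlo estimate of P(theta is the argmin of the error) over the whole grid
samples = gp.sample_y(theta_grid, n_samples=500, random_state=0)   # shape (101, 500)
argmins = samples.argmin(axis=0)
P = np.bincount(argmins, minlength=len(theta_grid)) / samples.shape[1]

# Greedy entropy search: evaluate next the model with the largest -P log P term
contrib = np.zeros_like(P)
nonzero = P > 0
contrib[nonzero] = -P[nonzero] * np.log(P[nonzero])
theta_next = theta_grid[contrib.argmax(), 0]
print("next model to simulate:", theta_next, " current entropy:", contrib.sum())
```

in the actual system the evaluated errors come from replaying recorded pushes in the physics engine rather than from the toy function above. we start by comparing greedy entropy search (ges) with the original entropy search (es) on the problem of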
identifying the mass parameters of the inverted pendulum system rollout trajectories are collected using optimal policies learned with the real system given inaccurate simulators and the control sequence from rollouts we try to identify the mass parameters which enables the simulator to generate trajectories most close to the real ones figure shows that ges converges faster than es similar behaviors were observed on the other systems but not reported here for space s sake we refer to the main algorithm detailed in algorithm as in this section starts with the inaccurate simulator and vgmi gradually increases the accuracy of the simulator we compare against a trpo trained directly with the real system and b trpo trained with inaccurate simulators depending on problem difficulty we vary the number of iterations for policy optimization for trpo both with the real system and with the inaccurate simulators we run inverted pendulum for interactions swimmer for iterations hopper for iterations and for iterations for we run vgmi as detailed in algorithm every iterations h in algorithm we run iterations for inverted pendulum iterations for swimmer iterations for hopper and iterations for all the results are the mean and variance of independent trials for statistical significance results we report performance both in terms of the number of rollouts on the real system and the total training time the number of rollouts represents the data efficiency of the policy search algorithms and corresponds to the actual number of trajectories in the real system the total training time is the total simulation and policy optimization time used for trpo to converge for it also includes the time spent on model identification figure shows the mean cumulative reward per rollout trajectory on the real systems as functions of the number of rollouts used for training for all four tasks requires less rollouts the rollouts are used by vgmi to identify the optimal mass parameter of the simulator for policy search while they are used directly for policy search by trpo the results show that the models identified by vgmi are accurate enough for trpo to find a good policy by using the same amount of data figure shows the cumulative reward per trajectory on the real system as a function of the total time in seconds we also report the performance of trpo when trained with inaccurate simulators which is worse then when it is trained directly on the real system the real system here is also a simulator but with different physical parameters this clearly shows the advantage of model identification from data for policy search is slower than trpo because of all the extra time spent by on model identification and policy search in the learned simulator in summary vgmi boosts the of trpo by identifying parameters of the objects and using a physics engine with the identified parameters to search for a policy before deploying it on the real system on the other hand vgmi adds a computational burden to trpo manipulation experiments on a real robot setup the task in this experiment is to push the bottle one meter away from one side of a table to the other as shown in figure the goal is to find an optimal policy with parameter representing the pushing velocity of the robotic hand the pushing direction is always towards the target position and the hand pushes the object at its geometric center during data collection no human effort is needed to reset the scene the velocity and pushing direction are controlled such that the object is always in 
the workspace of the robotic hand specifically a pushing velocity limit is set and the pushing direction is always towards the center of the workspace the proposed approach iteratively searches for best pushing velocity by uniformly sampling different velocities in simulation and identifies the object model parameters the mass and the friction coefficient using trajectories from rollouts by running vgmi as in algorithm in this experiment we run vgmi after each rollout h in algorithm the method is compared to two reinforcement learning methods power and pilco for power the reward function is r where dist is the distance between the object position after pushing and the desired target fig cumulative reward per trajectory as a function of the number of trajectories on the real system trajectories on a second simulator with identified models are not counted here as they do not occur on the real system fig cumulative reward per trajectory as a function of total time in seconds including search and optimization times fig examples of experiment where the motoman pushes the object into baxter s workspace figure provides the real robotic experiment with a motoman robot the proposed method achieves both lower final object location error and fewer number of object drops comparing to alternatives the reduction in object drops is especially important for autonomous robot learning as it minimizes human effort during learning the approach such as power results in higher location error and more object drops pilco performs better than power as it also learns a dynamical model in addition to the policy but the model may not be as accurate as a physics engine with identified parameters as only a very simple policy search method is used for vgmi the performance is expected to be better is more advanced policy search methods such as combining power with vgmi power pilco vgmi power pilco vgmi of times object falls off the table results two metrics are used for evaluating the performance the distance between the final object location after being pushed and the desired goal location the number of times the object falls off the table a video of these experiments can be found in the supplementary video or on https location error meters position for pilco the state space is the object position number of trials number of trials fig pushing policy optimization results using a motoman robot our method vgmi achieves both lower final object location error and fewer object drops comparing to alternatives best viewed in color c onclusion this paper presents a practical approach that integrates a physics engine and bayesian optimization for model identification to increase the data efficiency of reinforcement learning algorithms the model identification process is taking place in parallel with the reinforcement learning loop instead of searching for the most accurate model the objective is to identify a model that is accurate enough so as to predict the value function of a policy that is not too different from the current optimal policy therefore the proposed approach can be used in combination with any policy search algorithm that guarantees smooth changes in the learned policy both simulated and real robotic manipulation experiments show that the proposed technique for model identification can decrease the number of rollouts needed to learn optimal policy future works include performing an analysis of the properties for the proposed model identification method such as expressing the conditions under which the inclusion of 
the model identification approach reduces the needs for physical rollouts and the in convergence in terms of physical rollouts it is also interesting to consider alternative physical tasks such as locomotion challenges which can benefit by the proposed framework r eferences erez tassa and todorov simulation tools for robotics comparison of bullet havok mujoco ode and physx in ieee international conference on robotics and automation icra pp bullet physics engine online available mujoco physics engine online available dart physics egnine online available http physx physics engine online available havok physics engine online available sutton and barto introduction to reinforcement learning ed cambridge ma usa mit press bertsekas and tsitsiklis programming ed athena scientific kober j bagnell and peters reinforcement learning in robotics a survey international journal of robotics research july mnih kavukcuoglu silver a rusu veness bellemare graves riedmiller fidjeland ostrovski petersen beattie sadik antonoglou king kumaran wierstra legg and hassabis control through deep reinforcement learning nature vol no pp online available http peters and relative entropy policy search in proceedings of the aaai conference on artificial intelligence aaai pp kober and peters policy search for motor primitives in robotics in advances in neural information processing systems pp schulman levine abbeel jordan and moritz trust region policy optimization in proceedings of the international conference on machine learning blei and bach eds jmlr workshop and conference proceedings pp online available http pdf calandra seyfarth peters and deisenroth bayesian optimization for learning gaits under uncertainty annals of mathematics and artificial intelligence amai vol no pp ijspeert central pattern generators for locomotion control in animals and robots a neural networks vol no pp dogar hsiao ciocarlie and srinivasa grasp planning through clutter in robotics science and systems viii july lynch and mason stable pushing mechanics controllability and planning ijrr vol merili veloso and akin of complex passive mobile objects using experimentally acquired motion models autonomous robots pp scholz levihn isbell and wingate a model prior for mdps in proceedings of the international conference on machine learning icml zhou paolini j bagnell and mason a convex polynomial model for planar sliding identification and application in ieee international conference on robotics and automation icra stockholm sweden may pp deisenroth rasmussen and fox learning to control a manipulator using reinforcement learning in robotics science and systems rss hamrick battaglia griffiths and j tenenbaum inferring mass in complex scenes by mental simulation cognition vol pp chang ullman torralba and j tenenbaum a compositional approach to learning physical dynamics under review as a conference paper for iclr battaglia pascanu lai rezende and koray interaction networks for learning about objects relations and physics in advances in neural information processing systems ljung system identification ed theory for the user upper saddle river nj usa prentice hall ptr yu leonard and rodriguez shape and pose recovery from planar pushing in international conference on intelligent robots and systems iros hamburg germany september october pp agrawal nair abbeel malik and levine learning to poke by poking experiential learning of intuitive physics nips fragkiadaki agrawal levine and malik learning visual predictive models of physics for playing billiards in iclr 
Ullman, Goodman, and J. Tenenbaum, "Learning physics from dynamical scenes," in Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society.
Wu, Yildirim, Lim, Freeman, and Tenenbaum, "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning," in Advances in Neural Information Processing Systems, pp.
Byravan and Fox, "Learning rigid body motion using deep neural networks," CoRR, vol.
Finn and Levine, "Deep visual foresight for planning robot motion," ICRA.
Zhang, Wu, Zhang, Freeman, and J. Tenenbaum, "A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding," CoRR, vol.
Li, Azimi, Leonardis, and Fritz, "To fall or not to fall: A visual approach to physical stability prediction," vol.
Lerer, Gross, and Fergus, "Learning physical intuition of block towers by example," in Proceedings of the International Conference on Machine Learning (ICML), New York City, NY, USA, June, pp.
Pinto, Gandhi, Han, Y. Park, and Gupta, "The curious robot: Learning visual representations via physical interactions," CoRR, vol.
Li, Leonardis, and Fritz, "Visual stability prediction and its application to manipulation," CoRR, vol.
Denil, Agrawal, Kulkarni, Erez, Battaglia, and de Freitas, "Learning to perform physics experiments via deep reinforcement learning."
Yu, Liu, and Turk, "Preparing for the unknown: Learning a universal policy with online system identification," CoRR, vol. Online; available: http
Marco, Berkenkamp, Hennig, Schoellig, Krause, Schaal, and Trimpe, "Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization," in IEEE International Conference on Robotics and Automation (ICRA), Singapore, Singapore, May-June, pp. Online; available: https
Hennig and Schuler, "Entropy search for global optimization," Journal of Machine Learning Research, vol., pp.
Rasmussen and Williams, Gaussian Processes for Machine Learning. The MIT Press.
Brockman, Cheung, Pettersson, Schneider, Schulman, Tang, and Zaremba, "OpenAI Gym," arXiv preprint.
Duan, Chen, Houthooft, Schulman, and Abbeel, "Benchmarking deep reinforcement learning for continuous control," in ICML, pp.
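As a concrete illustration of the greedy entropy search selection step described in the model identification algorithm earlier in this section, the following sketch estimates p(theta* = theta) by Monte Carlo sampling from the GP posterior over a discretized parameter grid and then picks the grid point with the largest entropy contribution, -p log p. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the assumption that the GP posterior is already summarized by a mean vector and covariance matrix over the grid, and the sample count are all choices made for the example.

    import numpy as np

    def estimate_p_best(mean, cov, n_samples=1000, rng=None):
        # Monte Carlo estimate of p(theta* = theta_j): the fraction of sampled
        # error functions e ~ N(mean, cov) for which grid point j is the argmin.
        rng = np.random.default_rng() if rng is None else rng
        samples = rng.multivariate_normal(mean, cov, size=n_samples)
        mins = samples.argmin(axis=1)
        counts = np.bincount(mins, minlength=len(mean))
        return counts / n_samples

    def greedy_entropy_search_step(mean, cov, n_samples=1000, eps=1e-12, rng=None):
        # One selection step: return (index of next model to simulate, entropy of p).
        # The next candidate is argmax_j of the per-point entropy contribution -p_j log p_j.
        p = estimate_p_best(mean, cov, n_samples=n_samples, rng=rng)
        contrib = -p * np.log(p + eps)      # per-point contribution to H(p)
        entropy = contrib.sum()             # H(p) = -sum_j p_j log p_j
        return int(contrib.argmax()), entropy

In use, the caller would refit the GP after each new simulation error is observed, call greedy_entropy_search_step again, and stop once the returned entropy drops below the chosen threshold or the time budget is exhausted, mirroring the stopping conditions of the algorithm above.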
| 2 |
september multivariate density modeling for retirement finance christopher rook abstract prior to the financial crisis mortgage securitization models increased in sophistication as did products built to insure against losses layers of complexity formed upon a foundation that could not support it and as the foundation crumbled the housing market followed that foundation was the gaussian copula which failed to correctly model correlations of derivative securities in duress in retirement surveys suggest the greatest fear is running out of money and as retirement decumulation models become increasingly sophisticated large financial firms and may guarantee their success similar to an investment bank failure the event of retirement ruin is driven by outliers and correlations in times of stress it would be desirable to have a foundation able to support the increased complexity before it forms however the industry currently relies upon similar gaussian or lognormal dependence structures we propose a multivariate density model having fixed marginals that is tractable and fits data which are skewed multimodal of arbitrary complexity allowing for a rich correlation structure it is also ideal for a retirement plan by fitting historical data seeded with black swan events a preliminary section reviews all concepts before they are used and fully documented source code is attached making the research lastly we take the opportunity to challenge existing retirement finance dogma and also review some recent criticisms of retirement ruin probabilities and their suggested replacement metrics table of contents introduction i literature review ii preliminaries iii univariate density modeling iv multivariate density modeling covariances multivariate density modeling vi real compounding return on a diversified portfolio vii retirement portfolio optimization viii conclusion references data surveys ix appendix with source code keywords variance components em algorithm ecme algorithm maximum likelihood pdf cdf information criteria finite mixture model constrained optimization retirement decumulation probability of ruin glidepaths financial crisis contact a financial security that is purchased for at time with all distributions reinvested yields a value at time t called the adjusted price say pt for t the total return at time t is rt pt and the total compounding return is so that pt if the inflation rate between times and t is it then where rt is the real return at time the real price at time t is the value pt such that rt which upon solving yields pt in an efficient market real prices are governed by a geometric random walk grw that is ln pt ln st where st n a value of represents a drift and is the expected price increase sufficient to compensate the investor for risk between times and in a random walk the next value is the current value plus a random normal step st and the best predictor of it is the current value exponentiating both sides of the grw model yields the alternative form pt e where e lognormal under strict conditions the normally distributed step st can be justified decompose the time between and t into a series of smaller segments say d d and let be independent and identically distributed iid random variables rvs for the compounding real return between times and d so that the ln are also iid rvs the compounding real return at time t is ln r r e where st n when d by the central limit theorem clt here t can represent years and d days so that the compounding yearly return is the product of compounding 
daily returns the lognormal assumption for breaks down when the are not iid for d and there is ample research to indicate that the correlation between returns increases as the time length decreases we also find that compounding returns on liquid securities used in retirement finance are often better fit by the normal probability density function pdf than the lognormal suggesting that short term real compounding returns may not be iid see a further complication is that the normal pdf is generally considered tractable whereas the lognormal pdf is not for example a diversified portfolio of equities and bonds with real returns et and bt respectively has compounding real return where is the equity ratio unfortunately no known pdf exists for the sum of correlated lognormal rvs and we are left to approximate it for a given see rook kerman for an implementation despite the benefits of using normal rvs to model compounding real returns in finance many practitioners and researchers will not due primarily to the lack of skewness and heavy tail but also because the normal pdf can generate negative prices the spectacular failure of gaussian copulas during the financial crisis reinforces the skepticism unfortunately those who reject the normal pdf do not benefit from finance models optimized using it this research is motivated by the dilemma particularly the desire for skewed multimodal tractable pdfs to model the compounding real return on a diversified portfolio in finance applications of particular interest is the claim by karl pearson that the moments of a lognormal pdf are virtually indistinguishable from a mixture of normals mclachlan peel i literature review during the housing boom residential mortgages were packaged and sold as securities the price of a security is the present value of future cash flows which here are the mortgage payments the products were partitioned into tranches so that as borrowers defaulted holders suffered first followed by midlevel and then mackenzie spears cash flows and timings are needed to price a tranche which is a function of which loans have defaulted by each time point default times can be modeled using an exponential pdf with the probability of default before some time returned by its cumulative distribution function cdf the probability of simultaneous defaults before given times is computed from the copula or multivariate cdf and depends on the correlation between default times there is no way to estimate the true correlation between default times of residential borrowers due to lack of data li suggested translating the copula on simultaneous defaults to an equivalent expression using normal rvs the correlation between these rvs is pulled from a measure on the underlying debt instrument for which the normal assumption is reasonable and sufficient data exists using these correlations the gaussian copula can return the probability of simultaneous defaults before specific times samples on the correlated exponential failure times can then be simulated from the gaussian copula and used to value the security loan pools held mortgages with equity tranches acting like a stock and senior tranches like a safe bond low interest rates led to excess liquidity and produced an insatiable appetite from pension and sovereign wealth funds for senior tranches which yielded more than treasurys kachani a fatal flaw in the system was that economists have assumed for decades that financial data originates from regimes and correlations change during crises hamilton since housing busts 
follow housing booms it was unwise in hindsight to measure correlation with one value as witnessed defaulttime correlations increase in a crisis and senior tranches sold as safe bonds behaved more like a stock which devastated the insurers who by had underwritten trillion of credit default contracts up from trillion in blame for the crisis has focused on the gaussian copula salmon with a takeaway being that normal returns are not appropriate in finance nocera researchers and practitioners who warned against using the normal distribution were vindicated paolella subsequently declared the race is on to find more suitable multivariate pdfs for financial applications and provides an overview of mixture densities which are often used to model economic regimes and form the basis for this research the pdf we develop is a multivariate normal mixture having fixed normal mixture marginals it is tractable when used in discrete time retirement decumulation models and intuitive to understand in we detail the needed and in we fit generic univariate normal mixtures to sets of returns in we form the multivariate pdf and add correlations in finally in and we derive the real compounding return on a diversified portfolio and use it within optimal decumulation models supporting proofs derivations and a full implementation are included in the appendix ii preliminaries foundational concepts needed for the density model developed in thru are presented here probability density cumulative distribution functions let x be a continuous rv and f x a function such that f x x with the function f x is a valid pdf for x casella berger the cdf for x is defined as f x p x x by the fundamental theorem of calculus anton x f x f f x that is the pdf of an rv x is the derivative of its cdf note that x may be defined on a subset of and f usually depends on a vector of parameters say which may represent the mean and variance of x other common expressions for the pdf include f x f x f and it may also be denoted by fx to indicate the rv governed written as x fx for a single rv x f x is a univariate pdf but the above also applies n to an vector of rvs xn defined on n and the multivariate cdf of here f is a multivariate pdf with f is defined as f p where similar to the univariate case differentiating a multivariate cdf yields the multivariate pdf that is f and the marginal pdf of one rv say is obtained by integrating out all other rvs that is f finite mixture densities let x be a continuous rv and let f x f g x be g functions that satisfy the univariate pdf conditions in also let be probabilities g such that then f x x f g x also satisfies the pdf conditions in and is called a finite mixture density titterington et if x f x then f x p x x cdf of x let i x g x is the be the rth moment for f i x then e xr r thus e x and v x e e x when xn is an vector of rvs on n f g is a multivariate mixture pdf and satisfies the multivariate pdf conditions set forth in a mixture pdf f x has two distinct interpretations f x is a function that accurately models the pdf s for an rv x or the rv x originates from component density f i x with probability g and the components have labels while parameter estimation is unaffected by the interpretation the underlying math is during parameter estimation we adopt the interpretation that simplifies the math each component density f i x may depend on a parameter vector and let be the vector of component probabilities when iid observations from f x are drawn say xt t the objective is to estimate the parameters g and 
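Before turning to estimation, the mechanics of a finite mixture are easy to make concrete. The short sketch below evaluates the pdf, cdf, and first two moments of a two-component normal mixture using the relations just stated, namely that the mixture pdf, cdf, and raw moments are the probability-weighted sums of the component pdfs, cdfs, and moments, with the variance obtained as E[X^2] - (E[X])^2. The weights and component parameters shown are illustrative placeholders, not values fitted to any data set, and the use of SciPy is an assumption made for the example.

    import numpy as np
    from scipy.stats import norm

    # Illustrative two-component normal mixture (example values only).
    weights = np.array([0.9, 0.1])
    means   = np.array([0.05, 0.05])
    sigmas  = np.array([0.10, 0.30])

    def mixture_pdf(x):
        # f(x) = sum_i pi_i * f_i(x)
        return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

    def mixture_cdf(x):
        # F(x) = sum_i pi_i * F_i(x)
        return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

    # Moments: E[X^r] = sum_i pi_i * E_i[X^r], so
    mean = np.sum(weights * means)                    # E[X]
    ex2  = np.sum(weights * (sigmas**2 + means**2))   # E[X^2]
    var  = ex2 - mean**2                              # V(X) = E[X^2] - E[X]^2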
once estimated the pdf is fully specified and can be used under interpretation above components have meaning mixture pdfs model serially correlated data via the components each component is considered a state and there are g states at each time if we assume that observation depends only on the prior observation xt and that the probabilities of transitioning between states are stationary then state transitions evolve over time as a markov chain hillier lieberman define the gxg matrix i g as the conditional probability of being in state j at time given that we are in state i at time serially correlated data originating from a mixture pdf thus requires estimation of the transition probabilities in addition to and where is now interpreted as the unconditional probability of being in state i at time t used at time mclachlan peel in the g states of a mixture pdf are called regimes and the process by which dependent observations transition between states over time is termed regime switching hamilton as noted the underlying math differs between interpretations and above under interpretation components have labels thus observations from a mixture pdf can be viewed as coming in pairs xt zt where xt is the actual value and zt is the component that generated it it is common to replace zt by a vector for t g which has a in the component slot and s elsewhere at time can be expressed as or see figure i figure i mixture data collection when components have labels note each ztj or and for this representation applies to dependent and independent data mixture pdfs for dependent data are termed hidden markov models hmm rabiner juang because the state vector generally can not be observed it is hidden thus a critical task in hmm model building is determining which state generated each observation xt under interpretation the pdf incorporates the rv zt observed as as f xt f xt or f xt ztg f xt ztg ztg which is given by for for for the regimes in hamilton were time series models in this research they are pdfs or more compactly for example suppose n returns on a financial security are observed over time and appear symmetric around some overall mean do not exhibit serial correlation but do include black swan at a frequency greater than their corresponding tail probabilities under either a normal or lognormal pdf taleb an intuitive tractable pdf for such returns is tukey s contaminated normal pdf huber which is a mixture of two normals with equal means but unequal variances it can be used to thicken the tail of a normal pdf the density with larger variance generates outliers and has a smaller we can proceed intuitively by partitioning the returns into two sets with one holding the and the other holding the outliers a normal pdf can be fit to each set using mles for example with the mixture weights set to if x is an rv representing these returns then x f x f x f x where f i x n and after replacing all parameters by their estimates suppose of the returns originate from a common pdf which is n and from a gray swan pdf outliers which is n where is the overall mean estimated by see figure ii figure ii example of tukey s contaminated normal pdf by labeling components we are using interpretation note that a mixture of normal pdfs is generally not normally distributed and not symmetric this being an exception a note of caution is not to mistake the mixture pdf in figure ii with the rv z where n and n as clearly z is normally distributed in practice a normal mixture can model any pdf and is tractable mclachlan peel for 
example a mixture of two normals can closely approximate the observed outliers are called gray swan events black swans are extreme events that have not occurred before taleb discourages use of the normal pdf in and refers to the lognormal pdf as a bad compromise fama discussed using mixture pdfs to explain in stock prices and taleb also used mixtures in practice to add a heavy tail to the normal pdf as an alternative to the lognormal pdf this procedure is for illustration only such methods were common prior to the advent of the em algorithm see johnson et as will be seen mixture pdfs are almost always calibrated today using either the em algorithm or a gradient procedure lognormal pdf titterington et lastly since the mixture pdf in figure ii does not model black swan events it may be deemed unsatisfactory a solution could be to add a component labeled black swan as say n with small probability such as then adjust and so that the central limit theorem clt let xt f x t with e xt and v xt for large t x n per the central limit theorem freund in words the sum of iid rvs from any pdf f x is approximately normal denoted for large samples and the is t unfortunately this rule does not always apply to mixture pdfs consider f x f x f x where and and let f x n and f x n if x f x then e x and v x from in an iid sample of size from f x we are unlikely to draw an observation from f x leaving in violation of the clt which ensures f x and x n x n such a sample was generated from x with all observations originating from f x this value is within a of the clt pdf mean thus is a valid value repeating the process times should produce an iid sample of size from the clt pdf it does not as all values are the clt pdf caution is therefore advised when invoking the clt on rvs from mixture pdfs for example let rd f r be an rv for the daily real on the s p index d where of trading the amount b invested on january will grow to e b the annual real compounding return is a and ln a r the clt when the rd are iid d by definition the rv y e r on december in real dollars ln r which is normal per a would then be lognormal making the historical collection of annual real s p index returns a lognormal random sample this hypothesis was tested and rejected using the test rook kerman one explanation is that daily returns are not independent a claim supported by academic studies of index returns baltussen et another is that daily returns are not identically distributed or that daily returns originate from a mixture pdf with d not large enough for the clt approximation the density of a future observation under certain assumptions the pdf of a future value can be derived before it is observed let xt n be compounding returns on a financial security at time and unknown suppose xt is the observed value of xt for t with reflecting the unobserved next value note that x such an approach is considered for a retirement plan full sample standard the daily real compounding return can be approximated by rd i where rd is the total daily return calculated as end value start value value and i is the annual inflation rate while short term daily index returns have historically exhibited positive serial correlation baltussen et suggest the ubiquity of index products may have eliminated the signal or even turned it negative x is the sample mean and n where x and are independent rvs ross since x and x x are also independent x n thus has form n t x is the x x x t s s x x t s x x s t x n which follows a student s with degrees of freedom ross denoted by 
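The construction just described, in which a future observation is the sample mean plus a scaled Student's t step and the t variate is assembled from independent standard normals, translates directly into a short simulation. The sketch below is a minimal illustration under those assumptions; the function name, the draw count, and the use of NumPy's generator interface are choices made for the example rather than part of the source.

    import numpy as np

    def simulate_future_return(past_returns, n_draws=10_000, rng=None):
        # Simulate draws of the next (unobserved) compounding return X_{T+1} using
        # (X_{T+1} - xbar) / (s * sqrt(1 + 1/T)) ~ t_{T-1}.  The t variate is built
        # from normals exactly as described above: one N(0,1) draw for the numerator
        # and T-1 further N(0,1) draws squared and summed for the chi-square denominator.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(past_returns, dtype=float)
        T = len(x)
        xbar = x.mean()
        s = x.std(ddof=1)                      # sample standard deviation
        z0 = rng.standard_normal(n_draws)      # numerator N(0,1)
        chi2 = (rng.standard_normal((n_draws, T - 1)) ** 2).sum(axis=1)
        t = z0 / np.sqrt(chi2 / (T - 1))       # Student's t with T-1 degrees of freedom
        return xbar + s * np.sqrt(1.0 + 1.0 / T) * t

Because the construction is exact rather than asymptotic, the same sketch is valid for small samples, which is the point made in the surrounding text about the clt not being invoked here.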
consequently x is a rv with degrees of it has been established that x sample variance and and x where or a future observation has the pdf of a scaled student s distribution centered at the sample mean in a simulation study this would be the preferred pdf for evaluating a financial plan simulating from a student s with degrees of freedom is if we can simulate from the normal distribution first generate a n random value for random value the numerator then generate square and sum additional n values to construct a for the finally use the ratio from the definition of a rv given above law kelton the pdf of a future value is which can be derived by differentiating the cdf see the cdf for a future observation is f p x p x s which is given by freund x s t t t t t where is the gamma function finally the pdf for a future observation is x s which is derived using and the chain rule as t the pdf for future values x t t x x s t s t for x can be derived similarly the multivariate pdf of n future values is the product of the univariate pdfs assuming independence note that the technique just described breaks the distributions of x and here are not approximations the clt is not involved therefore are valid for any sample size the normal numerator and denominator must be independent rvs if xn n then x ross down when a asset is added let xt n and yt n be compounding returns on two uncorrelated financial securities at times interest is in modeling a future unobserved value on a diversified portfolio using these securities say where it follows that n x x t y thus y n as with asset suppose a quantity q exists such that a function of it and has a known pdf that is not a function of and it can be solved for the numerator of this would suggest a solution to the behren problem which is a famous unsolved problem in statistics casella berger maximum likelihood ml estimation let xt f x t be continuous rvs and xt be the observed value of xt the likelihood of xt is f xt which is the pdf of the observed value the vector holds unknown parameters such as where e xt and v xt a likelihood value is not a probability and can be however it is a similar measure since values with higher likelihoods are more likely to be observed the likelihood function is the pdf written as a function of xt extending this to the entire sample the multivariate pdf of xt evaluated at x xt is f f xt which can be written as xt the likelihood of the entire sample an appealing estimate for is that which maximizes and its value denoted is called the maximum likelihood estimator mle since the natural log function is increasing maximizing and ln are equivalent problems and it is often easier to deal with the latter if the rvs xt are independent t then f xt ln ln and mles possess many desirable statistical qualities such as consistency efficiency asymptotic normality and invariance to functional transformations thus are often considered the gold standard for parameter estimation such qualities however depend on certain regularity conditions being satisfied hogg et see finding parameter estimates in statistics therefore enters the purview of engineering disciplines that specialize in constrained optimization techniques see the likelihood function for an iid sample originating from a mixture pdf depends on the interpretation see let f and f be the likelihoods of under interpretations and respectively then the behren problem tests for equal means in two normal populations with unequal and unknown variances let n and n be independent samples under ho x q to 
make this or any other fully specified pdf is unknown y n a ln ln ln and using ln z ln ln the goal is to collect data and maximize the from either or with respect to obtaining the mle unfortunately the for interpretation in can not be maximized directly because the component indicator rvs zt ztg are missing not observable the loglikelihood from interpretation in can be maximized directly however this function is unpleasant for a variety of reasons for example it can have multiple local maximums thus finding stationary points does not guarantee an mle with its desirable properties it also is unbounded for normal components thus given any value no matter how large we can always find a setting for such that ln see appendix a when maximizing the mixture for normal components we will therefore restrict the parameter space for to a region where ln is finite and search for all local maximums declaring as the argmax to restrict the parameter space for a mixture pdf let be the variance for component i the variance ratio constraint is max c where c is a given constant mclachlan peel a good choice for c will eliminate spurious maximizers which are optimal values that lack a meaningful interpretation and can occur when one component fits a small of observations the em algorithm several researchers had been using a process to obtain mles in studies with missing data the process was observed to possess many interesting properties and became formalized with proofs and a name by dempster et al in what has become one of the most influential statistics papers ever written the procedure termed the em algorithm generates mles as follows let xt yt f xt yt and suppose xt is observed as xt but yt is missing for when xt yt are iid over time the joint pdf of xt and yt is f y depending on the marginal pdf of xt may be obtained by integrating yt out of f xt yt when yt is continuous or summing it out when yt is discrete see that is f xt or f xt for t then the em algorithm does not require iid observations and can also be used to estimate parameters in hmm models see f this results in likelihood functions for one that includes the missing data and one that does not y f y and f the computes the mle of as begin initialize to starting values compute ey ln maximize ey ln end only when taking expectations wrt use with respect to go to using in the expectations terminate when stops increasing use below some threshold the replaces the missing yt t with constants as a result of taking expectations and the therefore only has unknown the value of will not decrease while iterating and it will end at the local maximum nearest to the starting values for given is bounded in this region if has multiple local maximums we use a variety of starting values and take as the argmax this value will exhibit the desirable qualities noted in mclachlan peel the can be used to find mles for a wide variety of models the trick is to reformulate so that some rvs appear missing applications to mixture pdfs is straightforward under interpretation the component indicator rvs zt ztg are missing see the multivariate pdf of xt and zt is given in and the corresponding marginal of xt is f xt which is the pdf used in interpretation see the zt are discrete rvs the likelihood function for an iid sample xt from a mixture pdf including the missing data rvs is is from and the corresponding likelihood without missing data from the uses ez ln by their expected value notice that ln e ln is linear in the e z ln which replaces the ztg see thus ln where e zti e 
computed using all available data along with the current settings for since zti is a discrete rv that equals when xt originates from component i and otherwise e p which is e z p for t and when is given this value is completely known and replaces e zti in the resulting function with only unknown is optimized in the initial values for randomly using simulation mclachlan peel and since the strategy is to apply the to a variety of starting can be set may have many local optimums values and select as the argmax regularity conditions statistical tests models and theorems are built upon sets of assumptions and in statistical inference these assumptions are called the regularity conditions hogg et appendix a describe such conditions and it is usually the case that only a subset need be satisfied for a given result the regularity condition applies to pdfs and deals with uniqueness namely for pdf f if then f x f x this condition clearly holds for n pdfs since changing the mean or variance changes the distribution however consider the mixture pdf f x f x f x where f i x n the vector of unknown parameters is define and note that but f x f x violating regularity condition in general mixture pdfs do not satisfy all regularity conditions and caution is advised when using results that requires them the likelihood ratio test lrt let x be an arbitrary statistical model and x be the same model with some parameters dropped for example x may be a linear regression model with holding the coefficients and x is the corresponding reduced model that excludes some predictor variables the principle of parsimony favors statistical models with fewer parameters and the likelihood ratio test lrt checks for a significant difference between the likelihood value of a full model its reduced version if the likelihoods are not significantly different the reduced model x having fewer parameters is preferred the lrt tests for equivalence of the likelihoods namely ho vs ha where and are the mles for and respectively the test statistic is given by hogg et note that since adding parameters to a model does not reduce its likelihood and under ho the test statistic is close to zero since ln when takes a large positive value and ho is therefore rejected when c for some critical value under ho and select regularity conditions including the see where v is the number of parameters dropped from to create a test of ho with type i error probability will define c such that p c and a type i error means ho is rejected when true we are interested in using the lrt to test for the optimal of components in a mixture pdf however is not under ho since the applicable regularity conditions are violated mclachlan suggests approximating the null distribution of with a bootstrap the hypothesis test is ho data vector ha data vector under ho we estimate the pdf of f x where is the mle of originates from a mixture pdf say f x originates from a mixture pdf say f x as f x where is the mle of and under ha we estimate the pdf as a value for the test statistic from is computed using the corresponding likelihood functions of these pdfs if we implicitly assume that f x generated our sample vector a value from the distribution of under ho can be simulated by generating a random sample from f x and fitting both a and a mixture pdf using mles the sample should be of the same size as our data vector repeating this process k times will simulate values which estimate of the distribution of under ho and the for this test is approximated by of matrices let xn be rvs for 
the compounding return on n financial assets at a given time point if v xi cov xi xj and corr xi xj are the variances covariances and correlations for i n then v e e is the vc matrix of written as v the diagonals are the variances and the are the covariances note that v is square and symmetric so that v but not every square symmetric matrix is a vc matrix to qualify all variances and correlations must satisfy and the values must also make statistical sense for example the following do not and the strong positive correlation of with both and implies that and should also have a strong positive correlation it turns out that all conditions for a square symmetric matrix to be a valid vc matrix are met if it is that is v for any constant vector an wothke since v v the condition simply means that any linear combination of the rvs xi n is an rv with variance a matrix is if all eigenvalues are the eigenvalues of v are the constants that satisfy n meyer since v has determinant it is singular which implies the equation v can be solved by a ui thus and ui come in pairs the ui with length is referred to as the eigenvector for note that v so that ui which reveals why each must be for v to be otherwise has v z note that the determinant of a matrix is the product of its eigenvalues and since each which ensures v exists guttman a matrix with determinant thus can not be also a correlation of or is not allowed in a vc matrix if two rvs xi and xj are perfectly correlated we can question why both are needed but beyond that v can not be set and all other elements in an to and note that v z v a where z since v z with when use constants and repairing a broken vc matrix a vc matrix that is not is said to be broken and can occur for a variety of reasons including missing data estimation procedures and iterative optimization methods if encountered we can end the analysis with an error or repair the broken vc matrix and continue we take the latter approach and perform a ridge repair wothke a ridge is added to v by multiplying the diagonal by a constant k start with and increase until the modified matrix say vr with diagonals is the entire matrix is then divided by k which revert the diagonals back to and forces the covariances to which approach along with the correlations as k increases a diagonal matrix with elements is the scaled matrix will be since when v v and k useful derivatives for vc matrices let t xnt be compounding returns on n financial assets at times t where t f xt n v with v and xit the terms in v from are unknown parameters that can be estimated as mles after collecting data see mles maximize xt which is the multivariate pdf of the data the multivariate normal pdf for v the multivariate pdf for the entire iid rv sample v e t is is guttman t e the function for the unknown parameters v is given by ln v ln t ln v the mle for v is found by maximizing wrt a critical point is where the vector of derivatives equals which would reveal the maximum or it could be found iteratively using a gradient method also requiring said let aij be an nxn matrix having a in the and positions and a in all other positions and let vij be the vc matrix from with a in the and positions it follows that v and via the product rule v v and v by definition v v v v so that v thus v v et denote and the above holds for all elements the determinant of v v v v searle can be expressed using cofactor expansion with respect to the row n as meyer as noted in iterative optimization methods are a source of broken vc matrices as we may step into 
an infeasible region where vij is v v with the row and column removed thus matrix with t the determinant is a function of the aij say chain rule n n dimensions and for v n is symmetric v where v let a a a t anton if with using since v v for i aij be a via the a and then v so that v and is v with a in position and a in all other row i and column j positions searle et j serial correlation let xt t be rvs for the compounding return on a financial asset at time if the unconditional mean and variance are constants let e xt and v xt t respectively when xi xj for the returns are serially correlated and can not be assumed iid serially correlated data are modeled using a time series model such as the autoregressive ar or moving average ma process the ar model of order p is zt and the ma model of order q is zt where zt xt and n t box et the autoregressive moving average arma model includes ar p and ma q terms the appropriate model for a data set is identified by the signature of the autocorrelation acf and partial autocorrelation pacf functions the acf at lag k is corr xt and the pacf at lag k is the correlation remaining after accounting for all lag k correlations namely in an ar p model the classic ar p signature has the pacf cut off abruptly and the acf decay after lag p with the pacf being the classic ma q signature has the acf cut off abruptly but the pacf decay after lag q with the pacf being one of these patterns often emerges perhaps after log or power transforms and usually max p q nau a series with fixed and is said to be stationary a time series that drifts either up or down over time has a trend and is not stationary since e xt is not fixed a series can often be made stationary by differencing box et usually or differences will suffice where dt zt for t and dt for t are the differences when dt ar zt is referred to as a random walk rw each observation in a rw is the prior value plus a random step governed by an alternative to differencing for a trend is to fit a regression with time as a predictor then model the stationary residuals define the backshift operator b as bkzt so that the ar p model can be written as zt or b zt where b is the characteristic the notation v is ours and introduced since dealing only with nxn matrices simplifies the code a new nxn matrix v will nd with a in position and in all other row r and column s positions be introduced when taking derivatives which is v polynomial an ar p process is stationary if all roots of b are in magnitude box et the ma q model is stationary by design since e zt e xt zt and cov zt are the fixed and for k q the covariance was derived using the model s definition as cov cov cov for k q and for k q the correlation is corr zt consider the ar process zt since and each centered observation can be rewritten as zt and repeating without end yields zt the ar model thus only works if otherwise is infinite and e zt v zt are not defined further if and an alternate form for the ar model is zt with e zt e xt the ar model has characteristic polynomial b with root b and is hence stationary when or justifying the condition noted for a fixed e zt and v zt the conditional variance of a new observation xt given all prior observations is v v v by definition of the ar model using the alternative ar model form the unconditional variance of each xt given is zt v the ar model does not imply that each new value xt depends only on the prior value in fact it assumes a correlation between xt and all prior observations which is a reason why p is rarely needed in 
practice to see this use the ar alternative form and note that cov zt cov cov since the are iid the correlation between values k time points apart is corr xt cov xt v x v x exponential decline in k the rw is an ar model with unit root having alternative form zt the unconditional variance is v zt v and since v zt v the rw is not stationary v zt as t parameters in an ar model can be estimated by least squares with adjustments to account for serial correlation maximum likelihood or the equations method of moments box et the likelihood function of an observed sample is the multivariate pdf of the data written as a function of the parameters and using zt this is the likelihood from with and v is the vc matrix with having diagonals and for i j a time series mean reverts if and v as in words future centered values come from a pdf with fixed and as k increases stationary time series thus mean revert due to their e zt and fixed v zt but rws do not mean reversion strength is measured by speed or halflife this is the k required for e the time before the process s conditional mean let so that and s resulting in iff otherwise equals of the last observed value in an ar process so that e and the is k such that or k ln ln tsay mean reversion speed thus strengthens as and weakens as which is intuitive since is the damping factor applied to prior values the fastest mean reverting ar process has which is a random sample without serial correlation the of implies the process generates then instantly reverts to a mean finite pdf as the ar process approaches a rw with and does not mean revert thus in an ar process as serial correlation strengthens mean reversion weakens and vice versa this is intuitive since as mean reversion strengthens new values increasingly depend on the mean but as serial correlation strengthens new values increasingly depend on the prior value data from time series have been retrieved and tested for serial correlation see table i appendix b conclusions are draft and subject to relevant diagnostics box et the correct significance level p type i error for testing multiple simultaneous hypotheses to control the error rate p type i error is discussed below market efficiency is the default and each test is formed as ho no serial correlation ha serial correlation evidence is needed to reject market efficiency for liquid assets as it may suggest a profitable arbitrage trade for retirement advisors who can predict the path of future prices a type error occurs if we reject ho when it is true we falsely reject efficiency for a security table i tests for serial correlation in time series data returns annual inflation rate real s p total s p real small cap equity total small cap equity real total real real gold return process ar rs rs rs rs rs ar data returns annual cash no interest real s p rp real s p cap rp real small rp avg real s p avg total s p diff shiller cape ratio diff log shiller cape ratio s p avg real earnings process ar rs rs ar ar arma abbreviations rs random sample g rw geometric random walk rp risk premium is for test ho no serial correlation ha serial correlation not for test ho ar p vs ha ar possible serial correlation near unit roots by design let xt f x t with e xt v xt avg returns are yt for if f x n xt yt xt yt as cov yt if f x ln ln xt is similar possible steps unit root test statistics respectively with critical value can not reject hypothesis of unit root note since cape ratio grw may be more plausible drifts are rw grw preliminary model would be arma on detrended data yt 
arma yt the goal is to test these hypotheses such that the family of conclusions is replicable with confidence where p type i error in general with n independent tests and p type i error for each test the probability of making k n type i errors is p b k n with b binomial n thus the probability of making type i errors in n independent tests is p b n p b for the tests conducted above with the probability of making type i errors is b being confident on each of independent tests translates into a chance of replicating the family of conclusions with new data to replicate the family with confidence p type i error we adjust the used for each test the bonferonni adjustment uses and does not require independent tests but does ensure p type i error westfall et with the adjusted for each test in table i is the conclusions in table i lead to the table ii concerns about current retirement finance dogma table ii questionable claims in the retirement finance literature claim shiller s cape ratio annual can be used to time markets should sell when the cape ratio is high and buy when it is low relative to its historical average the hypothesis that the annual cape ratio behaves as a g rw can not be rejected random walks have unit roots and do not mean revert the best predictor of any future value in a random walk is the current value drift use logs for grw annual s p returns real or total mean revert therefore are serially correlated and should be fit using an autoregressive model serial correlation and mean reversion are opposites as one strengthens the other weakens annual s p returns exhibit no serial correlation they are random samples and mean revert shiller s cape ratio annual can be used to predict average future a linear regression predicting average future s p returns using cape values has a highly significant the average s p return is strongly serial correlated see table i and appendix b fitting a regression line through these points is inappropriate and will result in underestimated variances and inflated type i error rates which commonly lead to false claims that predictors are significant findings are preliminary the fitted linear models are subject to appropriate diagnostics see box et this is by design let xt f x t with e xt v xt average returns are constructed as yt for so that cov yt v v for k and cov yt for k a similar claim is made about safe withdrawal rates swr in retirement via regression with cape and the exact same concern arises constrained optimization linear programming a linear program lp is an optimization problem where the objective and all constraints are linear functions of the decision variables lps are solved using the simplex algorithm and the standard form for an lp with n decision variables and k linearly independent constraints is jensen bard maximize subject to z cnxn feasible region where objective function aknxn bk xi n constraints all lps come in pairs with the dual being an equivalent minimization problem when the primary lp is solved the dual is solved and whereas the primary lp has n decision variables and k constraints the dual has k decision variables and n constraints and since min z max any lp can be solved assuming k the simplex algorithm recognizes that when z is linear global solutions must occur at corner points of the feasible a constraint binds when becomes for given values of the decision variables when m constraints bind m decision variables are fixed by the constraints and there are k m ways to select the these constraints the remaining decision 
variables must equal at a corner point and there are n n m ways this can occur the total of corner points is thus solution the simplex algorithm partitions a as k m n n m n k k with each a potential such that is mxm and of full rank and let be the corresponding vector partitions it follows that and reflects the m decision variables fixed by the binding constraints the objective z becomes which is a constant less a linear combination of the decision variables in if any coefficients in this linear combination are negative the problem is unbounded and has no solution simply increase the corresponding decision variable in to increase the objective z to have a solution all coefficients for in z must be and when this occurs all decision variables in must equal to maximize z a constant less a quantity is maximized when that quantity equals finally when and if all constraints are satisfied then is a basic feasible solution bfs to the lp jensen bard the simplex algorithm starts with a corner point of the feasible region and cycles through adjacent corner points bases defined by such that z does not decrease thus the lp can be solved with a small number of evaluations and the algorithm ends at a global maximizer note that when k n both and vanish so that each basis is formed by setting decision variables equal to and solving for the remaining variables if any quantity is random then is a stochastic linear program slp while solving slps using theory is involved kall mayer simulation can be a practical alternative for example suppose b bk is a set of rvs with b fb b e b since any solution is a function of b it must also be an rv say x fx x e x when fb b is a heuristic slp solution is obtained by generating random values for b say bi bki where sample i yields an lp bfs say xi xni the solution is then taken as and as or e e since each xi satisfies it follows that simulated solutions therefore asymptotically satisfy the constraint set in expectation an exceedingly large number of problems can be formulated and solved as lps and some programs can also be closely approximated by an lp the solution to an lp may or may not be unique but it is global the feasible region of an lp is referred to as a polyhedron which sits in the quadrant since all decision variables are a technically incorrect but useful visualization tool when is that four people of different heights are holding a flat board objective function is a plane over a stop sign laying flat on the ground feasible region in quadrant the highest point on the board inside the sign will be directly above a corner of the sign it can not be above an interior point here b may reflect any randomly occurring quantity such as supply demand temperature sales revenue profit etc quadratic programming when the objective function in is of the form z ax b xx c x the surface being maximized is quadratic not linear and the resulting optimization is referred to as a quadratic program qp if bij i n z is said to be separable as the objective separates into a sum of variable functions z x hillier lieberman a separable qp can be approximated by an lp to begin convert the qp to a minimization problem noting that max z min then write each function in as x x posing no issue since minimizing and c x a which adds the constant a to a are equivalent problems proceed by partitioning each into equidistant constants which define s line segments that trace out x these values are chosen and allow xi to be replaced by a weight vector where any xi is reachable by a weighted sum of the 
constants using namely xi function x and each where the approximation sharpens as s increases when adjacent weights are used jensen bard see figure iii since the objective is min adjacent weights must be used in an optimal solution compare the blue dot objectives for the dashed and red lines in figure iii figure iii separable qp approximation by an lp s minimization objective the qp max z z minimize subject to where ax c x is then approximated by the following lp linear in bm k n n objective function feasible region constraints the standard form lp in requires all constraints to have the form fm x bm k where fm x is linear in x express an equality constraint fm x bm using only as fm x bm fm x bm fm x bm x classical convex programming a classical convex program ccp is a optimization having objective of either minimizing a convex or maximizing a concave function over a set of linear equality constraints jensen bard a function f is convex iff f f n and if f is convex then is concave further max max x thus maximizing a concave function and a function are equivalent problems as x is since ln x is concave lovasz vempala a ccp has the desirable property that local optimums are global optimums thus the problem reduces to finding any local optimum the following formulation is of interest f xn f xn minimize or maximize subject to convex objective function concave objective function feasible region aknxn bk xi n where constraints as with lps redundant constraints are removed if k n the feasible region is empty and there is no solution if k n the feasible region consists of the point such that f if k n f which is also the solution if k the solution is and then the unconstrained solution solves the constrained problem most likely none of the above will hold and we are left to optimize with k n and rank to solve this problem in we locate the critical point of the lagrangian l defined as l xn incorporates all constraints into a x a x a x b the lagrangian l and k new decision variables are introduced k called the lagrange multipliers the solution occurs at l namely a x b a x b l a x a x a x a x this system can be solved using newton s method which approximates l linearly in the neighborhood of solving yields as l l l l l setting the approximation and repeating the process generates the iterative solution a solution would write as with k k and rank in the objective making it an unconstrained function of only after solving k then solve and replace we use the above to determine from derivatives and convergence occurs when l the symmetric matrix of l is called the bordered hessian which is given by l a a where a a implies implies thus removed and implies a a a since f is convex same for concave is n k with rank and l thus l thus must be invertible note that is the hessian of f newton s method uses l a is and since redundant constraints have been implies therefore when f is convex is concave and the bordered hessian is thus invertible border general programming a general program nlp seeks to minimize or maximize a smooth function subject to g bi k where is not necessarily convex or concave and g n is generic such problems can have several local optimums and the goal is to find the best among these little can be said about nlp problems in general and the optimization strategy depends on the nature of the problem in some cases the lagrangian can be used to find local optimums in others a metaheuristic such as tabu search simulated annealing or a genetic algorithm can be used hillier lieberman if all else fails 
we can generate random values and evaluate when the constraint set is satisfied keeping a record of the optimal value a better approach would generate random values that satisfy all constraints alternatively we can take a random setting that is infeasible and project it to a point inside the feasible region then evaluate that point at random starts can be effective when many local optimums exist and strategies for generating values have been developed for specific problems such as mixture likelihoods see mclachlan peel copula modeling let x be any continuous rv with pdf f x and cdf f x p x x copula modeling is based on a fact that initially surprises but is intuitive upon reflection namely that f x uniform the proof is straightforward let u f x then fu u p u p f x u p x u f u u for u random variables having the same cdf are identically distributed and the cdf for u is from a uniform consider maximizing a generic likelihood function that includes the vc matrix v from a constraint on the variances and covariances is that v must be an alternative to discarding any point with a broken v matrix is to repair it distribution let xn be rvs for the compounding return on n financial securities at a given time point the marginal pdf and cdf of xi are fi xi and fi xi respectively and the multivariate pdf and cdf are f and f p f xn p xn respectively see as above let ui of fi xi where ui uniform and xi ui the cdf of un is g g un p un un since g is a valid cdf its derivative is the multivariate pdf of namely g g g g see note the relationship between f and g nelson f f xn the multivariate pdf of p xn xn p un xn p un fn xn g fn xn can then be derived by differentiating f xn using the chain rule on fi xi as f f xn f g fn xn g f multivariate pdf g f multivariate copula pdf from product of marginal pdfs the literature refers to g as the copula and g as the copula density the term indicates a coupling between the multivariate and marginal pdfs for a set of rvs nelson when xn are independent un are also independent and g so that f xn n n the copula term therefore models the dependence between a set of rvs with the breakthrough being that marginal pdfs can be modeled separately this quality is appealing particularly for the field of retirement finance when modeling the multivariate pdf for a set of real compounding returns the marginal return on a given security should not depend on which other securities are involved for this reason copula modeling is a standard for multivariate pdf modeling in finance and many forms have been proposed for example if g is gaussianinduced then the copula will model dependence after mapping to normal rvs since the unknown parameters in a copula exist in the likelihood they can be estimated as mles when building a multivariate pdf we can choose between candidates copulas by taking the one with smallest error using the empirical copula which is constructed via the empirical cdf in this research we propose a generic tractable alternative to copula modeling that is well suited to retirement finance assuming marginal pdfs fn xn have been arbitrarily fit to the data set a distribution with corresponding cdf h is selected to model the dependence structure of rvs xn for example during the housing boom securitized mortgage products modeled default times of named residential borrowers as exponential rvs see here fi xi would be an exponential pdf for further the dependence structure was chosen as gaussian so that h becomes the multivariate pdf in requires g for full specification and we say 
that it is induced by the choice of h using transformations un hn yn where hj is the corresponding marginal cdf for element j following the standard procedure for transforming rvs freund the multivariate pdf g has form g h h where h is the corresponding pdf for our chosen dependence structure all in the jacobian term are zero since each transformation involves only one rv the diagonal terms are derived for n as follows using rules from calculus we can treat as a ratio when hi is invertible thus h the copula density g thus takes form of when induced by the dependence structure of cdf h g h h h when h is gaussian h and hi are univariate standard normal pdfs the copula density g from is used in to complete the multivariate pdf of our sample data f xn note that sklar s theorem guarantees the pdf in has marginals fi xi when using g nelson copula parameters can then be estimated as mles with only covariance terms unknown since it is assumed that the univariate marginals are fully specified tran et caution that a straight forward optimization of may work for basic copula forms but can fail when using commercial software and a complicated dependence structure giordoni et solve a problem similar to the one we address but in different ways namely using a normal mixture copula along with normal mixture marginals and also with a adaptive estimator that attempts to smooth out the differences between normal mixture marginals and the implied marginals of a multivariate normal mixture fit directly to the data information criteria in addition to the lrt from fitted models in statistics can be compared using a metric called the information criteria ic such values quantify the information lost by a model the true state of nature smaller ic values are preferred as they indicate less information loss models with too many parameters are said to lack parsimony when choosing amongst candidate models ic metrics attempt to strike a balance between the likelihood value and of parameters among the most widely used ic metrics is akaike s information criteria aic calculated as akaike aic parameters ln the parameters counted must be free a model with and constrained linearly by only counts as parameter since estimating estimates thus parameters total parameters independent constraints the term ln is the using the mle see the aic works well when interest is in controlling type i errors in subsequent hypothesis tests tao et but it tends to over fit in small samples leading to models that lack parsimony thus the corrected aic aicc has been proposed and includes a penalty that increases with the of parameters hurvich tsai parameters sample size parameters parameters as the sample size increases the penalty decreases thus aicc aic whereas p type i error p reject ho ho is true p type ii error p accept ho ho is false the power of a hypothesis test is p reject ho ho is false if interest is in controlling the power of subsequent hypothesis tests then bayesian information criteria bic is recommended tao et and calculated as schwarz bic ln parameters ln sample size while determining the optimal of components in a finite mixture pdf see is an unsolved problem in statistics ic are often used as a heuristic to compare mixture pdfs of various sizes titterington et caution that in theory their validity often relies on the unmet regularity conditions of the probability of ruin in retirement let t be the time points of a retirement horizon where the withdrawal is made at time and the last withdrawal at time which can be fixed tf or 
random tr the pmf for tr is defined as p tr t for t and can be derived using lifetables published at for an individual or a group rook the safe withdrawal rate swr is a heuristic that suggests retirees withdraw wr in real terms from their savings at each time point bengen a retirement plan often couples the withdrawal rate wr with an asset allocation if n securities are involved let rti rti ei and it be the total and real returns expense ratio proportion allocated to security i and inflation rate respectively all at time the total compounding return for security i at time t is rti rti it the total compounding return for security i at time t is ei rti ei rti it the real compounding return for security i at time t denoted is ei rti ei rti it since is a continuous rv it is governed by a univariate pdf say fti when cov rki rsj for times t and securities i n then and fti are independent of time if we drop the time index they become and fi respectively the marginal pdf for security i fi is modeled using historical a retirement plan succeeds or fails based on the return of a diversified portfolio not that of a single security the real compounding return for the portfolio at time t is t n where n consequently t is a function of time via the portfolio weights the asset allocation set at time and is derived as a linear transform n of the univariate pdf for t say ht is used to a retirement plan in some cases this pdf is easily derived normal and in others there is no solution lognormal various methods exist to derive the pdf of a transformed rv and one uses the multivariate pdf of the random vector f freund our goal is to model ht using f while maintaining the individual security marginals that is subject to fi here ht may be skewed and multimodal generally normal allowing higher pdf moments to aid in determining a plan s success or failure retirement surveys and alternative metrics retiree surveys reveal that the concern is running out of a retiree who runs out of money experiences financial ruin the probability of this event occurring can be computed and shared with the retiree for a given decumulation strategy probabilities are bounded by and multiplying by yields a percentage which is bounded by percentages are a ubiquitous metric and universally understood for example a probability of translates to which can be described in words as a coin flip retirees make withdrawals at time t and they experience the event of ruin at time t denoted ruin t iff the time withdrawal is successful but the account does not support or is completely emptied by the time t withdrawal the compliment of this event is avoiding ruin at time t denoted ruinc t which occurs iff the withdrawal at time t is successful leaving a balance define ruin t as the event of ruin occurring on or before time t and let ruinc t be its compliment the tools described in can be used to detect serial correlation within and between securities over time a number are referenced at the conclusion of this research despite being extensively researched the probability of ruin as a metric is not universally accepted with criticisms leveled from all directions the most common being that the retirement ruin event could occur as ruin or ruin and there is a substantial difference for the retiree between these events this criticism argues that ruin as a binary outcome is too simplistic and a more nuanced approach would consider varying degrees of failure a separate criticism is that the ruin metric is overly complicated and better left to actuaries at 
insurance companies under this argument the metric is misunderstood and being abused by financial planners who lack the ability to properly calibrate the computation fail to understand its inherent flaws such as the impact of covariances and higher order pdf moments while a retirement strategy can have varying degrees of failure we are primarily interested in the compliment of the ruin event which is success unlike retirement ruin the event of retirement success does not have varying time attached to it further a decumulation model that maximizes the probability of success will also minimize the probability of ruin as these are equivalent optimization problems a model that maximizes the probability of success may in fact fail and in this case it is reasonable to try and limit the damage harlow and brown introduce two downside risk metrics which do precisely this their approach uses fully stochastic discounting to compute a retirement present value rpv for withdrawals cash flows from a decumulation plan the rpv is an rv and its pdf can be estimated via simulation values of rpv below zero indicate the account did not support all withdrawals and retirement ruin has occurred a strategy that minimizes downside risk recognizes that the ruin section of the rpv s pdf can be markedly different for retirement plans having similar failure probabilities the goal is to make this section of the pdf as palatable as possible if ruin occurs both the mean and standard deviation of negative rpv values are used as minimization metrics in the optimization and corresponding asset allocations are found harlow and brown report that far lower equity ratios are optimal in the context of minimizing downside risk this finding has the benefit of being intuitive as we can generally think of a retiree s bequest distribution as having a spread variance that increases with the equity ratio and a negative bequest rpv indicates that the retiree has exhausted their savings while still alive milevsky takes the opposing view that ruin probabilities are being routinely abused misunderstood by retirement planners and advocates for replacing it altogether by a different metric namely the portfolio longevity pl since investments are volatile pl is an rv that measures the length of time a retirement portfolio lasts it takes values l in discrete time and has pmf defined by p pl l note that l successful withdrawals are made ruin l only successful withdrawal is made ruin l exactly successful withdrawals are made ruin t finally l t all t withdrawals are made successfully ruinc retirement success given horizon length t therefore p pl l p ruin for l and p l t p ruinc see table iii the mean median and mode of pl will thus be functions of ruin probabilities and any flaws inherent in their construction will propagate through to these statistics table iii the portfolio longevity pl pmf and corresponding statistics portfolio longevity l i j k p pl l p ruin p ruin p ruin mean pl p ruin p ruin p ruin p ruin p ruin i p ruin j p ruin k p ruin ep x x x argmax p pl l sum probabilities in either direction and stop when the corresponding is the median shown as j above if the sum is exactly at l j then median pl locate the maximum probability s p l all corresponding are the mode s shown above as mode pl i k table iii applies to any withdrawal rate wr and p ruin t implies that no investment account lasts in perpetuity including when wr it is suggested that financial advisors examine the event pl tr as it implies the retiree outlives their savings the 
probability of this event p pl tr is the probability of ruin with t tr we can compute p pl tr using conditional probabilities as follows pl tr p t s p t t t t p p p pl tr t where s is any generic sample space p s t p p t t p p t p p p p p ruin p retirement success t replace s by the sample space for tr t t t distributive property for sets probabilities for mutually exclusive events are summed t t p t t p t t t conditional probability uses fixed horizon length tf t t p t t since p p a b b drop from prior step since pl t t p t t t p t compute using success probabilities t as noted p pl tr is by definition the probability of ruin using a random time horizon with respect to the portfolio longevity pl the probability said to be of most interest is derived entirely using ruin probabilities as shown in where probabilities for the rv pl do not appear computing ruin probabilities assume the retiree has made successful withdrawals at times the event ruin t occurs when t rf where rf t rf t rf for t and rf wr rook thus ruinc t occurs when t rf rf t is called the ruin factor and reflects the retiree s funded status at time t with t equal to the of real withdrawals remaining rook the event of achieving retirement success using an swr is thus defined for fixed and random horizons respectively as ruinc tf retirement success t rf t t ht ruinc tr retirement success t rf t t ht and tr p tr t recall that t is the real compounding return on a diversified portfolio of n securities and a function of the asset allocation set at time it follows that the corresponding success probability for fixed in assuming independence of t across time is t p ruin replaces t as vbl of integration the success probability for random in is p ruin tr where p ruin tr was derived in above and also uses for any events a and b if a b then p b ac p b p a since ruinc t ruinc and ruin t ruinc it follows that p ruin t p ruinc ruin p ruinc p ruinc consequently for t ht and t tf p ruin t p i rf i p i rf i given values for the security weights n at time t the asset allocation the terms in are estimated using simulation or approximated recursively with a dynamic program dp see rook and rook subsequently table iii can be populated with these probabilities usually we are not given the weights but are tasked with deriving them according to some optimality criteria rook derives the weights that minimize the probability of ruin for a stock and bond portfolio using a dynamic glidepath for both tf and tr and rook derives the corresponding weights to minimize the probability of ruin using a static glidepath both solutions assume normally distributed compounding returns and as noted many financial reject this assumption the primary purpose of this research is to extend these and other models to compounding returns it was noted above that investment accounts do not last forever regardless of the withdrawal rate wr when wr rf t t see rook for corresponding venn diagrams as t ruin t occurs when t for any consequently if p t the event of ruin will eventually occur under an infinite time horizon unfortunately the lognormal pdf is defined for values with f and it does not allow compounding returns of zero the pdf we develop for t assigns a probability to the event t as compounding returns and prices of securities can and do take values of zero iii univariate density modeling the real compounding return on a diversified portfolio determines the success or failure of a retirement strategy we assume independence across time and use securities from table i that 
are random. The multivariate PDF for the real compounding returns on the S&P (L), small cap equities (S), and bonds (B) will be developed in this research. The RVs representing these returns are L, S, and B, and the multivariate PDF is f(l, s, b). A diversified portfolio using these securities generates a time-t real compounding return that is a weighted combination of L, S, and B, where the portfolio weights are set at the preceding time point and sum to one, and e_L, e_S, e_B are the expenses. Similar to copula modeling, we first build univariate PDFs f_L(l), f_S(s), and f_B(b) for L, S, and B respectively; the multivariate PDF f(l, s, b) built subsequently will preserve the marginals f_L(l), f_S(s), and f_B(b).

Univariate PDFs for L, S, and B are fit to finite normal mixtures using the EM algorithm with random starts and a variance ratio constraint to eliminate spurious maximizers. A novel procedure is introduced to find the optimal number of univariate components, generally considered an unsolved problem in statistics. The forward portion tests increasing numbers of components using a bootstrapped LRT, up to the maximum number of univariate components allowed. If the forward procedure ends with the last significant test being at g components, the backward portion tests g − 1 components, then g − 2, etc., until a significant difference is found, ending the procedure; for example, if the backward test of g − 1 components yields a significant difference, then the optimal number of components is taken to be g. Note that Anderson–Darling normality tests for L, S, and B indicate that the normality assumption is not rejected at the chosen level for these securities; the fact that all three could be assumed to originate from univariate normal distributions should not be lost in the forthcoming analysis.

Univariate PDFs. All univariate tests of g versus g + 1 components use a large number of bootstrapped LRT samples; each sample fits the data to both the g- and (g + 1)-component mixtures, using multiple random starts for each execution of the EM algorithm. When g > 1, random starts use values generated from the nearest fitted mixture for the same data but with fewer components. Each LRT sample value thus requires several EM executions. (Retirement research that uses serially correlated assets should account for the dependence in the multivariate PDF, otherwise valid doubts may be raised; this would include strategies that use certain cash equivalents, such as those in Table I.)

Univariate PDF for the S&P (L): f_L(l).
[Figure IV: histogram of annual real S&P compounding returns (L).]
[Table IV: univariate mixture PDFs for annual real S&P compounding returns (L); for each candidate number of components the table reports the log-likelihood (LL), variance ratio (VR), AIC, AICc, BIC, skewness, and kurtosis of the fitted mixture. Abbreviations: LL = log-likelihood, VR = variance ratio; see the earlier definitions of AIC, AICc, and BIC. Skewness is E[(X − μ)^3]/σ^3, which equals 0 for the normal PDF; kurtosis is E[(X − μ)^4]/σ^4, which equals 3 for the normal PDF (see Hogg et al. for definitions and Johnson et al. for moment details). Note that higher moments such as these can be difficult to interpret in multimodal distributions.]
[Figure V: LRT sampling distributions for testing the optimal number of univariate mixture components, annual real S&P compounding returns (L), forward and backward portions.]

Figure IV plots annual real compounding returns for the S&P index in a histogram, and Table IV fits these returns to univariate normal mixtures. Tests for the optimal number of components are done in Figure V using the procedure described above. All displayed values are rounded throughout, and unrounded values are used in calculations. The VR constraint and the significance level used for each test are as described above, so some evidence of additional components will lead to rejection of H0. As shown in Table IV, there is insufficient evidence to reject normality for annual real compounding S&P index returns.
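For readers who want to reproduce the flavor of Table IV, the following minimal sketch fits univariate normal mixtures and reports the information criteria defined earlier. It is an illustration only: it uses scikit-learn's GaussianMixture as a stand-in for the authors' constrained EM (no variance ratio constraint is imposed), and the function and variable names are assumptions made for the example.

# Minimal sketch: fit 1..G-component univariate normal mixtures to a return
# series and report total log-likelihood, AIC, AICc, and BIC for each fit.
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_ic_table(returns, max_components=5, n_starts=25, seed=0):
    x = np.asarray(returns, dtype=float).reshape(-1, 1)
    n = len(x)
    rows = []
    for g in range(1, max_components + 1):
        gm = GaussianMixture(n_components=g, covariance_type="full",
                             n_init=n_starts, random_state=seed).fit(x)
        ll = gm.score(x) * n                        # total log-likelihood at the MLE
        k = 3 * g - 1                               # (g-1) weights + g means + g variances
        aic = 2 * k - 2 * ll
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
        bic = k * np.log(n) - 2 * ll
        rows.append((g, ll, k, aic, aicc, bic))
    return rows

if __name__ == "__main__":
    # Simulated gross (compounding) returns stand in for the historical series.
    rng = np.random.default_rng(1)
    demo = np.concatenate([rng.normal(1.07, 0.15, 70), rng.normal(0.85, 0.10, 20)])
    for g, ll, k, aic, aicc, bic in mixture_ic_table(demo):
        print(f"g={g}  LL={ll:8.2f}  k={k}  AIC={aic:8.2f}  AICc={aicc:8.2f}  BIC={bic:8.2f}")

Smaller AIC, AICc, or BIC values indicate less information loss, so the tabulated criteria can be read directly to compare candidate component counts.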
The bootstrapped LRT procedure is in agreement with all information criteria values (AIC, AICc, BIC) that a univariate normal PDF is appropriate for these returns, which also agrees with the AD test for normality.

Univariate PDF for small cap equities (S): f_S(s).
[Figure VI: histogram of annual real small cap compounding returns (S).]
[Table V: univariate mixture PDFs for annual real small cap compounding returns (S); for each candidate number of components the table reports the log-likelihood (LL), variance ratio (VR), AIC, AICc, BIC, skewness, and kurtosis of the fitted mixture.]
[Figure VII: LRT sampling distributions for testing the optimal number of univariate mixture components, annual real small cap compounding returns (S), forward and backward portions.]

Annual real small cap compounding returns are plotted in Figure VI, and Table V fits these returns to univariate normal mixtures. Tests for the optimal number of components are shown in Figure VII using the procedure described above, with the same VR constraint and per-test significance level. The forward test yields a significant p-value and H0 is rejected; backward processing begins by testing one fewer component, which is also significant, ending the procedure. A normal mixture PDF is therefore found appropriate for these returns (Table V). The skewness coefficient for S indicates the PDF has positive skew, and the kurtosis for S indicates a positive excess kurtosis, which implies a heavier tail than the normal distribution. The fitted marginal PDF is evidently skewed and multimodal.

Univariate PDF for bonds (B): f_B(b).
[Figure VIII: histogram of annual real total compounding bond returns (B).]
[Table VI: univariate mixture PDFs for annual real total compounding bond returns (B); for each candidate number of components the table reports the log-likelihood (LL), variance ratio (VR), AIC, AICc, BIC, skewness, and kurtosis of the fitted mixture.]
[Figure IX: LRT sampling distributions for testing the optimal number of univariate mixture components, annual real total compounding bond returns (B), forward and backward portions.]

Annual real total compounding bond returns are shown in Figure VIII and are fit to univariate normal mixtures in Table VI. The optimal number of components is found via the procedure detailed in Figure IX, with the same VR constraint and significance level. The forward test yields a significant p-value and H0 is rejected; backward processing begins by testing one fewer component, which is a repeat test and is not performed. A normal mixture PDF is thus appropriate for these returns (Table VI). The skewness coefficient for B indicates this PDF has positive skew, and the kurtosis for B indicates a positive excess kurtosis, which implies a heavier tail than the normal distribution. The fitted marginal PDF is evidently skewed and multimodal.

Univariate PDF summary.
[Summary table: chosen univariate PDFs by security — S&P (L), small cap (S), bonds (B).]
[Table VII: full univariate PDF parameterization by security and component. Note: use these estimates along with the historical data to reproduce the values above.]

Tests for the optimal number of components use a relatively large per-test significance level; such values are common defaults in forward–backward testing procedures. A variance ratio constraint (VR) was used to eliminate spurious optimizers while finding MLEs with the EM algorithm; reducing this value, or adding new constraints on either the probabilities or the means, will alter the PDF shapes, for example by increasing or decreasing the number of fitted peaks (see Figure X). To add a constraint, simply discard any random start that violates it. Skewed unimodal PDFs, such as the lognormal, were available but did not optimize the constrained objectives.
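The forward and backward steps above each reduce to a bootstrapped LRT of g versus g + 1 components, in the spirit of McLachlan's test for the number of components in a normal mixture. The sketch below illustrates one such test under simplifying assumptions: scikit-learn's unconstrained EM replaces the authors' variance-ratio-constrained EM, and the bootstrap size, random starts, and function names are illustrative choices rather than the values used in this research.

# Minimal sketch of a parametric bootstrap LRT for H0: g components versus
# H1: g + 1 components (McLachlan-style). Not the authors' constrained EM.
import numpy as np
from sklearn.mixture import GaussianMixture

def _fit_ll(x, g, n_starts=10, seed=0):
    gm = GaussianMixture(n_components=g, n_init=n_starts, random_state=seed).fit(x)
    return gm, gm.score(x) * len(x)              # fitted model, total log-likelihood

def _simulate_from(gm, n, rng):
    # Draw n observations from a fitted one-dimensional Gaussian mixture.
    comps = rng.choice(len(gm.weights_), size=n, p=gm.weights_)
    means = gm.means_.ravel()[comps]
    sds = np.sqrt(gm.covariances_.reshape(-1)[comps])
    return (means + sds * rng.standard_normal(n)).reshape(-1, 1)

def bootstrap_lrt(returns, g, n_boot=99, n_starts=10, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(returns, dtype=float).reshape(-1, 1)
    gm0, ll0 = _fit_ll(x, g, n_starts, seed)
    _, ll1 = _fit_ll(x, g + 1, n_starts, seed)
    lrt_obs = 2.0 * (ll1 - ll0)                  # observed likelihood ratio statistic
    exceed = 0
    for b in range(n_boot):                      # resample under H0 and refit both models
        xb = _simulate_from(gm0, len(x), rng)
        _, ll0b = _fit_ll(xb, g, n_starts, seed + b + 1)
        _, ll1b = _fit_ll(xb, g + 1, n_starts, seed + b + 1)
        exceed += int(2.0 * (ll1b - ll0b) >= lrt_obs)
    return lrt_obs, (exceed + 1) / (n_boot + 1)  # add-one bootstrap p-value

Because the null distribution of the LRT statistic for mixtures does not satisfy the usual chi-square regularity conditions, the p-value is computed by simulating data under the fitted g-component null rather than from an asymptotic reference distribution.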
Under the mixture interpretation, components are assigned labels. The data suggest that annual real compounding S&P returns originate from one regime, but that small cap equity returns originate from three: a dominant normal regime generates the bulk of returns, including outliers; a low-mean normal regime generates a smaller share; and the remainder originate from a high-mean normal regime. The regimes add shoulders above and below the mean, with the PDF evidently heavier tailed than a normal. Annual real total compounding bond returns originate from two regimes, with the dominant normal regime generating most returns and a higher-mean normal regime generating the other. Note that these returns averaged, over a recent period, above the dominant regime and the overall historical mean of B; consequently, widespread claims that current low yields invalidate retirement heuristics such as the safe withdrawal rate rule should be met with skepticism.

[Figure X: univariate mixture PDFs with probability-weighted component regimes.]

IV. Multivariate density modeling: covariances.

The multivariate PDF for L, S, B is built in two steps: first, dependence is introduced without correlations, and the result is the starting point for a final step of estimating correlations. Under the interpretation in which mixture observations come from labeled regimes, as seen above for L, S, and B, an observation from the multivariate PDF can be viewed as originating from some combination of these regimes. There are one, three, and two regimes governing L, S, and B respectively; thus at most 1 × 3 × 2 = 6 regimes will govern the multivariate PDF. A parsimonious multivariate PDF may call for eliminating combinations that have not produced data. The goal is to perform regime selection in an optimal manner, accounting for the sample size and the total number of multivariate PDF parameters. We must also preserve the marginal PDFs that were derived above.

Multivariate regimes. Under the mixture PDF interpretation, regimes produce observations. After estimating the PDF parameters in a mixture, we can estimate the probability that an observation is from a given regime. Let x_t and z_t = (z_t1, ..., z_tg) be the time-t observation and component indicator RVs for a univariate mixture PDF, and assume all parameters have been estimated as above, with x_t the observed value at time t. The probability that x_t is produced by component i (i = 1, ..., g) is P(z_ti = 1 | x_t) = π_i f_i(x_t) / Σ_j π_j f_j(x_t), where the weights π_i and component densities f_i have been estimated; this quantity can be computed for each observation x_t. Bayes' decision rule assigns each observation to the component with the largest such probability and is considered an optimal allocation scheme (McLachlan & Peel). Let n_ijk be the number of observations falling in regimes i, j, k of the RVs L, S, B respectively, using the assignment rule above; the probability that an observation on (L, S, B) originates from a given multivariate component can then be estimated as the proportion of observations falling in that cell, an estimate of the true but unknown probability (see Figure XI).

[Figure XI: multivariate regime combinations for (L, S, B) with estimated probabilities. Note: observations on the trivariate (L, S, B) exist in each cell, and correlations may change across regimes; for example, the correlation between S and B may be strongly positive in one cell and strongly negative in another.]

If L, S, and B are independent, the multivariate PDF is the product of the marginals, f(l, s, b) = f_L(l) f_S(s) f_B(b). The marginals were fit to mixtures above, and their product is a multivariate normal mixture that yields the fitted marginals. Under independence, the probability that a multivariate observation (l, s, b) is from a given regime is the product of the marginal probabilities. There is no basis to assume independence, however, and Figure XI shows the multivariate component probabilities estimated from the data. Dependence for multivariate mixture PDFs takes two forms, between component and within
component between component dependence is modeled via the probabilities and within component dependence is modeled via the covariances both must preserve the marginals from however the estimates in figure xi do not thus they are infeasible for example using fs s the probability that s is from the dominant regime equals table v whereas using the data it is figure xi conway gives conditions that guarantee both univariate and bivariate pmfs are enforced for probabilities in a contingency table if the bivariate pdf is of interest we suggest deriving it from the trivariate pdf multivariate regime selection via linear programs lps our goal is to parsimoniously model between component dependence while preserving the marginal pdfs from two approaches are presented and become initial solutions for a final step of estimating covariances and updating component probabilities we limit the discussion to the problem at hand however the methods are completely generic and easily extendable to arbitrarily dimensional problems minimize maximum distance minimax lp define the maximum distance between true s b from figure xi as z max and estimated component probabilities for l values for that minimize z and preserve the marginal pdfs from would be of interest namely solving minimize subject to z max where probabilities sum to constraints minimizing the maximum of a set is a minimax objective while z is not linear the constrained optimization problem in can be formulated and solved as an lp note that of the constraints the and are redundant since all probabilities sum to as is the since l has component they are thus dropped and since the maximum of a set must be or all set elements becomes minimize subject to z where z constraints constraints in include absolute values and are however note that x iff x y and x to promote parsimony we penalize the objective when a cell from figure xi containing observations yields a probability this will ensure it only occurs when needed for feasibility the constraints are nijk and the penalty is where m is arbitrarily large when nijk and x xijk must be to satisfy the constraint and z suffers a penalty the final lp formulation is minimize subject to and z x for for nijk xijk where xijk for m and n for constraints known constants z this lp was solved using the techniques from the solution is yielding the minimization objective z minimum sum of squared distances lp define the sum of squared distances between true for l s b from figure xi as z values of and estimated component probabilities that minimize z and preserve the marginal pdfs in are of interest namely solving minimize subject to where constraints the objective z is quadratic and separable therefore can be approximated by an lp as shown in each decision variable appears in one term of z and has a convex shape similar to figure iii each horizontal axis range is and is converted to an lp by replacing the quadratic terms with connected line segments the horizontal axis is partitioned into s contiguous sections and any value is reachable as linearly by p p with p each term and the same penalty from is applied thus becomes subject to p p p p p the decision variables minimize where is then approximated p nijk xijk xijk for s m s n for p in are replaced by p x for constraints known constants in and note that a minimization objective ensures only adjacent s are for each i j k we use and solve the lp in to obtain yielding the objective z the lps solved in and were customized for the current exercise of modeling the multivariate pdf 
of l s b and the concepts are easily extendable to any collection of securities the code supplied in appendix c models an arbitrary number of securities our purpose is twofold find a feasible solution to initialize the final step and eliminate as many unnecessary multivariate components as possible in larger problems this step can eliminate of multivariate cells note that dependence for l s b has been introduced without having estimated any covariances as the multivariate pdfs using these lp solutions are not the product of the marginals from specifically between component dependence has been introduced using the data to guide which components occur together and at what frequency note that randomly reordering the returns on each of the assets separately would not change the results of however would change the probability estimates in and the unknown from figure xi now have sets of initial values that are feasible and preserve the marginals multivariate density modeling let xn be rvs for the real compounding return on n financial securities that are not serially correlated historical data on each xj will be a random sample over time say xtj t and n where xtj rtj with rtj being the real return from the marginal density for each xj can be modeled as a normal mixture fj xj having gj components with e v and p for n and gj the multivariate pdf for xn will be modeled as a normal mixture with cov xj for n and p c as seen in figure xi each zc defines a combination of univariate components the multivariate pdf for xn is e which to maintain the fitted marginals must satisfy j n the of the unknown parameters given the historical sample and known parameters is ln ln e mles for the unknown parameters are found by solving the following general nlp see maximize ln subject to j j j n j eigenj for n and g where e diag g uphold marginals vc matrices are for n and g n multivariate pdf decision variables known constants the for a multivariate mixture is maximized in with respect to the probabilities and vc elements covariances linear probability constraints maintain marginals as in and covariance constraints will ensure vc matrices see researchers found that mixture pdf parameters can be estimated conditionally by first deriving the probabilities holding other parameters constant then estimating the remaining parameters holding probabilities constant repeating until convergence the ecme algorithm is one such approach that makes use of the actual or incomplete loglikelihood in see liu rubin mclachlan krishnan and is the technique we will employ in optimizing we will define the indicator function i c j i as when univariate component i of security j exists in multivariate component c for n gj g and otherwise multivariate pdf optimization step optimize wrt probabilities holding covariances constant maximize subject to where ln i c j i ln for n and gj uphold marginals for g constraints e for t g known constants since linear functions are concave log of a concave function is concave sum of concave functions is concave z is concave in boyd vandenberghe marginals are enforced with independent linear constraints thus is a ccp from and local optimums are global optimums a critical point of the lagrangian for yields the maximum where l z jensen bard the derivatives are i k j i the derivatives are i k j i i c j i and and c j i for g n gj all derivatives wrt the lagrange multipliers are zero and constraints on are enforced by dropping any components having step optimize wrt covariances holding probabilities constant 
maximize subject to eigenj for n and g where ln ln for n and g for g e vc matrices are decision variables known constants maximizing a multivariate function with respect to variance components is a difficult general nlp as there may be multiple local optimums or saddle points with zero gradient as well as boundary optimums with gradient searle et recommend a procedure based at a good starting point the gradient g helps inform on direction and the hessian h on step size levenberg suggests a modification to newton s method that iterates as h where si adjusts the step size and climbing angle marquardt derived a similar modification iterating as h h which is considered an optimal compromise between newton s method which often diverges and gradient which converges too slowly a class of techniques based on these approaches has since been published see gavin for an overview while designed to find estimates in models a constrained minimization problem searle et al note that they are also useful in finding mles for variance components both gradient ascent and newton s method failed to optimize for the reasons stated and a approach was taken namely iterations defined by h with max h and parameter a large of random si and are generated at each iteration and we select randomly among the top performers varying si and prevents divergence with large si mimicking gradient ascent and small si newton s method this ensures the nearest maximum is found relative to an informed start while scanning nearby regions for better values iterating into an infeasible region is addressed by performing a ridge repair on the offending vc matrix exact and derivatives of the in are derived below wrt covariance terms only first derivatives for gradient terms there are covariance terms in a multivariate mixture pdf for n securities using the chain rule along with the results derived in and z from the derivatives are z e e e where q which is a scalar for c g and j k second derivatives for hessian terms c p where j k and j r are as follows r s the derivatives wrt terms case wrt where c p covariances are from different multivariate components case wrt where c p covariances are from the same multivariate component z z q q q where q the last term in depends on the location of in r k s for k r above diagonal in and is derived piecewise as for j otherwise below diagonal in diagonal in whereas methods approximate the hessian this approach will use exact derivatives supplied in the multivariate pdf is maximized by iterating over and the objective is across steps and iterations we begin with the informed starts from and then iterate until the stops increasing solutions are maximums if the region around is concave the hessian is negative the algorithm from and is an ecme approach see liu sun who apply newton s method to the em algorithm for faster convergence see also liu rubin and mclachlan krishnan for ecme algorithm details all eigenvalues border solutions may exist in regions note that eigenvalues computed from an matrix are unstable and should not be used sparse matrices containing extremely large and small diagonal entries are often the hessian derived above in may appear problematic via inspection however it is real and symmetric thus its eigenvectors are orthogonal and the condition is in theory give bounds for the accuracy of eigenvalues as u where the hessian h is subject to error e and u is the matrix of column eigenvectors for h with denoting the euclidean norm the condition is u and when u the matrix is and calculated 
eigenvalues are suspect meyer we will select the pdf having min aic across the informed starts from and conduct an analysis of the surface s properties at the maximizer considering also whether or not it is spurious multivariate pdf for l s b the approach from was used to estimate the probabilities and covariances for f l s b the multivariate pdf of l s b the result is a multivariate mixture pdf see table viii which supplements the univariate pdf estimates in table vii the univariate regime labels propagate through to the multivariate pdf without a doubt many disciplines would discard this solution as a spurious maximizer since components model only a few observations and much of the improvement over a multivariate normal is via these finance however differs from other industries in that a primary focus is studying risk extreme events drive risk and instead of discarding outliers finance assigns them labels such as gray or black swans taleb conventional wisdom suggests for example that such outliers can cause a bank to fail or a retiree to experience financial ruin thus they must be accounted for it is also a reason why the normal distribution may be rejected in financial research our model accounts for risk by either explicitly modeling low outliers which adds density to the tail or by modeling high outliers which shifts the dominant regimes left along with their tails in this application the latter occurs there is a tradeoff with either approach as the within regime variance shrinks when observations are separated the kurtosis indicates whether or not the mixture pdf is heavier tailed than a normal pdf as described in as an aside the predictive modeler may accuse us of memorizing the training data and suggest that this is always possible with a model of sufficient complexity but that such models are poor at prediction a best practice when predicting is to partition the data into sets then train models on the set the best using the and report results after applying the chosen model to the unfortunately there is insufficient data to use this practice on annual historical returns in finance lastly using information criteria such as aic from the multivariate mixture from table viii free parameters is superior to a multivariate normal free parameters since the solution would be to tighten the marginal constraints lower the variance ratio or add constraints on the means probabilities table viii full multivariate pdf parameterization component l s b det note use these estimates with those in table vii and the historical data to reproduce the multivariate values table viii estimates were generated using the procedure from which converged in iterations step from required and while step from required then respectively the optimization in is convex and will converge to a global maximum while that in is not starting at the lp solution from instantly requires ridge repairs and lands in a region the final vc matrix repair occurs at and between and the procedure finds a concave region and methodically climbs it the hessian matrix condition begins at and slowly increases ending at perhaps revealing some numerical instability between and a hessian eigenvalue turns positive and step ends at a saddle point step ends nearby and is a boundary solution since is borderline which is a constraint in to enforce it we require that all vc matrix eigenvalues are with determinants table viii reveals that det is at the threshold and note that the value can be driven higher at this solution by lowering the threshold 
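The eigenvalue floor and the "ridge repair" used when an iterate leaves the feasible region can be implemented in several ways; the sketch below shows one simple variant, in which a constant is added to the diagonal of an offending variance–covariance matrix until its smallest eigenvalue clears the floor. The floor value, the growth schedule for the ridge, and the helper names are illustrative assumptions, not the authors' exact procedure.

# Minimal sketch: check that a variance-covariance matrix satisfies an
# eigenvalue floor and, if not, apply a diagonal "ridge" until it does.
import numpy as np

def ridge_repair(sigma, eig_floor=1e-6, ridge0=1e-8, grow=10.0, max_iter=50):
    s = 0.5 * (np.asarray(sigma, dtype=float) + np.asarray(sigma, dtype=float).T)  # symmetrize
    ridge = ridge0
    for _ in range(max_iter):
        if np.linalg.eigvalsh(s).min() >= eig_floor:   # eigenvalues of a symmetric matrix
            return s
        s = s + ridge * np.eye(s.shape[0])             # push the spectrum upward
        ridge *= grow                                  # illustrative growth schedule
    raise RuntimeError("repair failed: matrix remains outside the feasible region")

def vc_diagnostics(sigma):
    # Report the quantities monitored in the text: smallest eigenvalue,
    # determinant, and condition number of the variance-covariance matrix.
    eigvals = np.linalg.eigvalsh(np.asarray(sigma, dtype=float))
    return {"min_eig": eigvals.min(),
            "det": float(np.prod(eigvals)),
            "condition": eigvals.max() / max(eigvals.min(), np.finfo(float).tiny)}

# Example: repair a nearly singular 3x3 matrix and inspect it.
# fixed = ridge_repair(np.array([[1.0, 0.99, 0.0], [0.99, 1.0, 0.0], [0.0, 0.0, 1e-9]]))
# print(vc_diagnostics(fixed))

Monitoring the condition number alongside the determinant, as the text does, flags border solutions whose eigenvalues sit near the floor and whose computed eigenvalues may therefore be unreliable.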
however the condition becomes large and the result so unstable that minor rounding of produces a non matrix such is the nature of a border solution the covariance between rvs x and y is e x y xy using the law of total expectation e x ez e for any rv z thus for a multivariate mixture pdf ez e xy z e z and using the multivariate pdf similar values are common in retirement research likely derived using the unbiased sample estimator correlations are the correlation between rvs x and y is then derived as x and x y y x x y y the corresponding sample the mles produced by iterating over and are subject to constraints that maintain the marginals from further the lps solved in set the initial pdf structure and thereby introduce dependence for this reason and are in general mles for variance components are biased because the degrees of freedom are not discounted for the estimated means a procedure such as restricted maximum likelihood reml corrects this by estimating the after removing the means unlikely to perfectly align since variances and covariances are defined as expectations averages they can and be skewed by extreme values the small positive estimate for masks the fact that in over of years the correlation between l and b is negative regimes and in another it is strongly positive regimes mixture modeling thus uncovers insights previously not known as the within component are used to derive these correlations as witnessed during the financial crisis extreme value correlations can invalidate models as simple rv dependence structures do not hold during times of stress in hindsight many blame the crisis on the gaussian copula and its failure to accurately model failure time correlations of derivative securities under duress we should not expect other simple structures such as a copula family to perform any better retirement research increasingly advocates for use of complex instruments to improve outcomes coupled with the near universal use of a gaussian or lognormal copula to model dependence reveals a situation that sounds all too simulating from a multivariate mixture pdf let xn be rvs for the compounding return on n financial securities at a given time point where f g is a multivariate normal mixture pdf simulating values from f is a process first generate a uniform random value say u to determine the component if u the observation is from regime else if the observation is from regime k next generate a value from the selected regime say f k if has covariances we apply a decorrelating transformation recall that the n eigenvalues and eigenvectors ui of satisfy so that see also from the eigenvectors of a matrix are orthogonal thus for let and make the linear transform z ux where e z x and v z is a diagonal matrix with variances if the independent and normally distributed z zn are simulated individually as then is a sample on x f a retirement plan let xn f be rvs for the compounding return on n financial securities at a given time point if f has been developed using the historical sample then it accounts for gray swans observed outliers a will be defined as evaluating a retirement strategy using a multivariate pdf g that can produce black swans unobserved outliers a subjective determination this is accomplished by seeding the historical data with extreme events note that the model proposed here can be fit to such data whereas the normal or lognormal pdf can not the proposed retirement strategy is then subjected to g the problem is not avoided by backtesting a given strategy as the at any 
retirement start year is a bernoulli rv which is highly correlated with the in nearby years this correlation is rarely if ever accounted for in retirement research vi real compounding return on a diversified portfolio assume a retiree holds n securities and let i and pt i be the price value of their security i holdings at times and t respectively for the time t total return on security i is rt i pt i i i and the compounding return is i pt i so that i i pt i the real compounding return for security i is i where i i and it is the inflation rate between times and solving yields i pt i where pt i pt is the real price value of security i holdings at time t and rt i i pt i if security i includes an expense ratio ei then t is paid as a cost at time t and the price value is pt i i t the compounding return is i i and the real compounding return is i i adding up all holdings denote the total account values at times and t by p and vt ei p respectively the total return on the account between times and t is rt vt and the compounding return is rt the time t real compounding return on the account must satisfy rt rt so that t rt where vt is the real account value at time combining it all r t r v i e p i p p p e r e p e e p r r where i is the proportion invested in security i at time and i n is the asset allocation with which proves when modeling returns as rvs that are not serially correlated the time index on i can be dropped however t remains a function of time through the asset allocation real compounding return for portfolio using l s b let el es and eb be annual expenses for the s p l small cap equities s and b define the asset allocation set at time as where applying the real compounding return t is a linear transform of l s b namely t l b the expenses and asset allocations are constants or decision variables using means and variances from table vii and probabilities and covariances e from table viii f l s b is as in let ht be the univariate pdf of t at time t then ht satisfies where ht is the cdf of t and t p t ht define the sample space s where sc is the event that z and these are mutually exclusive with p s and t p t p t s p t where p t p t z is a normal pdf having the following mean and variance for t the pdf for t is a univariate gaussian mixture pdf derived as various pdfs using are shown in figure xii and feature asset allocations that are dominant in one of the securities l s b they are skewed and evidently mostly than a normal pdf each approaches its univariate shape as the proportion for that security increases see figure xii pdf for t real compounding return on a diversified portfolio notes expenses for all asset allocations are el es and eb minimum variance portfolio occurs at asset allocation b marginal distributions s p l fl l set in and then t l fl l where and thus matching l fl l small cap s fs s set in and then t s fs s where for for with and for with thus matching s fs s b fb b set in and then t b fb b where for with for with and thus matching b fb b vii retirement portfolio optimization dynamic retirement glidepaths are asset allocations that adapt over time to changes in either a retiree s funded status or market conditions whereas static glidepaths are fixed allocations the retiree can set and forget glidepaths are often considered in the context of safe withdrawal rates swr see the optimal glidepath outperforms all others with respect to some measure rook and derive the dynamic and static glidepaths that minimize the probability of ruin using an swr respectively both models use 
normally distributed returns which is an assumption many practitioners and researchers reject due to the lack of skewness and heavy tail the purpose of this research is to extend those models to returns that are skewed of generic complexity the multivariate pdf we develop also allows for a retirement plan using data seeded with extreme events see optimal dynamic retirement glidepaths the dynamic glidepath that minimizes the probability of ruin using an swr is derived in rook via a dynamic program dp for fixed and random time horizons it applies to an individual or a group the dp value function v is defined on dimensions t rf t and t both t and rf t are discretized see to construct the corresponding grid that manages v and it can be solved when the cdf for t is tractable under lognormal returns the cdf for t is intractable although methods do exist to approximate it see rook kerman for one implementation we have derived t the annual real compounding return for a diversified portfolio using s p l small cap equity s and b returns the cdf for t in is a function of the normal cdf which is considered tractable as near exact approximation routines are readily available source code to solve the portfolio problem is supplied in rook for example an swr plan using small cap stocks s and bonds b can be optimized using t from with and so that incorporating additional securities such as l into the optimal dynamic glidepath problem is optimal static retirement glidepaths expressions for the probability of ruin were supplied in for fixed and random time horizons minimizing the probability of ruin and maximizing the probability of success are equivalent optimization problems using t t p ruin from and under a fixed time horizon tf the optimal static glidepath is found by maximizing with respect to the asset allocation and is solved for a portfolio in rook using both gradient ascent and newton s method for fixed and random time horizons as in assume s and b are used so that and since the probability of success is a function of the derivative wrt each is p ruin t tf tf the derivatives wrt the same are p ruin t tf the derivatives wrt are p ruin t tf each term in the sum of and can be computed to an arbitrary level of precision see rook which includes the relevant source code as in the corresponding dps would use the cdfs of univariate normal mixture pdfs for t other than i j also be generated using simulation estimates for these expressions can viii conclusion as retirement decumulation models increase in sophistication financial firms may guarantee their success the retiree could pay for this as a percentage of funds remaining at death decumulation models are statistical and based on assumptions which if incorrect can render the model unsound quantitative mortgage products sold during the housing boom were priced using generated by simulating default times with a gaussian copula in hindsight the normal assumption was incorrect because correlations change in a crisis since housing booms are followed by housing busts model assumptions should have incorporated economic regimes as our economy transitions from pensions to defined contribution plans quantitative retirement products are proliferating at present the industry is built on a gaussian or lognormal foundation which also fails to incorporate regimes or crises when modeling returns and their correlation the purpose of this research is to develop a multivariate pdf for asset returns that is suitable for quantitative retirement plans the model fits any set of 
returns however the curse of dimensionality will limit the number of securities we propose a multivariate mixture having fixed mixture marginals using normal components the model is motivated by the claim that a lognormal pdf is virtually indistinguishable from a mixture of normals whereas the lognormal pdf is intractable with regard to weighted sums the normal mixture is not the lognormal pdf is only justifiable when returns are iid and the pdf is for the given sample size a typical retiree could endure several market crashes and we should not expect the historical sample to represent all possible extremes we can stress test a retirement plan by subjecting it to a return pdf that has been fit on the historical sample seeded with black swan events the normal or lognormal pdf are unhelpful in this regard as neither can accommodate such outliers the univariate and multivariate pdfs we have developed fit the historical returns closely and a valid criticism is that models which memorize the training data project poorly into the future adjusting the variance ratio constraint when bootstrapping the marginal lrts can loosen the fit we have used relatively high values for both larger variance ratio constraints and lrt lead to more marginal peaks and more components since the multivariate pdf maintains the marginals over fitting the marginals propagates through to the multivariate pdf the user sets these values as desired we fit the multivariate pdf in steps first generic mixture marginals are derived using the em algorithm second the multivariate pdf structure is set using lps where the number of multivariate regimes is pruned by penalizing the objective when it includes components with no data lastly covariances are added and probabilities updated using an ecme approach with the split into convex and general nlp optimizations for the nlp we use a approach that simulates the step size and line search parameters while iterating lastly a linear transform on the multivariate pdf forms the real compounding return on a diversified portfolio and it is incorporated it into optimal discrete time retirement decumulation models using both static and dynamic asset allocation glidepaths references akaike hirotugu a new look at the statistical model identification ieee transactions on automatic control vol no pp anton howard calculus with analytic geometry edition john wiley sons new york ny baltussen guido sjoerd van bekkum and zhi da indexing and stock market serial dependence around the world working paper series https bengen william determining withdrawal rates using historical data journal of financial planning vol no pp border more than you wanted to know about quadratic forms california institute of technology http box george gwilym jenkins and gregory reinsel time series analysis forecasting and control edition englewood cliffs nj boyd stephen and lieven vandenberghe convex optimization cambridge university press new york ny casella george and roger berger statistical inference wadsworth series pacific grove ca conway deloras multivariate distributions with specified marginals stanford university technical report no dempster laird and rubin maximum likelihood from incomplete data via the em algorithm journal of the royal statistical society vol no pp fama eugene the behavior of prices journal of business vol iss pp freund john mathematical statistics edition englewood cliffs new jersey gavin henry the method for least squares problems duke university http guttman irwin linear models an introduction 
wiley series in probability and mathematical statistics new york ny hamilton james a new approach to the economic analysis of nonstationary time series and the business cycle econometrica vol no pp harlow and keith brown market risk mortality risk and sustainable retirement asset allocation a downside risk perspective journal of investment management vol no pp hillier frederick and gerald lieberman introduction to operations research edition new york ny hogg robert joseph mckean and allen craig introduction to mathematical statistics pearson prentice hall upper saddle river nj huber peter john tukey contributions to robust statistics the annals of statistics vol no pp hurvich clifford and tsai regression and time series model selection in small samples biometrika vol no pp jensen paul and jonathan bard operations research models and methods john wiley sons hoboken nj johnson norman samuel kotz and balakrishnan continuous univariate distributions volume edition john wiley sons new york ny kachani soulaymane the housing bubble and the financial crisis in the united states causes effects and lessons learned industrial economics ieor lecture notes columbia university kall peter and janos mayer stochastic linear programming models theory and computation edition springer series in operations research and management science new york ny law averill and david kelton simulation modeling and analysis series in industrial engineering and management science new york ny levenberg kenneth a method for the solution of certain problems in least squares quarterly of applied mathematics vol no pp li david on default correlation a copula function approach the journal of fixed income vol no pp liu chuanhai and donald rubin the ecme algorithm a simple extension of em and ecm with faster monotone convergence biometrika vol no pp and santosh vempala fast algorithms for logconcave functions sampling rounding integration and optimization proceedings of the annual ieee symposium on foundations of computer science pp mackenzie and taylor spears the formula that killed wall street the gaussian copula and modeling practices in investment banking social studies of science vol issue pp marquardt donald an algorithm for estimation of parameters journal of the society for industrial and applied mathematics vol no pp mclachlan geoffrey on bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture appl vol no pp mclachlan geoffrey and thriyambakam krishnan the em algorithm and extensions wiley series in probability and statistics new york ny mclachlan geoffrey and david peel finite mixture models wiley series in probability and statistics new york ny meyer carl matrix analysis and applied linear algebra society for industrial and applied mathematics siam philadelphia pa milevsky moshe it s time to retire ruin probabilities financial analysts journal vol no pp nau robert notes on arima models duke university https nelson roger an introduction to copulas edition springer series in statistics new york ny nocera joe risk management the new york times magazine http paolella marc multivariate asset return prediction with mixture models the european journal of finance vol iss pp giordoni paolo xiuyan mun and robert kohn flexible multivariate density estimation with marginal adaptation working paper series https rabiner lawrence and juang an introduction to hidden markov models ieee assp magazine pp rook christopher minimizing the probability of ruin in retirement working paper series http 
rook christopher optimal equity glidepaths in retirement working paper series https rook christopher and mitchell kerman approximating the sum of correlated lognormals an implementation working paper series https ross sheldon introduction to probability and statistics for engineers and scientists edition elsevier new york ny salmon felix recipe for disaster the formula that killed wall street https schwarz gideon estimating the dimension of a model the annals of statistics vol no pp searle shayle george casella and charles mcculloch variance components wiley series in probability and mathematical statistics new york ny taleb nassim the black swan random house new york ny tao jill mixed models analysis using the sas system sas institute cary nc titterington smith and makov statistical analysis of finite mixture distributions wiley series in probability and mathematical statistics new york ny tran paolo giordani xiuyan mun robert kohn and mike pitt estimators for flexible multivariate density modeling using mixtures journal of computational and graphical statistics vol no pp tsay analysis of financial time series edition john wiley sons hoboken nj westfall peter randall tobias dror rom russell wolfinger and yosef hochberg multiple comparisons and multiple tests using sas sas institute cary nc wothke werner testing structural equation models chapter nonpositive definite matrices in structural modeling sage publications newbury park ca data sources inflation rate cash no interest federal reserve bank of minneapolis consumer price index link https accessed december real s p total s p real total real aswath damodaran updated january historical returns on stocks bonds and bills united states link http accessed december download http real small cap equity total small cap equity roger ibbotson roger grabowski james harrington carla nunes september stocks bonds bills and inflation sbbi yearbook john wiley and sons link http accessed december yearly shiller cape ratio january s p earnings january robert shiller online data robert shiller stock markets and cape ratio link http accessed december download http gold returns kitco metals historical gold prices gold london pm fix us dollars link http accessed december download http retiree surveys steve vernon december the top retirement fears and how to tackle them cbs news moneywatch link http accessed january lea hart october american s biggest retirement fear running out of money journal of accountancy link http accessed january robert brooks july a quarter of americans worry about running out of money in retirement the washington post link https accessed january prudential investments perspectives on retirement retirement preparedness survey findings prudential financial link https accessed january emily brandon march baby boomers reveal biggest retirement fears us news world report link http accessed january ix appendix with source code appendix a proof of unbounded likelihood for normal mixture pdf let xt f x f x f x f g x where f x is a mixture pdf with f i x n g t and suppose that an iid sample of size t has been observed as xt the normal component pdfs are given by i g the vector of unknown parameters for f x f x is defined as where and to obtain an arbitrarily large value for from take a single component and dedicate it to one observation for example consider component k f k x and observation j xj let be an arbitrarily small number and set xj and the value of f k xj is recall that we initialize to begin the and is guaranteed to increase at each 
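iteration.

To make the argument concrete, a minimal illustration in our own notation (a sketch, not the paper's exact expression): dedicate component k to the single observation x_j by setting \mu_k = x_j and \sigma_k = \epsilon for an arbitrarily small \epsilon > 0. That component's contribution to the mixture density at x_j is

\pi_k\,\phi(x_j;\,\mu_k,\sigma_k^2) \;=\; \frac{\pi_k}{\sqrt{2\pi}\,\epsilon} \;\longrightarrow\; \infty \quad \text{as } \epsilon \to 0,

so the sample likelihood can be pushed above any bound, which is exactly the degeneracy that the variance ratio constraint \max_c \sigma_c^2 / \min_c \sigma_c^2 \le C is imposed to rule out. Returning to the proof: the likelihood is guaranteed to increase at each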
iteration from above as f k xt therefore by choosing k j and we can initialize f k xt to an arbitrarily large number making unbounded such a solution for however is not meaningful as it dedicates a component to a single observation similarly a component that is trapped into fitting a small number of closely clustered observations also leads to a high likelihood value due to the small variance and is referred to as a spurious maximizer of when the solution is not meaningful these should be identified and removed if they are not the mle to avoid manually evaluating each for spuriousness we impose a variance ratio constraint as noted in this prevents any single variance from becoming too small and eliminates both problems noted above mclachlan peel appendix b diagnostic plots for time series diagnostic plots for the time series analyzed in table i are presented here each time series includes a plot of the uncentered raw observations xt t as well as the acf and pacf up to lags where n total data points for the series box et annual values from are used therefore n for all series that are not differenced or averaged the sources for all data can be found in the data sources section located after references in the main paper each test is formed as ho no serial correlation vs ha serial correlation with the provided on the pacf plot preliminary conclusions about the behavior of each process are supplied in table i followed by a discussion of the appropriate to account for multiplicity all security returns are annual compounding inflation rate real compounding s p total compounding s p real compounding small cap equity total compounding small cap equity real compounding total compounding real compounding real compounding gold return cash compounding no interest real s p risk premium real s p cap risk premium real small risk premium avg real compounding s p avg total compounding s p diff shiller cape ratio diff log shiller cape ratio detrended s p avg real earnings appendix source code the application accepts input files a control file with settings and a text file of returns samples of each are shown below the control file sets parameters with the and being the of assets and of time points respectively the parameter sets the of random starts per cpu core for each execution of the em algorithm when fitting univariate pdfs the value is multiplied by less than the of components in the mixture being fit for example if fitting a univariate mixture the setting below results in random since our machine contains cores this mixture pdf is fit using em algorithm random starts the parameter is the maximum of components for all univariate mixtures which we set to in this implementation the parameter is the of samples to use for each bootstrapped lrt within the proposed framework that determines the optimal of univariate mixture components for each asset the final values in are the lrt respectively the returns file contains a column of compounding returns for each asset on our computer the control file shown can reproduce the multivariate pdf for l s b from in a few minutes both univariate and multivariate parameter estimation results are written to the screen and to the file named in the folder supplied by the user upon application launch the files above must also exist within this directory lastly a folder to contain error files for issues encountered during the optimization must exist and be specified by the user using the global constant errfolder which is also defined in the header file to save time 
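The extracted listing no longer shows the sample input files referred to above, so the following is a purely hypothetical illustration (the numbers are ours, not the paper's). The control file holds seven whitespace-separated values that the application reads in this order: number of assets, number of time points, random-start multiplier per processing unit, maximum number of mixture components, number of bootstrap samples per LRT, and the forward and backward LRT significance levels; for example,

3 90 5 10 500 0.20 0.30

would describe three assets observed at ninety time points. The returns file would then contain ninety rows with three whitespace-separated compounding returns per row, one column per asset.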
processing for both the univariate and multivariate pdf is multithreaded our implementation of the em algorithm launches each random start in a separate thread random starts end at local optimums the ecme algorithm takes a small or large step in the general direction of steepest ascent while maximizing the multivariate function with respect to the covariance parameters see the line search and stepping parameters are simulated and values that yield the largest increase are used which is also multithreaded note that our ecme implementation uses the actual or incomplete loglikelihood function from conditioning first on the probabilities holding the covariances constant and then on the covariances holding the probabilities constant these are steps and in and this application uses the boost http lpsolve http and eigen http external libraries which are freely available to download under terms and conditions described at the given sites our code consists of a header file and functions and is being provided under the gnu general public license see http for full details copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename summary this is the header file which is included before each function call to define needed libraries global constants inline functions and function prototypes for function prototypes see headers attached to the code definition for a description of the purpose and parameters inline functions getndens the supplied normal density function evaluated at a given point without the constant pi getmdens the supplied normal mixture density function evaluated at a given point without the constant pi getpost the probability that a given value belongs to a given component of the supplied normal mixture density getunimean the mean of a given set of observations getunistd the standard deviation mle of a given set of observations getllval the value for a given observation set from the supplied univariate mixture density getvratio the ratio of the variance for a supplied set of standard deviations getemmean mean update for the em algorithm along with a quantity needed for the em algorithm variance update getmvndens the supplied multivariate normal density function evaluated at a given vector of values for each random variable getmvnmdens the supplied multivariate normal mixture density function evaluated at a given vector of values for each random variable getcovs accept a multivariate normal mixture density and return the vector of covariances starting with vc element of component then element etc setcovs accept a vector of all covariances and insert the values into a single component vc matrix as specified by the user getcofm accept a matrix and and convert it to one who determinant is equal to the cofactor of the matrix with respect to that getidm accept an empty square matrix and populate it with an identity matrix of the same size chksum accept a multivariate normal mixture density and check that no component has a likelihood of zero at all time points showvals display the parameters means standard deviations component probabilities of a supplied multivariate 
mixture density to standard output pragma once include libraries include include include iostream include iomanip include string include fstream include random include include include include include include include using namespace std global constants const string const string const string const string c const long double const long double const long double const long double const long double const long double name of parameter control input file name of input data file name of output file folder where will be written to files mandatory constant representation of pi ln root of pi square root of the variance ratio constraint threshold convergence criteria large negative value for use as an invalid value indicator arbitrarily large positive value const const const const const const const const const const const const const const long double long double long double long double int int int long double int int int int int int minimum multiple when performing a ridge repair on a broken matrix maximum multiple when performing a ridge repair on a broken matrix threshold for minimum eigenvalue to ensure a positive definite matrix threshold for determinant to ensure a positive definite matrix maximum of em iterations allowed per single optimization large constant for use in lp objective function to enforce feasibility constraint discretization level for separable quadratic objective function in solvelp minimum additive factor to use for hessian stepping maximum hessian steps per thread set to for ecme stepping multiply by cores to determine the total threads used for hessian stepping set to for ecme stepping of beats to randomly select from during the ecme algorithm step set to for ecme stepping debug level for output window details higher more details processor cool down time in between em iterations in msec value of seconds number of outer loops for ecme processing of times to repeat the ecme procedure inline functions inline long double getndens const long double val const long double mn const long double std return exp pow inline long double getmdens const long double val const int g const long double inmdist long double for int c g inmdist c getndens val inmdist c inmdist c return dval inline long double getpost const long double val const int g const long double inmdist const int cid return inmdist cid getndens val inmdist cid inmdist cid val g inmdist inline long double getunimean const int t const long double r long double for int t t r t return inline long double getunistd const int t const long double r const long double mn long double for int t t pow r t return sqrt inline long double getllval const int t const long double r const int g const long double inmdist long double long double t for int t t log getmdens r t g inmdist return llval inline long double getvratio const int g const long double stds long double for int c g if stds c minstd c if stds c maxstd maxstd stds c return pow inline long double getemmean const int t const long double r const long double pprbs const long double cprbs long double ssq long double atrm ssq for int t t t r t ssq r t return cprbs inline long double getmvndens const eigen vals const eigen mn const eigen vcmi const long double sqrdet const long double picnst return exp vcmi inline long double getmvnmdens const int ucells const eigen vals const eigen mns const eigen vcmis const eigen prbs const long double sqrdets const long double picnst long double for int u ucells u getmvndens vals mns u vcmis u sqrdets u picnst return dval inline void getcovs const 
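// ------------------------------------------------------------------
// NOTE (editorial): the inline definitions above were damaged during text
// extraction (operators, constants, and indices were dropped).  The
// self-contained sketch below shows what getndens, getmdens, and getpost
// are described as computing -- the normal density, the mixture density,
// and the posterior component probability -- and is a reconstruction
// consistent with those descriptions, not the author's verbatim code.
// The parameter layout (row 0 = weights, row 1 = means, row 2 = standard
// deviations) follows the indexing convention described in the function
// documentation later in this appendix.
#include <cmath>

inline long double sketch_getndens(long double val, long double mn, long double sd)
{
    // Normal pdf N(val; mn, sd^2).
    const long double pi = 3.141592653589793238L;
    return std::exp(-0.5L * std::pow((val - mn) / sd, 2.0L)) / (std::sqrt(2.0L * pi) * sd);
}

inline long double sketch_getmdens(long double val, int g, const long double* const* mdist)
{
    // Mixture pdf: weighted sum of the g component normal densities.
    long double dval = 0.0L;
    for (int c = 0; c < g; ++c)
        dval += mdist[0][c] * sketch_getndens(val, mdist[1][c], mdist[2][c]);
    return dval;
}

inline long double sketch_getpost(long double val, int g, const long double* const* mdist, int cid)
{
    // Posterior probability (Bayes rule) that 'val' belongs to component 'cid'.
    return mdist[0][cid] * sketch_getndens(val, mdist[1][cid], mdist[2][cid])
           / sketch_getmdens(val, g, mdist);
}
// ------------------------------------------------------------------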
int ucells const eigen invcs eigen indvarsm int for int v ucells for int r int invcs v for int c int invcs v indvarsm v r c inline void setcovs const int ucell eigen invcs const eigen indvarsm int int int invcs ucell int invcs ucell for int r int invcs ucell for int c int invcs ucell invcs ucell r c invcs ucell c r ucell r c inline void getcofm const int numa const int inr const int inc const eigen ine eigen inejk inejk for int r numa for int c numa if inejk r c else if inejk r c inline void getidm eigen id for int r int id for int c id if id r c else id c r r c inline void chksum const int t const int nucmps const long double fvals for int v nucmps long double for int t t fvals t v if tmpsum cout endl error unique component v has a likelihood that is zero for each time point endl this will eliminate the corresponding component probability from the stage objective function endl the component probability should be treated as a constant and moved to the rhs constraint vector endl and eliminated from the objective function the code for this has not yet been implemented endl exiting ecmealg endl exit inline void showvals const int t const long double r const int g const long double inmdist ostream for int x t ovar r x endl for int c g ovar string prob c inmdist c mean c inmdist c c inmdist c endl function prototypes int fitmixdist const int a const int t const long double r const int maxcmps const int nsmpls const int nstrts const long double sl long double fnlmdst string rdir void getrvals const int n const int g const long double inmdist long double rvls void getrprbsstds const int t const long double r const int g long double prbs const long double mns long double stds void emalg const int t const long double r const int g long double prbs long double mns long double stds long double llval const long double inmdist int rprms int ecmealg const int t const long double r const int numa const int nucmps const eigen cmtrx const eigen cvctr long double muprbs eigen mumns eigen muvcs int ucellids const string rdir long double thrdemalg const int t const long double r const int rs const int ing const long double inmdist const int outg long double outmdist void mapcells const int totcells const int numa int incellary const int curast const int incmps int cid int tmpary int getcell const int incellary const int totcells const int cmplvls const int numa void asgnobs int inary const int t const long double r const int g const long double inmdst void getcor const int t const long double r const int asgn const int incellary const int cellid eigen mn eigen vc const int vcell int solvelp const int totcells const int numa const int incellary const int cmps const long double prbs const int ncellobs const long double cellprob double outprbs const int type void getcmtrx const int totrows const int nucmps const int sol const int numa const int ncmps const int vcids const int incellary const long double prbs eigen flhs eigen frhs long double gethesse const int t const int inucmps const long double infvals const long double indnoms const eigen incmtrx const eigen incvctr eigen inhess eigen inlhs eigen inrhs long double gethessm const int t eigen rs const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs const eigen mumns const eigen e const eigen einv const eigen ina eigen inhess void getgrade const int t const int inucmps const long double infvals const long double indnoms const eigen inlhs const eigen inrhs const eigen indvars eigen ingrad void getgradm 
const int t eigen rs const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs eigen mumns const eigen e const eigen einv const eigen ina eigen ingrad int long double getlfvals const int t const int numa const int nucmps const eigen rs const eigen uprbs const eigen inmns const eigen invcis const long double insqs const long double inpicst long double denoms long double lfvals void wrtdens const string typ const int nucmps const int ucells const long double muprbs const eigen mumns const eigen muvcs ostream ovar void stephessm int long double const eigen rs eigen indvars const eigen ingrad const eigen inhess const eigen uprbs const eigen inmns const eigen invcs int ridgerpr const int ucell eigen e long double mult copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function main summary this function defines the entry point for the console application and drives the analysis using major sections section the contents of the control file see global constant cfile in the header program and the data file see global constant rfile in the header program are read in and stored as variables section build the univariate mixture pdfs for each asset using the em algorithm with random starts and bootstrapped likelihood ratio test for determining the optimal of components for each asset section combine the univariate mixtures into a multivariate mixture pdf without disturbing the marginals and without correlations at this point a multivariate mixture pdf is estimated with dependence but without regime correlations there are multivariate mixture densities at this stage one for each type of lp solved the lp objectives for building the multivariate mixture pdf are minimax and minimum squared distance section use an ecme type algorithm to estimate the correlations and refine the component probabilities with maximum likelihood as the objective this step is a iterative procedure similar to the em algorithm and random starts are used in the step where the covariances are estimated the step is repeated necmes times and the optimal multivariate mixture pdf for each is written to the output file once all steps have been completed there are necmes multivariate mixture densities with fixed marginals and one is chosen based on some criteria such as higher likelihood or higher information criteria this final step is left to the user we use aic and the decision would account for the likelihood value as well as the total of parameters all pdfs and their values are written to the output file see global constant ofile in the header program along with the univariate mixture pdf details inputs no input arguments are processed by this function critical inputs are supplied via the file see global constant cfile in the header program and data is supplied via see global constant rfile in the header program in addition to other inputs are set to global constants in the header file outputs this function writes details of fitting univariate multivariate mixture pdfs for the supplied assets to the screen and to the file see ofile in the header 
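// ------------------------------------------------------------------
// NOTE (editorial): a schematic form of the structure-setting LPs referred
// to above, in our own notation (a sketch, not the paper's statement).
// Let v index the cells of the component cube, ehat_v the empirical cell
// probability computed from the per-asset component assignments, and
// pi_{a,c} the fitted marginal weight of component c for asset a.  Choose
// cell probabilities p_v >= 0 to
//
//     minimize   max_v | p_v - ehat_v |               (minimax objective)
//     or         sum_v ( p_v - ehat_v )^2             (squared distance,
//                                                      piecewise-linearized
//                                                      inside solvelp)
//     subject to sum over {v : v_a = c} of p_v = pi_{a,c}
//                for every asset a and component c,
//
// with a penalty discouraging positive p_v on cells containing no
// observations, which prunes the number of multivariate regimes.  Cells
// retaining positive probability become the components whose covariances
// are then estimated by the ECME step.
// ------------------------------------------------------------------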
program include int main int argc char argv local variables string rootdir int nboots ntpoints nassets nrstarts mcomps ucell long double alpha long double ofstream fout ensure that an folder has been provided set in the header file if errfolder cout error an folder has not been provided problems found during optimization will be written to files in this folder endl use global variable errfolder in the header file to set this destination endl exiting main endl exit retrieve directory location of setup files cout enter the directory where the setup files reside eg c endl cin rootdir boost rootdir cout endl read in control file which contains of asset classes of timepoints with return data of random starts for maximizing the likelihood of a univariate mixture as a multiple of independent processing units and components value of with independent processing units will use random starts for a mixture random starts for a mixture and random starts for a mixture maximum of components appropriate for this data set when fitting univariate mixture densities the marginals sample size to use when bootstrapping the lrt test statistic for lrts for the of components using a selection algorithm where the alpha is used for forward selection and the alpha is used for backward selection ifstream getparams if getparams nassets ntpoints nrstarts mcomps nboots alpha alpha else cout error could not open file rootdir cfile endl exiting main endl exit instantiate arrays to hold means standard deviations and proportion weights for mixtures ncomps new int nassets array to hold components for each asset rtrn new long double nassets one return per asset and time point up to time ntpoints prob new long double nassets one prob per asset and component for component sizes up to mcomp mean new long double nassets one mean per asset and component for component sizes up to mcomp stdev new long double nassets one stdev per asset and component for component sizes up to mcomp asgnmnt new int nassets one assignment per asset and time point up to time ntpoints for int a nassets ncomps a start with component for each asset rtrn a new long double ntpoints array of returns for each asset returns are then rtrns a rtrns a etc asgnmnt a new int ntpoints array of component assignments for each asset after density has been read in returns file which has a column of returns for each asset and store in a array ifstream getrtrns if int while r ntpoints for int a nassets getrtrns rtrn a r if r ntpoints cout error file rootdir rfile should have ntpoints rows of returns for nassets assets but it has fewer endl exiting main endl exit else cout error could not open file rootdir rfile endl exiting main endl exit build a large array to temporarily hold the optimal distribution for each asset for int m optmdst m long double mcomps calculate the component probability mean and standard deviation normal mle version note mle standard deviation divides by n not and is a biased estimator for int a nassets initialize all probabilities in the large array to zero for int c mcomps optmdst c find the best fitting mixture distribution for this asset ncomps a a ntpoints rtrn a mcomps nboots nrstarts alpha optmdst rootdir assign each observation to a component the most likely one using bayes rule asgnobs asgnmnt a ntpoints rtrn a ncomps a const long double optmdst build arrays to hold the optimal solution and transfer it to these arrays prob a long double ncomps a mean a long double ncomps a stdev a long double ncomps a for int c ncomps a prob a c c mean a c c 
stdev a c c write out the assignment of each observation time point to the corresponding components during debug mode if dbug cout endl string endl assignment of observations at each time point to a set of components using bayes decision rule endl string for int t ntpoints cout endl time t setfill setw long long ntpoints t for int a nassets cout asset asgnmnt a t cout endl endl mixture distribution has been fit for all assets assemble the multivariate density start by computing the total of cells that need to be mapped dealing with a cube also compute the total of components summing across all assets int for int a nassets ncells ncells ncomps a totcmps totcmps ncomps a initialize variables int int ncells int nassets for int i ncells allcells i new int nassets call function to map each cell of the cube to a single list value mapcells ncells nassets allcells ncomps cellid tmpvals derive the unique cell id for each time point and count the of obs per unique cell int int nassets int ntpoints int ncells long double long double ncells for int c ncells cellprb c long double cellcnt c if dbug cout endl endl string endl assignment of each time point to a cell endl string endl for int t ntpoints if dbug cout time t setfill setw long long ntpoints t for int a nassets tmpcombo a a t cellasgn t const int allcells ncells tmpcombo nassets cellcnt cellasgn t cellasgn t cellprb cellasgn t cellasgn t long double ntpoints build and solve the corresponding lp that determines the structure of the multivariate density using both a minimax and minimum squared distance objective then build an array of cell ids that have probabilities attached using each method these are the cells we must derive a correlation for ensuring that the resulting vc matrix is in solvelp type is defined as these become the index values for all arrays that hold results from both methods to compare and select the better performer minimax objective minimum sum of squared distances objective string type type minimax type minimum sum of squared distances ssd int numcmps int ctr double double eigen eigen eigen eigen cout endl string endl running type and type lps to set the initial multivariate density structure endl string endl endl for int i use an lp to approximate the structure of the multivariate density estprbs i double ncells numcmps i ncells nassets const int allcells ncomps const long double prob cellcnt cellprb estprbs i i valcids i int numcmps i identify and store the unique cells with probabilities for int c ncells if estprbs i c valcids i retrieve the effective constraint matrix that applies to the lp solution and is full row rank ensure it does not have more rows than components getcmtrx numcmps i i nassets ncomps valcids i const int allcells const long double prob mlhs mrhs cout type i done there are numcmps i unique cells with probabilities using type i endl assemble multivariate densities using the minimax and minimum squared distance objectives as starting points and select the one with a better fit after iterating with the ecme algorithm until convergence where the step is replaced by a convex programming problem better fit means higher likelihood int int necmes int necmes long double long double necmes eigen eigen necmes eigen eigen necmes for int i necmes initialize the arrays to their proper size based on the of components resulting from the above lps mprobs i long double numcmps i mmeans i eigen numcmps i mvcs i eigen numcmps i vcids i int numcmps i initialize each type and cell specific mean vector and vc matrix to the 
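// ------------------------------------------------------------------
// NOTE (editorial): summary of the cell bookkeeping above, added because
// the extracted listing lost its formatting.  Each combination of one
// univariate component per asset defines a "cell" of the component cube,
// so ncells is the product of the per-asset component counts.  mapcells()
// enumerates the cube and assigns every cell a unique id; asgnobs() has
// already attached each observation to its most likely component for each
// asset (Bayes rule), so each time point maps to exactly one cell id, and
// cellcnt[] / cellprb[] hold the per-cell counts and empirical
// probabilities that fed the structure-setting LPs solved above.  The
// arrays being sized here hold, per surviving cell, a component
// probability, a mean vector, and a variance-covariance matrix.
// ------------------------------------------------------------------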
appropriate size which is the of assets for int v numcmps i mmeans i v nassets mvcs i v nassets nassets vcids i v i v populate probabilities means and the initial matrix for each valid cell elements covariances are set to zero at this stage for int v numcmps i i v mprobs i v i ucell for int nassets mmeans i v allcells ucell for int nassets if mvcs i v stdev allcells ucell else mvcs i v if debugging mode is on write out the corresponding parameters of the distribution these are the initial lp values before improving with covariances if dbug wrtdens type i numcmps i vcids i mprobs i mmeans i mvcs i cout improve the initial estimate using the ecme algorithm stop when the multivariate likelihood is maximized cout endl string endl ecme algorithm for type i initial solution endl string endl endl fout endl string endl ecme algorithm for type i initial solution endl string endl endl numcmpsf i ntpoints const long double rtrn nassets numcmps i mlhs i mrhs i mprobs i mmeans i mvcs i vcids i rootdir write final density to the output file and display to user which contains maximum likelihood estimates wrtdens type i numcmpsf i vcids i mprobs i mmeans i mvcs i fout wrtdens type i numcmpsf i vcids i mprobs i mmeans i mvcs i cout cout endl done processing type i initial lp solution final density shown above and written to file endl free temporary memory allocations for int i necmes delete mprobs i mprobs i delete mmeans i mmeans i delete mvcs i mvcs i delete vcids i vcids i for int i delete valcids i valcids i delete estprbs i estprbs i delete mprobs delete mmeans delete mvcs delete valcids delete mlhs delete mrhs delete estprbs for int c ncells delete allcells c allcells c delete allcells delete tmpvals delete tmpcombo delete cellasgn delete cellcnt delete cellprb for int m delete optmdst m optmdst m for int a nassets delete rtrn a rtrn a delete asgnmnt a asgnmnt a delete prob a prob a delete mean a mean a delete stdev a stdev a delete optmdst delete rtrn delete asgnmnt delete prob delete mean delete stdev delete ncomps delete numcmpsf exit cout endl done return hit return to exit endl copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function fitmixdist summary this function fits a univariate normal mixture density to a set of observations using the em algorithm within a iterative procedure that tests for the optimal of components first a density is fit using standard maximum likelihood ml estimates and tested against a mixture density fit using the em algorithm with a user specified large of random starts note the of random starts is specified by the parameter in the input control file as a multiple of the of processing units on the computer running the application in addition this value is multiplied by components which uses an increasing of random starts for densities with more components when fitting a particular density the one with largest likelihood value from all random starts and obeying the variance ratio constraint set in the header file using global constant stdratio is selected as the ml estimator a hypothesis test of ho vs ha is then conducted using the 
forward alpha significance level set by the user in the input control file parameter the likelihood ratio test statistic lrt is used and derived as where lo and la are the maximized likelihood values under ho and ha respectively the observed lrt statistic value is then compared to the critical value of the lrt distribution under ho that yields area to the right equal to the forward alpha the distribution of the lrt statistic under ho is approximated via bootstrapping see mclachlan assuming the mle fit under ho reflect the true distribution of the data under ho random samples from the distribution under ho are generated a and univariate mixture density is fit to each sample which together generate a single null value from the true null distribution of the lrt statistic again assuming the true null distribution is that fit by the mles using a large of samples yields a set of values used to approximate the null distribution of the lrt statistic with corresponding critical value for rejecting ho based on the forward alpha if ho is rejected then a univariate mixture density becomes the new testing basis and the forward procedure continues with a test of vs components otherwise if ho is not rejected then the forward procedure testing basis remains a density which is tested against in the exact same manner the key to bootstrapping the lrt statistic null distribution is to assume that the density with mle parameters under ho is the true null density and random samples are generated from it the forward procedure continues in this manner testing ho go components vs ha ga components up to ha max components specified in the control file parameter if the maximum of components is set to n then the forward procedure will conduct hypotheses tests sequentially where the alternatives are components components components n components respectively the forward procedure will end with a certain of mixture components being suggested as optimal it is followed by a backward procedure that tests a univariate mixture density with less component and repeats the test until ho is rejected ending the procedure if the forward procedure ends by suggesting that f components are needed to properly fit the data then the backward procedure begins by testing ho components vs ha f components using the backward alpha specified by the user in the control file parameter note that the forward procedure conducts a series of tests with ha having maxcmps maxcmps is the maximum of components to consider in the application which is set by the parameter in the control file components respectively therefore once the forward procedure ends we have optimal univariate mixtures fit with ml estimates for all components from to maxcmps during backward processing these are used to generate the random samples needed under ho to approximate the null distribution of the lrt if this hypothesis is rejected then the procedure ends and f components are considered optimal for the given observation set else if ha is not rejected then the backward procedure basis becomes components since it is not significantly different from f components and applying the principle of parsimony in this case the backward procedure continues by testing ho components vs ha components if this test is not rejected a mixture density with components becomes the new basis and the procedure continues by testing ho components vs ha components if the test is rejected then components are considered optimal and the procedure ends the backward portion continues in this manner until a 
test is rejected consider an example with a maximum of components specified by the user in the control file parameter and a forward alpha parameter and backward alpha parameter the following details a possible sequence of testing events and illustrates how the procedure iterates test ho ha alpha type forward forward forward forward forward forward forward forward forward backward backward backward result ho accept accept reject accept accept reject accept accept accept accept accept reject final result here a mixture is considered to provide best fit based on the forward and backward alpha values significance levels forward processing ended with a component mixture producing the best fit and backward processing found no significant difference between components a test that was not performed during forward processing and no difference between components a test that was also not performed during forwared processing the backward test of vs components was performed during forward processing but with a smaller alpha and was not rejected but is rejected using the larger alpha for backward processing by tuning the alphas the user customizes the procedure to favor a of components inputs the asset being processed ranges from thru total the asset is used for debugging and displaying results to the output window and output file a the total of observations time points in the current application specified by the user via the control file parameter t an array of size t holding the at each time point for the asset currently being processed r the maximum of components to allow in the univariate mixture distribution fit by this function specified by the user via the control file parameter maxcmps the of bootstrap samples to use for each lrt when approximating the statistic null distribution specified by the user via the control file parameter nsmpls the of random starts to use when finding the ml estimates for a specific mixture density specified by the user via the control file parameter note that this value is taken as a multiple of the of cores and is also increased by the multiple components nstrts an array of size to hold the forward and backward alphas respectively for the lrts sl an empty double array to hold the fitted univariate mixture distribution that results from applying the procedure detailed here the array is indexed as s c where s and c component here refers to the component probability refers to the component mean and refers to the component standard deviation fnlmdst a string for the output directory the final univariate mixture density for each of components considered is written to the output file specified by the global constant ofile rdir outputs this function populates the empty array that is supplied via parameter with the optimal univariate mixture density fit by the procedure detailed above the number of components in this density is returned at the function call include int fitmixdist const int a const int t const long double r const int maxcmps const int nsmpls const int nstrts const long double sl long double fnlmdst string rdir local variables int sas strt end incr cbbsol curopt tstid nbootsadj pvalcntr fprms cmpord string yn tmptxt long double lrt long double long double maxcmps long double maxcmps long double maxcmps long double maxcmps long double long double t long double long double long double long double long double vector long double pvalue alpha vector int derive the solution for the incoming sample using the mle estimates for int m orimdst m long double m long 
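// ------------------------------------------------------------------
// NOTE (editorial): reminder of the test performed below, added for
// readability of the extracted listing.  For H0: g components versus
// Ha: g+1 components the statistic is
//
//     LRT = -2 * ( logLik(H0 fit) - logLik(Ha fit) ),
//
// and because the usual chi-square asymptotics do not hold for mixtures,
// its null distribution is approximated by a parametric bootstrap
// (McLachlan): draw nsmpls samples of size T from the H0 mixture fitted by
// maximum likelihood, refit both the g- and (g+1)-component models to each
// sample, and compare the observed statistic with the resulting bootstrap
// values to obtain the p-value tested against the forward or backward
// alpha.  Samples yielding a negative statistic (an inferior local
// optimum) or no valid optimum are discarded and the effective count
// nbootsadj is reduced accordingly.
// ------------------------------------------------------------------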
double mixture used for sample random orimdst orimdst t r orimdst t r orimdst for each asset arrays holds the optimal solutions for the original sample this will avoid having to rebuild optimal solutions during procedure orprob long double orprob ormean long double ormean orstd long double orstd orllval t const long double orimdst cout string endl string endl processing asset endl string endl string endl endl forward for int fb iteration limits depend on the procedure if fb backward for int g end set null and alternative hypotheses if fb cbfsol g sl else if fb g cbfsol sl write details of this iteration cout string endl string endl asset hypothesis test hnum vs direction endl string endl string endl endl string endl string endl asset is best fit by a n component normal mixture endl vs endl asset is best fit by a n component normal mixture endl string endl string endl if backward processing check whether or not this hypothesis has already been tested if so issue a warning then retrieve and use the existing solution for int h if h h if sl the same else a different cout endl warning this hypothesis has already been tested using tmptxt alpha see test rebuild input array under we need the null dist to generate bootstrap samples note that components under for test hnum and is the index of the optimal solution under of components for int m m long double for int c in general the solution is stored in orxyz c for example c c orprob is a array with single element orprob c c ormean is a array with elements ormean ormean c c orstd is a array with elements orstd orstd orstd report the current optimal solution for the actual sample under when forward processing if fb dbug cout endl optimal solution orllval endl getvratio endl showvals r const long double variance ratio cout endl simulation is used to approximate the null distribution of the lrt statistic if a sample is generated but no local optimum is found then reduce the simulation count by also reduce the simulation count by when the lrt statistic is this can happen if a local optimum is found under the full model that has a smaller likelihood than the optimum value under the reduced model which may itself be a local optimum the adjusted count is held in nbootsadj lrt value for current solution numerator uses the parameters already estimated we need to fit a new solution using components on the incoming data for the denominator of the lrt this is done here once fit the estimates are transferred to the double arrays that hold optimal solutions for all component sizes note when forward processing hnum so that if fb for int m oromdst m long double note that orimdst has exactly hnum components during forward processing which is less than the of comps in orllval hnum t nstrts hnum const long double orimdst oromdst instantiate the arrays of size and transfer the optimal solution for storage across component sizes orprob hnum new long double ormean hnum new long double orstd hnum new long double for int c orprob hnum c c ormean hnum c c orstd hnum c c if the algorithm fails to converge on the original data we must exit the program if orllval hnum lnegval cout error asset did not converge to a local optimum when attempting to fit components endl try increasing the of random starts decreasing the of components or increasing the variance ratio constraint endl exiting fitmixdist endl exit also exit if the algorithm finds an inferior optimum when compared to those with fewer of components if orllval orllval hnum cout error likelihood for asset when fitting 
components is less than the likelihood when endl fitting hnum components an inferior local optimum has been found increase the of random starts to prevent this endl exiting fitmixdist endl exit statlrt nsmpls orllval orllval report the lrt statistic value for the actual sample if dbug cout lrt details for original sample orllval orllval lrts for asset statlrt nsmpls endl endl bootstrap the lrt statistic to determine its sampling dist the reduced model is the one with mle just derived for the given of components this test is for of components if if dbug cout processing bootstrap samples instantiate the arrays to hold the optimal solutions under and these will be reused for each sample for int m m long double m long double test the hypothesis for int b nsmpls if dbug cout if nsmpls cout endl string else if dbug cout string endl string endl string asset hypothesis test hnum vs start processing bootstrap sample of nsmpls endl string endl string endl endl generate sample of size ntpoints from the reduced model under the null hypothesis that the reduced model using mles is correct for this set of returns when bootstrapping the lrt statistic use values probs means stds to generate the random starts under because we do not have access to the solution using components less on each sample and use the model fitted under to generate random starts when fitting the model specified by getrvals t const long double tmprtrn fit both components and components models and form the lrt statistic random start for always a mixture t tmprtrn t tmprtrn lrt t tmprtrn nstrts const long double if lrt lnegval lrt t tmprtrn nstrts const long double random observation from the null distribution of the following lrt reduce the simulation size count by if the lrt statistic is negative data originate from mixture with components data originate from mixture with components if lrt lnegval lrt lnegval statlrt b lrt lrt else statlrt b the denominator gets decremented in ways the lrt statistic is negative which means that an inferior local optimum was found no local optimum was found when fitting either the null component or the alternative component distributions all were spurious leading to unboundedness if statlrt b report the lrt statistic value for the bootstrap sample if dbug if lrt lnegval lrt lnegval statlrt b cout lrt details for sample lrt lrt lrt lrt lrts for bootstrap sample of nsmpls statlrt b endl endl else if lrt lnegval lrt lnegval if lrt lnegval cout the em algorithm did not find a local optimum under h tstid sample will be discarded endl endl else cout negative lrt test statistic statlrt b sample will be discarded endl an inferior not the largest local optimum was found under endl endl clear out sample solution arrays for int m delete m m delete m m clear the array that holds the optimal solution for the original sample under for int m delete m m if dbug cout nsmpls endl determine test result if is rejected then move the temporary values into their permanant placeholders if null hypothesis is not rejected then set variable stop to if sas for int n nsmpls if statlrt n statlrt n statlrt nsmpls long double uncomment to write the lrt array to a file in the error folder these values form the null distribution of the test statistic ofstream fout errfolder long long long long long long fout lrt statistic for original data statlrt nsmpls endl endl fout null distribution lrt statistic values for nsmpls bootstrap samples set values of to missing endl endl for int b nsmpls fout statlrt b statlrt b endl else pvalue write 
out the hypothesis test result if this is a retest report the prior result note that it is a retest when sas and sas holds the prior test number if sas cout endl string endl hypothesis test hnum endl the hypothesis test uses nbootsadj valid lrts values from endl of these there are pvalcntr values cout the resulting for testing vs is pvalue vs result endl string resampling along with the lrts value from the original sample the sample lrt statistic endl alpha sl fb endl else cout endl string endl hypothesis test hnum vs is a retest of hypothesis test sas endl string endl cout the resulting for testing vs is pvalue alpha sl fb endl accept or reject the hypothesis if ho is rejected then ha is considered a better fit when forward testing ha has additional components the integer value of when backward testing ho is considered an acceptable fit when not rejected integer value of here we keep track of the best solutions for both forward and backward testing the index of the optimal solution when accessing the orxyz double arrays is once finished processing that is orxyz orxyz not if pvalue sl fb if fb was rejected set cbfsol to components else stop backward processing after first is rejected else if fb curopt int cbfsol int cbbsol for displaying current solution to user if forward processing and not at the max components we should reset orimdst and clear out oromdst if fb end the input and output arrays the orimdst densities will generate the random starts when fitting the original sample to one additional component note that the original double mixture arrays are only needed during forward processing for int m delete orimdst m orimdst m long double for int c orimdst m c m c delete oromdst m oromdst m report the current best solution after testing this hypothesis cout the null hypothesis is yn rejected in favor of the alternative hypothesis endl at this stage a n curopt normal mixture provides the best fit for asset endl endl string endl string endl endl processor cool down if dbug cout endl processor cool down double minutes endl sleep cdown display then clear out the containers cout all for asset by test number endl string endl for int m cout for hypothesis test setfill setw setw m vs setw m pvalue m cout alpha alpha m cout values ll orllval m ll orllval m endl insert the optimal solution into the double array passed to this function the components is returned by the function order the mixture returned by the respective means for int cbbsol for int cbbsol if ormean ormean account for ties for int cbbsol fnlmdst cmpord populate the outgoing array fnlmdst cmpord fnlmdst cmpord fnlmdst cmpord display the final result for each asset fprms cbbsol free parameters cout endl distribution for asset is a n cbbsol normal mixture full details below endl endl orllval endl variance ratio getvratio cbbsol orstd endl aic long double fprms orllval endl bic orllval long double fprms log long double t endl aicc long double fprms orllval long double fprms fprms long double t fprms endl density parameters endl showvals r cbbsol const long double fnlmdst cout endl string endl string endl write all densities to output file for this asset along with all ofstream fout if a else fout string endl asset optimal densities endl string endl for int i maxcmps fout endl optimal solution endl string fprms free parameters fout endl orllval i endl variance ratio getvratio orstd i endl aic long double fprms orllval i endl bic orllval i long double fprms log long double t endl aicc long double fprms orllval i long double fprms fprms 
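// ------------------------------------------------------------------
// NOTE (editorial): the information criteria printed here follow the
// standard definitions (stated in our notation because the extracted
// expressions above are garbled).  With k free parameters, log-likelihood
// logL, and T observations:
//
//     AIC  = 2*k - 2*logL
//     BIC  = k*log(T) - 2*logL
//     AICc = AIC + 2*k*(k + 1) / (T - k - 1)
//
// For a g-component univariate normal mixture, k = 3*g - 1 (g means,
// g standard deviations, and g - 1 free component probabilities).
// ------------------------------------------------------------------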
long double t fprms endl density parameters endl for int c fout string prob c orprob i c mean c ormean i c c orstd i c endl fout endl endl all for asset by test number endl string endl for int m fout for hypothesis test setfill setw setw m vs setw m pvalue m fout alpha alpha m fout values ll orllval m ll orllval m endl fout endl endl optimal density for asset endl string endl showvals r cbbsol const long double fnlmdst fout fout endl endl clear vectors and free temporary memory allocations for int m delete orimdst m orimdst m delete oromdst m oromdst m delete m m delete orimdst delete oromdst delete for int c maxcmps delete orprob c orprob c delete ormean c ormean c delete orstd c orstd c delete orprob delete ormean delete orstd delete tmprtrn delete statlrt delete orllval delete delete delete return the optimal of components return cbbsol copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function thrdemalg summary this function threads the em algorithm an individual thread is used to maximize the likelihood function for random start likelihood functions for mixture pdfs have multiple local optimums and they are unbounded the mles for the parameters are those that yield the largest of the local maximums after removing spurious optimizers spurious optimizers are those where a single component is used to fit one or a small number of closely clustered observations in such cases the variance of the corresponding component becomes very small or approaches zero which drives the likelihood value to infinity spurious optimizers can be eliminated by imposing a variance ratio constraint which does not allow the ratio of the largest to smallest variance across components to exceed some given constant this will prevent a variance from approaching zero or a small number the user must set the variance ratio constraint to a value that is appropriate for the data in their application if the intent is to construct a density function that memorizes the training data then this value can be set very high if the intent is to build a density that extends well to data then this value can be set very low the global constant stdratio is set in the header file as the square root of the desired variance ratio constraint the of threads is set by the of independent processing units on the computer running the application as well as the of components in the mixture being fit and the of random starts parameter setting in the control file parameter the of random starts is equal to rs cores where rs of random starts specified by the parameter in cores of indpendent processing units on the computer running the application and g of components in the univariate mixture density being fit the of random starts therefore increases with the size of the density being fit each random start is assigned its own thread and launches the emalg function to find the likelihood maximizer based on that random start this is a unique value and reflects the local maximum nearest to the parameter settings in the random start once all threads finish the parameter settings from the random start which yields the 
largest likelihood value and obeying the variance ratio constraint are taken as the mles an empty double array is populated with these values and the maximum function value is returned at the call inputs the total of observations time points in the current application specified by the user via the control file parameter t an array of size t holding the at each time point for the asset currently being processed r the of random starts to use when finding the ml estimates for a specific mixture density specified by the user via the control file parameter note that this value is taken as a multiple of the of cores and is also increased by the multiple components so that an increasing of random starts is used as the of components increases nstrts the of components in the univariate mixture distribution used to generate random starts means for the em algorithm for example if fitting a univariate mixture distribution to an asset and an optimal univariate mixture distribution is available for that asset then a mixture distribution is used to generate outg argument to this function means as the starting point for constructing the random starts if no such mixture distribution is available for example during bootstrapping of the lrt with ho having components then a mixture will be used to generate the means for an random start note recall that a random start is built by generating random values which serve as the component means each observation is then attached to the nearest mean component the probability for that component is the attached divided by the total of observations and the standard deviation for that component is the sample standard deviation for all observations assigned to it ing a array to hold the univariate mixture distribution used to generate means for a random start this density function has ing components as mentioned above and the double array is indexed as s c where s and c component here refers to the component probability refers to the component mean and refers to the component standard deviation as noted above when generating random starts for the em algorithm a density function that is most similar to the of components being fit is desirable therefore if fitting a component mixture density and an optimal mixture density is available then it will be used to generate the random starts if an optimal mixture density is not available then a univariate normal distribution mixture can be used to generate the means for a random start inmdst the of components in the outgoing univariate mixture density that is being fit using the em algorithm outg an empty double array to hold the fitted univariate mixture distribution fit using the em algorithm the double array is indexed as s c where s and c component here refers to the component probability refers to the component mean and refers to the component standard deviation outmdist outputs this function returns the for the optimal univariate mixture fit using the em algorithm at the function call it also populates the supplied empty double array outmdist with the corresponding optimal univariate mixture distribution include long double thrdemalg const int t const long double r const int rs const int ing const long double inmdist const int outg long double outmdist local variables long double llval handle trivial case first then case if outg for int m outmdist m m transfer probabilities means and standard deviations llval getllval t r outg const long double outmdist compute else local variables find of independent processing units and 
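// ------------------------------------------------------------------
// NOTE (editorial): structure of the threading below, summarized because
// the extracted listing lost its formatting.  The total number of random
// starts scales with the control-file multiplier, the number of
// independent processing units, and the number of components being fit,
// so larger mixtures receive more starts.  Starts are launched in batches
// of threads, one emalg() call per thread with its own probability / mean /
// standard-deviation work arrays; after each batch joins, the start with
// the largest log-likelihood that also satisfies the variance ratio
// constraint is retained, while starts that violate the constraint, hit
// the iteration cap, or produce an invalid probability are discarded.
// ------------------------------------------------------------------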
create array of thread objects using the random start multiplication factor specified in the control file the of random starts specified in the control file is a multiple of the of independent processing units int p int rs boost int p long double long double p long double p long double p long double p boost boost p string cmt spc display message with details of the optimization process if dbug cout threading em algorithm will use p random starts for the current optimization endl if dbug cout note when the variance ratio constraint is violated or the maximum of iterations has been reached or an invalid probability detected endl the is set to the arbitrarily large negative cout lnegval endl cout endl create arrays to hold the random starts and return values from the em algorithm a random start must specify all unknown parameters for a mixture component probabilities means standard deviations these p arrays will be reused within each run group for int j p rprbs j new long double outg rmns j new long double outg rstds j new long double outg runprms j new int runprms j runprms j iterate over of random starts and determine the start then launch optimization calls for int j p launch call to emalg for each thread once all threads finish we scan the likelihood values across the p solutions inner loop and select the largest local optimum as the optimum the outer loop then repeats this process rs number of times t j boost emalg t boost r outg boost rprbs boost rmns boost rstds boost llvals boost inmdist boost runprms j pause until all finish and save the optimal solution from this group of runs for int j p t j report results when requested if dbug if runprms j variance ratio constraint violated solution not used else if runprms j maximum of iterations reached solution not used else if runprms j if llvals j llvals j else if llvals j llvals j else if runprms j invalid probability encountered solution not used cout outg for random start setfill setw long long p runprms j cout spc llvals j cout iterations setfill setw long long miters runprms j cmt endl retrieve optimal solution scan random starts and locate the one associated with the highest likelihood value the optimal probabilities means standard deviations are transferred to the placeholders passed to this function if no local optimum has been found then set the return code to otherwise use a return code of if j fstgrp for int c outg outmdist c c outmdist c c outmdist c c else if llvals j llval j for int c outg outmdist c j c outmdist c j c outmdist c j c if dbug cout endl free temporary memory allocations for int j p delete rprbs j rprbs j delete rmns j rmns j delete rstds j rstds j delete runprms j runprms j delete rprbs delete rmns delete rstds delete runprms delete llvals delete t report the optimal solution if dbug cout outg optimal solution llval endl if llval lnegval showvals r outg const long double outmdist cout endl variance ratio getvratio outg outmdist endl return the value corresponding to the optimal solution return llval copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function emalg summary this 
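The threaded random-start scheme just shown can be summarized in a few lines. This is a minimal sketch only: the production code uses Boost threads, per-thread work areas, and the emalg function, whereas here std::thread is used and a caller-supplied emStart callable stands in for one EM run from one random start.

#include <thread>
#include <vector>
#include <limits>
#include <functional>

// Launch one thread per random start, wait for all of them, and keep the largest local
// optimum of the log-likelihood. The number of starts scales with the control-file
// multiplier, the number of independent processing units, and the number of components
// being fit, as described above.
long double bestOfRandomStarts(int rsMultiplier, int numComponents,
                               const std::function<long double(int)>& emStart)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;                              // fall back to a single unit
    const int p = rsMultiplier * static_cast<int>(cores) * numComponents;
    if (p <= 0) return -std::numeric_limits<long double>::infinity();

    std::vector<long double> llvals(p, -std::numeric_limits<long double>::infinity());
    std::vector<std::thread> workers;
    workers.reserve(p);
    for (int j = 0; j < p; ++j)
        workers.emplace_back([j, &llvals, &emStart] { llvals[j] = emStart(j); });
    for (auto& t : workers) t.join();                       // pause until all threads finish

    long double best = llvals[0];
    for (int j = 1; j < p; ++j) if (llvals[j] > best) best = llvals[j];
    return best;
}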
function implements the em algorithm for estimating parameters of a univariate mixture pdf the observations from a univariate mixture pdf can be viewed as an incomplete data problem where the component for each point is missing or unobserved that is at each time point random variables produce a value the component and the value from it note that both sets of rvs will have parameters to be estimated if viewed this way the likelihood function can be expressed using both the missing and random variables and their corresponding parameters for example the parameters for the unobserved component random variable that each observation originates from are the component probabilities and the parameters for the observed density values are means and standard deviations from each component distribution the em algorithm then estimates parameters for both random variables iteratively as follows step select starting values for all parameters for both the and random variables step compute the expected values for all random variables using the most recent parameter estimates step replace all instances of the missing random variables with their expected values in the likelihood or function step optimize the resulting likelihood or function with respect to the parameters for the random variables step check the in likelihood or value maximized in step for convergence small or condition step if no convergence or condition is met in step then return to step starting values for step are computed as random starts for all parameter values this is acheived by first generating g values from the closest distribution to the one being fit if fitting a univariate mixture and an optimal univariate mixture is available then use the mixture to generate random starts and if no other density is available then use a mixture these g values are taken as the means and each observation is attached to the closest mean the standard deviation of the set of obserations attached to each mean is computed as the component standard deviation and the proportion of observations attached to each mean is the component probability for the random start convergence in step is checked using the global constant epsilon set in the header file in step we also check for conditions and exit the optimization if any of the following error conditions are met these are variance ratio constraint is violated control with global constant stdratio set in the header file maximum of iterations reached control with global constant miters set in the header file any component probability becomes zero or negative while iterating inputs the total of observations time points in the current application specified by the user via the control file parameter t an array of size t holding the at each time point for the asset currently being processed r the of components in the univariate mixture density being fit g an empty array of size g to hold the probability for each component in the univariate mixture this array is updated during each iteration of the em algorithm and therefore holds the optimal probabilities upon convergence which are returned in this array to the calling function note that this parameter is a double array indexed as t g where t is the thread assigned by thrdemalg which invokes this function the em algorithm is implemented within threaded calls where a random start is assigned to its own thread prbs an empty array of size g to hold the mean for each component in the univariate mixture this array is updated during each iteration of the em algorithm 
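The expectation and maximization updates described in the steps above have a simple closed form for a univariate normal mixture. The sketch below shows one iteration; the structure and names are hypothetical, and the guards the production emalg applies (variance ratio constraint, iteration cap, valid probabilities) are omitted.

#include <vector>
#include <cmath>

struct Mixture { std::vector<long double> prb, mu, sd; };   // component probability / mean / std deviation

static long double normalPdf(long double x, long double mu, long double sd)
{
    const long double pi = 3.141592653589793238L;
    const long double z = (x - mu) / sd;
    return std::exp(-0.5L * z * z) / (sd * std::sqrt(2.0L * pi));
}

// One EM iteration; returns the log-likelihood evaluated at the parameters in force
// during the E-step, which is the quantity monitored for convergence.
long double emIteration(const std::vector<long double>& r, Mixture& m)
{
    const std::size_t T = r.size(), G = m.prb.size();
    std::vector<std::vector<long double>> post(T, std::vector<long double>(G));
    long double ll = 0.0L;

    // E-step: posterior probability that observation t originates from component c.
    for (std::size_t t = 0; t < T; ++t) {
        long double mix = 0.0L;
        for (std::size_t c = 0; c < G; ++c) {
            post[t][c] = m.prb[c] * normalPdf(r[t], m.mu[c], m.sd[c]);
            mix += post[t][c];
        }
        for (std::size_t c = 0; c < G; ++c) post[t][c] /= mix;
        ll += std::log(mix);
    }

    // M-step: closed-form updates of probabilities, means and standard deviations.
    for (std::size_t c = 0; c < G; ++c) {
        long double w = 0.0L, wx = 0.0L;
        for (std::size_t t = 0; t < T; ++t) { w += post[t][c]; wx += post[t][c] * r[t]; }
        m.prb[c] = w / static_cast<long double>(T);
        m.mu[c]  = wx / w;
        long double wss = 0.0L;
        for (std::size_t t = 0; t < T; ++t) {
            const long double d = r[t] - m.mu[c];
            wss += post[t][c] * d * d;
        }
        m.sd[c] = std::sqrt(wss / w);
    }
    return ll;
}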
and therefore holds the optimal means upon convergence which are returned in this array to the calling function note that this parameter is a double array indexed as t g where t is the thread assigned by thrdemalg which invokes this function the em algorithm is implemented within threaded calls where a random start is assigned to its own thread mns an empty array of size g to hold the standard deviation for each component in the univariate mixture this array is updated during each iteration of the em algorithm and therefore holds the optimal standard deviations upon convergence which are returned in this array to the calling function note that this parameter is a double array indexed as t g where t is the thread assigned by thrdemalg which invokes this function the em algorithm is implemented within threaded calls where a random start is assigned to its own thread stds an array to hold the value that the em algorithm converges to it is the optimal value for the given random start the array is indexed by thread therefore a single call to this function generates optimal value which is returned in the element for the current thread llval a double array to hold the univariate mixture density that is used to generate the random starts this function begins by generating a random start then optimizes the function based on that start finds the nearest local maximum this double array is indexed as s c where s and c component here refers to the component probability refers to the component mean and refers to the component standard deviation inmdist an array of integer parameters holding values passed to and returned by this function element holds the of components in the mixture density that is used to generate the random starts which is parameterized in inmdist parameter to this function element is the thread of the current call element is a return code which takes the following values ratio constraint is violated of iterations reached based on change to value component probability is not between and element is the iteration of convergence or algorithm termination for one of the reasons decoded in element rprms outputs this function does not return a value at the call but updates several empty arrays that are supplied by the user the prbs array is updated with the final estimated component probabilities the mns array is updated with the final estimated component means the stds array is updated with the final estimated standard deviations the llval array is updated with the final value and the rprms array is updated at element with the functions return code and element with the iterations before convergence or termination include void emalg const int t const long double r const int g long double prbs long double mns long double stds long double llval const long double inmdist int rprms local variables const int int stop long double oldllval newllval minstd maxstd cprbs ssqrs var long double t psum long double long double generate g random obs from solution with rprms components which are specified in inmdist use these means to generate probabilities and standard deviations for each random start do getrvals g ing const long double inmdist mns thrd getrprbsstds t r g prbs thrd mns thrd stds thrd check that none of the probabilities are zero in the random sample and that the variance ratio constraint is not violated in the random sample note that any component having standard deviation will disqualify that sample generate a new sample if a probability is zero or the variance ratio constraint is 
violated for int c g if prbs thrd c if stds thrd c minstd thrd c if stds thrd c maxstd thrd c if maxstd stdratio minstd while stop store random start and updated mixture distribution as array for debugging and compliance with other functions for int m ormdst m long double g umdst m long double g for int c g ormdst c thrd c ormdst c thrd c ormdst c thrd c store component likelihood x component probability in pdens for each using the current solution store the mixture likelihood value in mdens for each observation using the current solution these are needed to implement the updating equations the for the initial random start parameters is also computed here and used below pdens new long double t mdens new long double t oldllval long double t for int t t pdens t new long double g mdens t for int c g pdens t c thrd c getndens r t mns thrd c stds thrd c mdens t t pdens t c oldllval mdens t iterate using the em algorithm component probabilities are updated first and independently of the int do get updated component probabilities means and standard deviations store in temporary placeholders the mean and standard deviation update formulas will not work as written when any component probability is zero because there will be a division by zero if this happens end the optimization with an error for int c g derive the posterior probabilities for this component along with the component probabilities umdst c if c derive component manually if not at last component for int t t postprbs t pdens t c t umdst c umdst c postprbs t umdst c umdst c psum psum umdst c else final component probability is sum of all others for int t t postprbs t pdens t c t umdst c psum exit the optimization with appropriate code if any single probability is not between and if umdst c umdst c llval thrd rprms rprms else new updating equations faster processing c t umdst c t r postprbs cprbs ssqrs ssqrs umdst c umdst c cprbs updating equations can result in zero but stored value is negative variance if it happens the variance ratio constraint is automatically violated if var umdst c var else llval thrd rprms rprms find maximum and minimum stdev values the em algorithm will stop when the ratio of largest to smallest variance exceeds a constant variable is stdratio set in the header file this prevents an unbounded likelihood value and once the constraint is violated we conclude that the solution is spurious the likelihood will be set to a large negative to ensure it is never the maximum across random starts we also stop when the maximum of iterations exceeds some value miters set in the header file or when any component probability is not between and if stop for int c g if umdst c minstd minstd umdst c if umdst c maxstd maxstd umdst c if maxstd stdratio minstd llval thrd rprms rprms else if itcntr miters llval thrd rprms rprms if the variance ratio constraint is not violated then proceed as usual if stop transfer values to permanant placeholders variance ratio constraint has not been violated for int c g prbs thrd c c mns thrd c c stds thrd c c store the component likelihood x component probability in pdens for each using the updated solution store the mixture likelihood value in mdens for each observation using the updated solution these are to implement the updating equations the for the initial random start parameters is also computed here and used below newllval long double t for int t t mdens t for int c g pdens t c thrd c getndens r t mns thrd c stds thrd c mdens t t pdens t c newllval mdens t terminate the algorithm when 
criteria is met if epsilon abs oldllval llval thrd rprms rprms else while stop write files for debugging if rprms ofstream fout errfolder long long g long long thrd fout maximum of iterations miters has been reached for this optimization check the solution below endl endl fout new value newllval endl fout epsilon epsilon endl endl fout endl original observation vector and parameter starting values for this emalg call endl endl for int x t fout r x endl fout endl for int cc g fout prob cc ormdst cc mean cc ormdst cc cc ormdst cc endl else if rprms ofstream fout errfolder long long g long long thrd fout probability of or has been encountered and updating equations will not work check the solution below endl endl fout original observation vector and parameter starting values for this emalg call endl endl for int x t fout r x endl fout endl for int cc g fout prob cc ormdst cc mean cc ormdst cc cc ormdst cc endl delete temporary memory allocations for int t t delete pdens t pdens t delete pdens delete mdens for int m delete ormdst m ormdst m delete umdst m umdst m delete ormdst delete umdst delete postprbs copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getrvals summary this function accepts a mixture distribution as input and generates a random sample of observations from that distribution the of observations to generate is specified by the user in the call and the sample is placed into an empty array supplied to this function by the user generating a random sample from a mixture distribution is a process generate a uniform random value and compare it to the component probabilities to determine the component and once the component is selected generate an observation from the corresponding component density let p i be the probability for component i and let there be a total of c components then if the uniform random value u p then component is selected else if u p then component is selected else if u p then component is selected etc once a component is selected an observation is generated from the corresponding density function the result is an observation generated from the supplied mixture distribution here there are reasons for generating random observations from a univariate mixture distribution as described below in this application univariate mixture distributions are fit using the em algorithm with random starts a random start must specify a value for all parameters in the given univariate mixture density which is of known size but with unknown parameters the em algorithm continues to increase the likelihood function until a local optimum is found based on the given parameter settings from the random start to optimize a mixture density when a component mixture density is available from the same data set we generate a single random start by producing g random observations from the component density these g values are taken as means for the mixture the observations are then assigned to the component with nearest mean by using a standard distance computation once all observations are assigned to the closest mean the standard deviation for each 
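The construction of a random start from a set of g randomly generated means, described here and completed just below, amounts to a nearest-mean classification followed by per-cluster counts and dispersions. A minimal sketch with hypothetical names; the production code performs this step in getrprbsstds.

#include <vector>
#include <cmath>

// Attach each observation to its nearest mean; the component probability is the attached
// count divided by the total count, and the component standard deviation is the sample
// standard deviation of the attached observations about the (known) component mean.
void startFromMeans(const std::vector<long double>& r,
                    const std::vector<long double>& means,
                    std::vector<long double>& prb,
                    std::vector<long double>& sd)
{
    const std::size_t T = r.size(), G = means.size();
    std::vector<long double> cnt(G, 0.0L), ssq(G, 0.0L);
    for (std::size_t t = 0; t < T; ++t) {
        std::size_t best = 0;
        for (std::size_t c = 1; c < G; ++c)
            if (std::fabs(r[t] - means[c]) < std::fabs(r[t] - means[best])) best = c;
        cnt[best] += 1.0L;
        ssq[best] += (r[t] - means[best]) * (r[t] - means[best]);
    }
    prb.assign(G, 0.0L);
    sd.assign(G, 0.0L);
    for (std::size_t c = 0; c < G; ++c) {
        prb[c] = cnt[c] / static_cast<long double>(T);
        sd[c]  = cnt[c] > 0.0L ? std::sqrt(ssq[c] / cnt[c]) : 0.0L;
    }
}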
component mean is computed by taking the standard deviation of the corresponding set of observations assigned to that mean the probability assigned to each component is the of observations assigned to the component divided by the total of observations in the data set at this point values for all parameters g means g standard deviations g component probabilities have been derived and the em algorithm can be applied starting at the given point in the parameter space the em algorithm then climbs to the top of the nearest hill and declares a local optimum the maximum likelihood estimators would be the set of parameters that yields the largest value for the likelihood function amongst the set of local optimums found via a large of random starts the likelihood ratio test lrt is used to select the optimal of components for a univariate mixture density fit to a given observation set the null hypothesis is that components are optimal vs the alternative of ga components being optimal here ga by fitting univariate mixtures of both sizes and ga to the data we can generate a single value for the lrt statistic the value is compared to the null distribution of the lrt statistic which is the distribution of the lrt under the assumption that a component mixture is the appropriate size the distribution of the lrt under is not known and is not asymptotically due to the relevant regularity conditions not being satisfied we can estimate the null distribution of the lrt by bootstrapping see mclachlan to bootstrap the lrt distribution we generate a random sample from the distribution specified by of the same size as our data set note that does not specify a particular univariate mixture but rather just a of components namely we take the ml estimates of our data under components as the distribution governed by the null hypothesis and this distribution is used to generate the sample of observations this sample is then fit to a univariate mixture under both and ga components by applying the em algorithm using random starts and a single value of the lrt statistic is produced by repeating the process a large of times we can approximate the null distribution of the lrt statistic we then compare the value derived from our data set and reject the null hypothesis when the lrt is large where the critical point is determined by the user choice of alpha type error for the test type error probability probability that the null hypothesis is rejected when it is true this function is used to generate the random samples used for the lrt just described inputs the of random observations to generate from the supplied univariate mixture density the observations are inserted into an empty array of the same size that is supplied by the user as the last parameter to this function n the of components in the univariate mixture density that a sample will be generated from g a double array holding the univariate mixture distribution definition from which a sample of size n will be generated the array is indexed as s c where s and c component here refers to the component probability refers to the component mean and refers to the component standard deviation inmdst an empty array of size n to be populated by this function as a random sample from the supplied univariate mixture distribution rvls outputs this function populates an empty array with a random sample of size from the supplied univariate mixture distribution no value is returned at the function call include void getrvals const int n const int g const long double inmdist long double 
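The two-step sampling rule described above (a uniform draw selects the component through the cumulative component probabilities, then the observation is drawn from that component's normal density) can be sketched as follows. The flat prb/mu/sd layout is a simplification of the double-array indexing getrvals uses, and the function name is hypothetical.

#include <random>
#include <vector>

std::vector<long double> sampleMixture(int n,
                                       const std::vector<long double>& prb,
                                       const std::vector<long double>& mu,
                                       const std::vector<long double>& sd)
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<long double> unif(0.0L, 1.0L);
    std::vector<long double> out(n);
    for (int i = 0; i < n; ++i) {
        // Step 1: select the component whose cumulative probability first covers u.
        const long double u = unif(gen);
        long double cum = 0.0L;
        std::size_t cid = prb.size() - 1;
        for (std::size_t c = 0; c < prb.size(); ++c) {
            cum += prb[c];
            if (u <= cum) { cid = c; break; }
        }
        // Step 2: draw one observation from the selected component's normal density.
        std::normal_distribution<long double> ndist(mu[cid], sd[cid]);
        out[i] = ndist(gen);
    }
    return out;
}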
rvls generate n random observations from the current optimal solution that uses g components these can be the means to use as a random start for fitting a specific model or observations used to approximate the null distribution of the lrt statistic rd gen rd long double ndist new long double g long double udist define the array of normal distribution objects one for each component for int c g ndist c long double inmdist c inmdist c generate n observations from the existing mixture distribution int cid long double uval psum for int i n initialize variables generate uniform random value uval udist gen find the corresponding component for int c g cid if uval psum c else psum psum inmdist generate a single obs from that component and store in array provided rvls i ndist cid gen free up temporary memory allocations delete ndist copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getrprbsstds summary this function accepts a set of t observations along with a set of g means for a mixture distribution and it derives corresponding values for the component probabilities and standard deviations each observation is assigned to a single component mean and the of observations assigned to a component divided by the total of observations is the corresponding estimate for that component probability observations are assigned to components using a simple distance function and specifically each observation is assigned to the component with nearest mean the standard deviation for each component is then estimated as the sample standard deviation of the set assigned to that component assuming the mean is known empty arrays of size g are supplied to this function to hold the set of component probabilities and set of standard deviations respectively combined with the existing set of g means derived as a random sample via the function getrvals these arrays mean standard deviation and component probability completely define a mixture distribution if the means were generated as a random sample then it defines a single random start for fitting a mixture distribution using the em algorithm inputs the total of time points with data collected t an array holding the set of observations returns for the asset being processed r the of means components in the array provided by parameter used to generate a random start for a mixture density g an empty array of size g to be populated by this function as component probabilities for the univariate mixture density being constructed prbs an array of means components to use as the basis for generating a mixture distribution mns an empty array of size g to be populated by this function as standard deviations for the univariate mixture density being constructed stds outputs this function populates two empty arrays of size g supplied prbs stds with component probability estimates and standard deviation estimates for each of the g components defined by the means supplied in mns no value is returned at the function call include void getrprbsstds const int t const long double r const int g long double prbs const long double mns long double stds 
local variables long double long double g mindist int cid int g initialize component counter and arrays to all zeros for int c g ccntr c cssqrs c iterate over all observations and classify each into the component whose mean it is closest to for int n t r n for int c g if abs r n c mindist r n c cid cssqrs cid cid compute the estimated probabilities and standard deviations mles for int c g prbs c long double ccntr c long double t stds c sqrt long double cssqrs c c delete temporary memory allocations delete ccntr delete cssqrs copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function asgnobs summary this function assigns each observation which is a time point in this application to one of the univariate mixture components for a given asset each asset has been fit to a univariate mixture distribution containing a certain of components each component can be viewed as a generator of observations for that asset with the corresponding component probability that the observation originates from any single component density bayes decision rule is used to assign each observation to a corresponding component by computing the posterior probability that each observation originates from each component the observation is then assigned to the component with the highest posterior probability this function performs that task the user supplies an empty array for the given asset of size equal to the of time points inserted into the array at each position is the component that the observation most likely originates from and therefore is assigned to inputs an empty array of size t that will hold the component that the observation is assigned to determined by this function inary the total number of time points with data collected t an array of size t holding the returns for the asset being processed r the number of univariate components for the asset being processed g the optimal univariate mixture distribution fit for the given asset as a array indexed as c s where c component and s here refers to the component probability refers to the component mean and refers to the component standard deviation inmdst outputs this function populates an empty array of size t that is supplied with the component that the given observation most likely originates from bayes decision rule is used and the observation is assigned to the component with highest posterior probability this function returns no value at the call include void asgnobs int inary const int t const long double r const int g const long double inmdst declare local variables long double maxprob tmpprob assign each observation per time point to the component with highest posterior probability bayes rule for int t t for int c g r t g const long double inmdst c if tmpprob maxprob inary t copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the 
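Bayes decision rule as used by asgnobs reduces to an argmax of prior-weighted densities, since the normalizing constant is common to all components at a given time point. A minimal sketch with hypothetical names follows.

#include <vector>
#include <cmath>

static long double normDens(long double x, long double mu, long double sd)
{
    const long double pi = 3.141592653589793238L;
    const long double z = (x - mu) / sd;
    return std::exp(-0.5L * z * z) / (sd * std::sqrt(2.0L * pi));
}

// Assign each observation to the component with the highest posterior probability,
// i.e. the component maximizing (component probability) x (component density).
std::vector<int> assignToComponents(const std::vector<long double>& r,
                                    const std::vector<long double>& prb,
                                    const std::vector<long double>& mu,
                                    const std::vector<long double>& sd)
{
    std::vector<int> cid(r.size(), 0);
    for (std::size_t t = 0; t < r.size(); ++t) {
        long double best = -1.0L;
        for (std::size_t c = 0; c < prb.size(); ++c) {
            const long double score = prb[c] * normDens(r[t], mu[c], sd[c]);
            if (score > best) { best = score; cid[t] = static_cast<int>(c); }
        }
    }
    return cid;
}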
implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function mapcells summary a multidimensional grid is formed using the components from the univariate marginal mixture densities for all assets if there are a assets with asset i having c i univariate mixture components for then the multidimensional grid used as the basis for the multivariate density will have a total of c xc xc x xc cells this function converts the multidimensional grid to a single holding all cells each value in this represents one cell in the grid and therefore contains a set of components one per asset for example the first element of this list contains all assets at their first component the element of this list contains all assets at their first component but the final asset at it component the element of this list contains all assets at their first component but the last asset at its component etc the list is ordered as a design matrix for a full factorial experiment from left to right with values to the right repeatedly cycling through all of their components for each set of values to the left the function populates an empty list that is supplied to keep track of which component levels are used in any given cell a term is added to the list making it a array indexed as c a where c unique cell and a asset that is element would indicate the component of the first asset within the first cell of the multidimensional grid and element would indicate the component level for the asset within the first cell of the multidimensional grid if there are a total of assets then elements at positions would indicate the set of components for each of the assets within the cell of the multidimensional grid essentially this function converts the multidimensional grid to a list which is easier to manage and each cell in the list contains a term to identify the contents of the list item which is a single cell example suppose there are assets with c c c components in the respective univariate mixture densities the multidimensional grid is formed by crossing all univariate components across assets and will contain total cells this grid forms the basis for building the multivariate mixture density a list of length will be used to represent each cell and the components that are contained within each cell as follows cell id asset asset asset array index for example last row is defined as incellary incellary incellary note that the list can be derived by starting with the last asset and repeatedly cycling through all component levels then proceeding to the last asset and cycling through all component levels for each set just defined then proceeding to the last asset and cycling through all component levels for the sets defined to the right this is the strategy used to convert the multidimensional grid to a single list this function is recursive with a single call for each asset that call cycles through all levels of the given asset invoking a call for the asset to the right at each level when at the final asset no additional recursive calls are made and all levels of that asset are posted in this manner the function begins at asset burrows inward to asset then expands outward back to the asset when the debug level is set to a value all details of the mapping are printed for review similar to the table shown above inputs the total of cells in the multidimensional grid which is c xc xc x xc where there are a assets with asset i having c i components in the 
corresponding univariate mixture density totcells the number of assets in the current application numa a array that is populated by this function indexed as c a where c unique cell and a asset here both c and a begin at the value contained is the component level of asset a within unique cell incellary the current asset that is being processed this function processes each asset separately and iterates over all of its component levels at each component level the function recursively invokes itself for the next asset which similarly processes each component in order curast an array to hold the of components for each asset in their respective univariate mixture density this array will hold the values for c i as described above incmps an integer value to hold the current unique cell value begins at and ends at after each cell is defined and written to the incellary array this value is incremented cid an array to hold the component level being processed for each asset this function starts at the first asset and iterates over all c levels of the corresponding univariate mixture at each level the function is invoked recursively to process the next asset the function then iterates over all c levels of the asset and recursively invokes itself to process the next asset when at the final asset a cell is completely defined and the result is appended to the list this array is persistant and of size numa to hold the current component level being processed for each asset when at the final asset final recursive call the numa components held in this array defines the given unique cell tmpary outputs this function converts the multidimensional grid formed by crossing all assets and their univariate component levels to a single list where each element in the list defines one cell of the multidimensional grid it is more straightforward to navigate the grid in this manner the list has elements first is the unique cell and is the array of component levels that define the given cell this function is recursive invokes itself and returns no value at the call include void mapcells const int totcells const int numa int incellary const int curast const int incmps int cid int tmpary output cell mappings when debugging mode is on if curast dbug cout string endl cell mappings endl string endl iterate over all components of the current asset at each component recursively invoke this function to process the next asset for int i incmps curast store the component level for the current asset tmpary curast recursive call if not processing the final asset if processing final asset populate the array and increment the cell counter if curast mapcells totcells numa incellary incmps cid tmpary else if dbug cout cell setfill setw long long totcells cid for int j numa if dbug cout asset tmpary j incellary cid j tmpary j if dbug cout endl cid cid copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getcell summary this function accepts a set of component levels one per asset and returns the unique cell id from the multidimensional grid that contains this set the grid contains a cell for each 
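The recursion used by mapcells is a standard full-factorial enumeration: the call for one asset cycles through that asset's component levels and recurses on the next asset, so levels of assets to the right change fastest. The sketch below uses hypothetical names and zero-based levels, whereas the production arrays may index components differently.

#include <vector>

// Flatten the multidimensional grid into a list of cells; each completed combination of
// per-asset component levels becomes one row of the output.
void enumerateCells(const std::vector<int>& ncmps,           // components per asset
                    std::vector<std::vector<int>>& cells,    // out: one row per grid cell
                    std::vector<int>& current,               // levels chosen so far
                    std::size_t asset = 0)
{
    if (asset == ncmps.size()) {      // all assets fixed: one complete cell is defined
        cells.push_back(current);
        return;
    }
    for (int level = 0; level < ncmps[asset]; ++level) {
        current[asset] = level;
        enumerateCells(ncmps, cells, current, asset + 1);    // burrow inward to the next asset
    }
}

// Usage: three assets with 2 components each give 2 x 2 x 2 = 8 cells, the last asset's
// level cycling most rapidly across consecutive cells.
//   std::vector<int> ncmps{2, 2, 2}, current(3);
//   std::vector<std::vector<int>> cells;
//   enumerateCells(ncmps, cells, current);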
combination of univariate mixture components across all assets if no match is found a value of is returned this function iterates over the array that is supplied until a match is found then returns the cell id and exits the array supplied is indexed as c a where c unique cell and a asset inputs array as c a where cell here both c a begin at the value contained is the component level of asset a within unique cell c incellary the total cells in the multidimensional grid c xc xc x xc there are a assets with asset i having c i components in the corresponding univariate mixture pdf totcells an array of size numa containing the set of asset component levels we are attempting to match the unique cell for the match is returned at the call cmplvls the number of assets in the current application numa outputs this function searches for a match on a set of numa asset component levels and returns the unique cell id the cell id ranges from return if no match is found include int getcell const int incellary const int totcells const int cmplvls const int numa iterate over all mapped values and find the match int cntr for int i totcells for int j numa if incellary i j cmplvls j check for match then return the cell position of the match if cntr numa if dbug cout cell i endl return i match return no match copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function solvelp summary this function solves linear programs lps which become feasible initial solutions for maximizing the multivariate mixture distribution likelihood the univariate mixture densities for each asset have already been fit using the em algorithm and then combined into a grid where the grid levels in each dimension are the corresponding component combinations this multidimensional grid forms the basis for the multivariate density function which is also a mixture pdf each cell in the multivariate grid defines a unique combination of assets and their components using bayes decision rule we can assign the observation for a given asset at each time point to a corresponding univariate component based on the component with highest probability of membership using these individual component memberships we combine them and assign each multivariate observation to a single cell in the multidimensional grid the probability of an observation originating from that grid cell is then the of observations in the given cell divided by the total of observations or time points we now have an estimated probability that a new observations originates from each grid cell refer to this estimate as ek applicable to cell note that each grid cell defines a multivariate density function using the corresponding means and variances for the that define the cell but at this point all covariances are undefined zero an important aspect of this research is that we must maintain the univariate marginals that have already been fit to accomplish this the sum of probabilities for each cell containing a given must equal the corresponding probability for that component in the univariate density this implies that we can use linear constraints on the 
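Each marginal constraint is simply a 0/1 row over the grid-cell probabilities: a coefficient of 1 for every cell whose level for the given asset equals the given univariate component, with the right-hand side equal to that component's probability in the already-fitted univariate mixture. A minimal sketch with hypothetical names:

#include <vector>

// Build one row of the constraint matrix A for (asset, component); pair it with the
// right-hand side entry of b equal to that component's univariate probability.
std::vector<double> marginalConstraintRow(const std::vector<std::vector<int>>& cells,   // cell -> per-asset level
                                          std::size_t asset, int component)
{
    std::vector<double> row(cells.size(), 0.0);
    for (std::size_t c = 0; c < cells.size(); ++c)
        if (cells[c][asset] == component) row[c] = 1.0;
    return row;
}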
grid cell probabilities to maintain the marginals since the sum of all grid cell probabilites must equal there will be univariate components constraints needed to maintain the univariate marginal for each asset once these constraints are enforced the final sum of probabilities constraint for each component is automatically enforced by the fact that the sum of all probabilities equals this means that there will be a total of total of components across all assets total of assets constraints required to maintain the marginals a set of linear equality constraints can be formulated in matrix notation as ax b here a is a matrix with one row holding the coefficients or for a single constraint and the matrix a has a column for each decision variable single grid cell probability which is a probability for the multivariate mixture density we will refer to a as the lhs constraint matrix and b as the rhs constraint vector the vector b will hold the component probabilities from the univariate marginals with only the first components needed since the probability for the final component within each asset is automatically enforced by the last row of a final constraint that the sum of all probabilities equals note that the lhs constraint matrix should be of full row rank meaning that we include the minimum of constraints needed to enforce the marginals the rows of a must be linearly independent this requirement is needed for a future optimization that uses this matrix the cell probabilities that have been estimated using the data the ek via bayes decision rule see above will generally not satisfy the marginal constraints in ax b therefore these estimates will not in general maintain the marginals the purpose of this function is to find cell probabilities that do maintain the marginals and that are in some way as close as possible to the estimated probabilities we offer methods here first to formulate the problem we assign an unknown decision variable to each unique cell in the multidimensional grid and that represents the true probability of membership in that cell these decision variables are then estimated using the following lp objectives minimize the maximum distance between all estimated unique cell probabilties and the decision variable that represents each cell and minimize the sum of squared distances between the decision variables and the estimated unique cell probabilities lp is a classic minimax objective of the form min max where there are u unique cells in the multidimensional grid and pk is the decision variable true cell probability for cell k and ek is the corresponding estimated cell probability the distance between the two is in lp our objective is to select the pk u that minimizes the maximum of these distance values such that the constraints ax b are satisfied the marginal densities are maintained as written the objective in lp contains absolute values and therefore is not linear however it can be rewritten as an equivalent linear program for example note that the objective can be rewritten as min max since or for u the absolute values have now been removed next let z max then the objective becomes min z and note that the following inequalities must hold z z z z z z this is because z is the maximum of a set of values therefore it must be all members of the set further since the objective is to minimize z it must take one of the values that bounds it below at an optimal solution using the new objective and added constraints the problem is now an equivalent linear program lastly it will 
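In standard notation the linearization just derived reads as follows (a sketch only; p_k are the unknown cell probabilities, e_k the estimated cell probabilities, U the number of cells, and Ap = b the marginal constraints; the feasibility-factor penalty terms described further below are omitted here):

\[
\begin{aligned}
\min_{z,\;p}\quad & z\\
\text{subject to}\quad & z \ge p_k - e_k, \qquad z \ge e_k - p_k, \qquad k = 1,\dots,U,\\
& A p = b, \qquad p_k \ge 0 .
\end{aligned}
\]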
be important to keep the multivariate density as parsimonious as possible meaning carrying fewer unique cells into the multivariate mixture density is desirable fewer components translates to fewer parameters and also we will run into problems when optimizing the multivariate mixture likelihood when a component is included that generates a zero likelihood value for the set of observations at all time points a future computation will see the component probabilities prefixed by the corresponding likelihood value in the objective if the likelihood of this multivariate component density is zero for all time points then the decision variable is effectively removed from the problem and the corresponding hessian will not be of full rank this will be a problem when applying newton method for example to prevent components with zero likelihood from being carried merely to satisfy the constraints we will add a decision variable to each unique cell in the multidimensional grid that penalizes the objective function when a cell with zero observations is included in the lp solution this will help to guarantee that we only keep cells that contain actual data points and it prevents likelihoods from being zero at all time points lp is similar to lp but the objective is to minimize the sum of squared distances between the actual and estimated probabilities subject to the marginal constraints that is in lp the objective takes the form min note that each term in this sum u is quadratic and concave in pk and centered at ek furthermore no decision variable exists in more than term of the sum this objective is known as a quadratic program that is separable which by definition means it is not linear it can however be approximated arbitrarily close by a linear program since pk for u we will define a set of decision variables to approximate each concave quadratic term in the sum over this pk range the approximation will consist of line segments that trace out the term for the range pk we first set the number of line segments to use when tracing out the curve and this is done via the global constant dlvl defined in the header file current next we compute the pk values on the horizontal axis the probability axis that are equidistant and cover the region between and note that there will be such points at the current setting of line segments these points will be they are fixed once dlvl is known and they also do not change per term in the sum that is these points on the probability axis are used to trace out each concave term in the objective once these points are known we can compute the function evaluated at each point for example considering the first term in the sum the function evaluated at each point is these values are fixed and constant once dlvl is determined and the estimates ek are known but they do change per term in the sum if there are a total of u components then there will be u such quantities defined connecting the dots of these function values will trace out the curve in lp the decision variables were the true probabilities for each unique cell but in lp the decision variables are alpha values for each term in the sum that are and sum to each such alpha variable is attached to a segment boundary on the horizontal axis a probability value pk is then defined by using the alpha variables that bound the pk value for example we can create the value by using where and and all other alpha values that is in general we define note that the alpha variables are specific to a component of the sum lastly the 
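The breakpoint and function-value constants used by this approximation are straightforward to compute. In the sketch below the probability axis [0,1] is split into dlvl equal segments, giving dlvl+1 breakpoints; for a cell with estimated probability e_k the quadratic term is evaluated at each breakpoint, and a cell probability is later recovered as the alpha-weighted sum of breakpoints. Names are hypothetical stand-ins for the epnt and fpnt arrays used by solvelp.

#include <vector>

// Equally spaced breakpoints on [0,1]; these are fixed once dlvl is chosen and are shared
// by every cell.
std::vector<double> breakpoints(int dlvl)
{
    std::vector<double> epnt(dlvl + 1);
    for (int s = 0; s <= dlvl; ++s) epnt[s] = static_cast<double>(s) / dlvl;
    return epnt;
}

// Quadratic distance term evaluated at each breakpoint; these values change per cell
// because the estimate e_k changes.
std::vector<double> quadraticAtBreakpoints(const std::vector<double>& epnt, double ek)
{
    std::vector<double> fpnt(epnt.size());
    for (std::size_t s = 0; s < epnt.size(); ++s)
        fpnt[s] = (epnt[s] - ek) * (epnt[s] - ek);
    return fpnt;
}

// Recover a cell probability from its alpha decision variables (which are nonnegative and
// sum to one within the cell).
double probabilityFromAlphas(const std::vector<double>& alpha, const std::vector<double>& epnt)
{
    double pk = 0.0;
    for (std::size_t s = 0; s < alpha.size(); ++s) pk += alpha[s] * epnt[s];
    return pk;
}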
quadratic objective component is estimated arbitrarily close by where ek is the known cell estimate these constant multipliers were defined above and are stored in variables now both the objective function and constraints are linear in the u alpha decision variables within a component all alpha variables must sum to that is once the pk are defined we use these variables to build the constraints that maintain the marginals that is ax b we now have a linear objective with linear constraints that approximates the quadratic separable program linear programs can be solved fast for a global solution using the simplex algorithm here we use the free library of functions to solve both lps variable summary minimax objective total of decision variables defined in the array vbl totcells vbl vbl will hold the totcells totcells probability decision variables pk for u as detailed above these are referenced in the lp as column totcells vbl totcells vbl will hold the totcells totcells totcells feasibility factor decision variables fk for u these are referenced in the lp as column totcells one constraint is that the true cell probability value of the decision variable must be the total of observations that fall in a cell plus the cell corresponding feasiblity factor thus when a cell with zero actual observations assigned to it needs a probability for an optimal solution this feasibility factor must be set to a positive value to satisfy the constraint in the objective we then add each of these feasibility factors multiplied by the large constant k set in the header file this applies a penalty to the objective when a cell with no observations is included in the optimal solution and it makes the occurence rare as noted above we want to avoid including grid cells that have a zero likelihood across all time points since it will lead to the hessian of a future optimization being vbl totcells holds the objective function decision variable this is referenced in the lp as column minimum sum of squared distances objective total of decision variables defined in the array vbl totcells vbl vbl dlvl will hold the alpha values which sum to within the unique cell of the multidimensional grid say the corresponding optimal probability value is then derived using the values combined with the corresponding segment endpoints vbl vbl will hold the alpha values which sum to within the unique cell of the multidimensional grid say the corresponding optimal probability value is then derived using the values combined with the corresponding segment endpoints etc cont below vbl vbl totcells will hold the alpha values which sum to within the last unique cell of the multidimensional grid say the corresponding optimal probability value is then derived using the values combined with the corresponding segment endpoints vbl totcells vbl totcells will hold the single feasiblity factor assigned per cell the corresponding cell probability must be the observations which are assigned to that cell this variable when there are observations in a cell and it is required to be in an optimal solution then the feasibility factor must be forced to a positive value the objective function contains a term for each feasibility factor multiplied by a large constant see in the header file this serves as a penalty on the objective when a grid cell with no observations from the actual data is included in the optimal solution we want to avoid such solutions whenever possible since it can lead to the hessian matrix of an upcoming optimization being non full rank 
inputs the total of unique cells in the multidimensional grid formed by combining all components across the estimated univariate mixture density functions for all assets for example if there are a total of assets being considered and the univariate mixture densities have levels respectively the complete multidimensional grid will have unique cells this function provides methods for determining which cells are important and needed in the multivariate mixture density totcells the total number of assets being considered in the problem numa a double array indexed as c a with c being a unique cell id values are from and a being an asset id values are from the value held at this position is the univariate mixture component level for asset a within unique cell c recall that the multidimensional grid is formed by crossing all with all other here component refers back the the univariate mixture density for the asset incellary an array of size numa holding the of univariate mixture components for each asset index a of this array will return the number of components that were needed to fit the univariate mixture for component a range is to cmps a double array indexed as a g with a being an asset id from and g being a component id for asset a g ranges from cmps a the corresponding univariate mixture component probability is stored at the indexed position prbs an array of size totcells that holds the number of observations time points that are assigned to the given unique cell of the multidimensional grid observations are assigned to specific components of an asset using bayes decision rule that is they are assigned to the component with highest probability of membership once a time point has been processed it is assigned to a component for each asset and this defines the cell of the multidimensional grid that it is assigned to for example ncellobs implies that there are observations which fall into unique cell id ncellobs an array of estimated cell probabilities ek totcells these are derived as ncellobs c and drive both lp optimizations cellprob an empty array of true probabilities estimated by the lps and populated by this function this array will be of size totcells outprbs the type of lp to use for a given function call where minimax objective minimum sum of squared distances ssd objective type outputs this function counts the number of probabilities in the array outprbs and returns this value at the call this function also populates the empty array outprbs with the estimated true probability for each unique cell which is close to the estimated ek values using bayes decision rule but that satisfies the marginal constraints include int solvelp const int totcells const int numa const int incellary const int cmps const long double prbs const int ncellobs const long double cellprob double outprbs const int type initialize local variables variable of decision variables for the given lp and it depends on type lprec int int totcells int dvars j double double dvars objval double double string strlabel char char dvars build lp model to derive the joint density with zero covariances lp dvars if lp null cout error lp model did not build something is wrong with the setup type endl exiting solvelp endl exit add labels to the decision variables include the cell index and values for minimax objective include the cell index and alpha index for minimum squared distance objective if type minimax objective for int c totcells long long c for int a numa a long long c long long incellary c a if a vlabels c char vlabels 
c lp vlabels c now label the corresponding feasibility factor the decision variables come in probability feasibility factor pairs strlabel vlabels char vlabels lp vlabels z vlabels char vlabels lp int dvars vlabels else if type sum of squared distances objective for int c totcells for int s build array of constants for the piecewise linear function endpoints there are dlvl segments therefore endpoints and these do not change by cell build an array of corresponding function endpoints which do change by cell since the est prob changes if c epnt s double s fpnt c epnt s double cellprob c labels for alpha using minimum squared distance objective long long c long long s vlabels c char vlabels c lp c vlabels c label feasibility factor using minimum squared distance objective long long c for int a numa a long long c long long incellary c a if a vlabels totcells c char vlabels totcells c lp totcells vlabels totcells c add marginal constraints on the cell probabilities sum of all probabilities attached to an must equal that probability for each asset with g components once the first constraints have been satisfied the constraint is set since probabilities sum to we have not added the sum to constraint thus will keep all g for each component lp true for int a numa for int g cmps a for int c totcells if incellary c a g if type minimax objective vnum j vbl else if type sum of squared distances objective for int s vnum j c vbl s if lp j vbl vnum eq double prbs a g cout error lp issue marginal probability constraint for a g failed to load type endl exiting solvelp endl exit add feasibility constraints on the cell probabilities when using a minimax or minimum squared distance objective when an individual cell has zero observations assigned to it force the cell probability to zero relax if there is no feasible solution for int c int totcells if type vnum j vbl vnum j vbl else if type for int s vnum j vbl s vnum j vbl if lp j vbl vnum le double ncellobs c cout error lp issue feasibility factor constraint for c failed to load type endl exiting solvelp endl exit add inequality constraints on the minimax objective function objective is transformed into an lp using appropriate inequality constraints if type for int c int totcells first absolute value constraint for the objective function pertaining to this cell probability vnum j int dvars vbl vnum j vbl if lp j vbl vnum ge cellprob c cout error lp issue minimax objective function absolute value constraint for c failed to load endl exiting solvelp endl exit second absolute value constraint for the objective function pertaining to this cell probability vnum j int dvars vbl vnum j vbl if lp j vbl vnum ge cellprob c cout error lp issue minimax objective function absolute value constraint for c failed to load endl exiting solvelp endl exit the sum of squared distances objective requires a constraint that the decision variables sum to within each cell if type for int c int totcells for int s vnum j c vbl if lp j vbl vnum eq cout error lp issue sum of squared distances constraint that sum of decision variables within c failed to load endl exiting solvelp endl exit add minimization objective output the entire lp formulation when requested lp false if type minimax objective vnum j int dvars vbl for int c totcells vnum j vbl else if type sum of squared distances objective for int c totcells for int s vnum j vbl c vnum j vbl if lp j vbl vnum cout error lp issue objective failed to load type endl exiting solvelp endl exit lp if dbug note uncomment to write out full lp 
details when debugging note output can be large cout string endl lp details endl string endl lp stdout solve the lp and retrieve the results lp important if solve lp optimal cout error lp issue no solution found something has gone type endl exiting solvelp endl exit get the objective function value as well as the value of the unknown probabilities that define the multivariate density output the values lp lp vbl compute the probabilities for minimax these are the values from the first total cells decision variables for min ssd objective these are the weighted sum of the decision variables for int c totcells if type outprbs c vbl c else if type outprbs c for int s outprbs c outprbs c vbl c epnt s write out the lp estimated probabilities along with the empirical values and corresponding distance between the estimated probabilities lp solution probabilities when debugging is requested if dbug add correct labels if using sum of squared distances objective if type for int c totcells long long c for int a numa a long long c long long incellary c a if a vlabels c char vlabels c lp vlabels c output the estimated probabilities along with the actual and absolute distance between the values cout endl objective function type value objval endl cout lp solution endl endl for int c totcells cout lp outprbs c actual cellprob c and abs diff abs outprbs c c endl if type for int c int cout lp vbl c endl else if type for int c totcells cout lp vbl c endl issue a warning if a cell with zero observations is assigned a probability for int c totcells if cellprob c outprbs c cout endl warning lp issue unique cell c has no observations but is assigned a probability type endl the danger is that the likelihood function using this cell density could be zero at all time points if it occurs then the stage endl optimization will eliminate this decision variable the unique cell probability in the objective function of the step endl causing the corresponding hessian to be singular because the upper left block is singular it is a border matrix a solution is endl to change the alphas so that a simpler solution is used as there may be too many unique cells also the variance endl ratio constraint may be too large resulting in spurious solutions being combined across assets resulting in cells with no obs endl endl free the memory allocated for the lp and the labels array lp delete vnum delete vbl delete epnt delete fpnt for int c dvars delete vlabels c vlabels c delete vlabels count the of cell probability decision variables this is the of components in the multivariate density and is returned by this function for int c totcells if outprbs c return copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getcmtrx summary this function ensures that the constraint matrix used to maintain the marginal mixture densities is of full row rank using only the component probabilities from the given lp solution a full row rank component matrix is required for a future optimization the marginal probability densities are maintained via a set of linear constraints on the multidimensional grid cell 
probabilities since the sum of all probabilities must equal each asset will require univariate mixture components for that asset constraints to maintain the marginals note that the constraint on the last univariate component for each asset will be automatically satisfied by the final constraint that the sum of all cell probabilities equals if there are a assets and c i components for asset i in the univariate mixture density a then there will be c c c c c c c c a total rows in the original constraint matrix used to solve the lps the constraints can be written in matrix form as ax b where the vector b contains the corresponding probabilities for each row of a and x is a vector that holds the unique cell probabilities decision variables for the multidimensional grid before the lp has been solved rows a elements x since elements x c xc xc x xc therefore the matrix a will be of full row rank after each lp has been fit many elements of x will be zero the effective constraint matrix that applies to the lp solution will be the constraint matrix a with all columns that correspond to zeros of the vector x removed this effective constraint matrix is not necessarily of full row rank after the lp has been solved consider for example the case of assets with components in their corresponding univariate mixture densities in this scenario the multidimensional grid is suppose the optimal lp solution contains probabilities on the diagonal of this grid and zeros in all positions the matrix a will have rows initially however after the lp has been fit only decision variables columns of a remain therefore the effective constraint matrix a will be of dimension and not of full row rank that is not all rows of a are needed to enforce the marginal constraints given the current lp solution and rows may be dropped this function determines the rows that can be dropped and removes them from a producing a full row rank effective constraint matrix which is needed for an upcoming optimization to promote parsimony in the fitted multivariate mixture density components with zero probabilities are always permanantly eliminated at any point in any optimization once dropped a component is not permitted to return to the multivariate density inputs the number of rows in the original constraint matrix a before solving the lp either minimax or minimum sum of squared distances ssd totrows the number of probabilities decision variables after solving the lp either minimax or minimum sum of squared distances ssd nucmps type of lp objective or sum of squared distances ssd type the total number of assets under consideration numa array to hold the of univariate mixture components for each asset ncmps array of unique cell ids for the multidimensional grid probs in the lp solution cells that are used to structure the initial multivariate mixture pdf covariances vcids a double array indexed as c a with c being a unique cell id values are from and a being an asset id values are from the value held at this position is the univariate mixture component level for asset a within unique cell c recall that the multidimensional grid is formed by crossing all univariate with all other univariate here component refers back the univariate mixture density for the asset incellary a double array indexed as a r with a being the asset indicator values thru and r being the component of the optimal univariate mixture distribution for that asset values thru ncmps a the corresponding univariate mixture component probability is held by the array note that these 
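A minimal Eigen-based sketch of the row-dropping procedure getcmtrx describes follows: each row of the effective constraint matrix is tentatively zeroed, the rank is rechecked, and the row is dropped permanently only if the rank is unchanged. The function and variable names here are illustrative assumptions, not the author's.

#include <Eigen/Dense>
#include <vector>

// Reduce (A, b) to a full-row-rank system by removing linearly dependent rows.
void full_row_rank_sketch(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                          Eigen::MatrixXd& Aout, Eigen::VectorXd& bout) {
    const int rank0 = Eigen::FullPivLU<Eigen::MatrixXd>(A).rank();
    Eigen::MatrixXd work = A;
    std::vector<int> keep;
    for (int r = 0; r < A.rows(); ++r) {
        Eigen::RowVectorXd saved = work.row(r);
        work.row(r).setZero();                                   // tentatively drop row r
        int rank1 = Eigen::FullPivLU<Eigen::MatrixXd>(work).rank();
        if (rank1 < rank0) {                                     // row is needed: restore and keep it
            work.row(r) = saved;
            keep.push_back(r);
        }                                                        // else leave zeroed (row dropped)
    }
    Aout.resize((int)keep.size(), A.cols());
    bout.resize((int)keep.size());
    for (int i = 0; i < (int)keep.size(); ++i) {
        Aout.row(i) = A.row(keep[i]);
        bout(i)     = b(keep[i]);
    }
}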
probabilities are used to construct all but the last element of the vector b where the marginals are maintained via ax b prbs an empty matrix to hold the full row rank version of a for the given lp solution this function derives and returns the corresponding matrix inlhs an empty vector to hold the corresponding probabilities for the new full row rank version of a this is the b vector and it is derived here as the original b vector without the corresponding rows that were dropped from a to make it full row rank inrhs outputs this function returns no value at the call but it derives and populates the empty lhs matrix and rhs vector for the full row rank version of the constraint set that maintains the marginals the constraints are linear of the form ax b include void getcmtrx const int totrows const int nucmps const int type const int numa const int ncmps const int vcids const int incellary const long double prbs eigen inlhs eigen inrhs local variables int rnk trnk rr kr rw eigen olhs totrows nucmps eigen orhs totrows tvec nucmps build modified constraint matrix that applies to the lp solution for int a numa for int r ncmps a for int c nucmps olhs rw c int incellary vcids c a r orhs prbs a r for int c nucmps olhs rw c orhs rw rank of constraint matrix with and columns removed rnk int eigen eigen olhs full row rank modified constraint matrix must have rows equal to its rank inlhs type eigen rnk nucmps inrhs type eigen rnk if the constraint matrix is not of full row rank identify totrows rnk rows that can be removed from the constraint matrix to make it full row rank if rnk totrows remrows new int for int r totrows store the values at this row and then set the row to all zeros for int c nucmps tvec c r c olhs r c recheck the rank if it changes replace the row with its original values otherwise leave it as all zeros and store the row that can be dropped trnk int eigen eigen olhs if trnk rnk for int c nucmps olhs r c c else remrows populate the full row rank modified constraint matrix for int r totrows for int k if r remrows k if kr for int c nucmps inlhs type rw c r c inrhs type rw r print out the modified constraint matrix when debugging is on if dbug cout endl modified full lhs constraint matrix endl inlhs type endl cout endl modified rhs constraint vector endl inrhs type endl int chkrnk chkrnk int eigen eigen inlhs type cout endl the rank of this modified constraint matrix is chkrnk endl free up temporary memory allocations delete remrows copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function ecmealg summary an extension of the ecme algorithm liu rubin is implemented by this function the multivariate mixture likelihood is optimized with respect to the mixing proportions and the covariances the means and variances are held fixed to maintain the mixture marginals the step optimization is convex in the mixing proportions and constrained to maintain the mixture marginals the corresponding lagrangian is formed step optimization and its zeros are determined iteratively using newton method the resulting mixture proportions are unique global optimizers of the 
step likelihood all other parameters are fixed at this stage once step convergence is achieved processing is passed to the step where the likelihoood is maximized with respect to only the covariances this optimization is constrained by the of the corresponding estimated vc matrices there is vc matrix per multivariate density component it is also convex as there may be multiple local optimums of the likelihood function with only the covariances unknown and the constraint is not convex the goal of the step is to find the largest local optimum given that the means variances and mixing proportions are fixed and only the covariances are unknown we will attempt to climb to the top of the current hill local optimum using both the gradient and hessian while also searching for larger hills in the general direction of steepest ascent see marquardt this is considered a compromise between strictly applying newton method and gradient ascent and is useful when a newton method overshoots or a single step lands in an infeasible region once no further progress is made during the step we return to the step with the newly estimated covariances and repeat the optimization over the mixing proportions convergence is achieved when the step fails to improve the likelihood function returned from the step both the and steps are iterative with the corresponding gradients and hessians updated repeatedly during a single corresponding iteration of the extended ecme algorithm solutions found here will not be considered spurious which differs from the search for a solution when dealing with univariate mixtures this is due to the fact that the variances have already been fixed and do not change the corresponding variance ratio constraint specified by the user remains in force we can justify this approach by noting that commercial software packages such as sas r use a based algorithm to find the mle for a univariate mixture density instead of the em algorithm we have not proven that the extended ecme method used here will guarantee convergence to the largest local optimum only that we have located the nearest local optimum in the vicinity of the informed start important note the step imposes equality constraints that maintain the mixture marginals constraints on the mixing proportions probabilities are not explicitly imposed therefore negative probabilities may maximize the step likelihood function the step likelihood function treats the mixing probabilities as unknowns and all other parameters means variances covariances as known constants any components that require a negative probability to maximize the likelihood function are dropped during the step and the entire problem is resized accordingly fewer components this may result in a likelihood that decreases however the overall objective is to balance parsimony with maximizing the likelihood function inputs the total number of time points with data collected t the double array of returns for each asset and at each time point indexed as r a t the number of assets with returns collected numa the number of unique components in the multivariate mixture that results from either the minimax or minimum ssd lp optimizations each multivariate mixture has fixed means and variances and this function will optimize the mixing probabilities and covariance terms all covariance terms begin the optimization at zero nucmps the lhs matrix required to enforce the constraint that the marginal density for each asset equals its fixed univariate mixture this matrix is built during the lp 
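To make the overall control flow described above concrete, here is an illustrative skeleton of the extended ECME loop: a probability step solved by Newton's method on the Lagrangian, then a covariance step climbed with Marquardt-damped Newton steps, repeated until the covariance step fails to improve the likelihood returned by the probability step. This is an assumption about structure only, not the author's code; the two callables stand in for the real step implementations.

#include <cmath>
#include <functional>

// Sketch of the ECME outer loop: alternate the two conditional maximizations until
// the covariance step can no longer improve on the probability-step likelihood.
double ecme_outline_sketch(const std::function<double()>& prob_step,   // returns LL after the probability step
                           const std::function<double()>& cov_step,    // returns LL after the covariance step
                           double tol) {
    for (;;) {
        double ll_e = prob_step();                       // optimize mixing proportions (marginals fixed)
        double ll_m = cov_step();                        // optimize covariances (means, variances fixed)
        if (ll_m - ll_e <= tol * std::fabs(ll_e))        // no meaningful improvement: converged
            return ll_m;
    }
}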
optimization and resized there accordingly to ensure it is of full row rank the lhs matrix has a column for each component in the multivariate density cmtrx the rhs vector required to enforce the constraint that the marginal density for each asset equals its fixed univariate mixture this vector is built during the lp optimization and contains the marginal mixture component probabilities for each asset less the last probability for each asset which is fixed once all others have been fixed for that asset cvctr the array of multivariate mixture probabilities returned from the corresponding lp optimization either minimax of minimum ssd converted to a vector within this program as other functions require the values to be stored in a vector muprbs note that these probabilities are passed as an array but the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc all means supplied to this function are fixed and do not change which is required to maintain the marginals with the exception that components may be dropped when a component is dropped the corresponding mean vector for that component is dropped mumns the array of vc matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component all variances supplied to this function and are fixed and do not change which is required to maintain the marginals with the exception that components may be dropped when a component is dropped the corresponding vc matrix for that component is dropped muvcs the array of unique cell ids that link each component of the multivariate density back to the full factorial of components the full factorial of components represents each cell in the multidimensional grid formed by considering all combinations of assets and their levels note that the full factorial would be required to build a multivariate mixture density with given marginals under the assumption that the assets were all mutually independent random variables ucellids a string to hold the directory where the output file resides rdir outputs this function updates the arrays of multivariate mixture probabilities muprbs mean vectors mumns and vc matrices muvcs note that mumns is updated only when components are dropped and muvcs is updated when components are dropped and when covariances are estimated this function returns the total number of unique multivariate mixture components in the final density include int ecmealg const int t const long double r const int numa const int nucmps const eigen cmtrx const eigen cvctr long double muprbs eigen mumns eigen muvcs int ucellids const string rdir local variables long long mhessmag int ecnvg mcnvg cnvg int itr ncovs int boost nupdts int ncormult ncores int nthrds eitrs mitrs sumval nbeats nthrds int mtch npos long double long double t long double t double log pi long double ucmps ll curmlt oldll long double nthrds long double ell mll lbound sumll uval cnum eigen eigen eigen eigen t eigen eigen eigen eigen eigen eigen nthrds eigen eigen eigen nucmps eigen eigen eigen eigen int numa eigen eigen eigen string lblvar ndef boost boost nthrds rd gen rd long double udist eigen eigen egnslvr normslvr normslvrt ofstream fout populate an array of 
vectors with the returns at each time point indexed as t a to ease computations for int t t fvals t new long double nucmps rts t numa for int a numa rts t a a t derive vc inverses and corresponding determinants for int v ucmps vcminv v numa numa vcminv v v sqdets v vcminv v populate the covariance identifier matrices for use in the step for int numa for int numa a itr numa numa for int r numa for int c numa if a itr r c else a itr r c initialize decision variables for step probabilities will be set to their values as determined by solving the corresponding lp and the lagrange multipliers will be initialized to zeros for int i dvarse i ld for int d ucmps dvarse d d for int d ld dvarse d populate a double array with the likelihood values for each timepoint and component the multivariate likelihood of each observation is also stored in an array dnom any component with zero likelihood for all time points is a variable that does not exist in the objective function it should be treated as a constant and moved to the right hand side of each constraint and the problem needs to be resized accordingly this check is made via the function call chksum below t numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals cout initial value is ll endl endl chksum t ucmps const long double fvals build the corresponding hessian of the lagrangian the matrix is stored in hesse when building the hessian iterate until it is invertible by multiplying the constraint lhs and rhs by a constant value multiple of until full rank note a matrix with large and small eigenvalues may be and there may be computational issues when attempting to invert it hesse ld ld t ucmps const long double fvals dnom cmtrx cvctr hesse lhs rhs build the gradient of the lagrangian using the modifed constraint as required above grade ld getgrade t ucmps const long double fvals dnom lhs rhs dvarse grade initialize arrays used and reused during the step for int h nthrds h int h long double tmpdvarsm h eigen iterate using the ecme algorithm until convergence iterate and update the probabilities the step is a maximization problem with concave objective and convex constraints stationary points for the lagrangian will therefore be taken as global optimizers and these are determined using newton method the hessian here is a bordered matrix which is invertible under certain met conditions on the sections the step is a constrained maximization problem having multiple local optimums we attempt to find the largest local optimum using an iterative technique that steps in the general direction of steepest ascent do ecme step cout endl step start ecme algorithm ecmeitr beginning ll cout oldll endl endl string iterating do step iteration counter cout solve for new component probabilities which are the decision variables in the step dvarse grade dvarse dvarse check for zero or negative probabilities and prepare for next iteration for int v ucmps if dvarse v resize the problem if needed and perform another iteration int rhs update constraint undo the multiplier and adjust size if multivariate component probabilities have been set to zero tmpcmtrx nlms ucmps for int r int lhs for int c int lhs if dvarse c tmpcmtrx r lhs r c tmpcvctr nlms tmpcvctr rhs if a component is dropped then update mean vectors vc matrices and unique cell ids if for int v int lhs if dvarse v mumns itr v muvcs itr v vcminv itr itr sqdets itr vcminv itr ucellids v resize the internal array that holds the likelihood values for each timepoint and component when the of components changes for 
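The probability update just described is an ordinary Newton step on the stationarity system of the Lagrangian (component probabilities stacked with the Lagrange multipliers). A minimal Eigen sketch is below; solving with a rank-revealing decomposition rather than forming the inverse explicitly is my assumption, not a claim about the original code.

#include <Eigen/Dense>

// One Newton iteration toward a stationary point of the Lagrangian:
// dvars_new = dvars - H^{-1} g, where H is the bordered Hessian and g the gradient.
Eigen::VectorXd newton_update_sketch(const Eigen::VectorXd& dvars,
                                     const Eigen::MatrixXd& hess,
                                     const Eigen::VectorXd& grad) {
    Eigen::VectorXd step = hess.fullPivLu().solve(grad);
    return dvars - step;
}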
int t t delete fvals t fvals t long double ucmps update decision variable vectors dropping relevant rows tmpdv int dvarse tmpdv delete dvarse eigen for int i dvarse i ld for int v int lhs if tmpdv v dvarse v for int int lhs v int tmpdv dvarse v delete tmpdv eigen update density function values grid likelihood function values t numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals check for step convergence need no component probabilities and unchanged ll otherwise rebuild and iterate again if ll oldll epsilon oldll for int v ucmps muprbs v no step convergence iterate again if ecnvg reset which is only valid when there are no negative probabilities if rebuild hessian delete lhs eigen delete rhs eigen delete hesse eigen hesse ld ld t ucmps const long double fvals dnom tmpcmtrx tmpcvctr hesse lhs rhs rebuild gradient then delete temporary memory allocations delete grade eigen grade ld getgrade t ucmps const long double fvals dnom lhs rhs dvarse grade delete tmpcmtrx eigen delete tmpcvctr eigen while ecnvg cout done endl endl step done ecme algorithm ecmeitr converged in eitrs iterations cout ell endl endl ecme step int ucmps numa ncovs for int i dvarsm i ncovs getcovs ucmps muvcs dvarsm dvarsm gradm ncovs hessm ncovs ncovs p ncovs ncovs pinv ncovs ncovs for int h nthrds for int i tmpdvarsm h i ncovs cout endl step start ecme algorithm ecmeitr beginning ll cout oldll endl endl do step iteration counter build gradient for step getgradm t rts ucmps numa const long double fvals dnom muprbs mumns muvcs vcminv a gradm new ll build hessian for step find the length digits of the element with largest magnitude long long gethessm t rts ucmps numa const long double fvals dnom muprbs mumns muvcs vcminv a hessm write out the max eigenvalue condition and of eigenvalues of the hessian just derived hessm true p pinv p p false pinv pinv false for int a int hessm if a if abs a a if abs a a cnum sqrt sqrt cout endl string total of eigenvalues npos hessian condition cnum endl the function stephessm uses the hessian to step in the direction of the gradient to step we add a random constant to the diagonal with larger random constants translating to smaller steps and smaller random constants translating to larger steps eigen run long long mitrs threads launched cout lblvar for int h nthrds h h h h h int mhessmag h h h h h long double minhessadd h mitersh h h tmpdvarsm h tmpdvarsm h h stephessm boost h boost h boost rts boost tmpdvarsm h gradm hessm dvarse boost mumns boost muvcs conditionally output a line feed and alignment spaces once all threads have successfully launched if dbug do sleep for int h nthrds h while sumval nthrds cout string pr ll oldll endl string threads finished do sleep for int h nthrds if h cout h while sumval nthrds pause until all threads finish for int j nthrds j randomly select one of the top nbeats performers to begin the next iteration weight values by their ll to favor higher values the value of nbeats is set in the header file and only ll values that exceed the current ll are considered beats for int b maxhll b for int nthrds no ll values returned should be less than the existing maximum if oldll cout error ecme algorithm step stepping function has returned a ll value inferior to the current maximum which should not happen endl must inspect and fix thread endl the current maximum ll oldll endl maximum ll value returned from stepping function endl exiting ecmealg endl exit find and process the improvements for int nthrds if for int b deal with ll ties if oldll maxhll b if oldll itr 
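The eigenvalue diagnostics reported for the covariance-step Hessian (count of positive eigenvalues and a condition number) can be sketched with Eigen's SelfAdjointEigenSolver as below. The exact condition-number formula used by the original is not recoverable from the text, so the ratio of extreme absolute eigenvalues here is an assumption.

#include <Eigen/Dense>
#include <cstdio>
#include <limits>

// Report how many eigenvalues of the (symmetric) Hessian are positive and an
// approximate condition number, mirroring the debug output described above.
void hessian_diagnostics_sketch(const Eigen::MatrixXd& H) {
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(H);
    const Eigen::VectorXd& ev = es.eigenvalues();              // sorted ascending
    int npos = 0;
    for (int i = 0; i < ev.size(); ++i)
        if (ev(i) > 0.0) ++npos;
    double minAbs = ev.cwiseAbs().minCoeff();
    double maxAbs = ev.cwiseAbs().maxCoeff();
    double cond = (minAbs > 0.0) ? maxAbs / minAbs
                                 : std::numeric_limits<double>::infinity();
    std::printf("positive eigenvalues: %d of %ld, condition: %g\n",
                npos, (long)ev.size(), cond);
}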
maxhll itr store the for the given beat beat itr store the thread index for the given beat sumll sumll maxhll itr sum the magnitude of improvements record the of randomly select one of the ll beats to begin the next step iteration if nupdts uval udist gen sumll do lbound sumll maxhll if uval lbound mtch else sumll sumll maxhll while mtch dvarsm beat else if dbug cout done rc ll maxhll rrs beat endl endl rc randomly chosen update the vc and inverse vc matrices along with the vector of corresponding determinants for int v ucmps setcovs v muvcs dvarsm vcminv v v sqdets v vcminv v t numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals qc check that max ll equals the beat value chosen above if abs ll maxhll epsilon maxhll cout error ecme algorithm has derived ll not equal to the beat ll chosen randomly which should not happen must inspect and fix endl value of maxhll maxhll endl value of ll ll endl exiting ecmealg endl exit check for convergence of the step if ll oldll pow epsilon oldll dvarsm while mcnvg cout endl step done ecme algorithm ecmeitr converged in mitrs iterations new cout mll endl endl check ecme convergence step did not improve step likelihood if prepare another step iteration if convergence eigenvalues to check concavity if mll ell pow epsilon ell free temporary memory allocations delete gradm eigen delete hessm eigen delete p eigen delete pinv eigen delete dvarsm eigen delete eigen for int h nthrds delete tmpdvarsm h tmpdvarsm h eigen if cnvg populate the dnom and fvals arrays t numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals chksum t ucmps const long double fvals build hessian and gradient delete lhs eigen delete rhs eigen delete hesse eigen hesse ld ld t ucmps const long double fvals dnom tmpcmtrx tmpcvctr hesse lhs rhs delete grade eigen grade ld getgrade t ucmps const long double fvals dnom lhs rhs dvarse grade delete temporary memory allocations delete tmpcmtrx eigen delete tmpcvctr eigen processor cool down if dbug cout endl processor cool down double minutes endl sleep cdown while cnvg cout ecme algorithm converged in ecmeitr iterations maximum ll endl fout ecme algorithm converged in ecmeitr iterations maximum ll endl delete temporary memory allocations delete dnom delete sqdets delete grade delete gradm delete rts delete vcminv delete hesse delete hessm delete p delete pinv delete dvarse delete dvarsm delete lhs delete rhs delete tmpcmtrx delete tmpcvctr delete tmpdv delete a delete beat delete maxhll for int t t delete fvals t fvals t delete fvals for int h nthrds delete h h delete h h delete tmpdvarsm h tmpdvarsm h delete delete delete tmpdvarsm delete count and return the final number of unique cell probabilities for this solution return ucmps copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getlfvals summary this function decomposes the multivariate mixture likelihood as a grid of values with time t on the vertical axis and component u on the horizontal axis each cell in the dimensional grid is a likelihood value for the data at that timepoint using the corresponding multivariate density function for that 
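Returning to the "beat" selection performed in the covariance step above: among the threads whose new log-likelihood beats the current maximum, one is chosen at random with probability proportional to the size of its improvement. The sketch below expresses that idea with the standard library; container names and the RNG are illustrative assumptions.

#include <random>
#include <vector>

// Pick one improving thread ("beat") with probability proportional to its LL gain.
// Returns -1 when no thread improved on the current maximum likelihood.
int pick_beat_sketch(const std::vector<double>& thread_ll, double current_ll,
                     std::mt19937& gen) {
    std::vector<double> gain;
    std::vector<int>    idx;
    for (int i = 0; i < (int)thread_ll.size(); ++i) {
        if (thread_ll[i] > current_ll) {
            gain.push_back(thread_ll[i] - current_ll);   // weight by improvement magnitude
            idx.push_back(i);
        }
    }
    if (idx.empty()) return -1;
    std::discrete_distribution<int> pick(gain.begin(), gain.end());
    return idx[pick(gen)];
}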
component this function computes each value in the grid and stores the value in the array supplied by parameter in addition if the values in each row are summed using the component probabilities as weights the value is the multivariate mixture likelihood at the given time point these values are derived and stored in the single array supplied by parameter lastly if the log of the values computed for parameter are taken and summed across all time points then this value is the for the data using the supplied multivariate mixture density this value is at the call inputs the total number of time points with data collected t the number of assets with returns numa the number of unique components in the multivariate mixture initial value is from either the minimax or minimum ssd lp optimizations inucmps the array of vector returns at each time point indexed as rs t a there are t vectors of returns and each is of size numa where assets rs the current vector of multivariate mixture probabilities initial values are from either the minimax or minimum ssd lp optimizations uprbs the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc inmns the array of vc inverse matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is invertible and of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component invcis the array of square roots of the determinants of the vc inverse matrices from parmeter this term is required to construct the multivariate normal density insqs this parameter equals pi where numa total of assets in the application inpicst a single array of t values that sum the double array in parameter across the components at each time point weighting each component by its corresponding estimated probability each value in this array is the multivariate mixture likelihood value for the data at each individual time point this parameter is supplied empty and by this function denoms a double array of likelihood values indexed by time and component at each time point the likelihood for each component is computed and stored in this double array for reuse this forms a dimensional grid of values of size txu where of time points and components this parameter is supplied empty and by this function lfvals outputs this function returns the value at the call and also populates the incoming arrays denoms see parameter and lfvals parameter include long double getlfvals const int t const int numa const int inucmps const eigen rs const eigen uprbs const eigen inmns const eigen invcis const long double insqs const long double inpicst long double denoms long double lfvals local variables long double populate containers grid of all component likelihoods evaluated at each time point array of all full likelihood values evaluated at each time point for int t t denoms t for int v inucmps lfvals t v rs t inmns v invcis v insqs v inpicst denoms t t uprbs v lfvals t v if denoms t denoms t else return the value return ll copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope 
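The grid computation getlfvals describes reduces to: for each time point, weight the per-component density values by the component probabilities to get the mixture likelihood of that observation, and sum the logs over time for the overall log-likelihood. A compact sketch, assuming the per-component density values have already been evaluated into fvals:

#include <vector>
#include <cmath>

// fvals[t][v] = density of component v at time t; probs[v] = component probability.
// Fills denoms[t] with the mixture likelihood at time t and returns the log-likelihood.
double loglik_grid_sketch(const std::vector<std::vector<double>>& fvals,
                          const std::vector<double>& probs,
                          std::vector<double>& denoms) {
    double ll = 0.0;
    denoms.assign(fvals.size(), 0.0);
    for (size_t t = 0; t < fvals.size(); ++t) {
        for (size_t v = 0; v < probs.size(); ++v)
            denoms[t] += probs[v] * fvals[t][v];
        ll += std::log(denoms[t]);
    }
    return ll;
}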
that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getgrade summary this function derives the gradient for the ecme algorithm step optimization which is convex in the decision variables multivariate mixture component probabilities all means variances and covariances are treated as constants the objective is to maximize the corresponding function subject to constraints that the marginal densities are fixed and known univariate mixtures the marginal constraints can be enforced via linear functions on the decision variables component probabilities by incorporating the constraints into the objective we form the lagrangian stationary points of the lagrangian will be unique global optimizers of the constrainted convex optimization problem these points are found by applying newton method to the lagrangian newton method requires that the gradient and hessian of the lagrangian be constructed during each iteration the optimization problem converges when the fails to improve in this function we compute the gradient of the lagrangian for the step the gradient is the vector of partial derivatives of the lagrangian the number of elements is the sum of the of unique components multivariate mixture probabilities and the of constraints lagrange multipliers inputs the total number of time points with data collected t the number of unique components in the multivariate mixture initial value is from either the minimax or minimum ssd lp optimizations inucmps a double array of likelihood values indexed by time and component at each time point the likelihood for each component is computed and stored for reuse this forms a grid of values of size txu where of time points and components infvals a single array of t values that sum the double array in parameter across the components at each time point weighting each likelihood value by its corresponding estimated component probability therefore each value in this array is the multivariate mixture likelihood value for the data at each individual time point indnoms the lhs matrix needed to enforce the marginal mixture density constraints these constraints are linear in the component probabilities therefore can be represented using a lhs matrix and rhs vector this matrix has it may be necessary to multiply both sides of each constraint by a constant factor to make the corresponding hessian full rank computationally the of columns is equal to the of components and the of rows is equal to the of constraints lagrange multipliers needed to maintain the marginal univariate mixtures inlhs the rhs constraint vector required to enforce the fixed marginal density constraints using actual univariate marginal mixture probabilities the constraints needed to ensure that given fixed mixture marginals add multivariate component probabilities up to equal the given marginal mixture probabilities for each component of each asset this vector is scaled when the corresponding lhs matrix above is scaled to ensure the hessian of the lagrangian is full rank computationally inrhs current values of the decision variables values for the probabilities are not needed given that we have the double array infvals t u above however we do need the current values of the lagrange multipliers as these change during each iteration therefore we will pull these from the vector of all decision variables passed via this parameter indvars the empty 
gradient vector to be filled by this function the vector is of dimension equal to the sum of the of unique components and the of lagrange multipliers the of lagrange multipliers equals the of rows in the lhs constraint matrix which equals the of constraints ingrad outputs this function populates the empty gradient vector supplied to it but does not return any other output at the function call include void getgrade const int t const int inucmps const long double infvals const long double indnoms const eigen inlhs const eigen inrhs const eigen indvars eigen ingrad local variables int int gradient partials wrt probabilities for int v inucmps ingrad v for int t t ingrad v v infvals t v t for int r int ingrad v v indvars inlhs r v gradient partials wrt multipliers for int m ld ingrad m for int v inucmps ingrad m m double inlhs v indvars v ingrad m ingrad m inrhs copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function gethesse summary this function derives the hessian for the ecme algorithm step optimization which is convex in the decision variables multivariate mixture component probabilities all means variances and covariances are treated as constants during this optimization ecme step the objective is to maximize the corresponding function subject to constraints that the marginal densities are fixed and known univariate mixtures the marginal constraints can be enforced via linear functions on the decision variables component probabilities by incorporating the constraints into the objective we form the lagrangian stationary points of the lagrangian will be unique global optimizers of the constrained convex optimization problem these points are found by applying newton method to the lagrangian newton method requires that the gradient and hessian of the lagrangian be constructed during each iteration the optimization problem converges when the fails to improve at a zero of the lagrangian in this function we compute the hessian of the lagrangian for the step the hessian is a border matrix since the derivative wrt the lagrange multipliers is always zero therefore there will be a block matrix of zeros in the lower right corner a border matrix is invertible under certain conditions on the block matrices that border the zero block these conditions will be met for this optimization however it may be necessary to inflate the constraint matrix by using a constant larger than this would be needed when the hessian is using indicator variables to enforce the constraints inputs the total number of time points with data collected t the number of unique components in the multivariate mixture initial values are from either the minimax or minimum ssd lp optimizations nucmps a double array of likelihood values indexed by time and component at each time point the likelihood for each component is computed and stored for reuse this forms a grid of values of size txu where of time points and components infvals a single array of t values that sum the double array in parameter across the components at each time point weighting each component by its corresponding estimated 
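A sketch of the gradient getgrade builds, under the assumed sign convention L(p, lambda) = sum_t log(sum_v p_v f_tv) + lambda'(Ap - b): the partial with respect to p_v is sum_t f_tv / denom_t plus the weighted constraint column, and the partial with respect to each multiplier is the corresponding constraint residual. Names are illustrative.

#include <Eigen/Dense>

// Gradient of the E-step Lagrangian stacked as [d/dp ; d/dlambda].
Eigen::VectorXd lagrangian_grad_sketch(const Eigen::MatrixXd& fvals,   // t x v component densities
                                       const Eigen::VectorXd& denoms,  // mixture density per time point
                                       const Eigen::MatrixXd& A,       // constraint lhs (rows = multipliers)
                                       const Eigen::VectorXd& b,       // constraint rhs
                                       const Eigen::VectorXd& p,       // component probabilities
                                       const Eigen::VectorXd& lambda) {
    const int V = (int)p.size(), R = (int)lambda.size();
    Eigen::VectorXd g(V + R);
    for (int v = 0; v < V; ++v) {
        double s = 0.0;
        for (int t = 0; t < fvals.rows(); ++t)
            s += fvals(t, v) / denoms(t);                // likelihood part
        g(v) = s + A.col(v).dot(lambda);                 // constraint part
    }
    g.tail(R) = A * p - b;                               // partials w.r.t. the multipliers
    return g;
}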
probability therefore each value in this array is the multivariate mixture likelihood value for the data at each individual time point indnoms the lhs matrix of and needed to enforce the marginal mixture density constraints these constraints are linear in the decision variables component probabilities therefore can be represented using a lhs matrix and rhs vector this matrix has and but these may be multiplied by a constant to ensure the hessian returned is full rank computationally the of columns is equal to the of components and the of rows is equal to the of constraints lagrange multipliers that are required to ensure that the marginals match their fixed mixtures as found earlier the matrix must be of full rank therefore if multivariate components are set to zero we should check that it remains full rank if not force it to be full rank by removing rows one at a time until it is this code is yet to be implemented problem has not been encountered the function getcmtrx can be used to perform the task incmtrx the rhs constraint vector required to enforce the fixed marginal density constraints using actual marginal probabilities the constraints needed to ensure given fixed mixture marginals add multivariate component probabilities up to equal the given marginal mixture probabilities for each component of each asset this vector is scaled when the corresponding lhs matrix above is scaled to ensure the hessian of the lagrangian is invertible incvctr the empty hessian matrix to be filled by this function the matrix is square with dimension equal to the of unique components plus the of lagrange multipliers the of lagrange multipliers equals the of rows in the lhs constraint matrix which equals the of constraints inhess an empty matrix to be filled with the updated lhs constraint matrix once scaled to ensure the resulting hessian is invertible as noted the hessian is for the lagrangian which is a border matrix with a block of zeros in the lower right corner the upper right corner is the constraint matrix of and while the upper left corner is the hessian of the original objective function without the lagrange multipliers in rare cases large values in the upper left matrix coupled with and in the upper right matrix can cause the matrix to be conditioned therefore not invertible we have found that a solution is to scale the constraint matrix up by a large constant that is we multiple the lhs and rhs of each constraint by a given large constant this fixes the singularity of the hessian note that the gradient also uses the constraints therefore when the constraint matrix is scaled we must use the same scaled version when constructing the gradient this parameter returns the scaled lhs constraint matrix note that the scale factor is returned by the function inlhs an empty vector to be filled with the updated rhs constraint values when the constraint matrix is scaled to be invertible inrhs outputs this function returns the the multiplier used to scale the constraint matrix and vector to ensure that the resulting hessian is computationally invertible it also populates the empty hessian matrix supplied to it along with the scaled lhs matrix and rhs vector include long double gethesse const int t const int inucmps const long double infvals const long double indnoms const eigen incmtrx const eigen incvctr eigen inhess eigen inlhs eigen inrhs local variables int rnk int long double eigen eigen ulhess inucmps inucmps hessian upper left for int r inucmps for int c inucmps inhess r c for int t t inhess r c inhess 
r c infvals t r infvals t c indnoms t inhess r c inhess r c ulhess r c r c if c r inhess c r r c ulhess c r r c before proceeding with the ur ll and lr sections check that the ul hessian is full rank and put out a warning if it is not this may or may not prevent the optimization from working often it does not prevent the optimization from working int eigen eigen ulhess if rnk inucmps dbug cout endl warning the ul hessian matrix is singular which may prevent the component probabilities from being optimized endl this can happen for various reasons two of which are endl the likelihood of a single component is zero at all time points which eliminates the decision variable endl the upper left matrix is having large and small elements at different diagonal positions endl message from gethesse endl hessian build upper right lower left and lower right sections of the hessian iterate until the entire hessian is full rank so that newton method may be applied this may require multiplying all constraints by a constant both lhs and rhs first make sure that the constraint matrix is of full rank since components may be dropped during the step int eigen eigen incmtrx if rnk int cout error detection of a rank constraint matrix rank during the ecme step which is likely due to endl components with probabilities being dropped the function getcmtrx can be used to fix this endl by sequentially removing linearly dependent rows until the constraint matrix becomes full rank endl exiting gethesse endl exit inlhs int inucmps inlhs incmtrx inrhs int inrhs incvctr do hessian upper right lower left for int r inucmps for int c ld inhess r c inlhs r inhess c r r c hessian lower right for int r ld for int c ld inhess r c if c r inhess c r r c before proceeding ensure that the entire hessian is full rank int eigen eigen inhess if another iteration is needed update the multiplier and the constraints if rnk ld inrhs inrhs inlhs inlhs while rnk ld mult lposval if the entire hessian is not full rank then put out a warning but do not exit the optimization may still succeed if rnk ld dbug cout endl warning the step hessian matrix is singular which may prevent the multivariate component probabilities from being optimized endl this can happen for various reasons including endl there are very large and very small eigenvalues resulting in an matrix endl components have been dropped and the constraint matrix is no longer of full row rank note already checked above endl message from gethesse endl free temporary memory allocations delete ulhess return the multiplier used to correct an hessian return mult copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function getgradm summary this function derives the gradient for the ecme algorithm step optimization which is evidently not convex in the decision variables covariances in general mixture density likelihoods are not concave functions and have many local optimums we are dealing with a multivariate mixture density here all means variances and component probabilities are treated as constants during this ecme step optimization the objective is to maximize 
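The bordered structure and the constraint rescaling loop described for gethesse can be sketched as follows: stack the likelihood block with the (scaled) constraint matrix and a zero lower-right block, and grow the scale factor until the assembled matrix is numerically full rank. The growth factor, cap, and rank test are assumptions for illustration.

#include <Eigen/Dense>

// Assemble the bordered Hessian [ Hpp  mult*A' ; mult*A  0 ] and return the
// multiplier that made it full rank (the same multiplier must scale b and the gradient).
double bordered_hessian_sketch(const Eigen::MatrixXd& Hpp,   // second derivatives w.r.t. probabilities
                               const Eigen::MatrixXd& A,     // marginal constraint matrix
                               Eigen::MatrixXd& Hfull) {
    const int V = (int)Hpp.rows(), R = (int)A.rows();
    double mult = 1.0;
    for (;;) {
        Hfull.setZero(V + R, V + R);
        Hfull.topLeftCorner(V, V)    = Hpp;
        Hfull.topRightCorner(V, R)   = mult * A.transpose();
        Hfull.bottomLeftCorner(R, V) = mult * A;
        if (Eigen::FullPivLU<Eigen::MatrixXd>(Hfull).rank() == V + R) break;
        if (mult > 1e12) break;        // give up and warn, as the original does
        mult *= 10.0;                  // assumed growth factor
    }
    return mult;
}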
the corresponding function only covariances are unknown subject to constraints that all matrices are positive definite that is we seek the covariances that maximize the multivariate mixture function with the matrix at each component being positive definite and all means variances component probabilities being fixed and known these points are found by applying a modified newton method to the in which only the covariances are unknown newton method requires that the gradient and hessian of the be constructed during each iteration the gradient is the vector of first order partial derivatives wrt each covariance term and is derived in this function the hessian is the matrix of second order partial derivatives wrt all covariance terms if the problem has a total assets with returns measured and u components in the multivariate mixture density then there will be a total of u a unique covariance terms that require estimation clearly this problem suffers from the curse of dimensionality and will work best with a limited number of assets relative to the number of observations time points the optimization problem converges when the fails to improve the method used to find the largest local optimum is due to marquardt which uses the hessian to step in the general direction of the gradient by adding a constant to the hessian diagonals prior to solving the updating equation in practice we will iterate over a large number of random step sizes searching for the function maximizer once the maximizer is found we recompute the gradient and hessian and iterate again note that large additive quantities added to the hessian diagonal translate into small steps and small quantities translate into large steps an additive factor of zero translates into using newton method without modification which is assumed here to overshoot the local optimizer this method is approprate when strictly applying newton method overshoots here the goal is to find the largest local optimum in the vicinity of the carefully constructed starting point lp solution but also to search outside the current in an attempt to find a better solution the constraints on the resulting matrices are enforced implictly at each step the resulting matrix is decomposed and the eigenvalues are inspected if none are the resulting matrix is positive definite otherwise it is not and a ridge repair is immediately performed stepping continues using the repaired matrix in general we find that there are large regions of the covariance set where stepping proceeds without the need for repairs and other large regions of the covariance set where repairs are needed after each step the matrix of each multivariate component is examined and repaired if necessary by the function ridgerpr the feasible region is any covariance set that results in all component matrices being valid positive definite inputs the total number of time points with data collected t the array of vector returns at each time point indexed as rs t a there are t vectors of returns and each is of size numa where assets rs the number of unique components in the multivariate mixture initial value is from either the minimax or minimum ssd lp optimizations inucmps the number of assets with returns collected numa a double array of likelihood values indexed by time and component at each time point the likelihood for each component is computed and stored for reuse this forms a grid of values of size txu where of time points and components infvals a single array of t values that sum the double array in parameter 
across the components at each time point weighting each likelihood value by its corresponding estimated component probability therefore each value in this array is the multivariate mixture likelihood value for the data at each individual time point indnoms the current array of multivariate mixture probabilities initial values from either the minimax or minimum ssd lp optimizations inprbs the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc mumns the array of vc matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component e the array of vc matrix inverses each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component this parameter holds the corresponding array of inverses of the vc matrices einv the array of identifier matrices for each covariance term a vc matrix can be decomposed into the sum of a matrix of diagonal elements and a term for each unique covariance the constant matrix multiplied by the covariance term has a in each element where is in the location of the corresponding covariance term the constant matrices are contained in this array the constant matrices are identical across components and can be reused the of these matrices is the number of unique covariance terms for a single multivariate mixture numa ina the empty gradient vector to be filled by this function the vector is of dimension equal to the total of covariances in the problem if the problem has a total assets with returns measured and u components in the multivariate mixture density then there are a total of u a covariance terms ingrad outputs this function populates the empty gradient vector supplied to it but does not return any other output at the function call include void getgradm const int t eigen rs const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs eigen mumns const eigen e const eigen einv const eigen ina eigen ingrad int chk local variables int itra long double qtijk eigen eigen ejk numa numa populate gradient vector for int i inucmps for int j numa for int k numa getcofm numa j k e i ejk ingrad itr for int t t qtijk rs t i einv i ina itra einv i rs t i ejk i ingrad itr itr inprbs i infvals t i qtijk t delete temporary memory allocations delete ejk copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function gethessm summary this function derives the hessian matrix for the ecme algorithm step optimization which is evidently not convex in the decision variables covariances in general a mixture density likelihood function is not 
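For the covariance gradient getgradm computes, the standard identity for a Gaussian component i with covariance Sigma_i and symmetric indicator E_jk (ones at positions (j,k) and (k,j)) gives d log f_i(r) / d sigma_jk = 0.5 [ (r - mu_i)' Sinv E_jk Sinv (r - mu_i) - trace(Sinv E_jk) ], and the mixture gradient weights this term by pi_i f_ti / denom_t and sums over time. The sketch below computes one such gradient entry; names and exact bookkeeping are illustrative and differ from the original.

#include <Eigen/Dense>

// One gradient entry w.r.t. covariance sigma_{jk} of component i.
// X holds the centered returns (r_t - mu_i) by rows; weight(t) = pi_i f_{t,i} / denom_t.
double cov_grad_term_sketch(const Eigen::MatrixXd& X,
                            const Eigen::MatrixXd& Sinv,    // inverse covariance of component i
                            const Eigen::MatrixXd& Ejk,     // symmetric indicator for sigma_{jk}
                            const Eigen::VectorXd& weight) {
    Eigen::MatrixXd M = Sinv * Ejk * Sinv;
    double tr = (Sinv * Ejk).trace();
    double g = 0.0;
    for (int t = 0; t < X.rows(); ++t) {
        double quad = (X.row(t) * M).dot(X.row(t));         // (r-mu)' Sinv E Sinv (r-mu)
        g += weight(t) * 0.5 * (quad - tr);
    }
    return g;
}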
strictly concave and has many local optimums we are dealing with a multivariate mixture density here where all means variances and component probabilities fixed during the ecme step optimization the objective is to maximize the corresponding function only covariances are unknown subject to constraints that all resulting matrices are positive definite that is we seek the covariances that maximize the multivariate mixture function with the matrix at each component being positive definite and all means variances component probabilities are fixed these points are found by applying a modified newton method to the in which only the covariances are unknown newton method requires that the gradient and hessian of the be constructed during each iteration the gradient is the vector of first order partial derivatives wrt each covariance term and is derived in the function getgradm the hessian is the matrix of second order partial derivatives wrt all covariance terms and is derived in this function if the problem has a total assets with returns measured and u components in the multivariate mixture density then there will be a total of u a unique covariance terms that require estimation this problem suffers from the curse of dimensionality and will work best with a limited number of assets relative to the of observations time points the optimization problem converges when the fails to improve the method used to find the largest local optimum is due to marquardt which uses the hessian to step in the general direction of the gradient by adding a constant to the hessian diagonals prior to solving the updating equation in practice we will iterate over a large number of random step sizes searching for the best local and global function maximizer once the maximizer is found we recompute the gradient and hessian and iterate again note that large additive quantities translate into small steps and small quantities translate into large steps an additive factor of zero translates into using newton method without modification this method is appropriate when strictly applying newton method overshoots here the goal is to find the largest local optimum in the vicinity of the carefully constructed starting point lp solution using small step sizes but also to search outside the current in an attempt to find a better solution using large step sizes the constraints on the resulting matrices are enforced implictly at each step the resulting matrix is decomposed and the eigenvalues are inspected if none are the resulting matrix is positive definite otherwise it is not and a ridge repair is immediately performed and stepping continues using the repaired matrix in general we find that there are large regions of the covariance set where stepping proceeds without the need for repairs and other large regions of the covariance set where repairs are needed after each step the matrix of each multivariate component is examined and repaired if necessary by the function ridgerpr the feasible region is any covariance set that results in all component matrices being valid positive definite inputs the total number of time points with data collected t the array of vector returns at each time point indexed as rs t a there are t vectors of returns and each is of size numa where assets rs the number of unique components in the multivariate mixture initial value is from either the minimax or minimum ssd lp optimizations inucmps the number of assets with returns collected numa a double array of likelihood values indexed by time and component 
at each time point the likelihood for each component is computed and stored for reuse this forms a grid of values of size txu where of time points and components infvals a single array of t values that sum the double array in parameter across the components at each time point weighting each likelihood value by its corresponding estimated component probability therefore each value in this array is the multivariate mixture likelihood value for the data at each individual time point indnoms the current array of multivariate mixture probabilities initial values from either the minimax or minimum ssd lp optimizations inprbs the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc mumns the array of vc matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component e the array of vc matrix inverses each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component this parameter holds the inverses of the vc matrices einv the array of identifier matrices for each covariance term a vc matrix can be decomposed into the sum of a matrix of diagonal elements and a term for each unique covariance the constant matrix multiplied by the covariance term has a in each element where is in the location of the corresponding covariance term the constant matrices are contained in this array the constant matrices are identical across components and can be reused the of these matrices is the number of unique covariance terms for a single multivariate mixture numa ina the empty hessian matrix to be filled by this function the matrix is square and of dimension equal to the total of covariances in the problem if the problem has a total assets with returns measured and u components in the multivariate mixture density then there are a total of u a covariance terms inhess internal variables note each hessian element is a partial wrt sigma i j k then another partial wrt sigma p r s where i is the component index for the partial derivative and p is the component index for the partial derivative the paired index j k identifies the covariance term from component i whereas the paired index r s identifies the covariance term from component note that the covariance term at j k is equivalent to the covariance term at k j therefore we will also assume that j k and r itrajk index of the indicator matrix array a for the covariance term in the partial itrars index of the indicator matrix array a for the covariance term in the partial fti product of the likelihood value for the observations at time t using density for component i of c and the corresponding component probability partial derivative of fti defined above wrt a covariance term from component p of c where p i note that fti is just a constant when p i since it does not contain the covariance term in its function ftp product of the likelihood value for the observations at time t using density for component p of c and the corresponding component probability qtijk this is the extra term that arises in the 
numerator when differentiating the density of component i wrt the covariance term j k qtprs this is the extra term that arises in the numerator when differentiating the density of component p wrt the covariance term r s partial derivative of qtijk wrt sigma p r s gt overall likelihood of all data points using the full multivariate mixture density partial derivative of gt wrt sigma p r s note if there are n assets then there are n distinct covariance terms within each component and c n total distinct covariance terms outputs this function populates the empty hessian matrix supplied the magnitude of the largest element is returned at the call and used to help determine the best step size include long double gethessm const int t eigen rs const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs const eigen mumns const eigen e const eigen einv const eigen ina eigen inhess local variables int hc itrajk itrars long double fti ftp qtijk qtprs gt eigen eigen eigen eigen eigen eijk numa numa eprs numa numa ejkrs numa numa ejksr numa numa eigen numa numa populate hessian matrix for int i inucmps for int j numa for int k numa covariance term fixed cov i j k hessian column entry indicator thru getcofm numa j k e i eijk build eijk for int p inucmps for int r numa for int s numa covariance term fixed cov p r s if hr hc unconditional quantities that are not functions of time getcofm numa r s e p eprs build ers conditional quantities that are not functions of time if i p covariance term is from the same component getcofm numa r s eijk ejkrs build ejkrs conditionally getcofm numa s r eijk ejksr build ejksr conditionally eijk eprs e i ejksr e i ejkrs e i i ina itrajk einv i ina itrars einv i einv i ina itrars einv i ina itrajk einv i initialize the element inhess hr hc iterate over the time dimension for int t t derive unconditional quantities that are functions of time i infvals t i rs t i einv i ina itrajk einv i rs t i eijk i p infvals t p rs t p einv p ina itrars einv p rs t p eprs p t derive conditional quantities that are functions of time including the hessian value itself if i p correlation term is from same component qtprs qtprs if j r k s partial wrt term below diagonal only rs t i rs t i else if k r partial wrt term above diagonal only rs t i rs t i else partial wrt term above and below diagonal rs t i rs t i inhess hr hc hr hc gt fti qtijk fti qtijk else inhess hr hc hr hc fti qtijk ftp qtprs populate corresponding element below the diagonal if hr hc inhess hc hr hr hc hessian column entry indicator thru delete temporary memory allocations delete eijk delete eprs delete ejkrs delete ejksr return the magnitude of the largest element for int i int inhess for int j int inhess if abs inhess i j maxmag inhess i j return maxmag copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function stephessm summary this function steps in the general direction of steepest ascent for the multivariate mixture likelihood that maintains the marginal mixture densities using marquardt the multivariate mixture likelihood has fixed means 
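Two pieces of housekeeping gethessm performs after filling the upper triangle are mirroring each entry below the diagonal and returning the largest magnitude, which the caller uses to bound the random Marquardt step sizes. A small sketch:

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Symmetrize the Hessian from its upper triangle and return its largest |entry|.
double symmetrize_and_max_sketch(Eigen::MatrixXd& H) {
    double maxmag = 0.0;
    for (int i = 0; i < H.rows(); ++i) {
        for (int j = i; j < H.cols(); ++j) {
            H(j, i) = H(i, j);                           // mirror the upper triangle
            maxmag = std::max(maxmag, std::fabs(H(i, j)));
        }
    }
    return maxmag;
}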
variances and component probabilities but unknown covariances the multivariate mixture here is therefore a function of only the unknown covariances and we seek to maximize it a problem is that the function can have multiple local optimums using marquardt a constant term of random size is to the diagonals of the correpsonding hessian matrix prior to solving the updating equations when implementing newton method adding a large constant results in taking a small step and adding a small constant results in taking a large step this method will allow us to take small steps towards the local optimum of the current hill and simultaneously search for larger hills in the general direction of steepest ascent the minimum step size is set to zero in the header file see minhessadd and the maximum step size is set to where x digits in the maximum hessian element within the range a step size is randomly generated for each iteration the stepping is see the wrapper that invokes this function to save time and the total of threads used is equal to the of independent processing units on the pc running the application multiplied by the global constant ncormult also set in the header file each thread will take a total of mitersh random sized steps which is also a global constant set in the header file to cover the step size range from using a total of ncormult of independent processing units threads we first generate a random value of x between and the maximum of digits in the largest hessian element then generate a step size randomly between and at each iteration a step is taken by the random step size to the hessian diagonals and solving the newton method updating equations the decision variables are the u a unique covariances where u total multivariate mixture components and a total assets after obtaining the new solution all corresponding matrices are confirmed to be positive definite and if not a ridge repair is immediately performed using a random multiplier between the values rrmultmin and rrmultmax specified as global constants in the header file the decision variables that maximize the likelihood function are returned as is the maximum value along with the random step size that generates the maximum the best solution across all threaded calls is then used for the current iteration of the ecme step after stepping finishes processing returns to the top of the step and the step gradient and hessian are rebuilt for another iteration in ecmealg inputs input supplied to this function as a integer array the element at position is the of time points with data the element at position is the of unique componenents in the current multivariate mixture solution the element at position is the current thread determined by the function that generates threaded calls the thread is only used within this function for reporting results to the output window for example during debugging the element at position is an indicator that the current thread has launched both supplied to this function and generated by this function as a long double array elements at positions are input parameters and elements at positions and are output generated by this function and returned to the calling function the input at position is the current optimal value we are attempting to improve upon during this ecme step iteration input at position is the step size and the input at position is the starting value for stepping in this threaded call elements at positions and are placeholders for return values the maximum value found during this stepping 
iteration is returned in element and the step size multiplier that generates this maximum is returned in element the array of vector returns at each time point indexed as rs t a there are t vectors of returns and each is of size numa where assets rs the current vector of step decision variables all covariances as an array of elements the vector at indices and hold the current covariance estimates the vector at index holds the returned estimates that maximize the for this function call the vector at position holds the updated covariance estimates derived at each step indvars the current gradient vector evaluated at the current values of the covariance estimates decision variables ingrad the current hessian matrix evaluated at the current values of the covariance estimates decision variables inhess the current vector of multivariate mixture probabilities initial values are from either the minimax or minimum ssd lp optimizations uprbs the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc mumns the array of vc matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa the diagonals of each vc matrix are the corresponding variances for that asset within that component invcs outputs this function updates element of incoming parameter indvars array with the covariance estimates that maximize the during this stepping iteration in addition elements at positions and of incoming array are updated with the maximum value of the and the step size multiplier that maximizes the respectively include void stephessm int long double const eigen rs eigen indvars const eigen ingrad const eigen inhess const eigen uprbs const eigen inmns const eigen invcs local variables eigen int rpr int inmns expon curexp long double hmult ll double log pi long double inucmps long double int long double int jmp mult rd gen rd long double udist eigen eigen thessm eigen inucmps eigen inucmps idm getidm idm size the array holding all component likelihood values at all time points for int t int t long double inucmps size the local vc and inverse matrices for int v inucmps v na na v v v na na initialize the covariance maximizers to the starting values write details when debugging indvars if dbug cout else if dbug if cout endl cout launching thread endl set the multiple for matrix repairs this applies to all steps mult rrmultmin udist gen rrmultmax rrmultmin iterate using the hessian and solve for new covariances for int i mitersh hessian has element with maximum length equal to hmaxlen digits randomly select value between and this to use as the max for stepping expon int udist gen double hmaxlen jmp pow expon hmult minhessadd udist gen jmp if udist gen hmult indicate the value of means exponent of used for backward stepping hmult idm indvars udist gen ingrad indvars indvars check that the new covariance estimates yield pd vc matrices repair if broken and update the corresponding vector of current decision variables for int v inucmps setcovs v indvars ridgerpr v mult v v v v int na inucmps rs uprbs inmns picst if rpr getcovs inucmps indvars check vs existing maximum ll if larger then update the current maximum and covariance array if ll curmaxll indvars return the maximum value along with the mutliplier that generated it write details when 
debugging if dbug else if dbug cout done with thread which started at strt maximum ll is curmaxll endl delete temporary memory allocations delete idm delete delete delete delete for int t int delete t t delete copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function ridgerpr summary this function accepts a matrix as input and determines whether or not it is if not it performs a ridge repair to make the matrix positive definite to determine whether or not the matrix is positive definite an eigenvalue decomposition is performed if all eigenvalues are then the matrix is positive definite a necessary and sufficient condition for a matrix to be valid is that it is positive definite to perform a ridge repair the diagonal elements are all multiplied by the same constant value since the diagonals are the variances this implies that we increase the variances doing so will automatically reduce the size of the covariances relative to the variances it will also reduce the magnitude of the correlations the covariances are the correlations multiplied by the standard deviations that is cov x y rho std x std y when std x and std y increase and cov x y remains constant the correlations decrease in magnitude if the constant multiplier is large enough we will drive the correlations to near zero and at this point the covariances will be extremely small relative to the variances the resulting matrix approaches a diagonal matrix which is positive definite the point is to use a small multiplier and increase it iteratively until the repaired matrix becomes positive definite and then scale it back so that the variances are undisturbed but the elements are smaller relative to the diagonal elements if a matrix is positive definite then that matrix multiplied by a constant is also positive definite easily proven with the definition of positive definite the initial multiplier is randomly generated in the calling function and bounded by the global constants rrmultmin and rrmultmax which are set in the header file this function iterates multiplying the diagonals by i rrmult and checking for positive definiteness after each iteration i is the iteration index a matrix that is badly broken for example with a correlation term when all such quantities should be between and may require a large number of iterations to repair therefore to speed up processing we increase the multiplier by a factor of after each iterations that is after iterations rrmult is multiplied by and again after iterations etc note that the matrix supplied to this function is a modifiable value and is updated in place with care taken to ensure that the diagonals are not disturbed when it is returned inputs a single matrix is supplied to this function using inputs an array of matrices and an index to identify the one we are checking for definiteness and repairing if necessary the index that identifies the matrix to be ucell the array of matrices for the current multivariate mixture solution e the multiple used to add a ridge mult outputs this function returns an integer value of or a is returned if the matrix is repaired 
and a is returned if it is not in need of repair include int ridgerpr const int ucell eigen e long double mult local variables eigen int pd long double sclfctr det eigen eigen egnslvr eigen newvcm int e ucell int e ucell is the vc matrix as required if not repair it e ucell false for int a int e ucell if a pdmineval a if det detminval repair is necessary if pd vc matrix is not positive definite and a ridge repair will be performed ucell repair the matrix for int increase each variance by a factor and rescale double i for int r int e ucell for int c int e ucell if r c newvcm r c ucell r c check if the updated vc matrix is newvcm false for int a int e ucell if a pdmineval a if det detminval increase the scale factor by a multiple of each iterations if int i replace the original vc matrix with the repaired version e ucell if the vc matrix has been repaired return a otherwise return a return retvar copyright c chris rook this program is free software you can redistribute it modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details http filename function wrtdens summary this function writes out the structure of the final multivariate density along with the actual values contained in that structure the output has parts the structure which is a sum of multivariate normal densities weighted by the corresponding component probability using parameters and details of the parameter values for each component the definitions include the actual component probability the mean vector the matrix and the unique cell that this component originates from with respect to the full factorial of combinations defined after fitting the univariate mixture densities finally the inverse of the matrix is printed as is the rank and the determinant this function can be used to write the supplied density to either standard output using cout in the last parameter or a file using an ofstream fout definition in the last parameter inputs specify the type of starting point when transitioning between univariate marginal densities and the multivariate mixture pdf in this application we offer transition methods minimax or minimum sum of squared distances ssd each are solved as constrained linear programs lps the constraints maintain the univariate marginal mixture densities this string variable takes one of two values minimax or minimum sum of squared distances ssd this is for display purposes only and informs the user which transition method was used for the given multivariate density function being written typ the number of unique components in the multivariate mixture that results from either the minimax or minimum ssd lp optimization starting points nucmps the array of unique cell ids that link each component of the multivariate density back to the full factorial of components the full factorial of components represents each cell in the multidimensional grid formed by considering all combinations of assets and their levels note that the full factorial would be required to build a multivariate mixture density with given marginals under the assumption that the assets were all mutuallly independent random variables rvs ucellids the array of final multivariate mixture component probabilities 
muprbs the array of mean vectors each component of the multivariate mixture density is a multivariate density function which has its own set of means the first element of this array is the vector of means for the first multivariate component etc mumns the array of vc matrices each component in the multivariate mixture density is a multivariate density function with a corresponding vc matrix each vc matrix is of dimension numa x numa where numa total of assets the diagonals of each vc matrix are the corresponding variances for that asset within that component muvcs the output destination as a variable reference use either cout for display to the screen or a valid ofstream output object ovar outputs the density supplied to this function is written to the output desination supplied in parts first the structure of the density is written using parameters followed by a detailed definition of those parameters no value is returned at the call include void wrtdens const string typ const int nucmps const int ucells const long double muprbs const eigen mumns const eigen muvcs ostream ovar local variables long long long long nucmps string int if minimum sum of squared distances ssd typ start by writing the structure of the multivariate density without actual details of the values for the means variances covariances and component probabilities ovar endl string ln endl the structure of the multivariate density function for the supplied assets using an initial typ lp objective is given by endl string ln endl endl for int v nucmps ovar ps p setfill setw w v f m setfill setw w v v setfill setw w v if v ovar endl write out the details for each component that is the means variances covariances and component probabilities ovar endl endl string endl where values for the multivariate normal pdfs f m v with mean vector m and matrix v are endl string endl for int v nucmps ovar endl string endl component v is unique cell ucells v endl string endl p setfill setw w v muprbs v endl m setfill setw w v endl mumns v endl v v endl muvcs v endl v v setfill setw w endl muvcs v endl the rank of v v is int eigen eigen muvcs v and the determinant is muvcs v endl
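The quantities named in the gethessm comments above — fti, qtijk and gt — can be written out explicitly. For a Gaussian mixture with fixed means and component weights, a standard form of these derivatives (assuming the usual reading of those symbols; this is a reconstruction consistent with the comments rather than text copied from the source, and the code's fti may already absorb the weight \pi_i) is

\[ g_t \;=\; \sum_{i=1}^{C}\pi_i\,\phi(x_t;\mu_i,\Sigma_i), \qquad \ell \;=\; \sum_{t=1}^{T}\log g_t, \]

\[ q_{t,ijk} \;=\; \frac{\partial\log\phi(x_t;\mu_i,\Sigma_i)}{\partial\sigma_{i,jk}} \;=\; -\tfrac{1}{2}\operatorname{tr}\!\bigl(\Sigma_i^{-1}E_{jk}\bigr) \;+\; \tfrac{1}{2}(x_t-\mu_i)^{\top}\Sigma_i^{-1}E_{jk}\Sigma_i^{-1}(x_t-\mu_i), \]

\[ \frac{\partial\ell}{\partial\sigma_{i,jk}} \;=\; \sum_{t}\frac{\pi_i\,\phi_i(x_t)}{g_t}\,q_{t,ijk}, \qquad \frac{\partial^{2}\ell}{\partial\sigma_{i,jk}\,\partial\sigma_{p,rs}} \;=\; -\sum_{t}\frac{\pi_i\phi_i(x_t)\,q_{t,ijk}\;\,\pi_p\phi_p(x_t)\,q_{t,prs}}{g_t^{2}} \quad (p\neq i), \]

where E_{jk} = e_j e_k^{\top} + e_k e_j^{\top} is the symmetric selector matrix for the off-diagonal pair (j,k) (the role played by getcofm above) and \phi_i(x_t) abbreviates \phi(x_t;\mu_i,\Sigma_i). When p = i the same-component Hessian entry picks up the additional term \sum_t \pi_i\phi_i(x_t)\bigl(q_{t,ijk}\,q_{t,irs} + \partial q_{t,ijk}/\partial\sigma_{i,rs}\bigr)/g_t, whose q-derivative is the quantity the comments describe as the partial derivative of qtijk with respect to sigma p r s.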
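The stepping and repair logic documented for stephessm and ridgerpr can be condensed into a short sketch. The code below is an illustrative simplification rather than the author's implementation: it is written for minimising the negative log-likelihood, where the classical Levenberg-Marquardt form (H + \lambda I)\Delta = -g applies, while the source works directly with the log-likelihood Hessian and a randomly signed diagonal shift; the identifiers dampedNewtonStep, randomShift, ridgeRepair and the thresholds are placeholders standing in for the source's ingrad, inhess, indvars, hmult, minhessadd, rrmultmin/rrmultmax and pdmineval.

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <random>

// One damped (Marquardt-style) Newton step: solve (H + lambda*I) d = -g and move.
// A large lambda gives a short, gradient-like step; a small lambda gives a near-Newton step.
Eigen::VectorXd dampedNewtonStep(const Eigen::VectorXd& theta,
                                 const Eigen::VectorXd& grad,
                                 const Eigen::MatrixXd& hess,
                                 double lambda)
{
    Eigen::MatrixXd damped = hess;
    damped.diagonal().array() += lambda;          // random ridge on the diagonal
    return theta + damped.ldlt().solve(-grad);
}

// Random diagonal shift in the spirit of the description above: pick a random exponent up to
// the number of digits in the largest Hessian element, then a shift between minAdd and
// minAdd + 10^exponent (a hypothetical reading of the source's scheme, not its exact code).
double randomShift(double hessMaxMagnitude, double minAdd, std::mt19937& gen)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int digits = static_cast<int>(std::log10(std::max(hessMaxMagnitude, 1.0))) + 1;
    int expon  = static_cast<int>(u(gen) * digits);
    return minAdd + u(gen) * std::pow(10.0, expon);
}

// Ridge repair: keep the variances but shrink the covariances by a growing factor until every
// eigenvalue clears a small threshold, i.e. until the matrix is positive definite.
// Assumes strictly positive variances; otherwise the loop bound below simply gives up.
void ridgeRepair(Eigen::MatrixXd& vc, double mult, double minEigenvalue = 1e-10)
{
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(vc);
    if (es.eigenvalues().minCoeff() > minEigenvalue) return;   // already positive definite
    for (int i = 1; i <= 100000; ++i) {
        double c = 1.0 + i * mult;                 // inflate variances by c, then rescale:
        Eigen::MatrixXd cand = vc / c;             // equivalently, divide covariances by c
        cand.diagonal() = vc.diagonal();           // while leaving the variances untouched
        es.compute(cand);
        if (es.eigenvalues().minCoeff() > minEigenvalue) { vc = cand; return; }
    }
}

The source adds two refinements omitted here: the ridge multiplier is itself increased by a larger factor every fixed number of iterations so that badly broken matrices repair quickly, and a candidate is also rejected when its determinant falls below detminval.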
| 5 |
almost hyperbolic groups with almost finitely presented subgroups feb robert kropholler february abstract we construct new examples of cat groups containing non finitely presented subgroups that are of type f these cat groups do not contain copies of we also give a construction of groups which are of type fn but not fn with no free abelian subgroups of rank greater than r introduction subgroups of cat groups can be shown to exhibit interesting finiteness properties in there are examples of groups satisfying fn but not fn for all n as well as the first examples of groups of type f that are not finitely presented answering a question of brown definition a group g satisfies property fn if there is a classifying space kpg with finite n skeleton definition a group g is of type f pn if there is a partial resolution of the trivial zg module pn z where pi is a finitely generated projective zg module it can easily be seen that is equivalent to finite generation and is equivalent to finite presentability in the groups are kernels of maps g z where g is a right angled artin group the finiteness properties of the kernel depend solely on the defining flag complex for the right angled artin group the construction of groups of type f that are not finitely presented are contained in right angled artin groups having free abelian subgroups of rank since there have been other constructions of groups with subgroups of type f not see in many cases these groups are subgroups of groups g of type f in these cases the maximal rank of a free abelian subgroup of g is at least we construct the first examples of groups of type f containing no free abelian subgroups of rank containing a subgroup of type f that is not theorem a there exists a non positively curved space x such that pxq contains no subgroups isomorphic to which contains a subgroup of type f not the groups constructed in which are of type not fn all contain a free abelian subgroup of rank n brady asked in question whether there exist groups of type but not fn which do not contain he notes that the known examples all contain while we are not able to find examples without we reduce the bound to a fraction of theorem b for every positive integer n there exists a group of type but not fn that contains no abelian subgroups of rank greater than r in the future we would like to extend these theorems giving examples of hyperbolic groups with such subgroups the author would like to thank federico vigolo for kindly helping draw several of the figures contained within the author would also like to thank martin bridson and gareth wilkes for reading earlier drafts of this paper and providing helpful and constructive comments preliminaries cube complexes a cube complex can be constructed by taking a collection of disjoint cubes and gluing them together by isometries of their faces there is a standard way in which cube complexes can be endowed with metrics there is a well known criterion of that characterises those cube complexes that are locally cat the precise definition of a cube complex is given in def as follows definition a cube complex x is a quotient of a disjoint union of cubes k c by an equivalence relation the restrictions x of the natural projection k x k are required to satisfy for every c p c the map is injective if q x q h then there is an isometry hc from a face tc onto a face such that pxq q if and only if x hc pxq definition a metric space is curved if its metric is locally cat gromov s insight allows us to easily check whether a p e complex is 
nonpositively curved lemma gromov a p e complex is curved if and only if the link of each vertex is a cat space in a cube complex the link of each vertex is a spherical complex built from speherical simplices see gromov realised that an spherical complex is cat if and only if it is flag definition a complex l is a flag complex if it is simplicial and every set vn u of pairwise adjacent vertices spans a simplex flag complexes are completely determined by their thus we arrive at the following combinatorial condition for cube complexes lemma gromov a cube complex is curved if the link of every vertex is a flag complex we wish to limit the rank of free abelian subgroups in fact we will limit the largest dimension of an isometrically embedded flat the following theorem shows that in the cat world the maximal dimension of a flat is the same as the maximal rank of a free abelian subgroup theorem let x be a compact non positively curved cube complex its universal cover and g pxq if h zn is a subgroup of g then there is an isometrically embedded copy of i rn moreover the quotient iprn q h is an finally we would like to know when a cube complex is hyperbolic bridson shows that the only obstruction is containing an isometrically embedded flat theorem bridson theorem a let x be a compact curved cube complex and be its universal cover is not hyperbolic if and only if there exists an isometric embedding i right angled artin groups right angled artin groups have been at the centre of a lot of recent study particularly because of their interesting subgroup structure in particular their subgroups have interesting connections with finiteness properties of groups as shown in for completeness we recall the basic theory of right angled artin groups definition given a flag complex we define the associated right angled artin group raag as the group d rv ws if rv ws p these groups have curved cube complexes as classifying spaces which are unions of tori definition given a flag complex the salvetti complex is defined as follows for each vertex vi in let s be a copy of the circle cubulated with vertex for each simplex vn s of there is an associated torus if then there is a natural inclusion now define o where the equivalence relation is generated by the inclusions there is a map ps sending each circle in to the respective circle in ps and extending linearly over cubes this map is an inclusion of cubical complexes it is a standard fact that has fundamental group and is a nonpositively curved cube complex proofs of the following can be found in lemma q lemma is curved a new classifying space we will construct a new classifying space which will also be a curved cube complex and will be more amenable to taking branched covers we will build such a classifying space in the case that the flag complex satisfies the following condition definition a simplicial complex has an structure if is contained in the join vn of several discrete sets vi in the case n we will say bipartite and in the case n tripartite any finite simplicial complex can be given an structure as a subcomplex of a simplex we define a cube complex for an complex as follows i let vi vm u and i nu for each vertex vki let be a i k copy of s cubulated with two vertices labelled and and two edges evki and definition given j p ppiq j il u we say that is a if it is of the form vkill remark every simplex is a j simplex for some unique possibly empty j for each vkill s in we associate the following space irj this is the product of a torus and a cube let tj is a and t 
jpppiq tj given simplices there is a natural inclusion o t t pt where once again the equivalence relation is generated via the inclusions we need to prove the two key lemmata the fundamental group of is and is curved lemma q proof apply the kampen theorem repeatedly lemma is curved proof there are vertices in given two vertices v w there is a cellular isomorphism which sends v to thus we only need to check the link of one vertex we will check the link of let l q let vi vi y there is a vertex in l for each edge at so vi two distinct vertices and are connected if one of the following conditions holds s is an edge of these edges come from the following subcomplexes th s this tells us that l vn we now want to prove that l is in fact a flag complex given a set w vkill u of pairwise adjacent vertices in l we want to show that vkill s is also in we split w into two sets i vkjj kj and w r wa u since the vertices in are pairwise adjacent there is a simplex spanning them we see that the subcomplex contains an pl which fills the required simplex in it should be noted the natural projections are injective on each closed cube and not just on the interior remark the complex requires a choice of structure given two structures the complexes are homotopy equivalent but are not isomorphic as cube complexes this is shown in figure together with some examples of the construction remark let be a cage graph with two vertices that has an edge for each element of vi then the complex constructed above embeds in i this fact will come in useful later as will the fact that in the case n there is an embedded copy of in finiteness properties of subgroups of raags we will require one theorem on the finiteness properties of subgroups of raags from a homomorphism f z can be defined by putting an integer label on each vertex and sending the corresponding generator of to its label definition let f z be a homomorphism we denote the full subcomplex spanned by those vertices with label let be the full subcomplex spanned by vertices not in theorem theorem a let f z be a homomorphism then the following are equivalent the kernel of f is of type f pn respectively for every possibly empty dead simplex the complex x is homologically pn respectively is simply connected y figure some examples of the labels on vertices exhibit the structure in the last example the shaded regions are identified branched covers of cube complexes we will take branched covers of cube complexes to get rid of high dimensional flats the techniques we will use were developed in the idea is to take an appropriate subset which intersects the high dimensional flats definition let x be a curved cube complex we say that y x is a branching locus if it satisfies the following conditions y is a locally convex cubical subcomplex lkpc xq r y is connected and for all cubes c in y the first condition is required to prove that curvature is preserved when taking branched covers the second is a reformulation of the the classical requirement that the branching locus has codimension in the theory of branched covers of manifolds ensuring that the trivial branched covering of x is x definition a branched cover of x over the branching locus y is the result of the following process take a finite covering x r y of x r y lift the piecewise euclidean metric locally and consider the induced path metric take the metric completion of x r y we require some key results from which allow us to conclude that this process is natural and that the resulting complex is still a curved cube 
complex lemma brady lemma there is a natural surjection b x and is a piecewise euclidean cube complex lemma brady lemma if y is a finite graph then is nonpositively curved hyperbolisation in dimension this section will be a warm up to our key theorems which are all related to dimension the lower dimensional case carries a lot of the ideas that will be used later throughout this section will be a bipartite graph say let i vi vm u let be the graph with two vertices labelled and i i all directed from to we noted and edges labelled vm i in remark that the complex constructed in section is a subcomplex of theorem let be a bipartite graph let be the associated raag and let be the classifying space constructed in section then there is a branched cover of which has hyperbolic fundamental group vm vm figure depiction of the deformation retraction t r s s there are in fact many branched covers delivering this result during the course of the proof we will pick a prime p and as this varies different hyperbolic branched covers are obtained proof the branching locus will be one of the vertices of given two vertices v and w there is a homeomorphism of which sends v to therefore it does not matter which vertex we pick to be our branching locus we will choose the vertex r deformation retracts onto the graph this can be seen as follows we start with a torus cubulated as in figure from each torus we remove the center vertex the complement deformation retracts onto the graph depicted in figure we now identify the edges via their labels which will result in this argument shows that r is a free group on erators we will denote these generators ai for i p and bj for j p from the deformation described we get a map q each loop of length in q corresponds to a torus as in figure under the deformation retraction this gets sent to in corresponding to the l m n vn s r i k ai j m bj commutator vm let p be a prime and sp be the symmetric group on p letters let be a in sp and an element which conjugates to where r is a generator of p we define our cover using the map q sp ai bj taking the cover corresponding to the stabiliser of in sp n note that the commutator s is a this means that the loops of length in the link have connected preimage in the cover we take the completion of the resulting complex there is a natural map b the link of the vertex which maps to will contain no cycles of length we now prove that the resulting complex has hyperbolic universal cover is a curved cube complex so to prove that we know that r it is hyperbolic we just have to show that there are no isometric embeddings by theorem if there is such an embedding then it will contain at least one square however each square contains one vertex which is a lift of and in the link of this vertex there are no loops of length however were the flat plane to be isometrically embedded there would be a loop of length in the link of every vertex on the plane this contradiction completes the proof we use this theorem along with morse theoretic ideas from section to find more examples of hyperbolic groups with finitely generated subgroups that are not finitely presentable proposition let be a complete bipartite graph on sets a u and b u fix p satisfying the above hypotheses and let b be the branched cover constructed above then q has a finitely generated subgroup which is not finitely presentable proof in this case where is as above and has edges put an orientation on each edge of such that there are edges oriented towards each vertex and edges oriented away from 
each vertex cubulating s with one vertex and one oriented edge define maps hi s on each edge by their orientation define h s by q q precomposing with the branched covering map b and lifting to universal covers we obtain a morse function f r which is the ascending and descending links of this morse functions are the preimages under b of the ascending and descending links of for the morse function h the ascending and descending links are joins of the ascending and descending links for hi these will be copies of s so the ascending and descending links will be copies of s there are now two possibilities if we look at a vertex in which does not map to the ascending and descending links will remain unchanged and will still be copies of s if the vertex in question maps to we will study the ascending link the other case being identical the ascending link of is a loop of length taking a branched cover cause this loop of length to lengthen but in the preimage it will still be a copy of s it follows that the kernel of is finitely generated but not finitely presentable by sizeable graphs are used in to give examples of hyperbolic groups with subgroups which are type not here we outline a procedure for producing examples of sizeable graphs proposition let a and b be sets with partitions a a and b b b where b are and let be the complete bipartite graph on a and b let be the associated right angled artin group the classifying space from and the branched covering constructed in theorem let v p be a vertex mapping to then lkpv q is sizeable proof the link of a vertex is a cover of the graph is the complete bipartite graph on sets a y u and b b y u define a and r defining b and b similarly the graph has the following properties it is bipartite as it is the cover of a bipartite graph and it has no cycles of length since the branching process was designed to remove these to check the last property we let a be the set of vertices mapping to a and the complement of these in the bipartite structure we define and similarly we must prove that y q is connected b q can be covered by finitely many loops of length such that cm x ym i cm h for all when taking the branched cover each ci has connected preimage and the intersection will still be non empty so the resulting union will be connected almost hyperbolisation in dimension notation given a tripartite complex l let lij be the full subcomplex spanned by the vertices of vi y vj the main theorem of this section is the following theorem let be a tripartite flag complex the associated raag and the classifying space constructed in section then there exists a branched cover x of such that pxq contains no subgroups isomorphic to proof our branching locus will be b y y q where is the graph with vertices and and edges each directed from to we have three maps r b r r b r r b r which are the restrictions of the projections the maps from section give us three maps r r r here qij are the primes picked in the process of taking a branched cover in theorem let q we can combine these permutation representations with the projection maps above to get a map r bq sq which defines a cover of r b by taking the subgroup corresponding to the stabiliser of in sq we complete this cover to get our branched cover x let ti j ku the maps are retractions to see this consider the natural map figure a loop corresponding to the commutator it follows that r bq r is surjective we will now consider what the link of a vertex in the branched cover is we will restrict our attention to a vertex mapping to 
all the other cases are similar we will consider the image of the link of under the three maps in the image of the map this link is sent surjectively onto the link of in by section we know that r deformation retracts onto the graph loops of length are sent to commutators of the form v v in the map r sq this commutator is sent to a we must now consider the image under the maps these maps send the link to a disjoint union of contractible subsets so the maps r bq and r bq send the image of the fundamental group of the link to the identity from this we can see that in the cover of r b corresponding to the stabiliser of in sq the preimage of one of the loops of length depicted in figure will have components and each component is a loop of length we will now prove that there are no isometrically embedded planes of dimension this combined with theorem will complete the proof of the theorem since the resulting cube complex has cubes of dimension at most we can see that the dimension of an isometrically embedded flat plane is at most if such a copy e of were isometrically embedded in it would contain at least one cube and would in fact be a cubical embedding of the flat for each vertex contained in the flat the link would contain a subcomplex isomorphic to an octahedron let be the lift of the branching locus b we can see how this intersects each cube in by figure as such any will intersect a vertex on let x be a vertex on x then lkpx has a tripartite structure if there is an octahedron in this complex it has a tripartite structure of the form s s s one of the copies of s will be contained in the vertices corresponding to the other vertices form a loop of length in the bipartite graph defined by the edges not in however we constructed the branched figure intersection pattern of on a cube in cover so that this graph has no cycles of length morse theory while morse theory is defined in the more general setting of affine cell complexes in this instance we shall only need it for curved cube complexes for the remainder of this section let x be a cat cube complex and let g be a group which acts freely cellularly properly and cocompactly on x let g z be a homomorphism and let z act on r by translations recall that is the characteristic map of the cube definition we say that a function f x r is a morse function if it satisfies the following conditions for every cube c x of dimension n the map f r extends to an affine map rn r and f r is constant if and only if n the image of the of x is discrete in f is f pg xq f pxq we will consider the level sets of our function which we will denote as follows definition for a closed subset i r we denote by xi the preimage of i we also use xt to denote the preimage of t p the kernel h of acts on the cube complex x in a manner preserving each level set xi moreover it acts properly and cocompactly on the level sets we will use the topological properties of the level sets to gain information about the finiteness properties of the group we will need to examine how they vary as we pass to larger level sets theorem lemma if i i r are closed intervals and xi r xi contains no vertices of x then the inclusion xi xi is a homotopy equivalence if xi r xi contains vertices of x then the topological properties of xi can be very different from those of xi this difference is encoded in the ascending and descending links definition ascending link of a vertex is pv xq tlkpw cq pwq v and w is a minimum of f u lkpv xq the descending link of a vertex is pv xq tlkpw cq pwq v and w is a maximum 
of f u lkpv xq theorem lemma let f be a morse function suppose that i i r are connected closed and min i min resp max i max i q and that i r i contains only one point r of f x then xi is homotopy equivalent to the space obtained from xi by coning off the descending resp ascending links of v for each v p f prq we can now deduce a lot about the topology of the level and sets we know how they change as we pass to larger intervals and so we have the following corollary corollary let i i be as above if each ascending and descending link is homologically pn then the inclusion xi xi induces an isomorphism on hi for i n and is surjective for i if the ascending and descending links are connected then the inclusion xi xi induces a surjection on if the ascending and descending links are simply connected then the inclusion xi xi induces an isomorphism on knowing that the direct limit of this system is a contractible space allows us to compute the finiteness properties of the kernel of theorem theorem let f x r be a morse function and let h if all ascending and descending links are simply connected then h is finitely presented is of type we would also like to have conditions which will allow us to deduce that h does not satisfy certain other finiteness properties a well known result in this direction is proposition brown let h be a group acting freely properly r i px zq cellularly and cocompactly on a cell complex x assume further that h r for i n and that hn px zq is not finitely generated as a then h is of type f pn but not f pn in the above result was used to prove that a certain group is not of type f however in our theorems not all the links will satisfy the assumptions of theorem and we require the following theorem kropholler theorem let f x r be a morse function and let h suppose that for all vertices v the reduced homology of pvq and pvq is in dimensions and n further r n pv q or h r n pv q assume that there is a vertex v such that h possibly both then h is of type f pn but not of type f pn finally we prove a key theorem regarding ascending and descending links theorem for i let xi be a cat cube complex gi a group acting freely properly and cocompactly on xi gi z a surjective homomorphism and fi xi r a morse function then there is a morse function f r such that q q q and q q q we will prove the staement for ascending links the proof for descending links is the same the proof really relies on the following key lemma lemma let x be a cat cube complex with morse function f then pv xq is the full subcomplex of lkpv xq spanned by the vertices in pv xq at first sight this does not appear to be an improvement however it allows for simple calculation of the ascending and descending links once the link of a vertex is known proof let vn be n pairwise adjacent vertices in pv xq proving that the simplex vn s is in pv xq will prove the claim let ei be the edge in x corresponding to vi we must prove that v is a minimum for f restricted to c en we note that since f extends to an affine map c is foliated by level sets f ptq for t p each of these level sets corresponds to a linear subspace of dimension n not containing any subcubes of dimension when intersected with the cube c there are exactly two subspaces where the intersection is a single vertex one of these corresponds to the minimum of f and the other to the maximum of f one of these must be at the vertex mapping to v and this is the minimum proof of theorem we define a morse function f r by f q q q this satisfies all the conditions of being a morse 
function and is with respect to the map q q q the link of a vertex q in is q q it is easy to see that the ascending link is the full subcomplex spanned by q q q which is equal to q groups of type but not fn in question brady asks whether there exist groups of type but not fn which do not contain he notes that the known examples all contain while we are not able to find examples without we can drastically reduce the rank of a free abelian subgroup as shown by the following theorem b for every positive integer n there exists a group of type but not fn that contains no abelian subgroups of rank greater than r proof for our general construction we require pgi xi fi q for i where gi is a hyperbolic group xi is a cat cube complex with a free cocompact gi action gi z is a surjective homomorphism fi is a morse function we also require that all ascending and descending links of fi are pi but at least one is not pi the existence of such a morse function would show by theorem that gi has a subgroup of type not fi namely q let be a free group of rank a tree the cayley graph of gi with respect to generators a and b the exponent sum homomorphism with respect to a and b and the map that is linear on edges and whose restriction to the vertices of is let q be the group classifying space homomorphism and morse function from proposition let q be the group classifying space homomorphism and morse function from we could also use the examples of groups from an important point to note is that for i the ascending and descending links for pgi xi fi q are isomorphic to s now let n be an integer let l t u and let pgl xl fl q be the with l l gl xl i l i fl i l i by theorem the ascending and descending links of fl are s s which is but not if n mod then n and we define g gl if not let m be the residue of n mod and consider the pg gl gm x xl xm f fl fm q the morse function f on the cube complex x has ascending and descending links that are all copies of s in all cases is of type not fn by theorem the group g contains no free abelian subgroups of rank greater than r s since gi as a hyperbolic group contains no copy of subgroups of type f that can not be finitely presented we will now apply our branched covering technique with carefully chosen flag complexes to prove the following theorem a there exists a non positively curved space x such that pxq contains no subgroups isomorphic to which contains a subgroup of type f not we will do this using the following steps start with a connected tripartite flag complex l such that plq is a perfect group with the the link of every vertex connected and not a point take an auxiliary complex rplq then build the complex in section define a function f s which lifts to a morse function on universal covers examine the ascending and descending links of this morse function take a branched covering of as in section to get a complex x with an associated morse function examine the ascending and descending links of this morse function this will show that the kernel of is of type f prove that the kernel of is not finitely presented the complex rplq the construction of this complex is required the key point of this complex is that it satisfies proposition this is required to make sure that the fundamental group of the links is not changed in the branching process firstly we prove that none of the assumptions from the first step are restrictive we can realise any finitely presented group as the fundamental group of a finite connected simplicial complex it is also well known that the barycentric 
subdivision of a simplicial complex is flag and we can put an obvious tripartite structure on the barycentric subdivision labelling vertices by the dimension of the corresponding cell so we can realise any group as the fundamental group of a connected tripartite flag complex given a connected tripartite flag complex there is a homotopy equivalent complex of the same form such that the link of every vertex is connected and not a point to see this first note that if there is a vertex where the link is just a single point then we can contract this edge without changing the homotopy type of the complex next note that if there is a vertex x with disconnected link then we perform the following procedure pick two vertices v w in two different components of the link add an extra vertex y and connect it to v w and x while also adding two triangles rv y xs and ru y xs the result is that we have reduced the number of components of the link at x without adding any extra components to the link of v or w and the link of y is connected we have also not changed the homotopy type since we have added a contractible space glued along a contractible subspace repeating this procedure we can make sure that the link of every vertex is connected definition for a simplicial complex l the octahedralisation splq is defined as follows for each vertex v of l let tv v u be a copy of s for every simplex of l take if then there is a natural map splq where the equivalence relations is generated by the inclusions the complex splq can also be seen as the link of the vertex in the salvetti complex for the raag defined by if l has an structure then there is a natural structure on splq let l be contained in vn then splq is contained in q spvn q remark the map defined by tvu extends to a retraction of splq to l not a deformation retraction in particular if plq then psplqq it is proved in that if l is a flag complex then splq is a flag complex lemma assume l is a connected simplicial complex and lkpv lq is connected and not equal to a point for all vertices v p if plq then psplqq proof let l be the full subcomplex of splq spanned by the set tv v p lu and let be defined similarly let n be the interior of the star of l and n the interior of the star of it is clear that splq is contained in n yn we can also see that n x n is the union of the open simplices which are contained in neither l nor here we are using the fact that n is homotopy equivalent to l and similarly n is homotopy equivalent to by considering the sequence for n and n we see that psplqq is isomorphic to pn x n q so we must prove that n x n is connected let x y be points of n xn we can always connect x and y to open edges contained in n x n label these edges ex and ey let vx be the end point of ex in l and wx the end point in define vertices vy and wy similarly let v pvx vn vy q be a sequence of vertices in l corresponding to a geodesic in from vx to vy the vertices wx and wy have corresponding vertices in l these are adjacent to vx and vy respectively we split into cases by whether these vertices are on the geodesic we will define a path p in each of these four cases if wx and wy then let p vn if wx and wy then let p vn wy if wx and wy then let p wx vn if wx and wy then let p wx vn wx we will now describe how to get a sequence of edges ex em ey such that ei ei are on a in the corresponding sequence of edges in n x n will give a path from ex to ey thus completing the proof of the lemma figure the key idea from lemma a path in the link of a vertex v can be viewed as a 
sequence of edges in l such that adjacent edges are on a we will relabel the path p to let ai be a path in the link of bi from ai to ai let bi be a path in the link of ai from bi to bi this can be done since the link of every vertex is connected the sequence of edges defines the sequence of edges we require the key idea is encapsulated in figure where the curved arcs correspond to the paths ai bi we can now define the complex rplq let l be a tripartite flag complex with plq perfect such that the link of every vertex in l is connected and not a point label the sets of vertices from the tripartite structure construct splq as above this is a tripartite flag complex with plq perfect add extra vertices v v v which are of type respectively connect v i to all the vertices not of type i define rplq to be the flag completion of the resulting complex take a simplicial complex l as above and let rplq we can now construct the cubical complex from section the morse function f as noted in remark we can view as a subcomplex of where is a graph that has two vertices and has edges labelled by the vertices of as well as one extra edge labelled such that each edge runs from to we define a morse function on the product by putting an orientation on each edge of as follows if it is an edge corresponding to a vertex of splq we orient it towards while for the vertices v i and we orient towards now put an orientation on s and map each graph by its orientation we then extend linearly across cubes restricting this map to we get a map g and by lifting to universal covers we get a morse function f r which is let the vertices of type i in splq be the set vi and let tv i u si vertex y y q splq y q y q y q s s y q y q y q y y q splq table the ascending and descending links of the morse function f s the ascending and descending links of this morse function are given in table notation given a simplicial complex and a subset s of the vertices of denotes the full subcomplex of spanned by the vertices in proposition given a complex of the form vi or yvj there is an ordering vq on the set vjs such that stpvm stpvl qq is connected and pstpvm q x stpvl qqq for m q proof in the case vjs we have vertices and and q x q vi which is connected but not simply connected so either ordering will do in the case vjs vj let vj wn u and vj u the subgraph z lpvi y vj q is connected since l is connected and the link of every vertex is connected and not a point we can thus assume that stpwl zq x stpwm qq noting that stpv s splqq cpsplkpv lqqq we can see that stpv s splqq x stpwt splqq spstpv lqxstpw lqq thus ordering vj u we can see that vi x stpvl splqq x stpvm splqqq contains at least two points noting that stpvl splqqxp stpvm splqqq vi x stpvl splqq x stpvm splqq we can see that this is connected but not simply connected remark in the above proof we are gluing contractible complexes along connected complexes this shows that all the complexes in the statement of proposition are simply connected remark at each stage in the above proof stpvl splqq x stpvm splqqq could be covered by cycles of length since it is the join of a discrete set and a copy of s almost hyperbolisation and the morse function h x we use the almost hyperbolisation technique from section to get a branched cover x of recall that there is a natural length preserving map b x we define a function h g b x s lifting to universal covers we get a morse function in what follows g is the fundamental group of x which does not contain any copies of in what follows h g zq it is worth 
noting that in the almost hyperbolisation procedure we ensured that loops of length in the link of a vertex have connected preimage ascending and descending links of h recall that we distinguish between types of vertices in x and label them as follows vertices of type a are those which map to or vertices of type b are those which map to or vertices of type c are those which map to or vertices of type d are those which map to or for a vertex in x the ascending descending link is the preimage of the ascending descending link of the corresponding vertex in type a vertices are disjoint from the branching locus so a small neighbourhood of each lifts to x and the ascending and descending links are isomorphic to those of the corresponding vertex of we claim that for vertices not of type a the ascending and descending links are simply connected we will prove this in the case of a vertex of type b and the ascending link the other cases are similar a vertex x of type b is on a lift of we may assume that x maps to now y q let us consider the preimage of in lkpx xq to envisage this we start by removing the vertices in and taking the covering of the remaining space coming from the derivative map of b then add back the vertices of by remark we can cover y q by stpvq as v runs over vertices in we noted in remark that we can construct this cover in such a way that each stpvm q x stpvl qq is connected and covered by loops of length with intersection specifically if and are loops of length in stpvm q x stpvl qq then x in the procedure of passing to the branched covering b x associated to each vertex of x there is a derivative map blkpvq lkpv xq lkpbpvq q each of the cycles above has connected preimage under the map blkpvq therefore the preimage of stpvm q x stpvl qq is connected upon taking the completion we replace the vertices in this corresponds to coning off the lifts of their links thus we see pxq is made from a sequence of contractible spaces glued along connected subspaces and so is simply connected proof that the kernel is not finitely presented to prove that h is not finitely presented we need the following lemma lemma pxi q for all compact intervals i r in our case s and proof for the purposes of this proof let y k rplq by theorem we know that the kernel of is not finitely presented since the kernel of acts cocompactly on the level set yi we can see that yi is not simply connected let s yi be a loop then there is a larger interval j such that is trivial in yj let j ra bs we can assume that is in j ra bs or j ra b for all assume the latter there is a sequence of vertices such that yj yj y pvi qq there is an integer m such that is trivial in yj pvi qq but not trivial in xj yj pvi qq y since adding pxm qq changes the fundamental group becomes trivxj which is also contained in pxm ial we can find a loop in y we also know that it is contained in yj since pvi q x pvj q h whenever i j as the restriction of an affine map to a cube can only have one maximum and one minimum or it is constant on a subcube since adding pxm qq changes we see that pxm qq so xm is a vertex mapping to and bounds a disc in y which does not intersect it follows that this disc lifts to under the branched covering coming from the following commutative diagram y x b the boundary of this lifted disc is in xj call this loop if bounded a disc in xj then we could map this to yj via but this would imply that bounds a disc in yj which it does not thus is in pxj q since pxi q pxj q is surjective we deduce by theorem that pxi q theorem h q is 
not finitely presentable proof assume that h is finitely presented h acts cocompactly on xi so we can add finitely many to the quotient to gain fundamental group taking a universal cover of the space obtained in this way we arrive at xi with finitely many of attached which is simply connected in other words there are finitely many of loops which generate pxi q the direct limit of all the xi is the space x which is cat and in particular contractible we can pass to a larger interval such that the of loops which generate pxi q are trivial in other words the map pxi q pxj q is trivial but it is also surjective by theorem and because we have assumed that all the ascending and descending links are connected thus pxj q is trivial however we know this not to be the case by lemma references agol the virtual haken conjecture doc with an appendix by agol daniel groves and jason manning bestvina and brady morse theory and finiteness properties of groups invent brady branched coverings of cubical complexes and subgroups of hyperbolic groups lond math bridson on the existence of flat planes in spaces of nonpositive curvature proceedings of the american mathematical society january bridson and haefliger metric spaces of curvature volume of grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences berlin brown cohomology of groups volume of graduate texts in mathematics springer new york new york ny gromov hyperbolic groups in chern kaplansky moore singer and gersten editors essays in group theory volume pages springer new york haglund and wise special cube complexes geom funct kropholler hyperbolic groups with almost finitely presented subgroups in preparation ian leary uncountably many groups of type fp math december arxiv yash lodha a hyperbolic group with a finitely presented subgroup that is not of type page london mathematical society lecture note series cambridge university press wise the structure of groups with a quasiconvex hierarchy
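The displayed formulas behind three of the definitions used in this paper read, in standard notation (the symbols A_L, P_i and \varphi are the conventional ones and are assumed here rather than taken verbatim from the source):

A group G is of type FP_n if the trivial \mathbb{Z}G-module \mathbb{Z} admits a partial resolution
\[ P_n \to P_{n-1} \to \cdots \to P_1 \to P_0 \to \mathbb{Z} \to 0 \]
in which every P_i is a finitely generated projective \mathbb{Z}G-module.

For a flag complex L with vertex set V and edge set E, the associated right-angled Artin group is
\[ A_L \;=\; \bigl\langle\, v\in V \;\bigm|\; [v,w]=1 \text{ whenever } \{v,w\}\in E \,\bigr\rangle. \]

A Morse function f\colon X\to\mathbb{R} used together with a homomorphism \varphi\colon G\to\mathbb{Z} is required to be \varphi-equivariant, that is
\[ f(g\cdot x) \;=\; f(x) + \varphi(g) \qquad \text{for all } g\in G,\ x\in X, \]
with \mathbb{Z} acting on \mathbb{R} by integer translations.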
| 4 |
ijacsa international journal of advanced computer science and applications vol no automated classification of hand movement eeg signals using advanced feature extraction and machine learning mohammad alomari aya samaha and khaled alkamha applied science university amman jordan in this paper we propose an automated computer platform for the purpose of classifying electroencephalography eeg signals associated with left and right hand movements using a hybrid system that uses advanced feature extraction techniques and machine learning algorithms it is known that eeg represents the brain activity by the electrical voltage fluctuations along the scalp and interface bci is a device that enables the use of the brain s neural activity to communicate with others or to control machines artificial limbs or robots without direct physical movements in our research work we aspired to find the best feature extraction method that enables the differentiation between left and right executed fist movements through various classification algorithms the eeg dataset used in this research was created and contributed to physionet by the developers of the instrumentation system data was preprocessed using the eeglab matlab toolbox and artifacts removal was done using aar data was epoched on the basis of de synchronization and cortical potentials mrcp features rhythms were isolated for the analysis and delta rhythms were isolated for the mrcp analysis the independent component analysis ica spatial filter was applied on related channels for noise reduction and isolation of both artifactually and neutrally generated eeg sources the final feature vector included the erd ers and mrcp features in addition to the mean power and energy of the activations of the resulting independent components ics of the epoched feature datasets the datasets were inputted into two machinelearning algorithms neural networks nns and support vector machines svms intensive experiments were carried out and optimum classification performances of and were obtained using nn and svm respectively this research shows that this method of feature extraction holds some promise for the classification of various pairs of motor movements which can be used in a bci context to mentally control a computer or machine bci ica mrcp machine learning nn svm i introduction the importance of understanding brain waves is increasing with the ongoing growth in the interface bci field and as computerized systems are becoming one of the main tools for making people s lives easier bci or brainmachine interface bmi has become an attractive field of research and applications bci is a device that enables the use of the brain s neural activity to communicate with others or to control machines artificial limbs or robots without direct physical movements the term electroencephalography eeg is the process of measuring the brain s neural activity as electrical voltage fluctuations along the scalp that results from the current flows in brain s neurons in a typical eeg test electrodes are fixed on the scalp to monitor and record the brain s electrical activity bci measures eeg signals associated with the user s activity then applies different signal processing algorithms for the purpose of translating the recorded signals into control commands for different applications the most important application for bci is helping disabled individuals by offering a new way of communication with the external environment many bci applications were described in including controlling devices like 
video games and personal computers using thoughts translation bci is a highly interdisciplinary research topic that combines medicine neurology psychology rehabilitation engineering humancomputer interaction hci signal processing and machine learning the strength of bci applications lies in the way we translate the neural patterns extracted from eeg into machine commands the improvement of the interpretation of these eeg signals has become the goal of many researchers hence our research work explores the possibility of eeg classification between left and right hand movements in an offline manner which will enormously smooth the path leading to online classification and reading of executed movements leading us to what we can technically call reading minds in this work we introduce an automated computer system that uses advanced feature extraction techniques to identify some of the brain activity patterns especially for the left and right hand movements the system then uses machine learning algorithms to extract the knowledge embedded in the recorded patterns and provides the required decision rules for translating thoughts into commands as seen in fig this article is organized as follows a brief review of related research work is provided in section ii in section iii the dataset used in this study is described the automated feature extraction process is described in section iv the generation of our datasets and the practical implementation and system evaluation are discussed in section conclusions and suggested future work are provided in section vi p a g e ijacsa international journal of advanced computer science and applications vol no ii literature review the idea of bci was originally proposed by jaques vidal in where he proved that signals recorded from brain activity could be used to effectively represent a user s intent in the authors recorded eeg signals for three subjects while imagining either right or left hand movement based on a visual cue stimulus they were able to classify eeg signals into right and left hand movements using a neural network classifier with an accuracy of and concluded that this accuracy did not improve with increasing number of sessions vector consisting of the patterns of the mu and beta rhythms and the coefficients of the autoregressive model artificial neural networks anns is applied to two kinds of testing datasets and an average recognition rate of is achieved the strength of bci applications depends lies in the way we translate the neural patterns extracted from eeg into machine commands the improvement of the interpretation of these eeg signals has become the goal of many researchers hence our research work explores the possibility of eeg classification between left and right hand movements in an offline manner which will enormously smooth the path leading to online classification and reading of any executed movements leading us to what we can technically call reading minds iii the physionet eeg data a description of the dataset the eeg dataset used in this research was created and contributed to physionet by the developers of the instrumentation system the dataset is publically available at http fig feature extraction and translation into machine commands the author of used features produced by motor imagery mi to control a robot arm features such as the band power in specific frequency bands alpha and beta were mapped into right and left limb movements in addition they used similar features with mi which are the event related desynchronization and 
synchronization comparing the signal s energy in specific frequency bands with respect to the mentally relaxed state it was shown in that the combination of and cortical potentials mrcp improves eeg classification as this offers an independent and complimentary information in a hybrid bci control strategy is presented the authors expanded the control functions of a potential based bci for virtual devices and mi related sensorimotor rhythms to navigate in a virtual environment imagined hand movements were translated into movement commands in a virtual apartment and an extremely high testing accuracy results were reached a bci system was presented in for the translation of imagined hands and foot movements into commands that operates a wheelchair this work uses many spatial patterns of erd on mu rhythms along the cortex and the resulting classification accuracy for online and offline tests was and respectively the authors of proposed an bci system that controls hand prosthesis of paralyzed people by movement thoughts of left and right hands they reported an accuracy of about a single trial hand movement classification is reported in the authors analyzed both executed and imagined hand movement eeg signals and created a feature the dataset consists of more than eeg records with different durations one or two minutes per record obtained from healthy subjects subjects were asked to perform different tasks while eeg signals were recorded from electrodes along the surface of the scalp each subject performed experimental runs a baseline runs with eyes open a baseline runs with eyes closed three runs of each of the four following tasks o the left or right side of the screen shows a target the subject keeps opening and closing the corresponding fist until the target disappears then he relaxes o the left or right side of the screen shows a target the subject imagines opening and closing the corresponding fist until the target disappears then he relaxes o the top or bottom of the screen a target appears on either the subject keeps opening and closing either both fists in case of a or both feet in case of a until the target disappears then he relaxes o the top or bottom of the screen a target appears on either the subject imagines opening and closing either both fists in case of a or both feet in case of a until the target disappears then he relaxes the eeg signals were recorded according to the international system excluding some electrodes as seen in fig p a g e ijacsa international journal of advanced computer science and applications vol no b the subset used in the current work from this dataset we selected the three runs of the first task described above opening and closing the fist based on a target that appears on left or right side of the screen these runs include eeg data for executed hand movements we created an eeg data subset corresponding to the first six subjects and including three runs of executed movement specifically per subject for a total of records iv automated analysis of eeg signals for feature extraction a channel selection according to many of the eeg channels appeared to represent redundant information it is shown in that the neural activity that is correlated to the executed left and right hand movements is almost exclusively contained within the channels and cz of the eeg channels of fig this means that there is no need to analyze all channels of data on the other hand only eight electrode locations are commonly used for mrcp analysis covering the regions between frontal and 
central sites fcz cz and these channels were used for the independent component analysis ica discussed later in the current section fig fig schematic diagram for the proposed system filtering because eeg signals are known to be noisy and nonstationary filtering the data is an important step to get rid of unnecessary information from the raw signals eeglab which is an interactive matlab toolbox was used to filter eeg signals a band pass filter from hz to hz was applied to remove the dc direct current shifts and to minimize the presence of filtering artifacts at epoch boundaries a notch filter was also applied to remove the hz line noise automatic artifact removal aar the eeg data of significance is usually mixed with huge amounts of useless data produced by physiological artifacts that masks the eeg signals these artifacts include eye and muscle movements and they constitute a challenge in the field of bci research aar automatically removes artifacts from eeg data based on blind source separation and other various algorithms the aar toolbox was implemented as an eeglab in matlab and was used to process our eeg data subset on two stages electrooculography eog removal using the blind source separation bss algorithm then electromyography emg removal using the same algorithm fig electrodes of the international system for eeg epoch extraction splitting after the aar process the continuous eeg data were epoched by extracting data epochs that are time locked to specific event types p a g e ijacsa international journal of advanced computer science and applications vol no when no sensory inputs or motor outputs are being processed the mu hz and beta hz rhythms are said to be synchronized these rhythms are electrophysiological features that are associated with the brain s normal motor output channels while preparing for a movement or executing a movement a desynchronization of the mu and beta rhythms occurs which is referred to as erd and it can be extracted seconds before onset of movement as depicted in fig later these rhythms synchronize again within seconds after movement and this is referred to as ers on the other hand delta rhythms can be extracted from the motor cortex within the stage and this is referred to mrcp the slow less than hz mrcp is associated with an negativity that occurs seconds before the onset of movement in our experiments we extracted events with type left hand or type right hand with different epoch limits and types of analysis erd analysis epoch limits from to seconds ers analysis epoch limits from to seconds mrcp analysis epoch limits from to seconds each run and mrcp for both left and right hand movements for each subject practical implementation and results a feature vectors construction and numerical representation after the eeg datasets were analyzed as described in the previous section the activation vectors were calculated for each of the resulted epochs datasets as the multiplication of the ica weights and ica sphere for each dataset subtracting the mean of the raw data from the multiplication results then the mean power and energy of the activations were calculated to construct the feature vectors for each subject s single run feature vectors were extracted as power features mean features energy features type feature side target resulting in a feature matrix the constructed features were represented in a numerical format that is suitable for use with machine learning algorithms every column in the features matrices was normalized between and such that the datasets 
could be inputted to the learning algorithms described in the next subsection b machine learning algorithms in this work neural networks nns and support vector machines svms algorithms were optimized for the purpose of classifying eeg signals into right and left hand movements a detailed description of these learning algorithms can be found in and the matlab neural networks toolbox was used for all nn experiments the number of input features features determined the number of input nodes for nn and the number of different target functions output left or right determined the number of output nodes training was handled with the aid of the learning algorithm fig epoch extraction and mrcp independent component analysis ica after the aar process ica was used to parse the underlying electrocortical sources from eeg signals that are affected by artifacts data decomposition using ica changes the basis linearly from data that are collected at single scalp channels to a spatially transformed virtual channel basis each row of the eeg data in the original scalp channel data represents the time course of accumulated differences between source projections to a single data channel and one or more reference channels eeglab was used to run ica on the described epoched datasets left and right erd ers and mrcp for the channels fcz cz and rhythm isolation a short iir band pass filter from to hz was applied on the epoched datasets of the experiment for the purpose of isolating rhythms another short iir lowpass filter of hz was applied on mrcp epoched datasets for isolating delta rhythms the result of this was files for all svm experiments were carried out using the mysvm software svm can be performed with different kernels and most of them were reported to provide similar results for similar applications so the anovakernel svm was used in this work optimisation and results in all experiments samples were randomly selected and used for training and the remaining for testing this was repeated times and in each time the datasets were randomly mixed for each experiment the number of hidden nodes for nn varied from to in svm each of the degree and gamma parameters varied from to the mean of the accuracy was calculated for each ten pairs the features that were used as inputs to nn and svm are symbolized as follows p the power m the mean e the energy p a g e ijacsa international journal of advanced computer science and applications vol no x the sample type the results of the experiment are summarized in the table i table nn features all p x m x e x p m x m e x p e x accuracy results for experiment svm hidden layers accuracy degree gamma it is clear from the testing results that svm outperforms nn in most experiments an svm topology of degree and gamma provides an accuracy of if tested with the power energy and type inputs of the experiment a nn of hidden layers can provide an accuracy of if all features are used these results clearly show that the use of advanced feature extraction techniques provides good and clear properties that can be translated using machine learning into machine commands the next best svm performance is achieved using the energy and type features in general there has been an increase in the classification performance with the use of more discriminative features such as the total energy compared to the power and mean inputs vi conclusions and future research this paper focuses on the classification of eeg signals for right and left fist movements based on a specific set of features very good 
results were obtained using nns and svms showing that offline discrimination between right and left movement for executed hand movements is comparable to leading bci research our methodology is not the best but is somewhat a simplified efficient one that satisfies the needs for researchers in field of neuroscience in the near future we aim to develop and implement our system in online applications such as health systems and computer games in addition more datasets has to be analyzed for a better knowledgeable extraction and more accurate decision rules acknowledgment the authors would like to acknowledge the financial support received from applied science university that helped in accomplishing the work of this article references donoghue connecting cortex to machines recent advances in brain interfaces nature neuroscience supplement vol pp levine huggins bement kushwaha schuh passaro rohde and ross identification of electrocorticogram patterns as the basis for a direct brain interface journal of clinical neurophysiology vol pp vallabhaneni wang and b he interface in neural engineering b he ed springer us pp wolpaw birbaumer mcfarland pfurtscheller and vaughan interfaces for communication and control clinical neurophysiology vol pp niedermeyer and da silva electroencephalography basic principles clinical applications and related fields lippincott williams wilkins sleight pillai and mohan classification of executed and imagined motor movement eeg signals ann arbor university of michigan pp graimann pfurtscheller and allison interfaces a gentle introduction in interfaces springer berlin heidelberg pp selim wahed and kadah machine learning methodologies in interface systems in biomedical engineering conference cibec cairo pp grabianowski how interfaces work http smith salvendy krauledat dornhege curio and blankertz machine learning and applications for braincomputer interfacing in human interface and the management of information methods techniques and tools in information design vol springer berlin heidelberg pp vidal toward direct communication annual review of biophysics and bioengineering vol pp pfurtscheller neuper flotzinger and pregenzer eegbased discrimination between imagination of right and left hand movement electroencephalography and clinical neurophysiology vol pp sepulveda control of robot navigation in advances in robot navigation barrera ed intech mohamed towards improved eeg interpretation in a sensorimotor bci for the control of a prosthetic or orthotic hand in faculty of engineering master of science in engineering johannesburg universityof witwatersrand su qi luo wu yang li zhuang zheng and chen a hybrid interface control strategy in a virtual environment journal of zhejiang university science c vol pp wang hong gao and gao implementation of a braincomputer interface based on three states of motor imagery in annual international conference of the ieee engineering in medicine and biology society pp guger harkam hertnaes and pfurtscheller prosthetic control by an computer interface bci in aaate european conference for the advancement of assistive technology germany j kim hwang cho and han single trial discrimination between right and left hand movement with eeg signal in proceedings of the annual international conference of the ieee engineering in medicine and biology society cancun mexico pp goldberger amaral glass hausdorff ivanov mark mietus moody peng and stanley physiobank physiotoolkit and physionet components of a new research resource for complex physiologic signals 
circulation vol pp schalk mcfarland hinterberger birbaumer and wolpaw a interface bci system ieee transactions on biomedical engineering vol pp deecke weinberg and brickett magnetic fields of the human brain accompanying voluntary movements bereitschaftsmagnetfeld experimental brain research vol pp neuper and pfurtscheller evidence for distinct beta resonance frequencies in human eeg related to specific sensorimotor cortical areas clinical neurophysiology vol pp p a g e ijacsa international journal of advanced computer science and applications vol no delorme and makeig eeglab an open source toolbox for analysis of eeg dynamics journal of neuroscience methods vol pp bartels and automatic artifact removal from eeg a mixed approach based on double blind source separation and support vector machine in annual international conference of the ieee engineering in medicine and biology society embc pp automatic artifact removal aar toolbox for matlab in transform methods for electroencephalography eeg http joyce gorodnitsky and kutas automatic removal of eye movement and blink artifacts from eeg data using blind component separation psychophysiology vol pp bashashati fatourechi ward and birch a survey of signal processing algorithms in interfaces based on electrical brain signals journal of neural engineering vol pp vuckovic and sepulveda delta band contribution in cue based single trial classification of real and imaginary wrist movement medical and biological engineering and computing vol pp gu dremstrup and farina discrimination of type and speed of wrist movements from eeg recordings clinical neurophysiology vol pp gwin and ferris eeg and independent component analysis mixture models distinguish knee contractions from ankle contractions in annual international conference of the ieee engineering in medicine and biology society embc boston usa pp makeig j bell jung and sejnowski independent component analysis of electroencephalographic data advances in neural information processing systems vol pp delorme and makeig single subject data processing tutorial decomposing data using ica in the eeglab tutorial http qahwaji colak and ipson machine learningbased investigation of the associations between cmes and filaments solar physics vol pp qahwaji colak and ipson automated machine learning based prediction of cmes based on flare associations sol phys vol qahwaji and colak automatic solar flare prediction using machine learning and sunspot associations solar vol pp qahwaji colak and ipson using the real gentle and modest adaboost learning algorithms to investigate the computerised associations between coronal mass ejections and filaments in mosharaka international conference on communications computers and applications mosharaka for researches and studies amman jordan pp fahlmann and lebiere the learning architecture in advances in neural information processing systems denver colorado university of dortmund lehrstuhl informatik p a g e
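As a concrete illustration of the feature-construction and classification stage described in section v above, here is a minimal Python sketch. It assumes the epoched ICA activations have already been computed upstream (the paper does the preprocessing in MATLAB with EEGLAB and the AAR toolbox); scikit-learn's SVC and MLPClassifier stand in for mySVM and the MATLAB neural-network toolbox, a polynomial kernel replaces the ANOVA kernel, and the 70/30 split and the degree, gamma and hidden-layer values are illustrative placeholders rather than the paper's exact settings.

# minimal sketch (not the authors' code): mean power, mean and energy features of
# epoched ica activations, normalised to [0, 1] per column, then classified with
# an svm and a small neural network.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def build_features(activations):
    # activations: array of shape (n_epochs, n_components, n_samples)
    power = (activations ** 2).mean(axis=2)   # mean power of each ic activation
    mean = activations.mean(axis=2)           # mean of each ic activation
    energy = (activations ** 2).sum(axis=2)   # total energy of each ic activation
    return np.hstack([power, mean, energy])   # one feature row per epoch

def classify(activations, labels):
    # labels: 0 for left-fist epochs, 1 for right-fist epochs
    features = MinMaxScaler().fit_transform(build_features(activations))
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, train_size=0.7, shuffle=True)  # illustrative 70/30 split
    svm = SVC(kernel="poly", degree=3, gamma=1.0).fit(x_train, y_train)      # poly kernel as a stand-in for anova
    nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(x_train, y_train)
    return svm.score(x_test, y_test), nn.score(x_test, y_test)

Repeating the random split several times and averaging the two scores, as done in the paper's experiments, would reproduce the kind of accuracy table reported in section v.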
| 9 |
mar a concentration inequality for the excess risk in regression with random design and heteroscedastic noise adrien saumard bretagne loire march abstract we prove a new and general concentration inequality for the excess risk in regression with random design and heteroscedastic noise no specific structure is required on the model except the existence of a suitable function that controls the local suprema of the empirical process so far only the case of linear contrast estimation was tackled in the literature with this level of generality on the model we solve here the case of a quadratic contrast by separating the behavior of a linearized empirical process and the empirical process driven by the squares of functions of models keywords regression excess risk empirical process concentration inequality margin relation introduction the excess risk of a is a fundamental quantity of the theory of statistical learning consequently a general theory of rates of convergence as been developed in the nineties and early however it has been recently identified that some theoretical descriptions of learning procedures need finer controls than those brought by the classical upper bounds of the excess risk in this case the derivation of concentration inequalities for the excess risk is a new and exiting axis of research of particular importance for obtaining satisfying oracle inequalities in various contexts especially linked to high dimension in the field of model selection it has been indeed remarked that such concentration inequalities allow to discuss the optimality of model selection procedures more precisely concentration inequalities for the excess risk and for the excess empirical risk are central tools to access the optimal constants in the oracle inequalities describing the model selection accuracy such results have put to evidence the optimality of the slope heuristics and more generally of selection procedures based on the estimation of the minimal penalty in statistical frameworks linked to regularized quadratic estimation under similar assumptions it is also possible to discuss optimality of resampling and type procedures in high dimension convex methods allow to design and compute efficient estimators this is the reason why chatterjee has recently focused on the estimation of the mean of a high dimensional gaussian vector under convex constraints by getting a concentration inequality for the excess risk of a projected estimator chatterjee has proved the universal admissibility of this estimator the concentration inequality has then been sharpened and extended to the excess risk of estimators minimizing penalized convex criteria it is also well known see for instance that a weakness of the theory of regularized estimators in a sparsity context is that classical oracle inequalities such as in describe the performance of estimators with an amount of regularization that actually depends on the confidence level considered in the oracle inequality this does not correspond to any practice with this kind of estimators whose regularization parameter is usually fixed using a procedure recently bellec and tsybakov building on have established more satisfying oracle inequalities describing the performance of regularized estimators such as lasso group lasso and slope with a confidence level independent of the regularization parameter in particular the oracle inequalities can be integrated again the central tool to obtain such bound is a concentration inequality for the excess risk estimator at hand in 
this paper we extend the technology developed in in order to establish a new concentration inequality for the excess risk in regression with random design and heteroscedastic noise this is appealing since and cover only regression with fixed design and homoscedastic gaussian noise while have to assume that the law of the design is known in order to perform a linearized regression see section of our strategy is as follows we first remark that the empirical process of interest splits into two parts a linear process and a quadratic one then we prove that the linear process achieves a second order margin condition as defined in and put meaningful conditions on the quadratic process in order to handle it techniques from empirical process theory such as talagrand s type concentration inequalities and contraction arguments are at the core of our approach the paper is organized as follows the regression framework as well as some properties linked to margin relations are described in section then we state our main result in section the proofs are deferred to section heteroscedastic regression with random design setting n let xi yi be an sample taking values in x r where x is a measurable space typically a subset of rp we assume that the following relation holds yi xi xi for i n where is the regression function is the heteroscedastic noise level and e e we take a closed convex model g p x where p x is the common distribution of the s and set g arg min p g where p is the common distribution of the pairs xi yi and is the contrast defined by g x y y g x we will also denote fg g f g f g the function g will be called the of the regression function onto the model indeed if we denote the quadratic norm in p x it holds g min kg k we consider the estimator over g defined to be arg min pn g where pn pn xi yi is the empirical measure associated to the sample we want to assess the concentration of the quantity p f p g called the excess risk of the estimator on g around a single deterministic point to this end it is easy to see remark section or van de geer and wainwright that the following representation formula holds for the excess risk on g in terms of empirical process r n o p f arg min s where s max pn p f f f with fs f f p f f it is shown in for various settings that include linearized regression that the quantity actually concentrates around the following point i h arg min e s e s e s on a relation pointed on the projection of the regression function in order to prove concentration inequalities for the excess risk on g we will need to check the following relation also called quadratic curvature condition in there exists a constant c such that f f for all f f p f where f ef ef is the variance of f with respect to p x a very classical relation in statistical learning called margin relation consists in assuming that holds with f is replaced by the image of the target such relation is satisfied in regression whenever the response variable y is uniformly bounded here we do not assume that belongs to f thus f may be different from condition there exists such that from condition we deduce that and condition there exists such that sup from conditions and we deduce that the image model f is also uniformly bounded there exists k such that sup f f k f more precisely k is convenient the following proposition shows that relation is satisfied in our regression setting whenever the response variable is bounded and the model g is convex and uniformly bounded it can also be found in proposition proposition if the model g 
is convex and conditions and hold then there exists a constant c such that f f p f for all f f furthermore c is convenient the major gain brought proposition over the classical margin relation is that the bias of the model that is the quantity p f that is implicitly contained in the excess risk appearing in the classical margin relation is pushed away from inequality proposition is thus a refinement over the classical notion of margin relation it is stated for the contrast but it is easy to see that it can be extended to more general situations where the contrast is convex and regular in some sense see proposition section in for completeness the proof of proposition can be found in section second order quadratic margin condition first notice that the arguments of the empirical process of interest can be decomposed into a linear and a quadratic part it holds for any f fg f and any x y x fg x y f x y g x y g x y x y g g x g g x where x y y g x to this contrast expansion around the projection g of the regression function onto g we can associate two empirical processes that we will call respectively the linear and the quadratic empirical process and we will be more precisely interested by their local maxima on gs g g g g s s n o and q s max pn p g g s max pn p g g in what follows we will not directly show that the excess risk concentrates around defined in but rather around a point defined to be h i arg min s s e s it holds around a relation of the type of a second order margin relation introduced in as proved in the following lemma which proof is available in section lemma for any s it holds and also s s h i s s the fundamental difference with the second order margin relation stated in is that we require in lemma conditions on the linear part of empirical process and not on the empirical process of origin that takes in arguments contrasted functions indeed it seems that for the latter empirical process the second order margin relation does not hold or is hard to check in regression in general this difficulty indeed forced van de geer and wainwright section to work in a linearized regression context under the quite severe restriction that the distribution of design is known from the statistitian on contrary our main result stated in section below is stated for a general regression situation where the distribution of the design is unknown and the noise level is heteroscedastic main result before stating our new concentration inequality we describe the required assumptions in order to state the next condition let us denote h i s e s s max pn p g g condition there is a sequence mn and a strictly increasing function such that the function is strictly convex and such that u u u s s s mn condition take k for any s there exists a constant d s k such that sup g g d s notice that if conditions and hold then by the use of classical symmetrization and contraction arguments for all s s s s and eq s s mn mn where is a positive constant that only depends on and from now on we set j max k so that max s eq s j s for any s and also eq s d s j s we are now able to state our main result theorem if j s aj s d s s mn then it holds for any t p g hence if moreover aj r n n t ln k n t ln k n n n ln n n and aj n then aj ln n op g op inequality of theorem is a new concentration inequality related regression with random design and heteroscedastic noise on a convex uniformly bounded model in particular it extends results of related to linearized regression which is a simplified framework for regression to the classical 
and general framework described in section above the proof of theorem is detailed in section the following corollary provides an generic example entering into the assumptions of theorem and that is related to linear aggregation via empirical risk minimization d define m span the linear span generated by an orthonormal dictionary in p x take g b m g m the unit ball in of m centered on g the projection of onto assume that sup cm d and ln n d then if p g op d n ln n r ln n d op note that inequality relating the to the quadratic norm of the functions in the linear span of the dictionary is classical in estimation and is satisfied for the usual functional bases such as the fourier basis wavelets or piecewise polynomials over a regular partition including histograms see for instance in particular corollary extends a concentration inequality recently obtained in for the excess risk of the erm in the linear aggregation problem when the dictionary at hand is the fourier dictionary proofs proofs related to section proof of proposition take f fg f then on the one hand p f f f h i e g g x y e g g x g x x g g g x y on the other hand p f e e g g x g x x e g g x y g x x g x g g y g x g g x g g g x g g x g the latter inequality which corresponds to e g x g g x from the fact that g is convex and so g being the projection of onto g the scalar product in p x between the functions g and g g is nonpositive combining and now gives the result proof of lemma inequality derives from by taking expectation on both sides concerning the proof of it is easily seen that the function s s is concave indeed take for si i gn si arg max pn p g g for any if gb then by the triangular inequality gb which gives pn p gb g now from the concavity of s s we deduce that the function s s is convex which implies proofs related to section proof of theorem we prove the concentration of g at the right of and arguments will be of the same type for the deviations at the left take t j and set for any j j the intervals ij j we also set z t it holds for any to be chosen later r t n r kt t n n p k p k s p k s e z t where in the last inequality we used lemma furthermore by setting for all j j pj p ij s e z t a union bound gives j x pj p k s e z t now for each index j and for all s ij we have s j furthermore it holds for all u with probability r u z u n eq r u z u n e where the first inequality comes from lemma by lemma we then have j eq d j e d j j putting the previous estimates in we get for all s ij s e d j r u c z u n e d j r u z u c n with probability we require that j using the assumptions it is equivalent to require aj n whenever aj n the last display is true if aj we also require that r u c z u z t n to finish the proof we fix n and u t ln k n in particular j k n and j x pj our conditions on t become for a constant only depending on c and k r aj t ln k n t ln k n t n n since proof of corollary under assumption we have r d max pn p g g s e n where we used two times inequality hence mn furthermore by assumption we have sup g g cm s d n and j s s d are convenient we can thus apply theorem with d s cm s consequently condition ln n n turns into ln n d n and condition aj n is satisfied whenever d in the following theorem see theorem in the inequalities are direct applications of bousquet and the inequalities are deduced from klein and rio theorem if conditions and are satisfied and we set k sup f f f s sup f f f then it holds t p s e s s n r kt t p s e s s s n n r using proposition and conditions we can simplify the bounds given in theorem as 
follows lemma if conditions and are satisfied then with the same notations as in theorem and also mn c we have r t t p s e s n n r r t t kt p s e s n n n r proof we have c where the constant c is defined in proposition furthermore using the fact that uv u v for any u v we get j s s s mn mn c the conclusion is then easy to obtain by using a b a references arlot and bach calibration of linear estimators with minimal penalties in bengio schuurmans lafferty williams and culotta editors advances in neural information processing systems pages arlot and lerasle choice of v for v in density estimation mach learn to appear arlot and massart calibration of penalties for regression mach learn electronic arlot v improved v penalization february barron and massart risk bounds for model selection via penalization probab theory related fields bellec and a tsybakov towards the study of least squares estimators with convex penalty arxiv preprint bellec and a tsybakov slope meets lasso improved oracle bounds and optimality ann to appear and massart minimal penalties for gaussian model selection probab theory related fields boucheron and massart a wilks phenomenon probab theory related fields baudry maugis and michel slope heuristics overview and implementation stat bousquet a bennett concentration inequality and its application to suprema of empirical processes math acad sci paris bickel ritov and a tsybakov simultaneous analysis of lasso and dantzig selector ann bellec and a tsybakov bounds on the prediction error of penalized least squares estimators with convex penalty in vladimir panov editor modern problems of stochastic analysis and statistics selected contributions in honor of valentin konakov springer to appear celisse optimal in density estimation with the ann chatterjee a new perspective on least squares under convex constraint koltchinskii oracle inequalities in empirical risk minimization and sparse recovery problems volume of lecture notes in mathematics springer heidelberg lectures from the probability summer school held in d de de probability summer school ann klein and rio concentration around the mean for maxima of empirical processes ann interplay between concentration complexity and geometry in learning theory with applications to high dimensional data analysis habilitation diriger des recherches december lerasle optimal model selection for density estimation of stationary data under various mixing conditions ann massart concentration inequalities and model selection volume of lecture notes in mathematics springer berlin lectures from the summer school on probability theory held in july with a foreword by jean picard muro and van de geer concentration behavior of the penalized least squares estimator arxiv preprint to appear in statistica neerlandica navarro and saumard slope heuristics and model selection in heteroscedastic regression using strongly localized bases esaim probab saumard regular contrast estimation and the slope heuristics phd thesis rennes october https saumard optimal upper and lower bounds for the true and empirical excess risks in heteroscedastic regression electron j adrien saumard on optimality of empirical risk minimization in linear aggregation bernoulli van de geer and wainwright on concentration for regularized empirical risk minimization sankhya a
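To fix notation for the splitting of the empirical process used in section 2.2 above, the contrast decomposition around the projection g_* of the regression function onto the convex model G (with ||·|| the L2(P_X) norm of the paper) can be written out as follows. The algebra is the standard one for the least-squares contrast; the sign and localisation conventions are assumptions made here for readability and may differ slightly from the paper's displays.

\begin{align*}
\gamma(g,(x,y)) - \gamma(g_*,(x,y))
&= \bigl(y-g(x)\bigr)^2 - \bigl(y-g_*(x)\bigr)^2\\
&= -2\,\psi(x,y)\,(g-g_*)(x) + (g-g_*)^2(x),
\qquad \psi(x,y) := y - g_*(x),
\end{align*}
so that, with the localised class $G_s := \{\, g \in G : \|g-g_*\|^2 \le s \,\}$, the linear and quadratic parts of the empirical process are controlled through
\begin{align*}
D_s := \sup_{g\in G_s}\,(P_n-P)\bigl(\psi\cdot(g_*-g)\bigr),
\qquad
Q_s := \sup_{g\in G_s}\,(P_n-P)\bigl((g-g_*)^2\bigr).
\end{align*}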
| 10 |
on geodesic ray bundles in buildings mar abstract let x be a building identified with its davis realisation in this paper we provide for each x x and each in the visual boundary of x a description of the geodesic ray bundle geo x namely of the union of all combinatorial geodesic rays corresponding to infinite minimal galleries in the chamber graph of x starting from x and pointing towards when x is locally finite and hyperbolic we show that the symmetric difference between geo x and geo y is always finite for x y x and this gives a positive answer to a question of huang sabok and shinko in the setting of buildings combining their results with a construction of bourdon we obtain examples of hyperbolic groups g with kazhdan s property t such that the on its gromov boundary is hyperfinite introduction this paper is motivated by a question of huang sabok and shinko question asking whether in a proper and cocompact hyperbolic space x the symmetric difference between two geodesic ray bundles pointing in the same direction is always finite see below for precise definitions this is motivated by the study of borel equivalence relations for the action of a hyperbolic group g on its gromov boundary the authors of give a positive answer to the above question when x is a cat cube complex and deduce that if g is a hyperbolic cubulated group namely if g acts properly and cocompactly on a cat cube complex then the on its gromov boundary is hyperfinite that is it induces a hyperfinite equivalence relation corollary as it turns out the answer to question is no in full generality are constructed in the purpose of this paper is to give a positive answer to this question when x is a hyperbolic locally finite building we underline that the class of groups acting properly and cocompactly on hyperbolic locally finite buildings includes groups with kazhdan s property t and is thus significantly different from the class of cubulated hyperbolic groups considered in see the fixed point theorem we now give a precise statement of our main result by a classical result of davis any building can be realised as a complete cat metric space x d which can be viewed as a subcomplex of the barycentric subdivision of the standard geometric realisation of let x x denote the set of barycenters of chambers of x that is x is the of the chamber graph of the boundary of x is the set of equivalence classes of asymptotic geodesic rays in x see section for precise definitions we denote for each x x and by geo x x the union of all combinatorial geodesic rays xn x if xn is the barycenter of the chamber cn postdoctoral researcher marquis then cn in an infinite minimal gallery in starting at x and pointing towards in the sense that is contained in a tubular neighbourhood of some geodesic ray towards the sets geo x are called geodesic ray bundles in this paper we give a description of geodesic ray bundles in arbitrary buildings see section and proposition when the building x is gromov hyperbolic and locally finite we deduce from this description the following theorem theorem a let x be a locally finite hyperbolic building let x y x and let then the symmetric difference of geo x and geo y is finite as an immediate consequence of theorem a and of theorem we deduce the following corollary corollary b let g be a group acting cocompactly on a locally finite hyperbolic building x and assume that g acts freely on the chambers of x then the natural action of g on its gromov boundary is hyperfinite in see also bourdon constructs a family of groups g 
with property t acting cocompactly on some hyperbolic building x these groups g are defined as fundamental groups of some complexes of groups a standard reference to this topic is and it follows straightaway from the form of the complexes of groups involved that g acts freely on the set of chambers of x and that x is locally finite another example of such a group with an explicit short presentation also recently appeared in in particular corollary b yields examples of hyperbolic groups with property t whose boundary action is hyperfinite corollary there exist infinite hyperbolic groups g with property t such that the on its gromov boundary is hyperfinite note that any group with property t that acts on a cat cube complex has a global fixed point in particular theorem a covers situations that are not covered by see also the last paragraph in the introduction of acknowledgement i would like to thank caprace for bringing question to my attention and for suggesting to explore it in the context of buildings i would also like to thank the anonymous referee for precious comments preliminaries cat and gromov hyperbolic spaces the standard reference for this paragraph is let x d be a complete cat namely a complete geodesic metric space in which every triangle is at least as thin as the corresponding triangle in euclidean space e with same side lengths in the sense that any two points x y of are at distance at most de y from one another where y are the points on corresponding to x y respectively and de is the euclidean distance on given two points x y x there is a unique geodesic segment from x to y which we denote x y a geodesic ray based at x x is an isometry r x with r x two geodesic rays r are called asymptotic if d r t t on geodesic ray bundles in buildings equivalently identifying r with their image in x they are asymptotic if they are at bounded hausdorff distance from one another that is if r resp is contained in a tubular neighbourhood of resp r we recall that a tubular neighbourhood of a subset s of x is just an of s for some the boundary of x denoted is the set of equivalence classes r of geodesic rays r x where two geodesic rays are equivalent if they are asymptotic we then say that the geodesic ray r points towards r for each x x and there is a unique geodesic ray starting at x and pointing towards which we denote x the space x is called gromov hyperbolic if there is some such that each triangle in x is in the sense that each side of is contained in a of the other two x is then also called hyperbolic spaces can be thought of as fattened versions of trees and their behavior is somehow opposite to that of a euclidean space if the cat space x is proper every closed ball of x is compact and cocompact there is some compact subset c x such that isom x x then x is hyperbolic if and only if it does not contain a subspace isometric to the euclidean plane there is a notion of gromov boundary of a hyperbolic space in the context of cat spaces it coincides with the boundary defined above endowed with the cone topology see buildings the standard reference for this paragraph is let be a building viewed as a simplicial complex see chapter let ch denote the set of chambers maximal simplices of a panel is a codimension simplex of two chambers are adjacent if they share a common panel a gallery between two chambers c d ch is a sequence c ck d of chambers such that and ci are distinct and adjacent for each i the integer k is called the length of if is a gallery of minimal length between c and d it is 
called a minimal gallery and its length is denoted dch c d the map dch ch ch n is then a metric called the chamber distance on an infinite sequence ci of chambers is called a minimal gallery if cn is a minimal gallery for each n any such is contained in an apartment a of let a be an apartment of and let c d ch a be distinct adjacent chambers in a then no chamber of a is at equal chamber distance from c and d this yields a partition ch a c d d c where c d is the set of chambers that are closer to c than to the subcomplexes of a with underlying chamber sets c d and d c are called or roots and their intersection is called the wall separating c from if m is a wall delimiting the of a we say that two subsets and are separated by a gallery ck resp ci contained in an apartment a is said to cross a wall m of a if m is the wall separating from ci for some i k resp i n the gallery a is then minimal if and only if it crosses each wall of a at most once moreover if c d ch a then the set of walls crossed by a minimal gallery from c to d depends only on c d it is independent of the choice of if a is an apartment of and c ch a there is a simplicial map c a called the retraction onto a centered at c with the following properties marquis c is the identity on a and its restriction to any apartment containing c is an isomorphism with inverse c a in particular c preserves the minimal galleries from c moreover c does not increase the distance dch c d c e dch d e for all d e ch the set of all panels of is denoted the star of a panel denoted st is the set of chambers containing for any panel and any chamber c ch there is a unique chamber c in st minimising the gallery distance from c to st it is called the projection of c on and is denoted c c it has the following gate property dch c d dch c c dch c d for all d st the building is called locally finite if st is a finite set of chambers for each davis realisation of a building the standard reference for this paragraph is chapter see also let be a building then admits a cat x d called the davis realisation of which is a complete cat space it can be viewed as a subcomplex of the barycentric subdivision of the standard geometric realisation of and contains the barycenter of each chamber and panel of in the sequel we will often identify with its davis realisation x and all related notions apartment chamber panel gallery wall with their realisation in x viewed as closed subspaces of x we set x xc c ch ch x x where xc x denotes the barycenter of the chamber if a is an apartment of x we also set a a x a combinatorial path between xc xd x is a piecewise geodesic path x which is the union of the geodesic segments xci i k for some gallery c ck d we then write xci thus combinatorial paths in x are in bijection with galleries in a combinatorial geodesic is a combinatorial path corresponding to a minimal gallery in one defines similarly infinite combinatorial paths and combinatorial geodesic rays abbreviated cgr by replacing galleries in with infinite galleries if then a combinatorial geodesic ray from x x to is a combinatorial geodesic ray starting at x and at bounded hausdorff distance from some any geodesic ray pointing towards we denote by cgr x the set of cgr from x x to if is a combinatorial geodesic from some x x to some y x and if is a combinatorial geodesic resp ray from y to some z x resp z we denote by the combinatorial path obtained as the concatenation of and each geodesic segment resp geodesic ray of x is contained in some minimal gallery and hence also in some 
apartment a of x in particular is covered by the boundaries of all apartments a of conversely the uniqueness of geodesic rays implies that if x a and for some apartment a of x then x a of course any combinatorial geodesic ray is also contained in some apartment of x for every apartment a and x xc a the retraction c a induces a retraction x x a with the same properties as the on geodesic ray bundles in buildings ones described in moreover d x y x z d y z for all y z x with equality if y belongs to the closed chamber c x let a be an apartment of x here are a few important properties of walls in a which can be found in see also a wall m of a that intersects a geodesic resp geodesic ray in more than one point entirely contains that geodesic resp geodesic ray in particular m is convex the subset of a has two connected components the open corresponding to m and those components are convex as we saw in a combinatorial path xci resp xci contained in a is a combinatorial geodesic resp a cgr if and only if it crosses each wall of a at most once note that x is a proper cat space if and only if it is locally finite the building x is called hyperbolic if it is hyperbolic in the sense of when equipped with the cat metric equivalently x is hyperbolic if and only if a d is hyperbolic for some resp for each apartment a of x as readily follows from the properties of retractions onto apartments note that moussong gave a characterisation of the hyperbolicity of x in terms of the type w s of see theorem x is hyperbolic if and only if wj hji j is not an affine coxeter system whenever and there is no pair of disjoint subsets i j s such that wi and wj are infinite and commute the only fact that we will need about hyperbolic buildings however is the following lemma assume that the building x is hyperbolic then there is a constant k such that for any x x and any cgr x is contained in a of x proof by proposition combinatorial geodesics are for the cat metric d so that the lemma follows from theorem here is also a basic useful fact about combinatorial geodesic rays lemma let x x and let xn be a cgr then there exists some k n such that xn is a cgr for any combinatorial geodesic from x to xk proof reasoning inductively we may assume that x and are adjacent if x is a cgr the claim is clear with k otherwise there is some m such that the combinatorial path x xm is not a combinatorial geodesic so that dch x xm let be any combinatorial geodesic from x to xm if xn is a cgr we are done with k otherwise there is some k m such that the combinatorial path xm xk is not a combinatorial geodesic so that dch x xk k let be any combinatorial geodesic from x to xk we claim that xn is a cgr yielding the lemma indeed otherwise there is some k such that the combinatorial path xn is not a combinatorial geodesic and hence dch x dch x x dch x xk xk x a contradiction combinatorial bordification of a building in this section we recall the notion of combinatorial bordification of a building introduced in and relate it to the notions introduced in section marquis let be a building as in recall that for each panel we have a projection map ch st ch associating to each chamber c the unique chamber of st closest to this defines an injective map ch y st c c q we endow st with the product topology where each star st is a discrete set of chambers the minimal combinatorial bordification of is then defined as the closure y ch st since is injective we may identify ch with a subset of and it thus makes sense to say that a sequence of chambers cn converges 
to some ch in if is reduced to a single apartment a this notion of convergence is transparent cn converges in a if and only if for every wall m of a the sequence cn eventually remains on the same side of on the other hand back to a general one can identify a a an apartment with the subset of consisting of limits of sequences of chambers in a and in fact see proposition a a apartment let c ch and let cn be a sequence of chambers converging to some we define the combinatorial sector based at c and pointing towards as q c conv c cn where conv c cn denotes the union of all minimal galleries from c to cn then q c indeed only depends on c and and not on the choice of sequence cn converging to and is contained in some apartment note also that if c ch is contained in q c then q c q c example let x be a building of type the apartments of x are then euclidean planes tesselated by congruent equilateral triangles if a is an apartment of x its bordification a consists of lines of points and isolated points see example which can be seen as follows let x a and if the direction is in the sense that x is not contained in a tubular neighbourhood of any wall of a then for any xn cgr x contained in a the sequence of barycenters of chambers xn converges in a to a unique a the sector q x in a is shown on figure if is singular that is if x is contained in a tubular neighbourhood of some wall of a then the set of a obtained as above as the limit of some cgr x are the vertices of some simplicial line at infinity see the dashed line on figure and the combinatorial sectors q x for on this line are represented on figure on geodesic ray bundles in buildings figure direction to see what combinatorial sectors look like we relate them to the notions introduced in as in that paragraph we identify with its davis realisation x to avoid cumbersome notations we also identify the chambers of with their barycenters in x ch with x this thus also identifies the notions of minimal resp infinite gallery and of combinatorial geodesic resp ray for each and each apartment a of x we let denote the set of walls m of a containing in their boundary m contains a geodesic ray towards we also let be the set of apartments of x with for we next define an equivalence relation on x as follows for x y x distinct adjacent chambers we write x y if for any apartment a containing x and y the wall of a separating x from y does not belong to we also write x x for x x so that becomes a symmetric and reflexive relation on x we then let be the transitive closure of for any x x we now let x x be the subcomplex of x obtained as the union of all chambers y x with y x note that x y y x y x for any y x we start by making some useful observations about the relation lemma let x y x and and assume that there exists an apartment of containing x y then x y if and only if there exists an apartment a containing x y such that the wall of a separating x from y does not belong to proof the implication is clear conversely let a be an apartment containing x y and such that the wall m of a separating x from y does not belong to and assume for a contradiction that there is an apartment containing x y such that the wall of separating x from y belongs to let z be the barycenter of the common panel of x and y then z by definition of buildings there is a simplicial isomorphism a fixing marquis a pointwise since z by assumption and m we deduce that z z hence m a contradiction the next lemma introduces some important terminology and notations for x x and we call a cgr cgr x 
straight if the infinite gallery corresponding to contains the geodesic ray x lemma let x x and then the following assertions hold let xn cgr x then the sequence of chambers xn converges in x we denote its limit by x x and we say that converges to let y x and let cgr x and cgr y be contained in some apartment a then if and only if and eventually lie on the same side of any given wall m if cgr x is straight it is contained in x and in every apartment a with x a moreover x x is independent of the choice of a straight cgr x proof since the cgr is contained in some apartment a this readily follows from the above description of convergence in a we have to show that if and eventually lie on different sides of a wall m of a then m but if and are cgrs separated by m then x which is contained in a tubular neighbourhood of and must be contained in a tubular neighbourhood of m as claimed let cgr x be straight and let a with x a thus a also contains x since x is not contained in any wall of a we deduce that a must contain infinitely many chambers of and hence also by convexity moreover since x does not intersect any wall in the cgr does not cross any wall in and hence x by lemma finally if cgr x is straight then both and are contained in a common apartment a by the above discussion and hence by we next give an alternative description of the sets x proposition let x x and then x x y x proof let y x we have to show that x y if and only if assume first that x y reasoning inductively on the length of a gallery x xn y from x to y such that xi for all i n there is no loss of generality in assuming that x y let xn cgr x be straight by lemma there is some k n and some combinatorial geodesic from y to xk such that xn cgr y let a be an apartment containing let cgr y be straight so that a by lemma we claim that does not cross any wall in this will imply that and do not cross any wall in and hence that by lemma as desired indeed if m separates y from xk and if is an apartment containing then the wall xk m of belongs to because xk does not increase the distance and hence xk on geodesic ray bundles in buildings xk xk a is contained in a tubular neighbourhood of both m and and separates xk from y xk y but since can not be crossed by xn it must separate the adjacent chambers x xk x and y now if is the of containing x and delimited by we find by exercise a an apartment containing and y then separates x from y in contradicting our hypothesis x conversely assume that and let us show that x y let xn cgr x and cgr y be straight by lemma there is some k n and some combinatorial geodesic from y to xk such that xn cgr y let a be an apartment containing then a by lemma note that the walls of a separating xk from y do not belong to for otherwise the cgrs xn and which do not cross any wall in would be separated by some wall of contradicting our assumption that hence x xk y by lemma yielding the claim we conclude our round of observations about the relation with the following consequences of proposition lemma let x x and then the following assertions hold let a be an apartment containing x and let y a then x y if and only if the walls of a separating x from y do not belong to let cgr x and let a be an apartment containing then converges to the walls of a crossed by do not belong to x proof the implication follows from lemma conversely assume that x y then by proposition that is for some straight cgr x and cgr y by lemma and are contained in a if now m separates x from y then it also separates from and hence a contradiction this 
readilfy follows from to get a better understanding of combinatorial sectors we first show that given an element x x one can choose a sequence of chambers xn of x converging to in a nice and controlled way here by nice we mean that xn can be chosen to be a cgr and by controlled we mean that we may impose further restrictions on lemma let x x then there is some and some straight yn cgr such that proof let a be an apartment of x with a let xn be a sequence of chambers of a converging to since the space a is proper the sequence of geodesic segments xn subconverges to some geodesic ray for some in other words up to extracting a subsequence we may assume that xn is contained in an a of for some we claim that there exists a finite subset s such that for each m s the neighbourhood is entirely contained in one of the delimited by indeed any wall in intersecting say in y contains the geodesic ray y see theorem and hence a marquis figure singular direction subray of in an on the other hand since a is locally finite there is some n n such that any ball of radius in a intersects at most n walls in particular there are at most n walls intersecting whence the claim recall that for any wall m of a the sequence xn eventually remains on the same side of m in particular there is some k n such that xn is entirely contained in some associated to m for each m hence for any n k the walls separating xk from xn do not lie in let yn cgr xk be straight thus xk then by lemma we know that xk a since for any n k and any wall m the chambers xn xk yn all lie on the same side of m we conclude as in the proof of lemma that as desired proposition let x x and x x then q x y x y is on a cgr starting from x and converging to proof let xn be a cgr with x and converging to to prove the inclusion we have to show that for any k n the chamber xk belongs to q x but as xk conv x xn for every n k this is clear conversely let y q x by lemmas and there exists a cgr xn starting from x and converging to let n n be such that y conv x xn since replacing the portion of between x and xn by some combinatorial geodesic from x to xn passing through y still yields a cgr the lemma follows we next wish to prove a refinement of proposition by relating the combinatorial bordification to the visual boundary of x on geodesic ray bundles in buildings for we define the transversal graph to x in the direction as the graph x with vertex set x x x x and such that with are adjacent connected by an edge if and only if there exist adjacent chambers y x such that and the elements of will also be called chambers and we define the notions of galleries and chamber distance in x as in note that by lemma any x x is of the form for some x x and that is x x example in the context of example assume that x consists of a single apartment a and that is a singular direction then x is a simplicial line the dashed line on figure the y for y a are stripes obtained as the convex hull of two adjacent walls of namely of walls of a in the direction of here is another description of in terms of cgr s proposition let x x and then x x for some cgr x proof the inclusion is clear conversely if for some xn cgr x then by lemma below there is some k n such that xn xk hence by lemma lemma let x x and for any xn cgr x the sequence is eventually constant in other words there is some k n such that xn xk proof let a be an apartment containing thus a and assume for a contradiction that is not eventually constant thus there are infinitely many walls mi i n crossed by see lemma since x a is not 
contained in any wall it does not intersect any wall mi i on the other hand is contained in an of x for some for each y let y x with d y y then there is some n n such that y y intersects at most n walls of a for any y now let y be such that the walls intersect the combinatorial geodesic from x to y contained in then the walls must all intersect the geodesic segment y y yielding the desired contradiction we can now formulate the announced refinement of proposition theorem let x x and for some then q x y x y is on a cgr from x to and converging to proof the inclusion follows from proposition the converse inclusion is proved exactly as in the proof of proposition the existence of a cgr from x to and converging to following from proposition marquis given x x and we next wish to show that the combinatorial sector q x is minimal in the direction in the sense that it is contained in every other combinatorial sector q x with see proposition below to this end we first need a more precise version of lemma by further improving our control of cgr s converging to a given x x lemma let x x and if y x is on some cgr from x to then y is on some cgr x converging to proof let be a cgr from x to passing through y and let resp be the combinatorial geodesic from x to y resp cgr from y to contained in so that let a be an apartment containing and let a be a straight cgr from y to and converging to in particular does not cross any wall in we claim that is a cgr yielding the lemma otherwise there is a wall m of a that is crossed by both and since m can not be crossed by it separates from for some cgr to contained in since and are at bounded hausdorff distance from one another this implies that m and hence that can not cross m a contradiction proposition let x x and then q x q x x y q x proof note that the second equality holds by proposition let y q x x then y lies on some cgr from x to by theorem and hence also on some cgr x converging to by lemma but since by proposition we then have y q x by theorem conversely if y q x then certainly y x by theorem and lemma and it remains to show that y q x let a be an apartment containing q x thus a note first that a contains q x indeed let xn cgr x be straight sso that converges to and a by lemma then t q x conv x xn a as claimed let now be a cgr from x to passing through y and converging to see theorem and let us show that y is also on a cgr from x to converging to and hence that y q x by theorem as desired write where is a combinatorial geodesic from x to y and a cgr from y to by lemma there is a cgr a from y to converging to we claim that is still a cgr as desired otherwise there is a wall m of a that is crossed by both and since m can not be crossed by it separates from for some cgr contained in since and are at bounded hausdorff distance from one another this implies that m so that can not cross m a contradiction to conclude this section we give a consequence of hyperbolicity for the building x in terms of the sets lemma assume that the building x is hyperbolic then is a bounded set of chambers for each in particular if x is moreover locally finite then is finite for all on geodesic ray bundles in buildings proof let k be as in lemma note that there exist constants and such that dch y z d y z for all y z x see proposition we also let be such that for any x x the closed chamber c of x of which x is the barycenter is contained in a neighbourhood of x see we fix some n n such that n let x x and we claim that any xn cgr x crosses at most n walls in where a is any apartment 
containing hence this will imply that the chambers n n of x are at gallery distance at most n from and hence the lemma will follow from proposition let thus xn cgr x and let a be an apartment containing let yn a be a straight cgr from x to by lemma we know that and are contained in a of one another assume for a contradiction that the combinatorial geodesic xn crosses n walls in for some k let n be such that d xk y thus dch xk y n but then n dch xk y d xk y a contradiction remark note that although lemma will be sufficient for our purpose it is not hard to see using moussong s characterisation of hyperbolicity for coxeter groups see theorem that its converse also holds the building x is hyperbolic if and only if each transversal graph x is bounded geodesic ray bundles in buildings throughout this section we let x be a building identified with its davis realisation as in section and we keep all notations introduced in sections and we also fix some we denote for each x x by geo x the ray bundle from x to that is geo x y x y lies on some cgr x the description of combinatorial sectors provided in section then yields the following description of ray bundles proposition geo x s q x proof this inclusion is clear by theorem conversely if geo x then converges to some by proposition and q x by theorem yielding the converse inclusion we first establish theorem a inside an apartment if x a for some apartment a we set geoa x y a y lies on some cgr x with a lemma let a then for any x a geoa x geo x a marquis proof the inclusion is clear for the converse inclusion we have to show that if y a lies on some cgr x then it also lies on some cgr x with a but we may take x because x preserves combinatorial geodesic rays from x and does not increase the distance lemma let a and let x a if y geoa x then geoa y geoa x proof reasoning inductively on dch x y we may assume that x and y are adjacent let m be the wall of a separating x from y let z geoa y and let us show that z geoa x by lemma we find some cgr a from y to going through z and converging to write for the cgr from z to contained in let also a be a cgr from x to going through y and let be the cgr from y to contained in finally let be a combinatorial geodesic from x to z we claim that is a cgr yielding the lemma otherwise there is some wall of a crossed by and then can not separate z from y because is a cgr and hence but as is a cgr the cgr can not cross m so that m separates from some cgr to contained in since and are at bounded hausdorff distance we deduce that m and hence that can not cross m a contradiction lemma assume that x is hyperbolic let a and let x y a then the symmetric difference of geoa x and geoa y is finite proof reasoning inductively on dch x y there is no loss of generality in assuming that x and y are adjacent assume for a contradiction that there is an infinite sequence yn geoa y geoa x choose for each n n some cgr y passing through yn note that if denotes the combinatorial geodesic from y to yn contained in then is disjoint from geoa x for if geoa x then yn geoa geoa x by lemma a contradiction since a is locally finite the sequence then subconverges to a cgr xn a that is disjoint from geoa x on the other hand since a is hyperbolic lemma yields that cgr y but then lemma implies that xn geoa x for all large enough n a contradiction we now turn to the proof of theorem a in the building x for the rest of this section we assume that x is hyperbolic and locally finite so that is finite by lemma lemma let x x and and let a be an apartment 
containing q x let s be an infinite subset of q x then there is some z q x such that q z s is infinite proof since is finite there is an infinite subset of s and some such that for all y let z by lemma we know that q x geoa z is finite and hence by proposition there is some infinite subset of contained in q z for some but then proposition implies that q z since as desired lemma let x y x then q y geo x is finite on geodesic ray bundles in buildings proof assume for a contradiction that there exists an infinite sequence yn q y x by proposition there is some z x such that q z q x q y note that this amounts to say that q x q y is nonempty which readily follows from proposition together with lemma let a be an apartment containing q y by lemma we know that q y geoa z is finite and we may thus assume up to taking a subsequence that yn geoa z hence by proposition we may further assume again up to extracting a subsequence that yn q z for some since and yn y by proposition this proposition implies that yn q z but q z q x because z q x q x yielding the desired contradiction as q x geo x by proposition theorem let x y x assume that x is hyperbolic and locally finite then geo y geo x is finite proof by lemma we know that is finite by proposition we have to prove that if then q y geo x is finite assume for a contradiction that there is an infinite sequence yn q y geo x then by lemma up to extracting a subsequence we may assume that yn q z for some z q y this contradicts lemma appendix transversal buildings let x be a building and let in section a construction of transversal building x to x in the direction is given however as pointed out to us by the referee the premises of that construction are incorrect the correct construction of x is the one given in where we called x the transversal graph to x in the direction although we did not need this fact in our proof of theorem a one can show that this transversal graph x is indeed the chamber graph of a building and therefore deserves the name of transversal building to x in the direction since this fact is used in other papers we devote the present appendix to its proof here we will follow the w approach to buildings as opposed to the simplicial approach from a standard reference for this topic is let w s be the type of x let a and view w as a reflection group acting on a let be the reflection subgroup of w generated by the reflections across the walls in by a classical result of deodhar is then itself a coxeter group moreover the polyhedral structure on a induced by the walls in can be identified with the coxeter complex of more precisely if x a is the fundamental chamber of a the chamber whose walls are associated to the reflections in s then x a which coincides with the intersection of all containing x and whose wall belong to is the fundamental chamber of the coxeter complex associated to the coxeter system where is the set of reflections across the walls in that delimit lemma the group depends only on and not on the choice of apartment a proof this is lemma and the proof of this lemma in remains valid in our context marquis lemma let and let x x with then there exists some y x with such that x and y are contained in some apartment a proof let y x with and let yn cgr y be straight by lemma there is some k n such that yn cgr x for some combinatorial geodesic from x to yk let a be an apartment containing then so that the claim follows by replacing y with yk theorem the transversal graph x to x in the direction is the graph of chambers of a building of 
type proof we define a weyl distance function as follows let by lemma we can write and for some x y a where a we then set where x a and x a are chambers in the coxeter complex of and is the weyl distance function on that complex see note that this definition is independent of the choice of apartment a see lemma and its proof and of chambers x y a such that and to simplify the notations we will also simply write x y to check that x is a building of type it then remains to check the axioms and of definition the axioms and are clearly satisfied because they are satisfied in the building a for any apartment a we now check let w s and with w and we have to show that sw w and that sw if sw w where n is the length function on with respect to the generating set choose some adjacent chambers x x such that and let also y x be such that and such that there is an apartment a with x y a see lemma let m be the wall of a containing the of x since x and are separated by a wall in some apartment containing x the wall m belongs to as it is the image of by the retraction x which fixes the geodesic ray x a pointwise on the other hand by exercise a there is an apartment containing and the of a delimited by m and that contains y if sw w then x and hence x y so that y x x y on the other hand if sw w so that x then letting denote the unique chamber of a that is to x and contained in we have y sw hence in that case y y sw w sw as desired on geodesic ray bundles in buildings references peter abramenko and kenneth brown buildings graduate texts in mathematics vol springer new york theory and applications martin bridson and haefliger metric spaces of curvature grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences vol berlin marc bourdon sur les immeubles fuchsiens et leur type de ergodic theory dynam systems no caprace a presentation of an infinite hyperbolic kazhdan group preprint caprace and jean combinatorial and compactifications of buildings ann inst fourier grenoble no michael davis buildings are cat geometry and cohomology in group theory durham london math soc lecture note vol cambridge univ press cambridge pp vinay deodhar a note on subgroups generated by reflections in coxeter groups arch math basel no jingyin huang marcin sabok and forte shinko hyperfiniteness of boundary actions of cubulated hyperbolic groups preprint gabor moussong hyperbolic coxeter groups thesis of the ohio state university noskov asymptotic behavior of word metrics on coxeter groups doc math graham niblo and lawrence reeves groups acting on cat cube complexes geom topol guennadi noskov and vinberg strong tits alternative for subgroups of coxeter groups j lie theory no jacek some infinite groups generated by involutions have kazhdan s property t forum math no nicholas touikan on geodesic ray bundles in hyperbolic groups preprint ucl belgium address
| 4 |
jan product lines sven christian armin and christian department of informatics and mathematics university of passau apel groesslinger lengauer school of computer science university of magdeburg kaestner technical report number department of informatics and mathematics university of passau germany june product lines sven christian armin and christian department of informatics and mathematics university of passau apel groesslinger lengauer school of computer science university of magdeburg kaestner abstract a product line is a family of programs that share a common set of features a feature implements a stakeholder s requirement represents a design decision and configuration option and when added to a program involves the introduction of new structures such as classes and methods and the refinement of existing ones such as extending methods with decomposition programs can be generated solely on the basis of a user s selection of features by the composition of the corresponding feature code a key challenge of product line engineering is how to guarantee the correctness of an entire product line of all of the member programs generated from different combinations of features as the number of valid feature combinations grows progressively with the number of features it is not feasible to check all individual programs the only feasible approach is to have a type system check the entire code base of the product line we have developed such a type system on the basis of a formal model of a language we demonstrate that the type system ensures that every valid program of a product line is and that the type system is complete introduction programming fop aims at the modularization of programs in terms of features a feature implements a stakeholder s requirement and is typically an increment in program functionality contemporary programming languages and tools such as ahead xak caesarj featurehouse and provide a variety of mechanisms that support the specification modularization and composition of features a key idea is that a feature is implemented by a distinct code unit called a feature module when added to a base program it introduces new structures such as classes and methods and refines existing ones such as extending methods a program that is decomposed into features is called henceforth a typically decomposition is orthogonal to or functional decomposition a multitude of modularization and composition mechanisms have been developed in order to allow programmers to decompose a program along multiple dimensions languages and tools provide a significant subset of these mechanisms beside the decomposition of programs into features the concept of a feature is useful for distinguishing different related programs thus forming a software product line typically programs of a common domain share a set of features but also differ in other features for example suppose an email client for mobile devices that supports the protocols imap and and another client that supports mime and ssl encryption with a decomposition of the two programs into the features imap mime and ssl both programs can share the code of the feature since mobile devices have only limited resources unnecessary features should be removed with decomposition programs can be generated solely on the basis of a user s selection of features by the composition of the corresponding feature modules of course not all combinations of features are legal and result in correct programs a feature model describes which features can be composed in which 
combinations which programs are valid it consists of an ordered set of features and a set of constraints on feature combinations for example our email client may have different rendering engines for html text the mozilla engine or the safari engine but only one at a time a set of feature modules along wit a feature model is called a product line an important question is how the correctness of programs in particular and product lines in general can be guaranteed a first problem is that contemporary languages and tools usually involve a code generation step during composition in which the code is transformed into a representation in previous work we have addressed this problem by modeling mechanisms directly in the formal syntax and semantics of a core language called feature featherweight java ffj the type system of ffj ensures that the composition of feature modules is in this paper we address a second problem how can the correctness of an entire product line be guaranteed a naive approach would be to all valid programs of a product line using a type checker like the one of ffj however this approach does not scale already for implemented optional features a variant can be generated for every person on the planet noticing this problem czarnecki and pietroszek and thaker et al suggested the development of a type system that checks the entire code base of the product line instead of all individual programs in this scenario a type checker must analyze all feature modules of a product line on the basis of the feature model we will show that with this information the type checker can ensure that every valid program variant that can be generated is specifically we make the following contributions we provide a condensed version of ffj which is in many respects more elegant and concise than its predecessor we develop a formal type system that uses information about features and constraints on feature combinations in order to a product line without generating every program we prove correctness by proving that every program generated from a product line is as long as the feature selection satisfies the constraints of the product line furthermore we prove completeness by proving that the typedness of all programs of a product line guarantees that the product line is welltyped as a whole we offer an implementation of ffj including the proposed type system which can be downloaded for evaluation and for experiments with further language and typing mechanisms or work differs in many respects from previous and related work see section for a comprehensive discussion most notably thaker et al have implemented a type system for product lines and conducted several case studies we take their work further with a formalization and a correctness and completeness proof furthermore our work differs in many respects from previous work on modeling and and related programming mechanisms most notably we model the mechanisms directly in ffj s syntax and semantics without any transformation to a representation and we stay very close to the syntax of contemporary languages and tools see section we begin with a brief introduction to ffj programs in ffj in this section we introduce the language ffj originally ffj was designed for featureoriented programs we extend ffj in section to support product lines to support the representation of multiple alternative program variants at a time an overview of ffj ffj is a lightweight language that has been inspired by featherweight java fj as with fj we have aimed at minimality in the 
design of FFJ. FFJ provides basic constructs like classes, fields, methods, and inheritance, and only a few new constructs capturing the core mechanisms of feature-oriented programming. But, so far, FFJ's type system has not supported the development of product lines; that is, the feature modules written in FFJ are interpreted as a single program. We will change this in a later section. An FFJ program consists of a set of classes and refinements. A refinement extends a class that has been introduced previously. Each class and refinement is associated with a feature: we say that a feature introduces a class or applies a refinement to a class. Technically, the mapping between classes and refinements and the features they belong to can be established in different ways, e.g., by extending the language with modules representing features or by grouping the classes and refinements that belong to a feature in packages or directories. Like in FJ, each class declares a superclass, which may be the class Object. Refinements are defined using the keyword refines. The semantics of a refinement applied to a class is that the refinement's members are added to and merged with the members of the refined class. This way, a refinement can add new fields and methods to the class and override existing methods (declared by the modifier overrides). On the left side of the figure below, we show an excerpt of the code of a basic email client, called EmailClient (top), and a feature called SSL (bottom), in FFJ. The feature SSL adds the class SSL to the email client's code base and refines the class Trans in order to encrypt outgoing messages. To this effect, the refinement of Trans adds a new field key and overrides the method send of class Trans.

    feature EmailClient
    class Msg extends Object {
      String serialize() { ... }
    }
    class Trans extends Object {
      bool send(Msg m) { ... }
    }

    feature SSL
    class SSL extends Object {
      Trans trans;
      bool send(Msg m) { ... }
    }
    refines class Trans {
      Key key;
      overrides bool send(Msg m) { return new SSL(this).send(m); }
    }

    [Right side of the figure: a diagram of Object, Msg, Trans, SSL, and the refinement of Trans, connected by "inherits" and "refines" arrows and labelled with the notions class, refinement, refinement chain, and feature.]

Figure: An email client supporting SSL encryption.

Typically, a programmer applies multiple refinements to a class by composing a sequence of features; this is called a refinement chain. A refinement that is applied immediately before another refinement in the chain is called its predecessor. The order of the refinements in a refinement chain is determined by their composition order. On the right side of the figure, we depict the refinement and inheritance relationships of our email example. Fields are unique within the scope of a class, its inheritance hierarchy, and its refinement chain; that is, a refinement or subclass is not allowed to add a field that has already been defined in a predecessor in the refinement chain or in a superclass. For example, a further refinement of Trans would not be allowed to add a field key, since key has been introduced by a refinement of feature SSL already. With methods, this is different: a refinement or subclass may add new methods (overloading is prohibited) and override existing methods. In order to distinguish the two cases, FFJ expects the programmer to declare whether a method overrides an existing method, using the modifier overrides. For example, the refinement of Trans in feature SSL overrides the method send introduced by feature EmailClient. For subclasses, this is similar. The distinction between method introduction and overriding allows the type system to check whether an introduced method inadvertently replaces or occludes an existing method with the same name, and whether for every overriding method there is a
proper method to be overridden apart from the modifier overrides a method in ffj is similar to a method in fj that is a method body is an expression prefixed with return and not a sequence of statements this is due to the functional nature of ffj and fj furthermore overloading of methods introducing methods with equal names and different argument types is not allowed in ffj and fj as shown in figure refinement chains grow from left to right and inheritance hierarchies from top to bottom when looking up a method body ffj traverses the combined inheritance and refinement hierarchy of an object and selects the and body of a method declaration or method refinement that is compatible this kind of lookup is necessary since we model features directly in ffj instead of generating and evaluating fj code first the ffj calculus looks for a method declaration in the refinement chain of the object s class starting with the last refinement back to the class declaration itself the first body of a matching method declaration is returned if the method is not found in the class refinement chain or in its own declaration the methods in the superclass and then the superclass superclass etc are searched each again from the most specific refinement of the class declaration itself the field lookup works similarly except that the entire inheritance and refinement hierarchy is searched and the fields are accumulated in a list in figure we illustrate the processes of method body and field lookup schematically object ref ref n ref n p ref ref ref k classn ref ref n ref n m fig order of method body and field lookup in ffj syntax of ffj before we go into detail let us explain some notational conventions we abbreviate lists in the obvious ways c is shorthand for cn c f is shorthand for cn fn c f is shorthand for cn fn t c is shorthand for tn cn c d is shorthand for cn dn note that depending on the context blanks commas or semicolons separate the elements of a list the context will make clear which separator is meant the symbol denotes the empty list and lists of field declarations method declarations and parameter names must not contain duplicates we use the metavariables for class names for field names and m for method names feature names are denoted by greek letters in figure we depict the syntax of ffj in extended an ffj program consists of a set of class and refinement declarations a class declaration l declares a class with the name c that inherits from a superclass d and consists of a list c f of fields and a list m of method a refinement declaration r consists of a list c f of fields and a list m of method declarations t terms x variable field access t method invocation new c t object creation c t cast l class declarations class c extends d c f m r refinement declarations refines class c c f m m method declarations overrides c m c x return t v values new c v object creation fig syntax of ffj in extended bnf a method m expects a list c x of arguments and declares a body that returns only a single expression t of type using the modifier overrides a method declares that it intends to override another method with the same name and signature where we want to distinguish methods that override others and methods that do not override others we call the former method introductions and the latter method refinements finally there are five forms of terms the variable field access method invocation object creation and type cast which are taken from fj without change the only values are object creations whose arguments are 
values as well ffj s class table declarations of classes and refinements can be looked up via a class table ct the compiler fills the class table during the parser pass in contrast to fj class and refinement declarations are identified not only by their names but additionally by the names of the enclosing features for example in order to retrieve the declaration of class trans introduced by feature m ail in our example of figure we write ct m in order to retrieve the refinement of class trans applied by feature ssl we write ct we call the qualified type of class c in feature in ffj class and refinement declarations are unique with respect to their qualified types this property is ensured because of the following sanity conditions a feature is not allowed the concept of a class constructor is unnecessary in ffj and fj its omittance simplifies the syntax semantics and type rules significantly without loss of generality to introduce a class or refinement twice inside a single feature module and to refine a class that the feature has just introduced these are common sanity conditions in languages and tools as for fj we impose further sanity conditions on the class table and the inheritance relation ct class or refines class for every qualified type dom ct feature base plays the same role for features as object plays for classes it is a symbol denoting the empty feature at which lookups terminate dom ct for every class name c appearing anywhere in ct we have dom ct for at least one feature and the inheritance relation contains no cycles incl refinement in ffj information about the refinement chain of a class can be retrieved using the refinement table rt the compiler fills the refinement table during the parser pass rt c yields a list of all features that either introduce or refine class the leftmost element of the result list is the feature that introduces the class c and then from left to right the features are listed that refine class c in the order of their composition in our example of figure rt trans yields the list e mail c lient ssl there is only a single sanity condition for the refinement table rt c for every type c dom ct with being the features that introduce and refine class in figure we show two functions for the navigation of the refinement chain that rely on rt function last returns for a class name c a qualified type in which refers to the feature that applies the final refinement to class c if a class is not refined at all refers to the feature that introduces class function pred returns for a qualified type another qualified type in which refers to the feature that introduces or refines class c and that is the immediate predecessor of in the refinement chain if there is no predecessor is returned navigating along the refinement chain rt c last c rt c pred fig refinement in ffj rt c pred subtyping in ffj in figure we show the subtype relation of ffj the subtype relation is defined by one rule each for reflexivity and transitivity and one rule for relating the type of a class to the type of its immediate superclass it is not necessary to define subtyping over qualified types because only classes not refinements declare superclasses and there is only a single declaration per class c d subtyping c c ct class c extends d c d c d d e c e fig subtyping in ffj auxiliary definitions of ffj in figure we show the auxiliary definitions of ffj function fields searches the refinement chain from right to left and accumulates the fields into a list using the comma as concatenation operator 
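To make the table-based lookup concrete, the following is a small illustrative sketch in Java of how a field lookup over a class table and a refinement table could be implemented. The encoding (a Decl record, string-keyed maps, and the method fields) is our own assumption for illustration and is not part of the FFJ formalization; it also anticipates the superclass step described in the next sentence.

    import java.util.*;

    class FieldLookup {
        // One class or refinement declaration: its declared superclass (null for refinements)
        // and the names of the fields it introduces.
        record Decl(String superClass, List<String> fields) { }

        final Map<String, Decl> ct = new HashMap<>();          // class table, key: feature + "." + class
        final Map<String, List<String>> rt = new HashMap<>();  // refinement table: class -> features, in order

        // Corresponds to fields(last(C)): superclass fields come first, followed by the fields of the
        // class declaration and of its refinements in composition order; the formal rules obtain the
        // same list by recursing from the last refinement (right) back to the class declaration (left).
        List<String> fields(String clazz) {
            if (clazz.equals("Object")) return List.of();      // lookup terminates at Object
            List<String> chain = rt.get(clazz);                 // e.g. [EmailClient, SSL] for Trans
            Decl intro = ct.get(chain.get(0) + "." + clazz);    // the class declaration itself
            List<String> result = new ArrayList<>(fields(intro.superClass()));
            for (String feature : chain)                        // class declaration, then refinements
                result.addAll(ct.get(feature + "." + clazz).fields());
            return result;
        }
    }

Populated with the email-client example from the earlier figure, fields("Trans") would return only the field key, contributed by the refinement in feature SSL, since Trans extends Object and introduces no fields of its own.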
if there is no further predecessor in the refinement chain we have reached a class declaration then the refinement chain of the superclass is searched see figure if is reached the empty list is returned denoted by function mbody looks up the most specific and most refined body of a method a body consists of the formal parameters x of a method and the actual term t representing the content the search is like in fields first the refinement chain is searched from right to left and then the superclasses refinement chains are searched as illustrated in figure note that overrides means that a given method declaration may or may not have the modifier this way we are able to define uniform rules for method introduction and method refinement function mtype yields the signature b of a declaration of method the lookup is like in mbody predicate introduce is used to check whether a class has been introduced by multiple features and whether a field or method has been introduced multiple times in a class precisely it states in the case of classes whether c has not been introduced by any feature other than and whether a method m or a field f has not been introduced by or in any of its predecessors or superclasses to evaluate it we check in the case of classes whether ct yields a class declaration or not for any feature different from in the case of methods whether mtype yields a signature or not and in the case of fields whether f is defined in the list of fields returned by fields predicate refine states whether for a given refinement a proper class has been declared previously in the refinement chain the predicate override states whether a method m has been introduced before in some predecessor of and whether the previous declaration of m has the given signature fields c f field lookup fields ct class c extends d c f m fields fields last d c f ct refines class c c f m fields fields pred c f mbody m x t method body lookup overrides b m b x return t m ct class c extends d c f m mbody m x t m is not defined in m ct class c extends d c f m mbody m mbody m last d overrides b m b x return t m ct refines class c c f m mbody m x t m is not defined in m ct refines class c c f m mbody m mbody m pred mtype m c c method type lookup m b x return t m ct class c extends d c f m m is not defined in m ct class c extends d c f m mtype m mtype m last d mtype m b m b x return t m ct refines class c c f m m is not defined in m ct refines class c c f m mtype m mtype m pred mtype m b introduce valid class introduction ct class c introduce introduce f valid field introduction fields e h introduce f introduce m valid method introduction m dom mtype introduce m refine valid class refinement rt c ct class c refine override m c valid method overriding mtype m b override m c fig auxiliary definitions of ffj evaluation of ffj programs each ffj program consists of a class table and a the term is evaluated using the evaluation rules shown in figure the evaluation terminates when a value a term of the form new c v is reached note that we use a direct semantics of class refinement that is the field and method lookup mechanisms incorporate all refinements when a class is searched for fields and methods an alternative which is discussed in section would be a flattening semantics to merge a class in a preprocessing step with all of its refinements into a single declaration fields last c c f new c v vi roj n ew mbody m last c x nvk n ew new c v u x u this new c v c d d new c v new c v ast n ew ield t t nvk r ecv ti v ti t v t nvk a rg ti 
new c v ti t new c v t ewa rg c c ast fig evaluation of ffj programs using the subtype relation and the auxiliary functions fields and mbody the evaluation of ffj is fairly simple the first three rules are most interesting the remaining rules are just congruence rules rule roj n ew describes the projection of a field from an instantiated class a projected field fi evaluates to a value vi that has been passed as argument to the instantiation function fields is used to look up the fields of the given class it receives last c as argument since we want to search the entire refinement chain of class c from right to left cf figure rule roj i nvk evaluates a method invocation by replacing the invocation with the method s body the formal parameters of the method are substituted in the body for the refinement table is not relevant for evaluation the arguments of the invocation the value on which the method is invoked is substituted for this the function mbody is called with the last refinement of the class c in order to search the refinement chain from right to left and return the most specific method body cf figure rule ast n ew evaluates an upcast by simply removing the cast of course the premise must be that the cast is really an upcast and not a downcast or an incorrect cast type checking ffj programs the type relation of ffj consists of the type rules for terms and the rules for classes refinements and methods shown in figures and t c term typing x c x c t c t c fields last c f ci mtype m last d c t c fields last c d f new c t c c d d d c c c d c d c c d c d d c c c ield c d nvk ew ast c d stupid warning ast ast fig term typing in ffj term typing rules a term typing judgment is a triple consisting of a typing context a term t and a type c see figure rule checks whether a free variable is contained in the typing context rule ield checks whether a field access is specifically it checks whether f is declared in the type of and whether the type f equals the type of the entire term rule nvk checks whether a method invocation t is to this end it checks whether the arguments t of the invocation are subtypes of the types of the m ok a method typing x b this c introduce m last d ct class c extends d c f m m b x return ok a x b this c ct class c extends d c f m override m last d b overrides m b x return ok a x b this c ct refines class c c f m introduce m pred m b x return ok a x b this c ct refines class c c f m override m pred b overrides m b x return ok a l ok a class typing f f introduce f last d introduce m ok a class c extends d c f m ok a r ok a refinement typing f f introduce f pred refine refines class c c f m ok a fig rules of ffj m ok a formal parameters of m and whether the return type of m equals the type of the entire term rule ew checks whether an object creation new c t is in that it checks whether the arguments t of the instantiation of c are subtypes of the types d of the fields of c and whether c equals the type of the entire term the rules ast ast and ast check whether casts are in each rule it is checked whether the type c the term is cast to is a subtype supertype or unrelated type of the type of and whether c equals the type of the entire rules in figure we show ffj s rules of classes refinements and methods the typing judgments of classes and refinements are binary relations between a class or refinement declaration and a feature written l ok a and r ok a the rule of classes checks whether all methods are in the context of the class qualified type moreover it checks whether none 
of the fields of the class declaration is introduced multiple times in the combined inheritance and refinement hierarchy and whether there is no feature other than that introduces a class c using introduce the rule of refinements is analogous except that the rule checks whether a corresponding class has been introduced before using refine the typing judgment of methods is a binary relation between a method declaration and the qualified type that declares the method written m ok a there are four different rules for methods from top to bottom in figure that do not override another method and that are declared by classes that override another method and that are declared by classes that do not override another method and that are declared by refinements that override another method and that are declared by refinements all four rules check whether the type of the method body is a subtype of the declared return type of the method declaration for methods that are being introduced it is checked whether no method with an identical name has been introduced in a superclass rule or in a predecessor in the refinement chain rule for methods that override other methods it is checked whether a method with identical name and signature exists in the superclass rule or in a predecessor in the refinement chain rule ffj programs finally an ffj program consisting of a term a class table and a refinement table is if the term is checked using ffj s term typing rules all classes and refinements stored in the class table are checked using ffj s rules and the class and refinement tables are ensured by the corresponding sanity conditions rule ast is needed only for the small step semantics of ffj and fj in order to be able to formulate and prove the type preservation property ffj and fj programs whose type derivation contains this rule the premise stupid warning appears in the derivation are not further considered cf type soundness of ffj the type system of ffj is sound we can prove this using the standard theorems of preservation and progress t heorem preservation if t c and t then for some t heorem progress suppose t is a term if t includes new t as a subterm then fields last c f for some c and if t includes new t u as a subterm then mbody m last x and for some x and we provide the proofs of the two theorems in appendix a product lines in ffjpl in this section our goal is to define a type system for product lines a type system that checks whether all valid combinations of features yield programs in this scenario the features in question may be optional or mutually exclusive so that different combinations are possible that form different programs since there may be plenty of valid combinations type checking all of them individually is usually not feasible in order to provide a type system for product lines we need information about which combinations of features are valid which features are mandatory optional or mutually exclusive and we need to adapt the subtype and type rules of ffj to check that there are no that lead to terms the type system guarantees that every program derived from a product line is a ffj program ffj together with the type system for checking featureoriented product lines is henceforth called ffjpl an overview of product lines a product line is made up of a set of feature modules and a feature model the feature modules contains the features implementation and the feature model describes how the feature modules can be combined in contrast to the featureoriented programs of section typically 
some features are optional and some are mutually exclusive also other relations such as disjunction negation and implication are possible they are broken down to mandatory optional and mutually exclusive features as we will generally in a derivation step a user selects a valid subset of features from which subsequently a program is derived in our case derivation means assembling the corresponding feature modules for a given set of features in figure we illustrate the process of program derivation typically a wide variety of programs can be derived from a product line the challenge is to define a type system that guarantees on the basis of the feature modules and the feature model that all valid programs are once a program is derived from such a product line we can be sure that it is and we can evaluate it using the standard evaluation rules of ffj see section product line programs feature modules a b c d e f a b c program program a b e a b d e derivation a b d b f e e f user s feature selection program program a b c e a b c d e feature model fig the process of deriving programs from a product line managing variability feature models the aim of developing a product line is to manage the variability of a set of programs developed for a particular domain and to facilitate the reuse of feature implementations among the programs of the domain a feature model captures the variability by explicitly or implicitly defining an ordered set of all features of a product line and their legal feature combinations a feature order is essential for field and method lookup see section different approaches to product line engineering use different representations of feature models to define legal feature combinations the simplest approach is to enumerate all legal feature combinations in practice commonly different flavors of tree structures are used sometimes in combination with additional propositional constraints to define legal combinations as illustrated in figure for our purpose the actual representation of legal feature combinations is not relevant in ffjpl we use the feature model only to check whether feature specific program elements are present in certain circumstances a design decision of ffjpl is to abstract from the concrete representation of the underlying feature model and rather to provide an interface to the feature model this has to benefits we do not need to struggle with all the details of the formalization of feature models which is well understood by researchers and outside the scope of this paper and we are able to support different kinds of feature model representations a tree structures grammars or propositional formulas the interface to the feature model is simply a set of functions and predicates that we use to ask questions like may or may not feature a be present together with feature b or is program element m present in every variant in which also feature a is present is program element m always reachable from feature a challenges of type checking let us explain the challenges of type checking by extending our email example as shown in figure suppose our basic email client is refined to process incoming text messages feature t ext lines optionally it is enabled to process html messages using either mozilla s rendering engine feature m ozilla lines or safari s rendering engine feature s afari lines to this end the features m ozilla and s afari override the method render of class display line and in order to invoke the respective rendering engines field renderer lines and instead 
of the text printing function.

    feature Text
    refines class Trans {
      Unit receive(Msg msg) { return /*do something*/ new Display().render(msg); }
    }
    class Display extends Object {
      Unit render(Msg msg) { /* display message in text format */ }
    }

    feature Mozilla
    refines class Display {
      MozillaRenderer renderer;
      overrides Unit render(Msg m) { /* render HTML message using the Mozilla engine */ }
    }

    feature Safari
    refines class Display {
      SafariRenderer renderer;
      overrides Unit render(Msg m) { /* render HTML message using the Safari engine */ }
    }

Figure: An email client using Mozilla's and Safari's rendering engines.

The first thing to observe is that the features Mozilla and Safari rely on class Display and its method render, introduced by feature Text. In order to guarantee that every derived program is well-typed, the type system checks whether Display and render are always reachable from the features Mozilla and Safari, i.e., whether in every program variant that contains Mozilla or Safari, feature Text is present as well. The second thing to observe is that the features Mozilla and Safari both add a field renderer to Display, and the two declarations have different types. In FFJ, a program with both feature modules would not be a well-typed program, because the field renderer is introduced twice. However, the figure is not intended to represent a single program but a product line: the features Mozilla and Safari are mutually exclusive, as defined in the product line's feature model (stated earlier), and the type system has to take this fact into account. Let us summarize the key challenges of type checking product lines:
- A global class table contains classes and refinements of all features of a product line, even if some features are optional or mutually exclusive, so that they are present only in some derived programs. That is, a single class can be introduced by multiple features, as long as the features are mutually exclusive; this is also the case for multiple introductions of methods and fields, which may even have different types.
- The presence of types, fields, and methods depends on the presence of the features that introduce them. A reference from the elements of a feature to a type, a field projection, or a method invocation is valid if the referenced element is always reachable from the referring feature, in every variant that contains the referring feature.
- Like references, an extension of a program element, such as a class or method refinement, is valid only if the extended program element is always reachable from the feature that applies the refinement.
- Refinements of classes and methods do not necessarily form linear refinement chains; there may be alternative refinements of a single class or method that exclude one another, as explained below.
Collecting information on feature modules. For type checking, the FFJPL compiler collects various information on the feature modules of the product line before the actual type checking is performed. The compiler fills three tables with information: the class table CT, the introduction table IT, and the refinement table RT. The class table CT of FFJPL is like the one of FFJ and has to satisfy the same sanity conditions, except that there may be multiple declarations of a class, field, or method, as long as they are defined in mutually exclusive features, and there may be cycles in the inheritance hierarchy, but no cycles within each set of classes that is reachable from any given feature. The introduction table IT maps a type to a list of mutually exclusive features that introduce the type. The features returned by IT are listed in the order prescribed by the feature model. In our example of the figure above,
a call of it display would return a list consisting only of the single feature t ext likewise the introduction table maps field and method names in combination with their declaring classes to features for example a call of it would return the list m ozilla s afari the sanity conditions for the introduction table are straightforward it c for every type c dom ct with being the features that introduce class it for every field f contained in some class c dom ct with being the features that introduce field it for every method m contained in some class c dom ct with being the features that introduce method much like in ffj in ffjpl there is a refinement table rt a call of rt c yields a list of all features that either introduce or refine class c which is different from the introduction table that returns only the features that introduce class as with it the features returned by rt are listed in the order prescribed by the feature model the sanity condition for ffjpl s refinement table is identical to the one of ffj namely rt c for every type c dom ct with being the features that introduce and refine class feature model interface as said before in ffjpl we abstract from the concrete representation of the feature model and define instead an interface consisting of proper functions and predicates there are two kinds of questions we want to ask about the feature model which we explain next first we would like to know which features are never present together which features are sometimes present together and which features are always present together to this end we define two predicates never and sometimes and a function always predicate never indicates that feature is never reachable in the context there is no valid program variant in which the features and feature are present together predicate sometimes indicates that feature is sometimes present when the features are present there are variants in which the features and feature are present together and there are variants in which they are not present together function always is used to evaluate whether feature is always present in the context either alone or within a group of alternative features there are three cases if feature is always present in the context always returns the feature again always if feature is not always present but would be together with a certain group of mutually exclusive features one of the group is always present always returns all features of this group always if a feature is not present at all neither alone nor together with other mutually exclusive features always returns the empty list always the above predicates and function provide all information we need to know about the features relationships they are used especially for field and method lookup second we would like to know whether a specific program element is always present when a given set of features is present this is necessary to ensure that references to program elements are always valid not dangling we need two sources of information for that first we need to know all features that introduce the program element in question determined using the introduction table and second we need to know which combinations of features are legal determined using the feature model for the field renderer of our example the introduction table would yield the features m ozilla and s afari and from the feature model it follows that m ozilla and s a fari are mutually exclusive never m ozilla s afari but it can happen that none of the two features is present which can invalidate 
a reference to the field the type system needs to know about this situation to this end we introduce a predicate validref that expresses that a program element is always reachable from a set of features for example validref c holds if type c is always reachable from the context validref holds if field f of class c is always reachable from the context and validref holds if method m of class c is always reachable from the context applying validref to a list of program elements means that the conjunction of the predicates for every list element is taken finally when we write validref c a we mean that program element c is always reachable from a context in a subset of features of the product line in our prototype we have implemented the above functions and predicates using a sat solver that reasons about propositional formulas representing constraints on legal feature combinations see section as proposed by batory and czarnecki and pietroszek refinement in ffjpl in figure we show the functions last and pred for the navigation along the refinement chain the two functions are identical to the ones of ffj cf figure however in ffjpl there may be alternative declarations of a class and in the refinement chain refinement declarations may even precede class declarations as long as the declaring features are mutually exclusive let us illustrate refinement in ffjpl by means of the example shown in figure class c is introduced in the features and feature refines class c introduced by feature and feature refines class c introduced by feature feature and are never present when feature or are present and vice versa a call of rt c would return the list a call of last c would return the qualified type and a call of pred would return the qualified type and so on navigating along the refinement chain rt c last c rt c pred rt c pred fig refinement in ffjpl c c c c mutually exclusive fig multiple alternative refinements subtyping in ffjpl the subtype relation is more complicated in ffjpl than in ffj the reason is that a class may have multiple declarations in different features each declaring possibly different superclasses as illustrated in figure that is when checking whether a class is a subtype of another class we need to check whether the subtype relation holds in all alternative inheritance paths that may be reached from a given context for example foobar is a subtype of barfoo because barfoo is a superclass of foobar in every program variant since always but foobar is not a subtype of foo and bar because in both cases a program variant exists in which foobar is not a indirect subclass of the class in question foo bar a a a m d d b b b m b b barfoo barfoo d d d d and are mutually exclusive and one of them is always present together with foobar e e fig multiple inheritance chains in the presence of alternative features in figure we show the subtype relation of ffjpl the subtype relation c e a is read as follows in the context type c is a subtype of type e type c is a subtype of type e in every variant in which also the features are present the first rule in figure covers reflexivity and terminates the recursion over the inheritance hierarchy the second rule states that class c is a subtype of class e if at least one declaration of c is always present tested with validref and if every of c s declarations that may be present together with tested with sometimes declares some type d as its supertype and d is a subtype of e in the context that is e must be a direct or indirect supertype of d in all variants in 
which the features are present additionally supertype d must be always reachable from the context when traversing the inheritance hierarchy in each step the context is extended by the feature that introduces the current class in question is extended with interestingly the second rule subsumes the two ffj rules for transitivity and direct superclass declaration because some declarations of c may declare e directly as its superclass and some declarations may declare another superclass d that is in turn a subtype of e and the rule must be applicable to both cases simultaneously c e a subtyping c c a validref c ct class c extends d it c sometimes validref d d e a c e a fig subtyping in ffjpl applied to our example of figure we have foobar foobar a because of the reflexivity rule we also have foobar barfoo a because foobar is reachable from feature and every feature that introduces foobar namely contains a corresponding class declaration that declares barfoo as foobar s superclass and barfoo is always reachable from however we have foobar foo a and foobar bar a because foobar s immediate superclass barfoo is not always a subtype of foo respectively of bar auxiliary definitions of ffjpl extending ffj toward ffjpl makes it necessary to add and modify some auxiliary functions the most complex changes concern the field and method lookup mechanisms field lookup the auxiliary function fields collects the fields of a class including the fields of its superclasses and refinements since alternative class or refinement declarations may introduce alternative fields or the same field with identical or alternative types fields may return different fields for different feature selections since we want to all valid variants field returns multiple field lists a list of lists that cover all possible feature selections each inner list contains field declarations collected in an alternative path of the combined inheritance and refinement hierarchy for legibility we separate the inner lists using the delimiter for example looking up the fields of class foobar in the context of feature figure yields the list a a d d e e b b d d e e because the features and are mutually exclusive and one of them is present in each variant in which also is present for readability we use the metavariables f and g when referring to inner field lists we abbreviate a list of lists of fields fn by analogously f is shorthand for fnm function fields receives a qualified type and a context of selected features if we want all possible field lists the context is empty if we want only field lists for a subset of feature selections only the fields that can be referenced from a term in a specific feature module we can use the context to specify one or more features of which we know that they must be selected the basic idea of ffjpl s field lookup is to traverse the combined inheritance and refinement hierarchy much like in ffj there are four situations that are handled differently the field lookup returns the empty list when it reaches the field lookup ignores all fields that are introduced by features that are never present in a given context the field lookup collects all fields that are introduced by features that are always present in a given context references to these fields are always valid the field lookup collects all fields that are introduced by features that may be present in a given context but that are not always present in this case a special marker is added to the fields in question because we can not guarantee that a reference to 
this field is safe in the given it is up to the type system to decide based on the marker whether this situation may provoke an error the type system ignores the marker when looking for duplicate fields but reports an error when type checking object creations a special situation occurs when the field lookup identifies a group of alternative features in such a group each feature is optional and excludes every other feature of the group and at least one feature of the group is always present in a given context once the field lookup identifies a group of alternative features we split the result list each list containing the fields of a feature of the group and the fields of the original list fields c f field lookup fields never fields fields pred sometimes always ct class c extends d c f m fields append fields last d c f sometimes always ct refines class c c f m fields append fields pred c f sometimes always ct class c extends d c f m fields append fields last d c f sometimes always ct refines class c c f m fields append fields pred c f sometimes always fields fields fields fig field lookup in ffjpl in order to distinguish the different cases we use the predicates and functions defined in section especially never sometimes and always the definition of note that the marker is generated during type checking so we do not include it in the syntax of ffj tion fields shown in figure follows the intuition described above once is reached the recursion terminates when a feature is never reachable in the given context fields ignores this feature and resumes with the previous one when a feature is mandatory always present in a given context the fields in question are added to each alternative result list which were created in rule and when a feature is optional the fields in question annotated with the marker are added to each alternative result list and when a feature is part of an alternative group of features we can not immediately decide how to proceed we split the result list in multiple lists by means of multiple recursive invocations of fields in which we add one of the alternative features to each context passed to an invocation of fields mtype m b method type lookup mtype m sometimes m b x m ct class c extends d c f m mtype m mtype m pred mtype m last d b m b x m sometimes ct refines class c c f m mtype m mtype m pred b m is not defined in m never ct class c extends d c f m mtype m mtype m pred mtype m last d m is not defined in m never ct refines class c c f m mtype m mtype m pred fig method lookup in ffjpl method type lookup like in field lookup in method lookup we have to take alternative definitions of methods into account but the lookup mechanism is simpler than in fields because the order of signatures found in the combined inheritance and refinement hierarchy is irrelevant for type checking hence function mtype yields a simple list b of signatures for a given method name for example calling mtype m in the context of figure yields the list d a b b function append adds to each inner list of a list of field lists a given field its implementation is straightforward and omitted for brevity in figure we show the definition of function mtype for the empty list is returned if a class that is sometimes reachable introduces a method in question its signature is added to the result list and all possible predecessors in the refinement chain using pred and all possible subclasses are searched using last likewise if a refinement that is sometimes reachable introduces a method with the name searched 
its signature is added to the result list and all possible predecessors in the refinement chain are searched using pred if a class or refinement does not declare a corresponding method and or the a class is never reachable the search proceeds with the possible superclasses or predecessors the current definition of function mtype returns possibly many duplicate signatures a straightforward optimization would be to remove duplicates before using the result list which we omitted for simplicity introduce valid class introduction ct class c extends d c f m sometimes introduce introduce f valid field introduction e h fields f introduce f introduce m valid method introduction mtype m introduce m refine valid class refinement rt c validref c a refine valid method overriding override m c rt c validref a b mtype m c b override m c fig valid introduction refinement and overriding in ffjpl valid introduction refinement and overriding in figure we show predicates for checking the validity of introduction refinement and overriding in ffjpl predicate introduce indicates whether a class with the qualified type has not been introduced by any other feature that may be present in the context likewise introduce holds if a method m or a field f has not been introduced by a qualified type including possible predecessors and superclasses that may be present in the given context to this end it checks either whether mtype yields the empty list or whether f is not contained in every inner list returned by fields for a given refinement predicate refine indicates whether a proper class which is always reachable in the given context has been declared previously in the refinement chain we write validref c a in order to state that a declaration of class c has been introduced in the set of features which is only a subset of the features of the product line namely the features that precede the feature that introduces class predicate override indicates whether a declaration of method m has been introduced and is always reachable in some feature introduced by before the feature that refines m and whether every possible declaration of m in any predecessor of a has the same signature type relation of ffjpl the type relation of ffjpl consists of type rules for terms and rules for classes refinements and methods shown in figure and figure term typing rules a term typing judgment in ffjpl is a quadruple consisting of a typing context a term t a list of types c and a feature that contains the term see figure a term can have multiple types in a product line because there may be multiple declarations of classes fields and methods the list c contains all possible types a term can have rule is standard and does not refer to the feature model it yields a list consisting only of the type of the variable in question rule ieldpl checks whether a field access is in every possible variant in which also is present based on the possible types e of the term the field f is accessed from the rule checks whether f is always reachable from using validref note that this is a key mechanism of ffjpl s type system it ensures that a field being accessed is definitely present in every valid program variant in which the field access occurs without generating all these variants furthermore all possible fields of all possible types e are assembled in a nested list f c f g in which c f denotes a declaration of the field f the call of fields last e is shorthand for fields last fields last en in which the individual result lists are concatenated finally the 
list of all possible types cnm of field f becomes the list of types of the overall field access note that the result list may contain duplicates which could be eliminated for optimization purposes rule nvkpl checks whether a method invocation t is in every possible variant in which also is present based on the possible types e of the term the method m is invoked on the rule checks whether m is always reachable from t term typing x x e e validref fields last e f c f g e a cnm a e e validref e a c c d d d c d a t mtype m last e d b t bnm a validref c d g f c c c d a t fields last c f new c t c a e a ieldpl validref c e e e c a c e a c c a validref c stupid warning e a e e c e a e c a c c a fig term typing in ffjpl nvkpl ewpl astpl astpl m ok a method typing x b this c e a e e e a introduce m last d validref b ct class c extends d c f m m b x return ok a x b this c e a e e e a validref b override m last d b ct class c extends d c f m overrides m b x return ok a x b this c e a e e e a validref b introduce m pred ct refines class c c f m m b x return ok a x b this c e a e e e a override m pred b validref b ct refines class c c f m overrides m b x return ok a l ok a class typing validref d f f introduce f last d validref c introduce m ok a class c extends d c f m ok a r ok a refinement typing validref c f f introduce f pred refine refines class c c f m ok a fig rules of ffjpl m ok a using validref as with field access this check is essential it ensures that in generated programs only methods are invoked that are also present furthermore all possible signatures of m of all possible types e are assembled in the nested list d b and it is checked that all possible lists c of argument types of the method invocation are subtypes of all possible lists d of parameter types of the method this implies that the lengths of the two lists must be equal a method invocation has multiple types assembled in a list that contains all result types of method m determined by mtype as with field access duplicates should be eliminated for optimization purposes rule ewpl checks whether an object creation new c t is in every possible variant in which also is present specifically it checks whether there is a declaration of class c always reachable from furthermore all possible field combinations of c are assembled in the nested list f and it is checked whether all possible combinations of argument types passed to the object creation are subtypes of the types of all possible field combinations this implies that the number of arguments types must equal the number of field types the fields of the result list must not be annotated with the marker since optional fields may not be present in every variant and references may become invalid see field lookup an object creation has only a single type rules astpl and astpl check whether casts are in every possible variant in which also is present this is done by checking whether the type c the term is cast to is always reachable from and whether this type is a subtype supertype or unrelated type of all possible types e the term can have we have only a single rule astpl for and downcasts because the list e of possible types may contain and subtypes of c simultaneously if there is a type in the list which leads to a stupid case we flag a stupid warning a cast yields a list containing only a single type rules in figure we show the rules of classes refinements and methods like in ffj the typing judgment of classes and refinements is a binary relation between a class or refinement 
declaration and a feature the rule of classes checks whether all methods are in the context of the class qualified type moreover it checks whether the class declaration is unique in the scope of the enclosing feature whether no other feature that may be present together with feature introduces a class with an identical name using introduce furthermore it checks whether the superclass and all field types are always reachable from using validref finally it checks whether none of the fields of the class declaration have been introduced before using introduce the rule of refinements is analogous except that the rule checks that there is at least one class declaration reachable that is refined and that has been introduced before the refinement using refine the typing judgment of methods is a binary relation between a method declaration and the qualified type that declares the method like in ffj there are four different rules for methods from top to bottom in figure that do not override another method and that are declared by classes the treatment of is semiformal but simplifies the rule that override another method and that are declared by classes that do not override another method and that are declared by refinements that override another method and that are declared by refinements all four rules check whether all possible types e of the method body are subtypes of the declared return type of the method and whether the argument types b are always reachable from the enclosing feature using validref for methods that are introduced it is checked using introduce whether no method with identical name has been introduced in any possible superclass rule or in any possible predecessor in the refinement chain rule for methods that override other methods it is checked using override whether a method with identical name and signature exists in any possible superclass rule or in any possible predecessor in the refinement chain rule ffjpl product lines an ffjpl product line consisting of a term a class table an introduction table and a refinement table is if the term is checked using ffjpl s term typing rules all classes and refinements stored in the class table are checked using ffjpl s rules and the class introduction and refinement tables are ensured by the corresponding sanity conditions type safety of ffjpl type checking in ffjpl is based on information contained in the class table introduction table refinement table and feature model the first three are filled by the compiler that has parsed the code base of the product line the feature model is supplied directly by the user or tool the compiler determines which class and refinement declarations belong to which features the classes and refinements of the class table are checked using their rules which in turn use the rules for methods and the term typing rules for method bodies several rules use the introduction and refinement tables in order to map types fields and methods to features and the feature model to navigate along refinement chains and to check the presence of program elements what does type safety mean in the context of a product line the product line itself is never evaluated rather different programs are derived that are then evaluated hence the property we are interested in is that all programs that can be derived from a welltyped product line are in turn furthermore we would like to be sure that all ffjpl product lines from which only ffj programs can be derived are we formulate the two properties as the two theorems correctness of 
ffjpl and completeness of ffjpl correctness t heorem correctness of ffjpl given a ffjpl product line pl including with a term t class introduction and refinement tables ct it and rt and a feature model fm every program that can be derived with a valid feature selection fs is a ffj program cf figure pl t ct it rt fm pl is derive pl fs is fs is valid in fm function derive collects the feature modules from a product line according to a user s selection fs feature modules are removed from the derived program after this derivation step the class table contains only classes and refinements stemming from the selected feature modules we define a valid feature selection to be a list of features whose combination does not contradict the constraints implied by the feature model the proof idea is to show that the type derivation tree of an ffjpl product line is a superimposition of multiple type derivation slices as usual the type derivation proceeds from the root an initial type rule that checks the term and all classes and refinements of the class table to the leaves type rules that do not have a premise of the type derivation tree each time a term has multiple types a method has different alternative return types which is caused by multiple mutually exclusive method declarations the type derivation splits into multiple branches with branch we refer only to positions in which the type derivation tree is split into multiple subtrees in order to type check multiple mutually exclusive term definitions each subtree from the root of the type derivation tree along the branches toward a leaf is a type derivation slice each slice corresponds to the type derivation of a program let us illustrate the concept of a type derivation slice by a simplified example suppose the application of an arbitrary type rule to a term t somewhere in the type derivation term t has multiple types c due to different alternative definitions of t s subterms for simplicity we assume here that t has only a single subterm like in the case of a field access t in which the overall term t has multiple types depending on s and f s types the rule can be easily extended to multiple subterms by adding a predicate per subterm the type rule ensures the of all possible variants of t on the basis of the variants of t s subterm furthermore the type rule checks whether a predicate predicate c d holds for each variant of the subterm with its possible types e written predicate ei the possible types c of the overall term follow in some way from the possible types e of its subterm predicate validref is used to check whether all referenced elements and types are present in all valid variants including different combinations of optional features for the general case this can be written as follows predicate predicate e always t predicate en pl the different uses of predicate in the premise of an ffjpl type rule correspond to the branches in the type derivation that denote alternative definitions of subterms hence the premise of the ffjpl type rule is the conjunction of the different premises that cover the different alternative definitions of the subterms of a term the proof strategy is as follows assuming that the ffjpl type system ensures that each slice is a valid ffj type derivation see lemma in appendix and that each valid feature selection corresponds to a single slice since alternative features have been removed see lemma in appendix each program that corresponds to a valid feature selection is guaranteed to be note that multiple valid feature 
selections may correspond to the same slice because of the presence of optional features. It follows that, for every valid feature selection, we derive a well-formed FFJ program whose evaluation satisfies the properties of progress and preservation (see Appendix A), since its type derivation is valid. In Appendix B, we describe the proof of the correctness theorem in more detail.

Completeness. Theorem (completeness of FFJ_PL): given an FFJ_PL product line pl, including a term t, class, introduction, and refinement tables CT, IT, and RT, and a feature model FM, and given that all valid feature selections fs yield well-typed FFJ programs (according to the correctness theorem above), pl is a well-typed product line according to the typing rules of FFJ_PL:

  pl = (t, CT, IT, RT, FM)  ∧  (∀ fs : fs is valid in FM  ⟹  derive(pl, fs) is well-typed)  ⟹  pl is well-typed.

The proof idea is to examine three basic cases and to generalize subsequently: (1) pl has only mandatory features; (2) pl has only mandatory features except a single optional feature; (3) pl has only mandatory features except two mutually exclusive features. All other cases can be formulated as combinations of these three basic cases. To this end, we divide the possible relations between features into three disjoint sets: a feature is reachable from another feature in all variants; a feature is reachable from another feature in some, but not in all, variants; two features are mutually exclusive. From these three possible relations, we can prove the three basic cases in isolation and subsequently construct a general case that can be phrased as a combination of the three basic cases. The description of the general case and the reduction finish the proof. In Appendix B, we describe the proof of the completeness theorem in detail.

Implementation and discussion. We have implemented FFJ and FFJ_PL in Haskell, including the program evaluation and the type checking of product lines. The FFJ_PL compiler expects a set of feature modules and a feature model, both of which together represent the product line. A feature module is represented by a directory; the files found inside a feature module's directory are assigned to belong to the enclosing feature. The FFJ_PL compiler stores this information for type checking. Each file may contain multiple classes and class refinements. In the accompanying figure, we show a snapshot of our test environment, which is based on Eclipse and Haskell; we use Eclipse to interpret or compile our FFJ and FFJ_PL type systems and interpreters. Specifically, the figure shows the directory structure of our email system; a dedicated file contains the user's feature selection and the feature model of the product line. (Figure: snapshot of the test environment of the Haskell implementation.)

The feature model of a product line is represented by a propositional formula, following the approach of Batory and of Czarnecki and Pietroszek. Propositional formulas are an effective way of representing the relationships between features, of specifying which features imply the presence or absence of other features, and of machine-checking whether a feature selection is valid. For example, we have implemented the predicate sometimes as follows: sometimes holds for a feature A in a context Δ if satisfiable(FM ∧ Δ ∧ A), where the feature model FM is a propositional formula, features are Boolean variables, and satisfiable is a call to a satisfiability solver. Likewise, we have implemented the function always on the basis of logical reasoning on propositional formulas: a feature A is always present in a context Δ if FM ∧ Δ ∧ ¬A is not satisfiable.
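The following Haskell fragment sketches this encoding of the feature-model interface. It is a minimal illustration only, not a transcript of our implementation: the Formula data type, the naive enumeration-based satisfiable function (a stand-in for an off-the-shelf SAT solver), and the representation of the context Δ as a list of selected feature names are assumptions made for the example.

    import Data.List (subsequences)
    import qualified Data.Set as Set

    -- Propositional formulas over feature names used as Boolean variables.
    data Formula = Var String | Not Formula | And Formula Formula | Or Formula Formula
      deriving (Eq, Ord, Show)

    vars :: Formula -> Set.Set String
    vars (Var x)   = Set.singleton x
    vars (Not f)   = vars f
    vars (And f g) = Set.union (vars f) (vars g)
    vars (Or f g)  = Set.union (vars f) (vars g)

    -- Evaluate a formula under an assignment given as the set of true variables.
    eval :: Set.Set String -> Formula -> Bool
    eval tv (Var x)   = Set.member x tv
    eval tv (Not f)   = not (eval tv f)
    eval tv (And f g) = eval tv f && eval tv g
    eval tv (Or f g)  = eval tv f || eval tv g

    -- Naive satisfiability check by enumerating all assignments; a real
    -- implementation would call an external SAT solver instead.
    satisfiable :: Formula -> Bool
    satisfiable f = any (\tv -> eval (Set.fromList tv) f)
                        (subsequences (Set.toList (vars f)))

    -- Conjunction of a non-empty list of formulas.
    conj :: [Formula] -> Formula
    conj = foldr1 And

    never, sometimes, alwaysPresent :: Formula -> [String] -> String -> Bool
    -- never: no valid variant contains the context delta and feature a together.
    never         fm delta a = not (satisfiable (conj (fm : Var a : map Var delta)))
    -- sometimes: at least one valid variant contains delta and a together.
    sometimes     fm delta a = satisfiable (conj (fm : Var a : map Var delta))
    -- alwaysPresent: a is present in every valid variant that contains delta
    -- (the function always of the calculus builds on this test).
    alwaysPresent fm delta a = not (satisfiable (conj (fm : Not (Var a) : map Var delta)))

For instance, sometimes fm ["IMAP"] "Text" asks whether, under the feature model fm, feature Text can occur in a variant that also contains IMAP.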
For a more detailed explanation of how propositional formulas relate to feature models and feature selections, we refer the interested reader to the work of Batory. In the following figure, we show the textual specification of the feature model of our email system, which can be passed directly to the FFJ_PL compiler.

(Figure: feature model of an email client product line)
    features: EmailClient, IMAP, ..., MIME, SSL, Text, Mozilla, Safari
    model:
      EmailClient implies IMAP or ...
      IMAP implies EmailClient
      ... implies EmailClient
      MIME implies EmailClient
      SSL implies EmailClient
      Text implies IMAP or ...
      Mozilla implies IMAP or ...
      Safari implies IMAP or ...
      Mozilla implies not Safari
      Safari implies not Mozilla

The first section (features) of the file representing the feature model defines an ordered set of names of the features of the product line, and the second section (model) defines constraints on the features' presence in the derived programs. In our example, each email client supports either of the two mail protocols (IMAP being one of them) or both. Furthermore, every feature requires the presence of the base feature EmailClient. Feature Text requires the presence of at least one of the two protocol features, and the same holds for Mozilla and Safari. Finally, feature Mozilla requires the absence of feature Safari, and vice versa.

On the basis of the feature modules and the feature model, FFJ_PL's type system checks the entire product line and identifies valid program variants that would still contain type errors. A SAT solver is used to check whether elements are never, sometimes, or always reachable. If an error is found, the product line is rejected as ill-typed; if not, a program that is guaranteed to be well-typed can be derived on the basis of a user's feature selection. This program can be evaluated using the standard evaluation rules of FFJ, which we have also implemented in Haskell. In contrast to previous work on type checking product lines, our type system provides detailed error messages. This is possible due to the fine-grained checks at the level of individual term typing and well-formedness rules. For example, if a field access succeeds only in some program variants, this fact can be reported to the user, and the error message can point to the erroneous field access. Previously proposed type systems compose all the code of all features of a product line and extract a single propositional formula, which is checked for satisfiability. If the formula is not satisfiable, a type error has occurred, but it is not possible to identify the location that has caused the error, at least not without further information (see the section on related work for a detailed discussion of related approaches).
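To illustrate how these reachability checks combine the introduction table with the feature model, the following sketch shows one way a validref-style test could be phrased on top of the interface above. The representation of the introduction table as a finite map from element names to introducing features, and the function names, are assumptions made for illustration; the actual implementation differs in its details.

    import qualified Data.Map as Map

    -- Reusing Formula, Var, Not, conj, and satisfiable from the sketch above.
    -- Hypothetical introduction table: maps the name of a program element
    -- (a class, field, or method) to the features that introduce it.
    type IntroTable = Map.Map String [String]

    disj :: [Formula] -> Formula
    disj = foldr1 Or

    -- validRef fm it delta el: element el is reachable in every valid variant
    -- that contains the context delta, i.e., FM /\ delta /\ not (F1 \/ ... \/ Fn)
    -- is unsatisfiable, where F1..Fn are the features that introduce el.
    validRef :: Formula -> IntroTable -> [String] -> String -> Bool
    validRef fm it delta el =
      case Map.lookup el it of
        Nothing     -> False
        Just []     -> False
        Just intros -> not (satisfiable
                              (conj (fm : Not (disj (map Var intros)) : map Var delta)))

For the renderer field of our running example, whose introducing features Mozilla and Safari are mutually exclusive and both optional, such a test fails unless the context forces one of the two features to be present, which is exactly the situation the type system has to detect.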
We made several tests and experiments with our Haskell implementation. However, large-scale tests were not feasible, for two reasons. First, in previous work it has already been demonstrated that product lines require proper type systems and that type checking entire product lines is feasible and useful. Second, like FJ, FFJ is a core language into which all Java programs can be compiled and which, by its relative simplicity, is suited for the formal definition and proof of language properties, in our case a type system and its correctness and completeness; but a core language is never suited for the development of real programs. This is why our examples and test programs are of similar size and complexity as the FJ examples of Pierce. Type checking our test programs required acceptable amounts of time, on the order of magnitude of milliseconds per product line. We do not claim to be able to handle real-world product lines by implementing them in FFJ_PL; rather, this would require an expansion of the type system to full Java, including support for features as provided by AHEAD or FeatureHouse, an enticing goal but one for the future, especially as Java's informal language specification spans hundreds of pages. Our work lays a foundation for implementing such type systems, in that it provides evidence that the core mechanisms are type sound and that type systems for product lines can be implemented correctly and completely.

Still, we would like to make some predictions on the scalability of our approach. The novelty of our type system is that it incorporates alternative features and, consequently, alternative definitions of classes, fields, and methods. This leads to a type derivation tree with possibly multiple branches denoting alternative term types. Hence, performing a type derivation of a product line with many alternative features may consume a significant amount of computation time and memory. It seems that this overhead is the price for allowing alternative implementations of program parts. Nevertheless, our approach minimizes the overhead caused by alternative features compared to the naive approach. In the naive approach, all possible programs are derived and type checked subsequently; in our approach, we type check the entire code base of the product line and branch the type derivation only at terms that really have multiple alternative types, and not at the level of entire program variants, as done in the naive approach. Our experience with product lines shows that usually there are not many alternative features in a product line but mostly optional features. For example, in the Berkeley DB product line (Java Edition), there are many feature modules but only two pairs of them are alternative; in the graph product line, only three pairs of the feature modules are alternative. A further observation is that most alternative features that we encountered do not alter types; that is, there are multiple definitions of fields and methods, but with equal types. For example, GPL and Berkeley DB contain alternative definitions of a few methods, but only with identical signatures. When type checking these product lines with our approach, the type derivation would have almost no branches; in the naive approach, many program variants still exist due to optional features. Hence, our approach is preferable: for example, in a product line with n features and c * n variants (c being a constant), the type system in our approach has to check the n feature modules, with only a few branches in the type derivation and a few simple SAT problems to solve (see below), whereas in the naive approach the type system would have to check at least c * n feature modules, and commonly many more. For product lines with a higher degree of variability, with exponentially many variants, the benefit of our approach becomes even more significant. We believe that this benefit can make a difference in real-world product-line engineering.

A further point is that almost all typing and well-formedness rules contain calls to the SAT solver. This results in possibly many invocations of the SAT solver at type-checking time. Determining the satisfiability of a propositional formula is, in general, an NP-complete problem; however, it has been shown that the structures of the propositional formulas occurring in software product lines are simple enough to scale satisfiability solving to thousands of features. Furthermore, in our experiments, we have observed that many calls to the SAT solver are redundant, which is easy to see when thinking about type checking product lines, where the presence of single types or members is checked in many type rules. We have implemented a caching mechanism to decrease the number of calls to the SAT solver to a minimum.
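Because many of these queries repeat the same formula, a simple cache in front of the solver already removes most of the redundant calls. The following sketch shows one possible memoization scheme, again in Haskell; the IORef-based cache and its key type are assumptions made for the example and not necessarily the scheme used in our implementation.

    import Data.IORef (IORef, newIORef, readIORef, modifyIORef')
    import qualified Data.Map as Map

    -- A cache for SAT queries, keyed by the queried formula (reusing the
    -- Formula type and the naive satisfiable from the sketch above).
    type SatCache = IORef (Map.Map Formula Bool)

    newSatCache :: IO SatCache
    newSatCache = newIORef Map.empty

    satisfiableCached :: SatCache -> Formula -> IO Bool
    satisfiableCached cache f = do
      m <- readIORef cache
      case Map.lookup f m of
        Just r  -> return r              -- reuse the answer of an earlier, identical query
        Nothing -> do
          let r = satisfiable f          -- fall back to the (expensive) solver
          modifyIORef' cache (Map.insert f r)
          return r

Such reuse pays off because, during a single run of the type checker, the presence of the same types and members is queried by many different rules.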
Finally, the implementation in Haskell helped us a lot with the evaluation of the correctness of our type rules. It can serve other researchers to reproduce and evaluate our work and to experiment with further language mechanisms. The implementations of FFJ and FFJ_PL, along with test programs, can be downloaded from our Web site.

Related work. We divide our discussion of related work into two parts: the implementation, formal models, and type systems of feature-oriented programs, and of feature-oriented product lines.

Feature-oriented programs. FFJ has been inspired by several feature-oriented languages and tools, most notably FeatureHouse and Prehofer's Java extension. Their key aim is to separate the implementation of software artifacts (classes and methods) from the definition of features; that is, classes and refinements are not annotated or declared to belong to a feature, and there is no statement in the program text that defines explicitly a connection between code and features. Instead, the mapping of software artifacts to features is established via containment hierarchies, which are basically directories containing software artifacts. The advantage of this approach is that a feature's implementation can include, besides classes in the form of Java files, also other supporting documents: documentation in the form of HTML files, grammar specifications in the form of JavaCC files, or build scripts and deployment descriptors in the form of XML files. To this end, feature composition merges not only classes with their refinements but also other artifacts, such as HTML or XML files, with their respective refinements. Another class of programming languages that provide mechanisms for the definition and extension of classes and class hierarchies includes ContextL, Scala, and related languages. The difference to feature-oriented languages is that they provide explicit language constructs for aggregating the classes that belong to a feature (family classes, classboxes, or layers). This implies that non-code software artifacts cannot be included in a feature. However, FFJ still models a subset of these languages, in particular, class refinement. Similarly, related work on a formalization of the key concepts underlying feature-oriented programming has not disassociated the concept of a feature from the level of code. Especially, calculi for mixins, traits, family polymorphism and virtual classes, open classes, dependent classes, and nested inheritance either support only the refinement of single classes or expect the classes that form a semantically coherent unit, that is, that belong to a feature, to be located in a physical module that is defined in the host programming language. For example, a virtual class is, by definition, an inner class of the enclosing object, and a classbox is a package that aggregates a set of related classes. Thus, FFJ differs from previous approaches in that it relies on contextual information that has been collected by the compiler: the features' composition order or the mapping of code to features. A different line of research aims at the reasoning about features. The calculus gDeep is most closely related to FFJ since it provides a type system for feature-oriented languages that is language-independent. The idea is that the recursive process of merging software artifacts, when composing hierarchically structured features, is very similar for different host languages, such as Java, C, and XML. The calculus describes formally how feature composition is performed and what type constraints have to be satisfied. In contrast, FFJ does not aspire to be language-independent, although the key concepts can certainly be used with different languages. The advantage of FFJ is that its type system can be used to check whether terms of the host language (Java or FJ) violate the principles of feature orientation, for example, whether methods refer to classes that have been added by other features. Due to its language independence, gDeep does not have enough information
to perform such checks product lines our work on type checking product lines was motivated by the work of thaker et al they suggested the development of a type system for featureoriented product lines that does not check all individual programs but the individual feature implementations they have implemented an incomplete type system and in a number of case studies on real product lines they found numerous hidden errors using their type rules nevertheless the implementation of their type system is in the sense that it is described only informally and they do not provide a correctness and completeness proof our type system has been inspired by their work and we were able to provide a formalization and a proof of type safety in a parallel line of work delaware et al have developed a formal model of a language called lightweight feature java lfj and a type system for product lines their work was also influenced by the practical work of thaker et al so it is not surprising that it is closest to ours however there are numerous differences first their formal model of a language is based on lightweight java lj and not on featherweight java fj while lj is more expressive it is also more complex we decided for the simpler variant fj omitting constructors and mutable state second delaware et al do not model featureoriented mechanisms such as class or method refinements directly in the semantics and type rules of the language instead they introduce a transformation step in which lfj code is compiled down to lj code they flatten refinement chains to single classes proceeding likewise we would have to generate first an fj program from an ffj product line and type check the fj program that consists of some or all possible features of the product line subsequently we refrained from such a transformation step in order to model the semantics of mechanisms directly in terms of dedicated field and method lookup mechanisms as well as special rules for method and class refinements lagorio et al have shown that a flattening semantics and a direct semantics are equivalent an advantage of a direct semantics is that it allows a type checking and error reporting at a finer grain in lfj all feature modules are composed and a single propositional formula is generated and tested for satisfiability if the formula is not satisfiable it is difficult to identify precisely the point of failure in ffjpl the individual type rules consult the feature model and can point directly to the point of failure a further advantage of our approach is that it leaves open when feature composition is performed currently feature composition is modeled in as a static process done before compilation but with our approach it becomes possible to model dynamic feature composition at run time by making the class and feature tables and the feature model dynamic allowing them to change during a computation with lfj this is not possible hutchins has shown that feature composition can be performed by an interpreter and partial evaluation can be used to the parts of a composition that are static however delaware et al have developed a machinechecked model of their type system formalized with the theorem prover coq our proof is but we have a haskell implementation of the ffj and ffjpl calculi that we have tested thoroughly even previously to the work of thaker et czarnecki et al presented an automatic verification procedure for ensuring that no uml model template instances will be generated from a valid feature selection that is they type check 
product lines that consist not of java programs but of uml models they use ocl object constraint language constraints to express and implement a type system for model composition in this sense their aim is very similar to that of ffjpl but limited to model artifacts although they have proposed to generalize their work to programming languages et al have implemented a tool called cide that allows a developer to decompose a software system into features via annotations in contrast to other languages and tools the link between code and features is established via annotations if a user selects a set of features all code that is annotated with features using background colors that are not present in the selection is removed et al have developed a formal calculus and a set of type rules that ensure that only welltyped programs can be generated from a valid feature selection for example if a method declaration is removed the remaining code must not contain calls to this method cide s type rules are related to the type rules of ffjpl but so far mutually exclusive features are not supported in cide in some sense ffjpl and cide represent two sides of the same coin the former aims at the composition of feature modules the latter at the annotation of code conclusion a product line imposes severe challenges on type checking the naive approach of checking all individual programs of a product line is not feasible because of the combinatorial explosion of program variants hence the only practical option is to check the entire code base of a product line including all features and based on the information of which feature combinations are valid to ensure that it is not possible to derive a valid program variant that contains type errors we have developed such a type system based on a formal model of a featureoriented language called feature featherweight java ffj a distinguishing property of our work is that we have modeled the semantics and type rules for core mechanisms directly without compiling code down to a representation such as java code the direct semantics allows us to reason about core mechanisms in terms of themselves and not of generated code a further advantage is the error reporting and that the time of feature composition may vary between compile time and run time we have demonstrated and proved that based on a valid feature selection our type system ensures that every program of a product line is and that our type system is complete our implementation of ffj including the type system for product lines indicates the feasibility of our approach and can serve as a testbed for experimenting with further mechanisms acknowledgment this work is being funded in part by the german research foundation dfg project number ap references ancona lagorio and zucca a java extension with mixins acm transactions on programming languages and systems toplas anfurrutia and trujillo on refining xml artifacts in proceedings of the international conference on web engineering icwe volume of lncs pages apel and towards the development of ubiquitous middleware product lines in software engineering and middleware volume of lncs pages springerverlag apel and hutchins an overview of the gdeep calculus technical report department of informatics and mathematics university of passau apel janda trujillo and model superimposition in software product lines in proceedings of the international conference on model transformation icmt volume of lncs pages apel and lengauer feature de composition in functional programming in 
proceedings of the international conference on software composition sc volume of lncs pages apel and lengauer an overview of feature featherweight java technical report department of informatics and mathematics university of passau apel and lengauer feature featherweight java a calculus for featureoriented programming and stepwise refinement in proceedings of the international conference on generative programming and component engineering gpce pages acm press apel and lengauer featurehouse automated software composition in proceedings of the international conference on software engineering icse pages ieee cs press apel leich and saake on the symbiosis of featureoriented and programming in proceedings of the international conference on generative programming and component engineering gpce volume of lncs pages apel leich and saake aspectual feature modules ieee transactions on software engineering tse batory feature models grammars and propositional formulas in proceedings of the international software product line conference splc volume of lncs pages batory sarvela and rauschmayer scaling refinement ieee transactions on software engineering tse bergel ducasse and nierstrasz controlling the scope of change in java in proceedings of the international conference on programming systems languages and applications oopsla pages acm press bertot and casteran interactive theorem proving and program development coq art the calculus of inductive constructions texts in theoretical computer science an eatcs series bono patel and shmatikov a core calculus of classes and mixins in proceedings of the european conference on programming ecoop volume of lncs pages bracha and cook inheritance in proceedings of the european conference on programming ecoop and international conference on objectoriented programming systems languages and applications oopsla pages acm press clarke drossopoulou noble and wrigstad tribe a simple virtual class calculus in proceedings of the international conference on software development aosd pages acm press clements and northrop software product lines practices and patterns addisonwesley clifton millstein leavens and chambers multijava design rationale compiler implementation and applications acm transactions on programming languages and systems toplas czarnecki and eisenecker generative programming methods tools and applications czarnecki and pietroszek verifying model templates against wellformedness ocl constraints in proceedings of the international conference on generative programming and component engineering gpce pages acm press delaware cook and batory a model of safe composition in proceedings of the international workshop on foundations of languages foal pages acm press ducasse nierstrasz wuyts and a black traits a mechanism for finegrained reuse acm transactions on programming languages and systems toplas ernst ostermann and cook a virtual class calculus in proceedings of the international symposium on principles of programming languages popl pages acm press flatt krishnamurthi and felleisen classes and mixins in proceedings of the international symposium on principles of programming languages popl pages acm press gasiunas mezini and ostermann dependent classes in proceedings of the international conference on programming systems languages and applications oopsla pages acm press gosling joy steele and bracha the java language specification the java series edition hirschfeld costanza and nierstrasz programming journal of object technology jot hutchins eliminating 
distinctions of class using prototypes to model virtual classes in proceedings of the international conference on programming systems languages and applications oopsla pages acm press hutchins pure subtype systems a type theory for extensible software phd thesis school of informatics university of edinburgh igarashi b pierce and wadler featherweight java a minimal core calculus for java and gj acm transactions on programming languages and systems toplas igarashi saito and viroli lightweight family polymorphism in proceedings of the asian symposium on programming languages and systems aplas volume of lncs pages kamina and tamai mcjava a design and implementation of java with in proceedings of the asian symposium on programming languages and systems aplas volume of lncs pages kang cohen hess novak and peterson domain analysis foda feasibility study technical report software engineering institute carnegie mellon university and apel software product lines a formal approach in proceedings of the international conference on automated software engineering ase pages ieee cs press apel and batory a case study implementing features using aspectj in proceedings of the international software product line conference splc pages ieee cs press apel and kuhlemann granularity in software product lines in proceedings of the international conference on software engineering icse pages acm press apel trujillo kuhlemann and batory guaranteeing syntactic correctness for all product line variants a approach in proceedings of the international conference on objects models components patterns tools europe volume of lnbi pages lagorio servetto and zucca featherweight jigsaw a minimal core calculus for modular composition of classes in proceedings of the european conference on objectoriented programming ecoop lncs liquori and spiwack feathertrait a modest extension of featherweight java acm transactions on programming languages and systems toplas and batory a standard problem for evaluating methodologies in proceedings of the international conference on generative and componentbased software engineering gcse volume of lncs pages batory and cook evaluating support for features in advanced modularization technologies in proceedings of the european conference on objectoriented programming ecoop volume of lncs pages batory and lengauer a disciplined approach to aspect composition in proceedings of the international symposium partial evaluation and semanticsbased program manipulation pepm pages acm press madsen and virtual classes a powerful mechanism in objectoriented programming in proceedings of the international conference on programming systems languages and applications oopsla pages acm press masuhara and kiczales modeling crosscutting in mechanisms in proceedings of the european conference on programming ecoop volume of lncs pages mendonca wasowski and czarnecki analysis of feature models is easy in proceedings of the international software product line conference splc software engineering institute carnegie mellon university mezini and ostermann variability management with programming and aspects in proceedings of the international symposium on foundations of software engineering fse pages acm press murphy lai walker and robillard separating features in source code an exploratory study in proceedings of the international conference on software engineering icse pages ieee cs press nystrom chong and myers scalable extensibility via nested inheritance in proceedings of the international conference on programming 
systems languages and applications oopsla pages acm press odersky cremet and zenger a nominal theory of objects with dependent types in proceedings of the european conference on programming ecoop volume of lncs pages odersky and zenger scalable component abstractions in proceedings of the international conference on programming systems languages and applications oopsla pages acm press ostermann dynamically composable collaborations with delegation layers in proceedings of the european conference on programming ecoop volume of lncs pages b pierce types and programming languages mit press prehofer programming a fresh look at objects in proceedings of the european conference on programming ecoop volume of lncs pages reenskaug andersen berre hurlen a landmark lehne nordhagen oftedal skaar and stenslet oorass seamless support for the creation and maintenance of systems journal of programming joop siegmund sunkle apel leich and saake sql la carte toward data management in datenbanksysteme in business technologie und web fachtagung des datenbanken und informationssysteme volume of lni pages gesellschaft informatik siegmund saake and apel code generation to support static and dynamic composition of software product lines in proceedings of the international conference on generative programming and component engineering gpce pages acm press siegmund schirmeier sincero apel leich spinczyk and saake data management solutions for embedded systems in proceedings of the edbt workshop on software engineering for data management setmdm pages acm press siegmund heidenreich apel and saake bridging the gap between variability in client application and database schema in datenbanksysteme in business technologie und web fachtagung des datenbanken und informationssysteme volume of lni pages gesellschaft informatik smaragdakis and batory mixin layers an implementation technique for refinements and designs acm transactions on software engineering and methodology tosem sewell and parkinson the java module system core design and semantic definition in proceedings of the international conference on programming systems languages and applications oopsla pages acm press tarr ossher harrison and sutton n degrees of separation multidimensional separation of concerns in proceedings of the international conference on software engineering icse pages ieee cs press thaker batory kitchin and cook safe composition of product lines in proceedings of the international conference on generative programming and component engineering gpce pages acm press vanhilst and notkin using role components in implement designs in proceedings of the international conference on programming systems languages and applications oopsla pages acm press wright and felleisen a syntactic approach to type soundness information and computation a type soundness proof of ffj before giving the main proof we state and proof some required lemmas l emma if mtype m last d c then mtype m last c c for all c proof straightforward induction on the derivation of c there are two cases first if method m is not defined in the declaration or in any refinement of class c then mtype m last c should be the same as mtype m last e where ct class c extends e for some this follows from the definition of mtype that searches e s refinement chain from right to left if m is not declared in c s refinement chain second if m is defined in the declaration or in any refinement of class c then mtype m last c should also be the same as mtype m last e with ct class c extends e for some this 
case is covered by the rules for methods that use the predicate override to ensure that m is properly overridden the signatures of the overridden and the overriding declaration of m are equal and that m is not introduced twice overloading is not allowed in ffj t u l emma term substitution preserves typing if x b t d and s a where a b then x s t c for some c proof by induction on the derivation of x b t c ase t x x if x x then the result is trivial since x s x on the other hand if x xi and d bi then since x s x si letting c ai finishes the case c ase ield t x b fields last c f d ci by the induction hypothesis there is some such that x s and it is easy to check that fields last fields last d g for some d therefore by ield x s ci the fact that the refinements of a class may add new fields does not cause problems d g contains all fields that including all of its refinements add to c ase nvk t t x b x b t d d e mtype m last e d by the induction hypothesis there are some and c such that x s x s t c c by lemma we have mtype m last e moreover c e by the transitivity of therefore by nvk x s x s t the key is that subclasses and refinements may override methods but the rules of methods ensure that the method s type is not altered there is no overloading in ffj c ase ew t new d t fields last d d f x b t c c d by the induction hypothesis x s t e for some e with e we have e d by the transitivity of therefore by rule ew new d x s t although refinements of class d may add new fields rule ew ensures that the arguments of the object creation match the overall fields of d including all refinements in number and types that is the number of arguments t equals the number of fields f which function fields returns c ase ast t d x b c c d by the induction hypothesis there is some e such that x s e and e we have e d by the transitivity of which yields d x s d by ast c ase ast t d x b c d c d c note that x s x is an abbreviation for xn sn x it means that all occurrences of the variables xn in the term x are substituted with the corresponsing terms sn by the induction hypothesis there is some e such that x s e and e if e d or d e then d x s d by ast or ast respectively if both d e and e d then d x s d with a stupid warning by ast c ase ast t d x b c d c c d by the induction hypothesis there is some e such that x s e and e this means that e d because in ffj each class has just one superclass and if both e c and e d then either c d or d c which contradicts the induction hypothesis so d x s d with a stupid warning by ast t u l emma weakening if t c then x d t c proof straightforward induction the proof for ffj is similar to the proof for fj t u l emma if mtype m last d d and mbody m last x t then for some and some c d we have and x d this t c proof by induction on the derivation of mbody m last the base case in which m is defined in the most specific refinement of is easy since m is defined in ct last and the of the class table implies that we must have derived x d this t c by the rules of methods the induction step is also straightforward if m is not defined in ct last then mbody searches the refinement chain from right to left if m has not been found the superclass refinement chain is searched there are two subcases first m is defined in the declaration or in any refinement of this case is similar to the base case second m is defined in a superclass of or in one of s refinements in this case the of the class table implies that we must have derived x d this t c by the wellformedness rules of methods which finishes the 
case t u note that this lemma holds because method refinements do not change the types of the arguments and the result of a method overloading is not allowed and this points always to the class that is introduced or refined t heorem preservation if t c and t then for some proof by induction on a derivation of t with a case analysis on the final rule c ase roj n ew t new v vi fields last d f from the shape of t we see that the final rule in the derivation of t c must be ield with premise new v for some and that c di similarly the last rule in the derivation of new v must be ew with premises v c and c d and with in particular vi ci which finishes the case since ci di c ase nvk n ew t new v u mbody m last x x u this new v the final rules in the derivation of t c must be nvk and ew with premises new v u c c d and mtype m last d by lemma we have x d this t b for some and b with and b by lemma x d this b then by lemma we have x u this new v e for some e b by the transitivity of we obtain e letting e completes the case c ase ast n ew t d new v d new v the proof of d new v c must end with ast since ending with tsc ast or ast would contradict the assumption of the premises of ast give us new v and d c finishing the case the cases for the congruence rules are easy we show just the case ast c ase ast t d d there are three subcases according to the last typing rule used s ubcase ast d by the induction hypothesis for some by transitivity of therefore by ast c c with no additional stupid warning s ubcase ast d by the induction hypothesis for some if c or c then c c by ast or ast without any additional stupid warning on the other hand if both c or c then c c with a stupid warning by ast s ubcase ast d d by the induction hypothesis for some then also c and c therefore c c with a stupid warning if c then c since c and therefore c c with stupid war ning if c then c c by ast with no additional stupid warning this subcase is analogous to the case ast of the proof of lemma t u t heorem progress suppose t is a term if t includes new t as a subterm then fields last c f for some c and if t includes new t u as a subterm then mbody m last x and for some x and proof if t has new t as a subterm then by of the subterm it is easy to check that fields last is and fi appears in it the fact that refinements may add fields that have not been defined already does not invalidate this conclusion note that for every field of a class including its superclasses and all its refinements there must be a proper argument similarly if t has new t u as a subterm then it is also easy to show that mbody m last x and from the fact that mtype m last c d where this conclusion holds for ffj since a method refinement must have the same signature than the method refined and overloading is not allowed t u t heorem type soundness of ffj if t c and t with a normal form then is either a value v with v d and d c or a term containing d new c t in which c proof immediate from theorem and nothing changes in the proof of theorem for ffj compared to fj t u b type soundness proof of ffjpl in this section we provide proof sketches of the theorems correctness of ffjpl and completeness of ffjpl a further formalization would be desirable but we have stopped at this point as is often the case with formal systems there is a between formal precision and legibility we decided that a development of the proof strategies are the best fit for our purposes correctness t heorem correctness of ffjpl given a ffjpl product line pl including with a term t class 
introduction and refinement tables ct it and rt and a feature model fm every program that can be derived with a valid feature selection fs is a ffj program cf figure pl t ct it rt fm pl is derive pl fs is fs is valid in fm the proof strategy is as follows assuming that the ffjpl type system ensures that each slice is a valid ffj type derivation lemma and that each valid feature selection corresponds to a single slice lemma it follows that the corresponding program is before we prove theorem we develop two required lemmas that cover the two assumptions of our proof strategy l emma given a ffjpl product line every slice of the product line s type derivation corresponds to a set of valid type derivation s in ffj proof proof sketch given a ffjpl product line the corresponding type derivation consists of possibly multiple slices the basic case is easy there is only a simple derivation without branches due to mutually exclusive features optional features may be present in this case each term has only a single type which is the one that would also be determined by ffj furthermore ffjpl guarantees that referenced types methods and fields are present in all valid variants using the predicate validref let us illustrate this with the rule ieldpl the other rules are analogous e e validref e a fields last e f c f g cnm a ieldpl in the basic case there are no branches in the type derivation and thus the term has only a single type for the same reason fields returns only a simple list of fields that contains the declaration of field finally ieldpl checks whether the declaration of f is present in all valid variants using validref hence in the basic case an ffjpl derivation that ends at the rule ieldpl is equivalent to a set of corresponding ffj derivations which do not contain alternative and optional features and thus has a single type fields returns a simple list of fields that contains the declaration of f and the declaration of f is present the reason that an ffjpl derivation without mutually exclusive features a single slice corresponds to multiple ffj derivations is that the ffjpl derivation may contain optional features whose different combinations correspond to the different ffj derivations using predicate validref all type rules of ffjpl ensure that all possible combinations of optional features are welltyped in the case that there are multiple slices in the ffjpl derivation a term may have multiple types the type rules of ffjpl make sure that every possible shape of a given term is each possible type of the term leads to a branch in the derivation tree the premise of ieldpl checks whether all possible shapes of a given term are by taking the conjunction of all branches of the derivation hence if ieldpl is successful each individual branch holds each slice corresponds to a ffj program ensuring that in the presence of optional features all relevant subterms are all referenced elements are present in all valid variants a slice covers a set of ffj derivations that correspond to different combinations of optional features like in the basic case for example in a field projection the subterm has multiple types for all these types fields yields all possible combinations of fields declared by the variants of the types it is checked whether for each type of the subterm each combination of fields contains a proper declaration of field the different types of f become the possible types of the overall field projection term like in the basic case it is checked whether every possible type of is present in all 
valid variants using validref so that each slice corresponds a valid ffj derivation a whole set of derivations covering different combinations of optional features t u l emma given a ffjpl product line each valid feature selection corresponds to a single slice in the corresponding type derivation proof proof sketch by definition a valid feature selection does not contain mutually exclusive features considering only a single valid feature selection each term has only a single type but the type derivation of the overall product line contains branches corresponding to alternative types of the terms a successive removal of mutually exclusive features removes these branches until only a single branch remains consequently a valid feature selection corresponds to a single slice t u proof proof sketch of theorem correctness of ffjpl the fact that the ffjpl type system ensures that each slice is a valid ffj type derivation lemma and that each valid feature selection corresponds to a single slice lemma implies that each program that corresponds to a valid feature selection is t u completeness t heorem completeness of ffjpl given an ffjpl product line pl including a term t class introduction and refinement tables ct it and rt and a feature model fm and given that all valid feature selections fs yield ffj programs according to theorem pl is a product line according to the rules of ffjpl pl t ct it rt fm fs fs is valid in fm derive pl fs is pl is proof proof sketch of theorem completeness of ffjpl there are three basic cases pl has only mandatory features pl has only mandatory features except a single optional feature pl has only mandatory features except two mutually exclusive features proving theorem for the first basic case is trivial since only mandatory features exist only a single ffj program can be derived from the product line if the ffj program is the product line is too because all elements are always reachable and each term has only a single type in fact the type rules of ffjpl and ffj become equivalent in this case in the second basic case two ffj programs can be derived from the product line one including and one excluding the optional feature the difference between the two programs is the content of the optional feature the feature can add new classes refine existing classes by new methods and fields and refine existing methods by overriding if the two programs are then the overall product line is as well since the reachability checks succeed in every type rule of ffjpl otherwise at least one of the two programs would not be since in this case the reachability checks are the only difference between ffjpl s and ffj s type rules as in the first case each term has only a single type since there are no mutually exclusive features the fact that the two fj programs are implies that all elements are reachable in the type derivations of two ffj programs thus the reachability checks of the ffjpl derivation succeed in every case the product line in question is in the third basic case two ffj programs can be derived from the product line one including the first alternative and the other including the second alternative of the feature in question the difference between the two programs is on the one hand the program elements one feature introduces that are not present in the other and on the other hand the alternative definitions of similar elements like two alternative definitions of a single class the first kind of difference is already covered by the second basic case alternative definitions of a 
program element second kind of difference that are in the context of their enclosing ffj programs are in ffjpl because they lead to two new branches in the derivation tree which are handled separately and the conjunction of their premises must hold since the corresponding ffj type rule for the element succeeds in both ffj programs their conjunction in the ffjpl type rule always holds the product line in question is finally we it remains to show that all other cases all other combinations of mandatory optional and alternative features can be reduced to combinations of the three basic cases which proves theorem to this end we divide the possible relations between features into three disjoint sets a feature is reachable from another feature in all variants a feature is reachable from another feature in some but not in all variants two features are mutually exclusive from these three possible relations we construct a general case that can be reduced to a combination of the three basic cases assume a feature that is mandatory with respect to a set of features that is optional with respect to a set of features and that is alternative to a set of features we use arrows to illustrate to which of the three basic cases a pairwise relation between and each element of a list is reduced aa aa a such an arrow diagram can be created for every feature of a product line the reason is that the three kinds of relations are orthogonal and there are no further relations relevant for type checking hence the general case covers all possible relations between features and combinations of features the description of the general case and the reduction finish the proof of theorem ffjpl s type system is complete t u
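The lemmas and theorems above repeatedly appeal to the order in which mbody and mtype look a method up: first the refinement chain of a class, from the most specific refinement backwards, and only then the superclass's chain. The following Python sketch models that lookup over hypothetical class and refinement tables; the table layout, class names, and method bodies are illustrative assumptions, not the formal definitions of the calculus.

```python
# A sketch of FFJ-style method lookup. The table layout, class names, and
# method bodies are hypothetical illustrations, not the calculus itself.

# CT: class table, mapping a class to (superclass, {method name: body}).
CT = {
    "Object": (None, {}),
    "A": ("Object", {"m": "int m() { return 0; }"}),
    "B": ("A", {}),
}

# RT: refinement table, mapping a class to its refinement chain; the last
# entry is the most specific refinement, written "last C" in the text above.
RT = {
    "A": [{"m": "int m() { return 1; }"}],  # a feature refines A and overrides m
    "B": [{"n": "int n() { return 2; }"}],  # another feature adds n to B
}

def mbody(m, cls):
    """Search the refinement chain of cls from right to left, then the class
    declaration itself; if m is still not found, continue with the superclass."""
    while cls is not None:
        for refinement in reversed(RT.get(cls, [])):
            if m in refinement:
                return refinement[m]
        superclass, methods = CT[cls]
        if m in methods:
            return methods[m]
        cls = superclass
    return None

# The most specific refinement of A wins over A's own declaration of m, and a
# lookup started at B falls through B's refinements into A's chain.
assert mbody("m", "B") == "int m() { return 1; }"
assert mbody("n", "B") == "int n() { return 2; }"
```

Because a refinement may override a method but must not change its signature, and overloading is not allowed, the type reported by mtype is the same wherever along this search the body is found, which is exactly the invariant the substitution and preservation arguments rely on.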
aug spanning simplicial complexes of multigraphs imran ahmed shahid muhmood abstract a multigraph is a nonsimple graph which is permitted to have multiple edges that is edges that have the same end nodes we introduce the concept of spanning simplicial complexes g of multigraphs g which provides a generalization of spanning simplicial complexes of associated simple graphs we give first the characterization of all spanning trees of a r multigraph un m with n edges including r multiple edges within and outside the cycle of length then we determine the facet ideal r r if un m of spanning simplicial complex un m and its primary decomposition the euler characteristic is a topological and homotopic invariant to classify surfaces finally we device a formula for r euler characteristic of spanning simplicial complex un m key words multigraph spanning simplicial complex euler characteristic mathematics subject classification primary secondary introduction let g g v e be a multigraph on the vertex set v and a spanning tree of a multigraph g is a subtree of g that contains every vertex of we represent the collection of all of the spanning trees of a multigraph g by s g the facets of spanning simplicial complex g is exactly the edge set s g of all possible spanning trees of a multigraph therefore the spanning simplicial complex g of a multigraph g is defined by g hfk fk s g i which gives a generalization of the spanning simplicial complex g of an associated simple graph the spanning simplicial complex of a simple connected finite graph was firstly introduced by anwar raza and kashif in many authors discussed algebraic and combinatorial properties of spanning simplicial complexes of various classes of simple connected finite graphs see for instance and let be simplicial complex of dimension we denote fi by the number of of simplicial complex then the euler characteristic of is given ahmed and muhmood by x i fi which is a topological and homotopic invariant to classify surfaces see and r the multigraph un m is a connected graph having n edges including r multiple edges within and outside the cycle of length our aim is to give some algebraic and topological characterizations of spanning simplicial r complex un m in lemma we give characterization of all spanning trees of a r multigraph un m having n edges including r multiple edges and a cycle of r length in proposition we determine the facet ideal if un m of r spanning simplicial complex un m and its primary decomposition in theorem we give a formula for euler characteristic of spanning simplicial r complex un m basic setup a simplicial complex on n n is a collection of subsets of n satisfying the following properties j for all j n if f then every subset of f will belong to including empty set the elements of are called faces of and the dimension of any face f is defined as and is written as dim f where is the number of vertices of f the vertices and edges are and dimensional faces of respectively whereas dim the maximal faces of under inclusion are said to be the facets of the dimension of is denoted by dim and is defined by dim max dim f f if fq is the set of all the facets of then fq a simplicial complex is said to be pure if all its facets are of the same dimension a subset m of n is said to be a vertex cover for if m has intersection with every fk m is said to be a minimal vertex cover for if no proper subset of m is a vertex cover for definition let g g v e be a multigraph on the vertex set v and a spanning tree of a multigraph g is a subtree of g 
that contains every vertex of definition let g g v e be a multigraph on the vertex set v and edgeset let s g be the of all possible spanning trees of we define a simplicial complex g on e such that the facets of g are exactly the elements of s g we call g as the spanning simplicial complex of g and given by g hfk fk s g spanning simplicial complexes of multigraphs r definition a multigraph un m is a connected graph having n edges including r multiple edges within and outside the cycle of length let be simplicial complex of dimension then the chain complex is given by d cd each ci is a free abelian group of rank fi the boundary homomorphism cd is defined by p i vi v d of course hi zi where zi ker and bi im are the groups of simplicial icycles and simplicial respectively therefore rank hi rank zi rank bi one can easily see that rank bd due to bd for each i there is an exact sequence i zi ci moreover fi rank ci rank zi rank therefore characteristic of can be expressed as pd pdthe euler i i fi rank zi rank p p i rank zi i rank changing index of summation in the last sum and using the fact that rank rank bd we get p p i rank zi rank bi p p i rank zi rank bi i rank hi thus the euler characteristic of is given by d x i where rank hi is the betti number of see and r topological characterizations of un m r let un m be a multigraph having n edges including r multiple edges within and outside the cycle of length we fix the labeling of the edge r set e of un m as follows e e e e e e ev where eiti are the multiple edges of edge of cycle with i r while e are the single edges of the cycle and ejtj are the multiple edges of the edge outside the cycle with m j m r moreover ev are single edges appeared outside the cycle ahmed and muhmood r we give first the characterization of s un m r lemma let un m be the multigraph having n edges including r multiple edges and a cycle of length m with the edge set e given above a r if and only if twiw subset e twiw e will belong to s un m e e i e i ev ewiw for some ih th with h r m h m r and iw tw with w r or iw with r w m for some w h and iw ih appeared in twiw r proof by cutting down method the spanning trees of un m can be obtained by removing exactly th edges from each multiple edge such that h r m h m r and in addition an edge from the resulting cycle need to be removed therefore the spanning trees will be of the form twiw e e i e i ev ewiw for some ih th with h r m h m r and iw tw with w r or iw with r w m for some w h and iw ih appeared in twiw in the following result we give the primary decomposition of facet ideal r if un m r proposition let un m be the spanning simplicial complex of unir cyclic multigraph un m having n edges including r multiple edges within and outside the cycle of length then r if un m xiti xa r r xiti xbtb xjtj where ti with i r is the number of multiple edges appeared in the edge of the cycle and tj with m j m r is the number of multiple edges appeared in the outside the cycle r proof let if un m be the facet ideal of the spanning simplicial complex r un m from proposition minimal prime ideals of the facet ideal if have correspondence with the minimal vertex covers of the simplicial complex therefore in order to find the primary decomposition r it is sufficient to find all the minimal vertex of the facet ideal if un m r covers of un m r as ea a v is not an edge of the cycle of multigraph un m r and does not belong to any multiple edge of un m therefore it is clear by definition of minimal vertex cover that ea a v is a minimal vertex r cover 
of un m moreover a spanning tree is obtained by removing exactly th edges from each multiple edge with h r m m r and spanning simplicial complexes of multigraphs r in addition an edge from the resulting cycle of un m we illustrate the result into the following cases r case if atleast one multiple edge is appeared in the cycle of un m then we can not remove one complete multiple edge and one single edge from the cycle r of un m to get spanning tree therefore xiti with i r r k m is a minimal vertex cover of the spanning simplicial complex r r un m having intersection with all the spanning trees of un m r moreover two single edges can not be removed from the cycle of un m to get spanning tree consequently for r k l m is a minimal r vertex cover of un m having intersection with all the spanning r trees of un m r case if atleast two multiple edges are appeared in the cycle of un m then r two complete multiple edges can not be removed from the cycle of un m to get spanning tree consequently xiti xbtb for i b r is a r minimal vertex cover of un m having intersection with all the r spanning trees of un m r case if atleast one multiple edge appeared outside the cycle of un m then r one complete multiple edge outside the cycle of un m can not be removed to get spanning tree so xjtj for m j m r is a minimal vertex r cover of un m having intersection with all the spanning trees r of un m this completes the proof r we give now formula for euler characteristic of un m r theorem let un m be spanning simplicial complex of r multigraph un m having n edges including r multiple edges and a cycle of p r length then dim un m n ti tj r and the euler r characteristic of un m is given by q p n r i ti un m ij p p q p tk j p with j p ij ij r m ij q ij p tk tj such that r and r are the number of multiple ti and edges appeared within and outside the cycle respectively proof let e e e e e e ev be the edge set of r multigraph un m having n edges including r multiple edges and a cycle of length m such that r and r are the number of multiple edges appeared within and outside the cycle respectively ahmed and muhmood r is of the same dimension one can easily see that each facet twiw of un m r p p p n ti tj r n r with ti and tj see lemma by definition fi is the number of subsets of e with i elements not q containing cycle and multiple edges there are ti number of subsets of e containing cycle but not containing any multiple edge within the cycle there are n m l n m r subsets of e containing cycle and multiple edges of un m outside the cycle but not containing any multiple edge within the cycle there are n m n r m i m x n m l l r subsets of e containing cycle and multiple edges of un m outside the cycle but not containing any multiple edge within the cycle continuing in similar manner the number of subsets of e containing cycle and two edges from a multiple edge outside the cycle but not containing any multiple edge within the cycle is given p q p tk choices of two edges from a there are multiple edge outside the cycle therefore we obtain x x y n m l tk the number of subsets of e with elements containing cycle and all possible choices of two edges from a multiple edge outside the cycle but not containing multiple edges within the cycle now we use inclusion exclusion principal to obtain number of subsets of e containing cycle but not containing multiple edges q ti number of subsets of e with i elements containing cycle but not containing multiple edges within the cycle number of subsets of e with i elements containing cycle 
and multiple edges outside the cycle but spanning simplicial complexes of multigraphs not containing multiple edges within the cycle number of subsets of e with i elements containing cycle and multiple edges outside the cycle but not containing multiple edges within the cycle number of subsets of e with i elements containing cycle and two edges from a multiple edge cycle but not containing multiple edges within the cycle q p ti p p p q tk q ti ij q p p p tk j ij therefore we compute fi number of subsets of e with elements number of subsets of e with i elements containing cycle but not containing multiple edges number of subsets of e with elements containing multiple edges number of subsets of e with elements containing multiple edges number of subsets of e with i elements containing two edges from a multiple edge r of un m q n ti ij q p p p tk j ij p p ij r p m q p tk q n ti ij p q p p tk j p j ij ij r m ij q ij p tk ahmed and muhmood e e e e e e e figure example let e be the edge set of cyclic multigraph having edges including multiple edges and a cycle of length as shown in figure by method we obtain s by definition fi is the number of subsets of e with i elements not containing cycle and multiple edges since and are subsets of e containing one element it implies that there are subsets of e containing two elements but not containing cycle and multiple edges so we know that the spanning trees of are facets of the spanning simplicial complex therefore thus now we compute the euler characteristic of by using theorem we observe that n r r r m and i d where d n r is the dimension of by substituting we these values in theorem fi alternatively we compute and we compute now the betti numbers of the facet ideal of is given by spanning simplicial complexes of multigraphs if i we consider the chain complex of ker with i the homology groups are given by hi im therefore the betti number of is given by rank hi rank ker rank im with i now we compute rank and nullity of the matrix of order fi with i the boundary homomorphism can be expressed as the boundary homomorphism can be written as then by using matlab we compute rank of nullity of rank of nullity of therefore the betti numbers are given by rank ker im rank ker rank im rank ker rank im alternatively the euler characteristic of is given by references anwar raza and kashif spanning simplicial complexes of graphs algebra colloquium faridi the facet ideal of a simplicial complex manuscripta harary graph theory reading ma hatcher algebraic topology cambridge university press kashif anwar and raza on the algebraic study of spanning simplicial complex of graphs gn r ars combinatoria pan li and spanning simplicial complexes of graphs with a common vertex international electronic journal of algebra ahmed and muhmood rotman an introduction to algebraic topology new york villarreal monomial algebras dekker new york zhu shi and spanning simplicial complexes of graphs with a common edge international electronic journal of algebra comsats institute of information technology lahore pakistan address drimranahmed comsats institute of information technology lahore pakistan address shahid nankana
dec on the conjecture abed abedelfatah abstract it has been conjectured by eisenbud green and harris that if i is a homogeneous ideal in k xn containing a regular sequence fn of degrees deg fi ai where an then there is a homogeneous an ideal j containing xa xn with the same hilbert function in this paper we prove the conjecture when fi splits into linear factors for all i introduction let s k xn be a polynomial ring over a field the ring s sd is graded by deg xi for all i in proved that if i id is a graded ideal in s then there exists a lex ideal l such that l has the same hilbert function as i every hilbert function in s is attained by a lex ideal let m be a monomial ideal in it is natural to ask if we have the same result in in clements and proved that every hilbert function in xann is attained by a lex ideal where an in the case an the result was obtained earlier by katona and kruskal another generalizations of macaulay s theorem can be found in and let fn be a regular sequence in s such that deg an deg fn a well known result says that fn has the same hilbert function as xann see exercise of it is natural to ask what happens if i s is a homogeneous ideal containing a regular sequence in fixed degrees this question bring us to the conjecture denoted by egh conjecture egh if i is a homogeneous ideal in s containing a regular sequence fn of degrees deg fi ai where an then i has the same hilbert function as an ideal containing xann the original conjecture see conjecture is equivalent to conjecture in the case ai for all i see proposition the egh conjecture is known to be true in few cases the conjecture has been proven in the case n caviglia and maclagan have proven that the egh conjecture is true if aj ai for all j richert says that the egh conjecture in degree ai for all i holds for n but this result was not published herzog and popescu proved that if k is a field of characteristic zero and i is minimally generated by generic quadratic forms then the egh conjecture in degree holds cooper has done some work in a geometric direction she studies the egh conjecture for some cases with n key words and phrases hilbert function egh conjecture regular sequence let fn be a regular sequence in s such that fi splits into linear factors for all i for all i n let pi such that pi since pn must be a independent it follows that the map s s defined by xi pi for all i n is a graded isomorphism so the hilbert function is preserved under this map and we may assume that pi xi for all i in section we give background information to the egh conjecture in section we study the dimension growth of some ideals containing a regular sequence xn ln where li for all i in section we prove the egh conjecture when fi splits into linear factors for all i this answers a question of chen who asked if the egh conjecture holds when fi xi li where li for all i n see example of background a proper ideal i in s is called graded or homogeneous if it has a system of homogeneous generators let r where i is a homogeneous ideal the hilbert function of i is the sequence h r h r t where h r t dimk rt dimk st for simplicity sometimes we denote the dimension of a space v by instead of dimk v for a space v sd where d we denote by v the space spanned by xi v i n v v throughout this paper a an zn where an for a subset a of s we denote by mon a the set of all monomials in a and let au j xj where u mon s the support of the polynomial f s au u where au k is the set supp f u mon s au a monomial w s is called if w for all i we define the lex order 
on mon s by setting xb lex xc if either deg xb deg xc or deg xb deg xc and bi ci for the first index i such that bi ci we recall the definitions of lex ideal and ideal definition a graded ideal is called monomial if it has a system of monomial generators a monomial ideal i s is called lex if whenever i z lex w where w z are monomials of the same degree then w i a monomial ideal i is if there exists a lex ideal l such that i xann example the ideal i is a ideal in k because i and is a lex ideal in k by s theorem we obtain that for any graded ideal containing xann there is a an ideal with the same hilbert function be the unique macaulay expansion of p with let p and sqq sq in eisenbud respect to q set q and p q q green and harris made the following conjecture conjecture if i s is a graded ideal such that contains a regular sequence of maximal length and d then h d h d d on the conjecture conjecture is true if the ideal contains the squares of the variables this follows from the theorem see in the following proposition we prove the equivalence of conjecture and the egh conjecture in degree first we need the following definition definition let m be a monomial ideal in s and d a monomial vector space ld in d is called lexsegment if it is generated by the t biggest monomials with respect to the lex order in d sd for some t for example if l is a lex ideal in s then lj is lexsegment for all j if ld is a lexsegment space in d where m is a monomial ideal in s then ld is lexsegment in see proposition of proposition let fn be a regular sequence of degrees in the following are equivalent a if i is a graded ideal in s containing fn then there is a graded ideal j in s containing such that h h b if i is a graded ideal in s containing fn then h d h d d for all d proof first we prove that a implies b let i be a graded ideal in s containing by a it follows that there is a graded ideal j in s containing such that h h by theorem it follows that h h d h d d h d d for all d now we prove that b implies a let i be a graded ideal in s containing fn set m and p fn for every d let ld be the space spanned by the first monomials in lex order of sd such that md let k kj lj mj we need to show that k is an ideal let d by proposition of we obtain that ld ld d by the hypothesis of b we obtain d d so ld ld ld this implies that ld since and ld are lexsegments in it follows that ld so kd for all d which implies that k is a graded ideal in clearly h h the following lemma helps us to study the egh conjecture in each component of the homogeneous ideal lemma let i be a graded ideal in s containing a regular sequence fn of degrees deg fi ai the following are equivalent a there exists a graded ideal j in s containing xann such that h h b for every d there exists a graded ideal j in s containing xann such that h d h d and h d h d proof clearly a implies b we will show that b implies a for every d there exists an ideal jd in s containing xann such that h d h d and h d h d by s theorem we may assume that jd is a ideal for all let j jj j where jj j is the component of jj since dim jd dim dim it follows that jd for all so jd d jd for all thus j is an ideal clearly h h we will use the following lemma on regular sequences see chapter lemma let fn be a sequence of homogeneous polynomials in s with deg fi ai and p fn then a if fn is a regular sequence then h h xann b fn is a regular sequence if and only if the following condition holds if gn fn for some gn s then gn p c if fn is a regular sequence and sn is a permutation then n is a regular 
sequence the dimension growth of some ideals containing a reducible regular sequence let fn xn ln be a regular sequence in s where li for all i set p fn and m let vd be a vector space spanned by pd and monomials wt in sd and wd be the vector space spanned by md and wt in this section we prove that dim vd dim wd we also compute dim kd where kd is the space generated by pd and the biggest in lex order monomials vt in sd for a matrix a k we denote by a ir the submatrix of a formed by rows ir and columns ir where r n and ir we begin with the following lemma which characterize the structure of fn lemma example of let fn xn ln be a sequence of homogeneous polynomials in s where li aij xj with aij k and a be the n n matrix aij then fn is a regular sequence if and only if det a ir for all r n and ir proof assume that fn is regular we prove that det a ir for all r n and ir n by induction on n starting with n let n assume that ir n where r n let j ir note that xj lj is regular modulo an ideal i if and only if both xj and lj are regular modulo i by lemma xj fn is a regular sequence so fn is a regular sequence in by the inductive step we obtain that det a ir it remains to show that det a from the permutability property of regular sequences of homogeneous polynomials we obtain that ln is a regular sequence so ln is independent assume now det a ir for all r n and ir we prove that fn is a regular sequence by induction on n starting with n let n by the inductive step the sequence is regular in so xn is a regulae sequence in it remains to show that ln is a regular sequence since det a it follows that the map s s defined by xi li for all i is an isomorphism by the inductive step ln xn is a regular sequence so ln is a regular sequence as desired the special structure of the regular sequence in implies the following lemma on the conjecture lemma let fn xn ln be a regular sequence of homogeneous polynomials in s where li aij xj with aij k and p fn if g p is a homogeneous polynomial in s then g h mod p where deg h deg g and h is a combination of monomials proof since g p we have deg g it is sufficient to prove the lemma when g p is a monomial in of degree we prove by induction on deg g the lemma is true when deg g since aii for all i let g be a monomial in of degree d and a be the n n matrix aij by the inductive step we may assume that xgi is a monomial for some i by lemma we have det a j j ag so there exist scalars cj such that cj lj xi mod j ag it follows that xi cj lj cj xj where cj k for all j ag then g cj lj xgi cj xj xgi let h cj xj xgi note that h is a combination of monomials of degree since g cj lj xi p we obtain that g h mod p by the proof of lemma we obtain the following remark let p be as in lemma and d if w is a monomial in sd and q then qw where p and is a combination of monomials example assume that s c and in this case a is the matrix that defined in lemma since det a ir for all r and ir we have that is a regular sequence in set p and let g since mod p we have mod p so g mod p also we see that p so g mod p and remark lemma is not true if fn is an arbitrary regular sequence for example consider the sequence in c note that is a regular sequence and are regular sequences and are regular sequences and are regular elements in c and c respectively so is a regular sequence let g it is easy to show that g if g mod for some a c then there exist c not all zero such that g but the equation implies that a contradiction as a result of lemma we obtain the following lemma if p as in lemma then the set of all 
monomials form a of proof denote by a the set of all monomials in lemma shows that generated by a let w assume that w p since h h it follows that there is a polynomial f sn such that f p by lemma f mod p where b since w p it follows that f p a contradiction so w p suppose that aw w p where aw k and aw for almost all w a assume that aw for some let v a be a monomial with minimal degree such that av so v i av in the ring i av a contradiction lemma let p be as in lemma if w is a monomial in sd where d n then a w b w wt w w wt for every monomials wt of degrees d such that wi w for all i proof a let q ci li where ci k for all i such that qw assume that cj for some j aw since qw xk p it follows that cj lj w xk p thus cj lj w xk hn fn where hi s for all i so xj hj cj w xk lj hn fn which implies that xj hj cj w xk fn so w xk fn in the ring a contradiction to lemma it follows that q belong to the space li i aw on the other hand li w p for all i aw so w dim li w i aw b first we show that w wt w w wt assume that qw wt where q there exist f wt and g such that qw g f if f p then qw w so assume that f p by we may assume that f is a combination of monomials also we obtain that qw where p and is a combination of monomials so f p which implies that f wt hence qw w w wt and we obtain that the desired equality it remains to show that w wt let qw wt where q by a we have q cj lj where cj k for all j aw for every j t let ij awj and let b ij j t by the hypothesis we obtain that qw qi wi where qi for all i so on the conjecture qw in the ring j which implies that cj lj by we obtain that cj for all j aw thus qw remark part b of lemma is not true if we replace w wt by homogeneous polynomials which are a combination of monomials in sd for example let s k and p suppose that h and computation with shows that h and h h in the case that w is a homogeneous polynomial in part a of lemma the dimension is always bounded by the degree this is a result of the following proposition proposition let p be as in lemma if g p is a homogeneous polynomial of degree d then g proof we prove by induction on if n then g or g k where a if g k then g and if g then g let n we prove by induction on d starting with d let d if d n then and so g assume that d by there exists a combination of monomials h sd such that g h mod pd clearly h g let h ai wi where ai k and wi mon sd for all i let j if lj h then lj in the ring i aw i a contradiction so lj h for all j in particular there exists a variable xi such that xi h we have two cases case h pd in the ring let h ps h be a basis of h in the ring by the inductive step we obtain that s if f h then f h ps h xi q where q sd since f h it follows that xi q rh where r since xi h it follows that xi so f h ps h xi h therefore h h ps h xi h if h s then xi h a contradiction case h pd in the ring so h xi q mod pd where q since h is the unique combination of monomials such that xi q h mod p we obtain that h xi where if f h then f pxi for some p clearly xfi since f p it follows that pd in the ring so xfi pd in if in then xi h p a contradiction let ps be a basis of pd by the inductive step we obtain that s d so xfi ps li q which implies that f h ps h li xi q therefore h s now we prove the main results of this section theorem let p be as in lemma and m assume that v pd wt and w md wt where wi is a monomial of degree d for all i then dim w dim proof we may assume that d and prove by induction on if t then dim w dim dim dim dim dim dim dim let t and set md pd and z wt by lemma and the inductive step we have dim w dim 
dim wt dim wt dim dim wt dim wt dim dim wt dim wt dim z dim dim wt dim wt dim z dim dim wt dim wt dim proposition let p be as in lemma and v pd wt be the space spanned by pd and the t biggest in lex order monomials in sd then t n dim v n m wi where m wi max j xj i proof we claim that t t t v wi wi wi we prove the claim by induction on if t then v let t and pd by the inductive step we obtain that v is equal to t wi wi wi wt v by lemma we have wt wt wt we proved the claim let j if i m wj such that xi wj then xi wj so wi m wj therefore v t n tn td m wi d t n tn td m wi d t n tn td m wi d t n n m wi on the conjecture the main result in this section we prove that the egh conjecture is true if fi splits into linear factors for all i we begin with the following lemma lemma let p fn be an ideal of s generated by a regular sequence with deg fi ai and n assume that fn where qs then a h h for all m k b h p h p for all j s and j m k proof first we will prove a let m k note that p and p are ideals in and respectively generated by note also that qm and qk are regular sequences by part c of lemma we obtain that is a regular sequence in and by part a of lemma we obtain that h h now we prove b let j s and j m k assume that h p where p and since p it follows that gn fn where gn s gn since is a regular sequence it follows that gn so in the ring which implies that h conversely fi p for all i n so p is an ideal in generated by similarly p is an ideal in generated by by lemma it follows that h p h p theorem assume that the egh conjecture holds in k where n if i is a graded ideal in s k xn containing a regular sequence fn of degrees deg fi ai such that qi for all i s then i has the same hilbert function as a graded ideal in s containing xann proof we check the property b of lemma let d we need to find a graded ideal k in s containing xann such that h d h d and h d h d let j to be the ideal generated by fn and id by renaming the linear polynomials qs we may assume without loss of generality that for all i s j j for all i s j j for all i s j j by considering the short exact sequences j j j j j j j j j j we see that h t is equal to h t h j t i h j t s for all t let j j and for i s let h for all ji j note that ji and h i i s set s k for all i s is isomorphic with the to s so by the hypothesis there is an ideal in s containing same hilbert function as ji for all i s let li be the ideal such that h h in s containing claim li j j for all i s and j d i where li j is the component of the ideal li proof of the claim assume that i if j d then by part a of lemma we obtain j j if j d then by our assumption we obtain d d this means that h j h j for all j so h j h j for all j since and are ideals it follows that j j for all j let i s if j d i then by part b of lemma we obtain j j j p j p j j if j d i then by our assumption we obtain j j j j j j j similarly we conclude that li j j for all j d i and proving the claim on the conjecture let ks z mon s j and ki zxin z mon li for all n i s define k to be the ideal generated by ki since xsn ks and xai i for all i n it follows that xann claim if w is a monomial in k of degree t where t d then w ki proof of the claim there exists a monomial u in ki such that w vu for some monomial v if u ks then w ks assume that u zxin ki where z li for some i s if xn v then w ki assume that xn let r max j xjn if i r s then w ks so we may assume that i r by the previous claim we obtain that li j j for all j d i r since deg z d i r it follows that z so xvr z and then n v n xrn w hence we proved the 
claim we conclude that the number of monomials in k of degree t where t d is equal to i since i it follows that t i so h t h t i h t i h t in particular h d h d h d and h d h d h d corollary if i is a graded ideal in s containing a regular sequence fn with deg fi ai such that fi splits into linear factors for all i then i has the same hilbert function as a graded ideal in s containing xann since the egh conjecture holds when n we obtain the following corollary let n if i is a graded ideal in s containing a regular sequence fn with deg fi ai such that fi splits into linear factors for all i n then i has the same hilbert function as a graded ideal in s containing xann by the egh conjecture is equivalent to the following conjecture conjecture if i is a homogeneous ideal in s containing a regular sequence fn of degrees deg fi ai then i has the same hilbert function as an ideal containing a regular sequence gn of degrees deg gi ai where gi splits into linear factors for all i example let s c fi xi xj for all i and a since det a ir for all r and ir it follows that is a regular sequence in assume that i in this example we construct an ideal in s with the same hilbert function as i using the hilbert functions of i and i computation with shows that and are the hilbert sequence of i and respectively denote by r the polynomial ring c let r and note that and are ideals in we can see that and w w mon and w j rj for all j so we have also we have and so we have j rj for all j let k to be the ideal in s generated by mon w mon then k it is clear that and since it follows that also we have and kj sj for all j thus example let s c fi xi xj for all i and since is a regular sequence it follows that is a regular sequence in assume that i computation with shows that also we have and i i we construct an ideal in s with the same hilbert function as i using the hilbert functions of i i and i denote by and the ideals i i and i respectively let r c and an easy calculation shows that is a ideal in r and let we can see that is a ideal and let on the conjecture also we have that is a ideal in r and let k to be the ideal in s generated by mon w mon w mon the ideal k generated by computation with shows that references abedelfatah rings journal of algebra aramova herzog and hibi gotzmann theorems for exterior algebras and combinatorics journal of algebra caviglia and maclagan some cases of the conjecture mathematical research letters chen some special cases of the conjecture clements and a generalization of a combinatorial theorem of macaulay journal of combinatorial theory cooper growth conditions for a family of ideals containing regular sequences journal of pure and applied algebra cooper the conjecture for ideals of points eisenbud green and harris higher castelnuovo theory herzog and hibi monomial ideals volume springer verlag herzog and popescu hilbert functions and generic forms compositio mathematica katona a theorem of finite sets theory of graphs pages kruskal the number of simplices in a complex mathematical optimization techniques page macaulay some properties of enumeration in the theory of modular systems proceedings of the london mathematical society matsumura commutative ring theory volume of cambridge studies in advanced mathematics cambridge university press cambridge mermin and peeva lexifying ideals mathematical research letters richert a study of the lex plus powers conjecture journal of pure and applied algebra da shakin piecewise lexsegment ideals sbornik mathematics department of mathematics 
university of haifa mount carmel haifa israel address abed
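Two elementary computations recur throughout the paper: the Hilbert function of a quotient by a monomial ideal, which is what the examples above compare after replacing the given ideals by monomial ones, and the Macaulay expansion of an integer with respect to a degree, in terms of which the degree-by-degree bound of the conjecture is stated. The following Python sketch implements both; the specific generators and input values are hypothetical illustrations rather than the ideals or numbers appearing in the examples above.

```python
from itertools import combinations_with_replacement
from math import comb

def monomials(n, d):
    """Exponent vectors of all degree-d monomials in n variables."""
    for c in combinations_with_replacement(range(n), d):
        e = [0] * n
        for i in c:
            e[i] += 1
        yield tuple(e)

def hilbert(gens, n, d):
    """dim_k (S/I)_d for the monomial ideal I generated by the exponent
    vectors in gens: count degree-d monomials divisible by no generator."""
    return sum(1 for m in monomials(n, d)
               if not any(all(g[i] <= m[i] for i in range(n)) for g in gens))

# the pure-power ideal (x1^2, x2^2, x3^3) in three variables
powers = [(2, 0, 0), (0, 2, 0), (0, 0, 3)]
print([hilbert(powers, 3, d) for d in range(6)])   # [1, 3, 4, 3, 1, 0]

def macaulay_expansion(p, d):
    """The unique expansion p = C(a_d, d) + C(a_{d-1}, d-1) + ... + C(a_q, q)
    with a_d > a_{d-1} > ... > a_q >= q >= 1, found greedily."""
    terms = []
    while p > 0 and d > 0:
        a = d
        while comb(a + 1, d) <= p:
            a += 1
        terms.append((a, d))
        p -= comb(a, d)
        d -= 1
    return terms

print(macaulay_expansion(7, 3))   # [(4, 3), (3, 2)], i.e. 7 = C(4,3) + C(3,2)
```

For the pure-power ideal the values produced this way are the coefficients of the product of the truncated geometric series (1 + t + ... + t^{a_i - 1}), which is the Hilbert function shared by every quotient by a regular sequence of degrees a_1, ..., a_n, the fact recalled in the introduction.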
jul classifying virtually special tubular groups daniel woodhouse abstract a group is tubular if it acts on a tree with vertex stabilizers and z edge stabilizers we prove that a tubular group is virtually special if and only if it acts freely on a locally finite cat cube complex furthermore we prove that if a tubular group acts freely on a finite dimensional cat cube complex then it virtually acts freely on a three dimensional cat cube complex introduction a tubular group g splits as a graph of groups with vertex groups and z edge groups equivalently g is the fundamental group of a graph of spaces denoted by x with each vertex space homeomorphic to a torus and each edge space homeomorphic to s the graph of spaces x is a tubular space in this paper all tubular groups will be finitely generated and therefore have compact tubular spaces tubular groups have been studied from various persectives brady and bridson provided tubular groups with isoperimetric function for all in a dense subset of in cashen determined when two tubular groups are wise determined whether or not a tubular group acts freely on a cat cube complex and classified which tubular groups are cocompactly cubulated the author determined a criterion for finite dimensional cubulations button has proven that all groups that are also tubular groups act freely on finite dimensional cube complexes the main theorem of this paper is theorem a tubular group g acts freely on a locally finite cat cube complex if and only if g is virtually special haglund and wise introduced special cube complexes in the main consequence of a group being special is that it embeds in a right angled artin group see or for a full outline of wise s program structure of the paper in wise obtained free actions of tubular groups on cat cube complexes by first finding equitable sets that allow the construction e w of immersed walls such a set of immersed walls determines a wallspace x e w which g acts freely on wallspaces were which yields a dual cube complex c x first introduced by haglund and paulin and the dual cube complex construction classifying virtually special tubular groups was first developed by sageev in the author defined a criterion called dilation that determines if an immersed wall produces infinite or finite dimensional e w is cubulations more precisely if the immersed walls are then c x finite dimensional we recall the relevant definitions and background in section section establishes a technical result using techniques from it is shown that that immersed walls can be replaced with primitive immersed walls without losing the finite dimensionality or local finiteness of the associated dual cube complex the reader is encouraged to either read this section alongside or skip it on a first reading e w in the finite dimensional case to establish a set of in section we analyse c x e w is virtually special we decompose c x e w conditions that imply that x e and then under the assumpas a tree of spaces with the same underlying tree as x e w maps into tion that the walls are primitive we show that c x e where r is the standard cubulation of r and e is the underlying graph rd e a further criterion the notion of a fortified immersed wall determines when of e w is locally finite combining these results allow us to give criterion for c x e w to be virtually special x in section we consider a tubular group acting freely on a cat cube complex e y we show that we can obtain from such an action immersed walls that preserve the important properties of ye more 
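As a small illustration of the equitable sets mentioned in the outline above, the following Python sketch checks the equitable-set condition for a toy one-vertex tubular group, using the fact, recalled in the background section below, that the geometric intersection number of two classes in Z^2 is the absolute value of the determinant of the matrix they form. The vertex group, attaching curves, and candidate set are assumptions made up for the example.

```python
def intersection_number(a, b):
    """Geometric intersection number of the classes a, b in Z^2 = pi_1(torus)."""
    return abs(a[0] * b[1] - a[1] * b[0])

# one vertex with group Z^2 and one loop edge whose two attaching curves are:
phi_e, phi_ebar = (1, 0), (0, 1)

# candidate equitable set at the vertex
S_v = [(1, 1), (1, -1)]

left = sum(intersection_number(s, phi_e) for s in S_v)
right = sum(intersection_number(s, phi_ebar) for s in S_v)
print(left, right, "equitable" if left == right else "not equitable")

# S_v must also generate a finite-index subgroup of Z^2; here (1, 1) and
# (1, -1) generate a subgroup of index |det[[1, 1], [1, -1]]| = 2.
```

Both requirements, the balance of intersection numbers across the edge and the finite index of the subgroup generated by S_v, are needed before the circles and arcs of an immersed wall can be assembled.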
precisely we prove the following proposition let g be a tubular group acting freely on a cat cube complex ye then there is a tubular space x and a finite set of immersed walls in e w is the associated wallspace then moreover if x e w g acts freely on c x e w is finite dimensional if ye is finite dimensional c x e w is finite dimensional and locally finite if ye is locally finite c x this proposition is sufficient to allow us to prove in section we further exploit the results obtained in section to obtain the following demonstrating that the cubical dimension of tubular groups with finite dimensional cubulations are virtually within of their cohomological dimension theorem a tubular group acting freely on a finite dimensional cat cube complex has a finite index subgroup that acts freely on a cat cube complex acknowledgements i would like to thank dani wise and mark hagen classifying virtually special tubular groups background tubular groups and their cubulations let g be a tubular group with associated tubular space x and underlying graph given an edge e in a graph we will let and respectively denote the initial and terminal vertices of let xv and xe denote vertex and edge spaces in this graph of spaces let and be the boundary circles of xe and denote the attaching maps by e xe and xe note that and e denote the respectively represent generators of ge in and we will let x eve and x eee denote vertex and edge spaces in the universal universal cover of x let x e and let e denote the tree we will assume that each vertex cover x space has the structure of a nonpositively curved geodesic metric space and that attaching maps e and define locally geodesic curves in and equitable sets and intersection numbers given a pair of closed curves in a torus s t the intersection points are the elements p q s s such that p q for a pair of homotopy classes of closed curves in a torus t their geometric intersection number is the minimal number of intersection points realised by a pair of representatives from the respective classes this number is realised by any pair of geodesic representatives of the classes if b is a finite set of homotopy classes of curves in t then b p viewing and as elements of t we can compute that det given an identification of with t the elements of are identified with homotopy classes of curves in t so it makes sense to consider their geometric intersection number an equitable set for a tubular group g is a collection of sets sv where sv is a finite set of distinct geodesic curves in xv disjoint from the attaching maps of adjacent edge spaces such that sv generate a finite index subgroup of xv gv and e note that equitable sets can also be given with sv a finite subset of gv that generates a finite index subgroup of gv and satisfies the corresponding equality for intersection numbers this is how wise formulates equitable sets and its equivalence follows from exchanging elements of gv xv with geodesic closed curves in xv that represent the corresponding elements an equitable set is fortified if for each edge e in there exists e and such that an equitable set is primitive if every element sv represents a primitive element in gv immersed walls from equitable sets immersed walls are constructed from circles and arcs for each sv let be the domain of the disjoint union f over all sv and v v are the circles since e classifying virtually special tubular groups there exists a bijection from the intersection points between curves in and e and the intersection points between curves in and let p q 
and q be corresponding intersection points between and then an arc a has its endpoints attached to p and p the endpoints of a are mapped into and so the interior of a can be embedded in xe after attaching an arc for each pair of corresponding intersection points we obtain a set of connected graphs that map into x called immersed walls each graph has its own graph of groups structure with infinite cyclic vertex groups and trivial edge groups as in all immersed walls in this paper are immersed walls constructed from equitable sets as above this means that we are free use the results obtained in ei x e is a two sided embedding in x e separating x e into two halfspaces a lift of e i to x e are horizontal walls wh the vertical walls the images of the lifts of wv are obtained from the lifts of curves s xe given by the inclusion s s the set w wh wv of all horizonal and vertical walls e w where the on x e also gives an action on the gives a wallspace x main theorem of is that a tubular group acts freely on a cat cube complex if and only if there exists an equitable set a set of immersed walls is fortified if they are obtained from a fortified equitable set a set of immersed walls is primitive if they are obtained from a primitive equitable set e and e be horizontal walls in e an point x e e is a regular intersection let eve and the lines e x eve and x eve are point if it lies in a vertex space x e e is a intersection point and either x x eve otherwise a point x eve e eve or x x eee where an infinite cube in a cat cube complex ye is an sequence of cn such that cn is an in ye and cn is a face of in a dilation function is constructed for each immersed wall r and an immersed wall is said to be dilated if r has infinite image the following is thm from that paper e w the wallspace obtained from a theorem let x be tubular space and x finite set of immersed walls in x the following are equivalent e w is infinite dimensional the dual cube complex c x e w contains an infinite cube the dual cube complex c x one of the immersed walls is dilated the following result is also obtained from by combining thm prop and prop the last part follows from the last paragraph of the proof of prop classifying virtually special tubular groups e w the wallspace obtained from proposition let x be tubular space and x e w is infinite dimensional then w a finite set of immersed walls in x if c x contains an set of pairwise regularly intersecting walls of infinite cardinality that e w moreover the infinite correspond to the hyperplanes in an infinite cube in c x cube contains a canonical primitive immersed walls the following result uses the techniques in section of to compute the dilation function let be an immersed wall in x and let r be its dilation function if r has finite image then is let q be the quotient map obtained by crushing each circle to a vertex note that the arcs in correspond to the arcs in the dilation function r factors through so there exists a function such that r we can therefore determine if is dilated by computing the function we orient each arc in so that all arcs embedded in the same edge space are oriented in the same direction we orient the arcs in accordingly we define a weighting e let xe be an edge space in x and let be an arc mapped into xe connecting the circles c and c let c and c be the corresponding elements in the equitable set then e e if is an edge path in where is an oriented arc in and then lemma let x be a tubular space and let g x let be a set of immersed walls in x obtained from an 
equitable set sv then there exists a set of primitive immersed walls in x obtained from an equitable set moreover if are then so are if are fortified then so are proof each decomposes as the union of disjoint circles which are the domain of locally geodesic closed paths in the equitable set and arcs suppose that sv where xv gv is primitive let be the immersed wall containing the circle n corresponding to a new equitable set is obtained by replacing in sv with n locally geodesic curves xv with disjoint images in xv that are isotopic to in ev this remains an equitable set since n pn for x classifying virtually special tubular groups any locally geodesic curve in xv new immersed walls are obtained from by replacing with and reattaching the arcs that were attached to the intersection points in n to the corresponding intersection points on let be the new set of immersed walls obtained in this way note that each arc in corresponds to a unique arc in assume that is we claim that the new immersed walls are also let qi and qij be the quotient maps obtained by crushing the circles to vertices let u be the vertex in corresponding to n let ri and rij be the dilation functions let and be the unique maps such that ri qi and rij qij let and be the respective weightings of the arcs in and by assumption ri and have finite image as the arcs in correspond to arcs in there is a map we show is by showing that let be an oriented arc in the edge qij embeds in an edge space xe if the vertices of are disjoint from u then if the endpoints of are contained in ij v and correspond to the circles and then n n e e e n n e e e suppose that exactly one endpoint of is contained in ij u if terminates a vertex in ij u corresponding to and the initial vertex corresponds to a circle that is the domain of a locally geodesic curve then n n e e e n if starts at a vertex in ij u corresponding to and the terminal vertex correspond to a circle that is the domain of a locally geodesic curve then n n e e e n n n therefore given an edge path in since the number of edges exiting vertices in ij v is the same as the number of vertices entering this procedure produces immersed walls with one fewer element in the equitable set repeating this procedure for each element in the equitable set produces a primitive set of immersed walls it is also clear that if are fortified then so are the new immersed walls classifying virtually special tubular groups finite dimensional dual cube complexes e w be the wallspace obtained let x be a tubular space and let g x let x from a set of immersed walls constructed from an equitable set and a vertical immersed wall in each edge space we emphasize that in this section all immersed walls are assumed to be even when it e c x e w and let z e by theorem is not explicitly stated let z e being finite dimensional the immersed walls being is equivalent to z e let e ee denote the vertical wall in x eee for each edge ee in we refer to for full background on the dual cube complex construction a e is a choice of halfspace z e of e for each e w such that z in z e w then z e z e if e w such that x e if x x then there are only finitely many z e e for all but precisely one hyperplane two are adjacent if e the joining and is dual to the hyperplane corresponding to e an is then present wherever the of an appears we e e face each other in z if z e is not contained in z e say that two disjoint walls and vice versa e therefore ze decomproposition there is a map f ze eee f e poses as a tree of spaces with zeve f e v and z e is the 
carrier of the e ee wv hyperplane corresponding to e such that f c by f e we mean the union of all cubes c in z proof as there is a vertical wall in each edge space and since the vertical walls are e wv with the tree e of e we define a all disjoint we can identify c x e let z be a in z then define f z by letting f z e e z e e map f ze e e for precisely one wall e if e if are adjacent then is a horizontal wall then f f and the joining them is also mapped e e wv then f and f are adjacent in e so the to the same vertex if joining and maps to the edge joining f and f as f is defined e then on the the map extends uniquely to the entire cube complex eve f e z v and zeee f e e is the carrier of the hyperplane corresponding to e proposition implies that z decomposes as a graph of spaces with vertex spaces e zv edge spaces ze and underlying graph the following proposition which collects the principal consequences of finite dimensionality is prop in classifying virtually special tubular groups proposition let x be tubular space with geodesic attaching maps and let e w be the wallspace obtained from a finite set of immersed walls in x if the x e w is finite dimensional then the horizontal walls in w dual cube complex c x can be partitioned into a collection p of subsets such that the partition p is preserved by g for each a p the walls in a are pairwise e a p be a wall intersecting x eve there exists h gve stabilizing an let eve perpendicular to e eve such that a hr e axis in x any partition of the horizontal walls in w satisfying conditions in proposition will be called a stable partition e w be the wallspace obtained from a lemma let x be a tubular space and x finite set of immersed walls in x let p be a stable partition of the horizontal walls e only finitely many a p contain walls intersecting in then for each ve eve x e is a wall intersecting x eve then by condition of a stable proof suppose that e eve such that partition there exists some h gve that is perpendicular to e by we can deduce that each of the of hr e is also in there are only finitely many such translates therefore each hr eve is contained in finitely many elements of the claim then of a wall in x follows from the fact that there are only finitely many of walls intersecting eve x the immersed walls are and therefore ze is finite dimensional so by proposition there exists a stable partition p of the horizontal walls eve let pee be the in let pve be the subpartition containing walls intersecting x eee by lemma both pve and pee are finite subsubpartition of walls intersecting x partitions if ee is incident to the vertex e v then pee pve let pve adve e i such that hi gve stablizes an by criterion of a stable partition ai hri eve perpendicular to ei x eve the action of gve preserves both the partition axis in x pve and the ordering of the walls in each ai let r denote the cubulation of r with a vertex for each integer and an edge joining consecutive integers therefore each in rd is an element of zd we construct a free action of gve on rdve let g gve and let be a e i ej in rdve define the map g such that g j d as g permutes the walls in pve the map g is a bijection on the in r ve if e i e j then necessarily g e i e j so adjacent are g i j i j mapped to adjacent and the map extends to an isomorphism of rdve if classifying virtually special tubular groups g then g would stabilize all the walls in pve which eve since gve acts freely on z eve this would would imply that it fixed every in z imply that g and hence gve acts freely on rdve eve 
then every wall we also define an embedding zeve rdve if z is a in z e that is either vertical or not in contained in the subpartition pve has x eve z e e for e in pve for i de therefore z is entirely determined by z v the set r ei x eve is an infinite collection of disjoint parallel lines in x eve as all the hi e for each z in zeve there exists a unique z walls in ai are disjoint in x e e such that hi and hi face each other in z let z note that the map is injective and sends adjacent to adjacent so the map on the extends to an embedding of the entire cube complex eve rdve is lemma the embedding z proof let g gve if z and g z then e i z e i z e j which implies that gz gz i i j v e z let ee be an edge adjacent to e v then either e ve or e ve we define a free d e action of gee on r after reindexing let pee ade pve where dee dve let be a in rd and let g gee then e i e j as in the case g such that g i j of vertex spaces this map extends to an isomorphism of rde as with the vertex spaces there is a embedding zeee d r e let z be a in zeee then for each i dee there exe i faces e i in z and x z e ee define ists a unique such that i i z let ve the free action of gve on rdve restricts to a free action of gee we claim that we can embed rde into rdve in a way let hee ze e e as zeee is the carrier of hee we can identify be the hyperplane corresponding to eee with hee note that hee embeds as a subspace in zeve and z de restricts to an embedding e ee hee r where v d d e v e we construct an embedding r r recall that pee ade e j aj then x eee z hr e pve adve for dee j dve if hrj j j for all e j faces h j z in zeee therefore there is a unique z such that hj j j e z in zee and dee j dve thus we define ee ee ee the of ee will require a further assumption e j for every classifying virtually special tubular groups lemma the following commutative square is provided the immersed walls are primitive hee e rde zeve e rdve moreover ee is a inclusion that is equivalent to extending the geed action on r e by a trivial action on rdve proof let z be a in hee then by construction ee ee e z ee z to verify that is let g gee for i dee there exists e i e j for dee i dve the intersection j dee and be such that g i j ei x eve is a geodesic line parallel to x eee x eve thus gee stabilizes ei x eve as the e immersed walls are primitive we can deduce that gee stabilizes for dee i dve e i e i for z and conclude we deduce that g i i g g g observe that gee acts trivially on the last dve dee coordinates e which is finite since there are only finitely many let d max ve v vertex orbits proposition if the immersed walls are primitive then g acts freely e such that the action on the e factor is the action of g on the on rd e tree moreover there is a embedding ze rd proof the gve and on rdve and rde can be equivariantly extended to actions on rd such that gve and gee act trivially on the additional factors therefore the square in lemma can be extended hee e rde rd eve z e rdve rd classifying virtually special tubular groups e and a embedding of the therefore we obtain a on rd e tree of spaces ze rd proposition ze is locally finite if and only if are fortified eve and an proof if is not fortified then there exists a vertex space x eee such that every horizontal wall e in pve intersects x eve as a adjacent edge space x eve that intersects x eve x eee therefore every horizontal wall intersecting line eve intersects x eee so pee pve let eei be an enumeration of the x e ee intersects all the horizontal walls in pve of ee then peei pve and i e 
z e for e e ee let z be a in zeve there is a zi such that zi i e ee e e and zi z to verify zi is a note that every wall in pve intersects e that is not e e has x eee zi e therefore zi e ee zi e for and every other wall e w e ee for any walls e e w e ee the intersection zi e zi e all e z e finally if x x e then x zi e for all but finitely many e w z because it is true for z which differs from zi on precisely one wall each zi is adjacent to z since they differ on precisely one wall so zi is an infinite e is not locally finite collection of distinct adjacent to z and z to show that converse we first observe that the embedding zeve rdve proves that zeve is always locally finite irrespective of whether the immersed walls eve and let u e via an edge ee are fortified let z be a in z e be adjacent to ve in eue such that z e zee e for then z can be adjacent to at most one zee in z e w except e e this zee may not always define a however let ee be an all e pve edge adjacent to ve as the immersed walls are fortified there exists hr e eve is an infinite set of lines parallel to x eee x eve as hr e is a such that hr e and e are facing in z there set of disjoint walls there exists r such that hr eg ee z hr e z e are only finitely many edges ee gm ee gveee such that x i r eg ee is not contained in z h e z h e then either if ge e is an edge such that x i r e zgee e e or zgee h e zgee e e so zgee is not a as there are zgee h only finitely many of edges incident to ve we conclude that zeee is a for finitely many edges ee incident to ve proposition if are primitive fortified immersed walls then g is virtually special proof by proposition there is a free action of g on rd aut d so g is e e therefore there is a a subgroup of isom rd zd aut d aut d d projection g z aut each vertex group gve embeds in zd and the mapping is invariant under conjugation as there are only finitely orbits of vertices in e there exists a finite index subgroup dz d zd such that if ee is incident to ve then classifying virtually special tubular groups dz d is generated by a primitive element in gve dz d let dz d e such that then g is a finite index subgroup that embeds in dz d aut each edge group is generated by an element that is primitive in the adjacent vertex e groups by proposition there is a embedding ze rd as does not permute the factors of rd we can deduce that the hyperplanes in e do not so neither do the hyperplanes in e indeed rd they are also and can not let be a finite index subgroup such that the underlying graph has girth at least let ee be an edge such that e ve as are fortified we e conclude that dve dee and zee is a proper subcomplex of zeve as is primitive if e d hr e ee in a direction g then ghrdve dve dve as g acts by translation on xv v e ed x eve thus is not stabilizes by g so we can deduce that to v e ee therefore embeds in v e ee ee ee ee v e e let z be a in let he be the vertical hyperplane let z contained in and dual to an edge incident to as the attaching maps of are embeddings and has girth at least s we deduce that z can only be incident to one end of a single intersected by he therefore he does not e that projects to a in rd the of let be a in rd is a set of that all project to the same factor of rd since does not permute the factors of rd as does not invert hyperplanes after subdividing rd we can assume that the of is a disjoint set of therefore after the corresponding subdivision we conclude that the horizontal hyperplanes in z don t we note that the requirement in proposition that the immersed walls are 
fortified is necessary as the following example demonstrates example let g ha b t a b ai we can decompose g as the cyclic hnn extension of the vertex group gv ha bi with stable letter thus g is a tubular group let x be the corresponding tubular space with a single vertex space xv and edge space xe there is an equitable set where is a geodesic curve in xv representing ab gv and is a geodesic curve in xv representing gv note that each attaching map e and intersects each curve in the equitable set precisely once therefore we obtain a pair of embedded immersed horizontal walls and by connecting respective intersection points with e and by an arc a vertical wall is also embedded in xe e w we can decompose w into three sets of disjoint walls the in the wallspace x walls that cover the walls that cover and the walls we that cover these walls are disjoint since the immersed walls are embedded furthermore the classifying virtually special tubular groups e w walls in different sets pairwise intersect therefore we can conclude that c x e as this is not locally finite x e w can not be virtually special revisiting equitable sets although wise proved in that acting freely on a cat cube complex ye implied the existence of an equitable set and thus a system of immersed walls as in section no relationship was established between ye and the resulting dual e w proposition gives the relationship required to reduce theorem c x to considering cubulations obtained from equitable sets this section will apply the following theorem from a cubical quasiline is a cat cube complex to theorem let g be virtually zn suppose g acts properly and without inversions on a cat cube complex ye then g stabilizes a finite dimensional e ye that is isometrically embedded in the combinatorial metric and subcomplex z q m z ci where each ci is a cubical quasiline and m moreover stabg e is a subgroup for each hyperplane in theorem allows us to prove the following lemma let g be a tubular group acting freely on a cat cube complex ye ev ye let gv be a vertex group in g then there exists a gv subspace x eve has a metric such that the intersection of a homeomorphic to moreover x ev is either empty or a geodesic line hyperplane in ye with x proof by theorem there exists a gv subcomplex yev ye that isoqm metrically embeds in the combinatorial metric and such that yev ci where each ci is a cubical quasiline by the flat torus theorem gv stabilizes a flat ev yev that is a convex subset in the cat metric of yev as the stabilizers of x hyperplanes in yev are subgroups of gv the intersection of a hyperev is either empty or a geodesic line in the cat metric inherited plane in ye with x from yev if s is a subset of a cat cube complex ye then let hull s denote the combinatorial convex hull of the combinatorial convex hull of s is the minimal convex subcomplex containing equivalently hull s is the intersection of all closed halfspaces containing definition let x be a tubular space and let y be a nonpositively curved cube complex a map f x y is an amicable immersion if x y is an isomorphism classifying virtually special tubular groups e ye embeds each vertex space x eve in ye the map fe x e e each xve has a euclidean metric such that if h y is a hyperplane then eve is either the empty set or a single geodesic line in the intersection h x eve x eee is emdedded transverse to the hyperplanes each edge space x eve eee is contained in hull s x each x e v eve is not the subspace metric induced from note that the euclidean metric on each x e lemma let x be a 
tubular space and let y be a nonpositively curved cube complex let f x y be an isomorphism then there is an amicable immersion f x y such that f proof use f to identify g x with y the claim is proven by constructing e ye by lemma for each a map between the tree of spaces x e we can embed a euclidean flat x eve in ye such that if h ye ve v eve is either the empty set or a single is a hyperplane then the intersection h x s ee is the eve moreover we can ensure that geodesic line in x e xv v eee can then be inserted transverse to the hyperplanes in ye so that the edges spaces x eee x eve with adjacent vertex spaces is not contained in a hyperplane intersections x eve eee is contained inside hull s x in ye and x e e lemma let x y be an amicable immersion where y is finite dimensional e then hull x eve embeds as a subcomplex of rd for some if ve is a vertex in eve if h is a hyperplane in ye proof let g x let y be a in hull x let y h denote the halfspace of h containing y each is determined by the halfspace containing it for each hyperplane if h is a hyperplane that doesn t eve then y h is the halfspace containing x eve and therefore is y h fixed intersect x eve for all y in hull x eve let h hve the intersection let hve denote the hyperplanes intersecting x e e xve h is a geodesic line in xve let g gve be an isometry that stabilizes an eve that is not parallel to x eve then gr h is an infinite family of axis in x eve is a set of disjoint parallel lines in x eve as hyperplanes such that gr h x ye is finite dimensional there exists an n such that h and g n h do not intersect otherwise gr h would be an infinite set of pairwise intersecting hyperplanes which would imply that there are cubes of arbitrary dimension in ye therefore as there are only finitely many of hyperplanes intersecting eve there exists a finite set of hyperplanes hd hve and gd g such x that hve gdr hd and each gir hi is a disjoint set of hyperplanes classifying virtually special tubular groups e is a set of disjoint geodesic lines in x eve thus in ye therefore gir hi e given a y there exists a unique yi z such that y giyi hi and y giyi hi eve rd by letting properly intersect each other therefore construct hull x y yd for each y the map extends to the of eve since adjacent lie on the opposite sides of precisely one hyperplane hull x eve therefore extends to the higher dimensional cubes and thus hull x e ye be the lift of the universal cover of an amicable immersion x y let x eee be an edge space adjacent to a vertex space x eve a hyperplane h in ye let x eve parallel to x eee if h eve is a geodesic line parallel to x eve otherwise intersects x eve is a geodesic line that is not parallel to x eee x eve then we say h intersects if h x eve to x eee x e suppose lemma let x y be an amicable immersion let ee be an edge in to x eee then h intersects that h ye is a hyperplane intersecting x to x eee moreover there is an arc in h x eee joining h x to x h and x eee x are non parallel proof let g x the geodesic lines h x and therefore intersect in a single point p x eee x as h is two sided in x in ye and x the vertex and edge spaces are transverse to h the intersection of h e therefore p is contained inside a curve in with x is also locally two sided in e e h xee as xee is and only finitely many hyperplanes separate any two e we can deduce that p is an endpoint of a compact curve in h x eee points in x x eee thus h must also intersect x with its other endpoint contained in x eee to x lemma let x y be an amicable immersion where y is a finite 
dimensional locally finite nonpositively curved cube complex if x then eve and adjacent edge space x eee there is a hyperplane h in ye for every vertex space x eve parallel to x eee that intersects x e proof let g x there are precisely two vertex orbits and one edge orbit in e let h denote the set of all hyperplanes in ye intersecting assume that ye hull x e let hve denote the set of all hyperplanes intersecting x eve x for each vertex ve there is precisely one of adjacent edges ee therefore eve it is to all if h hve is to an adjacent edge space to x adjacent edge spaces and by lemma it must intersect all adjacent vertex spaces to all adjacent edge spaces therefore we deduce that any hyperplane e will either intersect a vertex space that doesn t intersect every vertex space in x classifying virtually special tubular groups parallel to its adjacent edge spaces or its intersection will be a line contained in an edge space eve such that no hyperplane in hve suppose that there exists a vertex space x eve parallel to the adjacent edge spaces let h hve every intersects x hyperplane in h must intersect each wall in so we deduce that ye eve c ye see lem furthermore hgev hve for all g by hull x eve embeds in rd since x eve is contained inside some subcomplex lemma hull x eve c ye where c is the determined by orienting all hyperplanes hull x eve we can conclude that only finitely many hyperplanes in intersect towards x eve the of x eue be an vertex space adjacent to x eve and let x eee be the edge space conlet x eve be another vertex space adjacent to x eue and let x eee be the necting them let x eve and x eve are in the same gue the edge space connecting them note that x eue x eee and x eue x eee are parallel in x eue let d x eue be the subgeodesic lines x eee x eee space isometric to r a b bounded by these parallel lines let u d x finitely many hyperplanes in intersect eve that is to let be an isometry in that stabilizes an axis in x eve x eee similarly let be an isometry in gve that stabilizes an axis the geodesic x eve that is to the geodesic x eve x eee note that f i is a free in x group on two generators let r be such that u is contained in the eve as there are only finitely many hyperplanes in intersecting nr x eve of x n e there must exist an n such that stabilizes those walls similarly since is a eve we can deduce that there are only finitely many hyperplanes in translate of x eve and there must exist an m such that gm stabilizes those walls intersecting nr x eve and nr x eve we can deduce that the let f i as u lies in both nr x hyperplanes in that intersect the f of u are precisely the hyperplanes e f eve f x eve f u then hull z e hull x eve k ye intersecting u let z e which is a where k is a compact cube complex then f acts freely on hull z contradiction since number of intersecting the of a eve k grows polynomially with r and therefore can not permit a free in hull x f lemma is a special case of the following more general statement corollary let x y be an amicable immersion where y is a finite dimensional locally finite nonpositively curved cube complex then for every vertex space eve and adjacent edge space x eee there is a hyperplane h in ye that intersects x eve x eee parallel to x classifying virtually special tubular groups e there is a subgroup i g such that proof for every edge ee in let y then there is an amicable immersion x y therefore by lemma there is a hyperplane e and x x x such that x e ee parallel to x eee and similarly for x intersecting x the following proposition is a 
strengthening of one direction of theorem in let a c and b c be maps between topological spaces a b the fiber product a b a b a b a b note that there are natural projections a b a and a b b proposition let g be a tubular group acting freely on a cat cube complex ye then there is a tubular space x with a finite set of immersed walls such that the e w has the following properties associated wallspace x e w g acts freely on c x e w is finite dimensional if ye is finite dimensional c x e w is finite dimensional and locally finite if ye is locally finite c x proof let y let x y be an amicable immersion assume that e so every immersed hyperplane in y intersects x therefore as x ye hull x is compact there are finitely many immersed hyperplanes hm in y let hi y be an immersed hyperplane in y we obtain horizontal immersed walls in x by considering the components of the fiber product x hi of x y and h y each component has a natural map into x the components of x h that have image in x contained in an edge space are ignored let be a component of x h whose image in x intersects a vertex space xv x we will show that after a minor adjustment to we obtain a horizontal immersed wall and by considering all such components we obtain a set of horizontal walls in x obtained from an equitable set using the map x we can decompose into the components of the preimages of vertex space and edge spaces as the intersection of each hyperplane h ye eve is either empty or a geodesic line the intersection of each with each vertex space x hi with xv is a set of geodesic curves so restricted to the preimage of xv is a set of geodesic curves by lemma each hyperplane h ye that intersects a vertex eee will intersect x eee as an arc with space xve to an adjacent edge space x and x thus the components of the intersection xe hi that endpoints in x intersect or are arcs with endpoints in both and therefore decomposes into circles that map as local geodesics into vertex spaces and arcs that map into edges spaces xe with an endpoint in each and classifying virtually special tubular groups let be the set of all such components of x hi that intersect vertex spaces let svp be the set curves that map the circles in to the vertex space xv the elements of svp and the attaching maps e of the edge spaces in x are locally geodesic curves and since both sides are equal to the number of arcs in the walls that map into xe as g acts freely on eee as geodesics in at ye there must be hyperplanes intersecting each vertex space x p least two parallelism classes this implies that sv contains curves generating at least two cyclic subgroups of gv and therefore svp generates a finite index subgroup of gv svp is almost an equitable set the images of the curves in svp may not be disjoint suppose that sv be a maximal set of curves that have identical image in xv let q denote the of a subset q of either y or ye with respect to the cat metric let be such that the neighbourhood y only contains the images of and the arcs connected to them there is f f a homotopy of x that is the identity outside of such that are homotoped to a disjoint set of geodesic curves in xv transverse or disjoint from all the other curves in svp by choosing small enough f we can perform such a homotopy x such that all sets of overlapping curves in svp become disjoint and such that is the identity map outside of the of the overlapping curves the restriction of to x is an immersed wall that we will denote by thus the immersed walls obtained from an equitable set sv we refer to as the e 
as the note that ep x immersed and the lifts i have regular and intersections in the same way that walls do e w be the wallspace obtained from the immersed walls k and let x e x e covers an adding a single vertical wall for each edge space each wall immersed wall x there exists a homotopy of x to the corresponding immersed x this homotopy lifts to a homotopy from the e x e to a unique e p e note that each wall is immersed wall contained in the of its corresponding each e in corresponds to the intersection of a unique hyperplane in ye with the image of x ye therefore each wall in w corresponds to a unique hyperplane in ye e be a wall in w and let e p be the corresponding note that e x eve let p e eve are either parallel geodesic lines or both empty intersections therefore and e e if w are a pair of regularly intersecting walls then they correspond to a pair of regularly intersecting which correspond to a pair of intersecting hyperplanes in ye classifying virtually special tubular groups e p and e p are disjoint then the corresponding walls in if a pair of e in w are also disjoint moreover since e is contained in the and e p a halfspace of e determines a halfspace of e p and therefore a halfspace of the of e hyperplane h corresponding to e w were infinite dimensional then by proposito prove suppose that c x tion there would exists an infinite set of pairwise regularly intersecting walls in w which implies there is an infinite set of pairwise regularly intersecting therefore there is an infinite set of pairwise intersecting hyperplanes in ye this would imply that ye is an infinite dimensional cat cube complex therefore if e w ye is finite dimensional then so is c x to prove we first prove the following e w is finite dimensional claim if ye is locally finite then c x e w is infinite dimensional then proof suppose that ye is locally finite if c x by lemma it contains an infinite cube containing a canonical z let e e n be the set of infinite pairwise crossing walls corresponding to the e p e pn be the corresponding set of infinite pairwise crossing infinite cube let and let hn be the corresponding infinite family of pairwise crossing hyperplanes suppose that q is a subcomplex in ye let u q denote the cubical neighborhood of q which is the union of all cubes in u q that intersect q as ye is locally finite if q is compact then u q is also compact by lem if q is convex then so is u q let u n q denote the cubical neighborhood of u q e be a point determining the canonical z in c x e w let x be let x x contained in a cube c in ye as c is compact and convex u n c is also compact and convex and therefore can only be intersected by finitely many hi there exists an hi such that hi intersects u n c but not u n c for some n since hi does not intersect u n c nor u n c there must exist a hyperplane h intersecting u n c that separates u n c from hi note that dye x h n e p be the corresponding to h and e be the and dye h hi let p p e e e corresponding wall as separates x from we can conclude separates x e i since the e and e i are respectively contained in the of ep from e p this contradicts the fact that z is incident to a dual to hyperplane and i e corresponding to e w is finite dimensional we can apply corollary to each edge group as c x in g to deduce that are fortified therefore by proposition we deduce e w is locally finite that c x classifying virtually special tubular groups we can now prove the main theorem of this paper theorem a tubular group g acts freely on a locally finite cat cube complex if and 
only if g is virtually special proof suppose that g is virtually special then g embeds as the subgroup of a finitely generated right angled artin group and therefore acts freely on the universal cover of the corresponding salvetti complex which is necessarily locally finite conversely suppose that g acts freely on a locally finite cat cube complex let x be a tubular space such that g x by proposition there exists a e w finite set of immersed walls such that the dual of the associated wallspace c x is finite dimensional and locally finite by lemma we can assume that the immersed walls are also primitive therefore by proposition g is virtually special virtual cubical dimension lemma let x be a tubular space and g x suppose there exists an equitable set that produces primitive immersed walls in x there exists a finite index subgroup g such that for each vertex group of the induced splitting of the natural map is an injection as a summand proof note that g and its finite index subgroups is a summand of two factors the first factor g is generated by the image of the vertex groups of g and the second factor g is generated by the stable letters in the graph of groups presentation e be the tree by proposition since there are immersed walls let e such that gve fixes the that are primitive and g acts freely on rd e therefore g is a subgroup of aut rd e vertex ve in zd aut d e there is a finite quotient zd aut d aut e aut d aut so let g be the finite index subgroup contained in the kernel note that e let p zd be the projection onto the first factor embeds in zd aut each vertex group survives in the image of p and therefore we have embedding zd e there is a finite index subgroup av zd such for each vertex v in that p av is a summand of av let a av and a each vertex group in will be a factor in a as a is free abelian the map a will factor through the so we can deduce that each vertex group survives as a retract in therefore each vertex group in survives as a summand in the first homology classifying virtually special tubular groups theorem let g be a tubular group acting freely on a finite dimensional cat cube complex then there is a finite index subgroup g such that acts freely on a cat cube complex proof let x be a tubular space such that g x by proposition there exists immersed walls in x that are and by lemma we can assume that they are also primitive let g be the finite index subgroup given by lemma and let x be the corresponding covering space let zd be the summand in the first homology generated by the vertex groups by lemma there is an inclusion and a projection map pv suppose that d choose any pair of elements a b that generate we claim that s sv pv a pv b v v is an equitable set by construction each sv generates the edge group hge i is adjacent to and and the respective inclusions are e and there is an isomorphism a sl z that maps e ge in gu to ge in gv therefore pu a e ge apu a a ge pv a ge a congruent set of equalities exist for b these equalities also imply that the choice of arcs for the equitable sets can be chosen so that they only join circles that are the image of the same elements of therefore a set of embedded immersed walls is obtained with precisely two immersed walls intersecting each vertex space such a set of horizontal immersed walls along with a vertical wall for each edge will give a three dimensional dual cube complex if d then there exist vertex groups gu and gv such that they embed into v as distinct summands let gu hgu i and gv hgv i we can assume since they are distinct 
summands that gu is disjoint from the image of gv in and that gv is disjoint from the image of gu in by attaching an edge space in x connecting xu and xv that have attaching maps representing gu and gv respectively we obtain a new graph of spaces the resulting tubular group has so by induction we can obtain the specified graph of spaces given a set of immersed walls for with a dual cube complex of dimension at most we obtain immersed walls for x by deleting the arcs that map into the the edge space that was added to construct the immersed walls obtained still give a dual cube complex with dimension at most references brady and bridson there is only one gap in the isoperimetric spectrum geom funct classifying virtually special tubular groups martin bridson and haefliger metric spaces of curvature volume of grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences berlin button tubular free by cyclic groups and the strongest tits alternative caprace and michah sageev rank rigidity for cat cube complexes geom funct christopher cashen between tubular groups groups geom haglund and paulin de groupes d automorphismes d espaces courbure in the epstein birthday schrift volume of geom topol pages electronic geom topol coventry haglund and daniel wise special cube complexes geom funct michah sageev ends of group pairs and curved cube complexes proc london math soc daniel wise research announcement the structure of groups with a quasiconvex hierarchy electron res announc math daniel wise from riches to raags artin groups and cubical geometry volume of cbms regional conference series in mathematics published for the conference board of the mathematical sciences washington dc by the american mathematical society providence ri daniel wise cubular tubular groups trans amer math wise and hruska finiteness properties of cubulated groups submitted for publication woodhouse classifying finite dimensional cubulations of tubular groups submitted for publication woodhouse a generalized axis theorem for cube complexes submitted for publication address
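For reference, the two quantities underlying the equitable-set construction used throughout the argument above — the geometric intersection number of curves on a torus and the balance condition over the edges of the underlying graph — can be written out explicitly. The following is a hedged LaTeX reconstruction in the spirit of Wise's formulation; the particular symbols ($\#$, $e^-$, $e^+$, $S_v$) are assumptions of this sketch rather than the paper's own notation.

```latex
% Geometric intersection number of two classes \alpha=(a_1,a_2), \beta=(b_1,b_2)
% in \pi_1(T^2) \cong \mathbb{Z}^2, and its extension to a finite set B of classes:
\#[\alpha,\beta] = \left|\det\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}\right|,
\qquad
\#[\alpha,B] = \sum_{\beta\in B} \#[\alpha,\beta].

% Equitable set (hedged, after Wise): finite sets S_v \subset \pi_1(X_v) = G_v of
% (classes of) geodesic curves, each generating a finite-index subgroup, such that
% for every edge e of the underlying graph, with attaching curves e^- and e^+ at
% its two ends, the intersection counts balance:
\sum_{s\in S_{\iota(e)}} \#[s,\, e^-] \;=\; \sum_{s\in S_{\tau(e)}} \#[s,\, e^+].
```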
| 4 |
Convolutional Low-Resolution Fine-Grained Classification
Dingding Cai, Ke Chen, Yanlin Qian

Abstract — Fine-grained image classification methods learn subtle details between visually similar (sub-)classes, but the problem becomes significantly more challenging if the details are missing due to low resolution. Encouraged by the recent success of convolutional neural network (CNN) architectures in image classification, we propose a novel resolution-aware deep model which combines convolutional image super-resolution and convolutional fine-grained classification into a single model in an end-to-end manner. Extensive experiments on multiple benchmarks demonstrate that the proposed model consistently performs better than conventional convolutional networks on classifying fine-grained object classes in low-resolution images.

Index Terms — Fine-grained image classification, super-resolution, convolutional neural networks, deep learning.

Introduction. The problem of image classification is to categorise images according to their semantic content, e.g. person or plane. Fine-grained image classification further divides classes into their sub-classes, such as the models of cars, the species of birds, the categories of flowers and the breeds of dogs. Fine-grained categorisation is a difficult task due to the small variance between visually similar sub-classes. The problem becomes even more challenging when the available images are low-resolution (LR) images where many details are missing as compared to their high-resolution (HR) counterparts.

Since the rise of convolutional neural network (CNN) architectures in image classification, the accuracy of fine-grained image classification has dramatically improved and many extensions have been proposed. However, these works assume sufficiently good image quality and high resolution (the typical input size required by e.g. AlexNet), while with low-resolution images the CNN performance quickly collapses. The challenge arises from the problem of how to recover the necessary texture details from low-resolution images. Our solution is to adopt image SR techniques to enrich imagery details. In particular, inspired by the recent work on image SR by Deng et al., we propose a unique deep learning framework that combines super-resolution CNN and CNN classification: a resolution-aware convolutional neural network (RA-CNN) for fine-grained object categorisation in low-resolution images. To our best knowledge, our work is the first end-to-end learning model for low-resolution fine-grained object classification.

Our main principle is simple: the higher the image resolution, the easier the fine-grained classification. Our research questions are: 1) can computational super-resolution recover some of the important details required for fine-grained image classification, and 2) can such SR layers be added to an existing deep classification architecture?

Fig.: Owing to the introduction of the convolutional super-resolution (SR) layers, the proposed deep convolutional model (the bottom pipeline) achieves superior performance for low-resolution images.

To this end, our RA-CNN integrates deep residual learning for image super-resolution into typical convolutional classification networks (AlexNet or VGGNet). On one hand, the proposed RA-CNN has a deeper network architecture (more network parameters) than the straightforward solution of a conventional CNN on up-sampled images (bicubic interpolation); on the other hand, our RA-CNN learns to refine and provide more texture details for low-resolution images to boost classification performance. We conduct experiments on three fine-grained benchmarks: Stanford Cars, Caltech-UCSD Birds and the Oxford Category Flower dataset. Our results answer the aforementioned questions: super-resolution improves fine-grained classification, and SR-based classification can be designed into an end-to-end supervised learning framework, as depicted in the figure illustrating the difference between RA-CNN and a conventional CNN.

Related Work

Fine-grained image categorisation: Recent algorithms for discriminating fine-grained classes, such as animal species or plants and man-made objects, can be divided into two main groups. The first group of methods utilises discriminative visual
cues from local parts obtained by detection or segmentation. The second group of methods focuses on discovering inter-class label dependency via a hierarchical structure of labels or visual attributes. Significant performance improvement is achieved by convolutional neural networks (CNNs), but this requires a massive amount of high-quality training images. Fine-grained classification from low-resolution images is yet challenging and unexplored. The method proposed by Peng et al. transforms detailed texture information in HR images to LR via fine-tuning to boost the accuracy of recognizing fine-grained objects in LR images. However, their strong assumption requiring HR images to be available for training limits its generalisation ability; in addition, the same assumption also occurs in Wang's work. Chevalier et al. design a fine-grained object classifier with respect to varying image resolutions, which adopts ordinary convolutional and fully-connected layers but misses considering super-resolution specific layers in convolutional classification networks. On the contrary, owing to the introduction of SR layers in RA-CNN, our method can consistently gain notable performance improvement over conventional CNNs for low-resolution image classification on fine-grained classification datasets.

Super-resolution convolutional layers: Yang et al. grouped existing SR algorithms into four groups: prediction models, edge-based methods, image statistical methods and example-based methods. Recently, convolutional neural networks have been adopted for image super-resolution, achieving state-of-the-art performance. The first attempt using convolutional neural networks for image super-resolution was proposed by Dong et al.; their method learns a deep mapping between low- and high-resolution patches and has inspired a number of follow-ups. In follow-up work, an additional deconvolution layer is added based on SRCNN to avoid general up-sampling of input patches, accelerating CNN training and testing. Kim et al. adopt a deeply-recursive layer to avoid adding weighting layers, which does not need to pay any price of increasing network parameters. In further work, a convolutional deep network is proposed to learn the mapping between an LR image and its residue between the LR and HR image, to speed up CNN training for a very deep network. Convolutional layers designed for image super-resolution, namely convolutional SR layers, have verified their effectiveness in improving the quality of low-resolution images. In this work we incorporate the residual CNN layers for image super-resolution into a convolutional categorisation network for classifying fine-grained objects (AlexNet, VGGNet and GoogLeNet in the experiments); the convolutional SR layers are verified to improve fine-grained classification performance.

Contributions: Our contributions are: 1) our work is the first attempt to utilise super-resolution specific convolutional layers to improve convolutional fine-grained image classification; 2) we experimentally verify that the proposed RA-CNN achieves superior performance on low-resolution images which make ordinary CNN performance collapse.

II. Resolution-Aware Convolutional Neural Networks

Given a set of $N$ training images $\mathbf{X} = \{\mathbf{x}_i\}$ and corresponding class labels $\mathbf{Y} = \{\mathbf{y}_i\}$, $i = 1, \dots, N$, the goal of a conventional CNN is to learn a mapping function $\hat{\mathbf{y}} = f(\mathbf{x})$. The typical cross-entropy (CE) loss $L_{CE}$ on a softmax classifier is adopted to measure the performance between class estimates $f(\mathbf{x})$ and ground-truth class labels $\mathbf{y}$:

$$ L_{CE}(\hat{\mathbf{y}}, \mathbf{y}) \;=\; -\sum_{j=1}^{L} \mathbf{y}[j] \,\log \hat{\mathbf{y}}[j], $$

where $[j]$ refers to the index of an element in the vectors and $L$ denotes the dimension of the softmax layer (the number of classes). In this sense, the CNN solves the following minimisation problem with gradient-descent back-propagation:

$$ \min \; \sum_{i=1}^{N} L_{CE}\big( f(\mathbf{x}_i), \mathbf{y}_i \big). $$

For fine-grained categorisation in low-resolution images, we propose a novel resolution-aware convolutional neural network, illustrated in the pipeline figure. In general, our RA-CNN consists of two parts: convolutional SR layers and convolutional categorisation
layers. We also describe an end-to-end training scheme for the proposed RA-CNN.

Convolutional super-resolution layers: In this section we present the convolutional super-resolution specific layers of RA-CNN, the goal of which is to recover texture details of low-resolution images to feed into the following convolutional categorisation layers. We first investigate the conventional CNN for the super-resolution task. Given $K$ training pairs of low- and high-resolution images $\{(\mathbf{x}^{lr}_i, \mathbf{x}^{hr}_i)\}_{i=1,\dots,K}$, a direct mapping function $g(\cdot)$ from the LR input (observation) $\mathbf{x}^{lr}$ to the HR output (target) $\mathbf{x}^{hr}$ is learned by minimising the mean square (MS) loss

$$ L_{MS}(\mathbf{X}^{lr}, \mathbf{X}^{hr}) \;=\; \frac{1}{K} \sum_{i=1}^{K} \big\| \mathbf{x}^{hr}_i - g(\mathbf{x}^{lr}_i) \big\|^2 . $$

Inspired by the recent residual convolutional super-resolution network, to achieve high efficacy we design residual convolutional SR layers, as shown on the left-hand side of the pipeline figure. Similarly, our convolutional SR layers learn a mapping function from LR images $\mathbf{x}^{lr}$ to residual images $\mathbf{x}^{hr} - \mathbf{x}^{lr}$, and the objective function of the proposed convolutional SR layers is the following:

$$ \min \; \frac{1}{K} \sum_{i=1}^{K} \big\| \big(\mathbf{x}^{hr}_i - \mathbf{x}^{lr}_i\big) - g\big(\mathbf{x}^{lr}_i\big) \big\|^2 . $$

Fig.: Pipeline of the proposed resolution-aware convolutional neural network (RA-CNN) for fine-grained recognition with low-resolution images. Convolutional classification layers from AlexNet are adopted for illustrative purposes, which can be readily replaced by those from other CNNs such as VGGNet or GoogLeNet.

The better performance of residual learning comes from the fact that, since the input (LR) and output (HR) images are largely similar, it is more meaningful to learn their residue, where the similarities are removed. It is obvious that detailed imagery information in the form of residual images is easier for CNNs to learn than with direct LR-to-HR CNN models. We utilise three typical stacked convolutional layers as the convolutional SR layers in RA-CNN, following the empirical basic setting of prior SR networks; the layers are also illustrated in the left-hand side of the pipeline figure, where $f_m$ and $n_m$ denote the size and number of the filters of the $m$-th layer respectively. The output of the last convolutional SR layer is summed with the input image $\mathbf{x}^{lr}$ to construct the full super-resolved image fed into the remaining convolutional and classification layers of RA-CNN.

Categorisation layers: The second part of our RA-CNN consists of the convolutional and fully-connected classification layers fed with high-quality images after the SR layers. A number of CNN frameworks have been proposed for image categorisation, and in this paper we consider three popular convolutional neural networks: AlexNet, VGGNet and GoogLeNet. All CNNs typically consist of a number of convolution-pooling stacks followed by several fully-connected layers. On the right-hand side of the pipeline figure, the typical AlexNet is visualised and employed as the convolutional categorisation layers in RA-CNN. AlexNet, the baseline CNN for image classification over ImageNet, consists of a stack of convolutional and pooling layers followed by fully-connected layers. VGGNet is made deeper (from the layers of AlexNet to many more layers) and is more advanced than AlexNet by using very small convolution filters; in our paper we choose the deeper VGG configuration for our experiments, denoted as VGGNet in the rest of the paper. GoogLeNet comprises more layers but has a much smaller number of parameters than AlexNet and VGGNet, owing to the smaller amount of weights in its fully-connected layers. GoogLeNet generally generates three outputs at various depths for each input, but for simplicity only the last (the deepest) output is considered in our experiments. In our experiments all three networks are pre-trained on the ImageNet data; the baselines are obtained by fine-tuning them with the up-sampled low-resolution data. For a fair comparison we fine-tune the identical pre-trained CNN models as our convolutional categorisation layers, replacing the dimension of the final fully-connected layer with the number of fine-grained object classes.

Network training: The key difference between the proposed RA-CNN and a conventional CNN lies in the introduction of
three convolutional layers evidently racnn is deeper than corresponding cnn due to the three convolutional sr layers which can store more knowledge network parameters before learning racnn in an fashion we consider two weight initialization strategies for convolutional sr layers in racnn standard gaussian weights and weights on the imagenet data for fair comparison we adopt the identical network structure for both initialisation schemes for racnn with gaussian initial weights we train the whole network to minimise loss directly during training we set learning rates and weight decays for the first two sr layers and and both learning rate and weight decay are set with for the third convolutional sr layer while learning rates and weight decays are and for all categorisation layers except the last layer which uses both learning rate and weight decay we consider an alternative initialisation strategy for better initial weights for convolutional sr layers to this end we fig image samples after removing background from the stanford cars and birds benchmarks the three convolutional sr layers by enforcing the minimal of the mean square loss on ilsvrc imagenet object detection testing dataset which consists of images given the weights in convolutional sr layers racnn is trained by minimising the loss function for categorisation for the goal of direct utilisation of output of convolutional sr layers we train sr layers in rgb color space with all the channels instead of only on luminance channel y in ycbcr color space specifically we generate lr images from hr images pixels via firstly hr images to pixels and then to the original image size by bicubic interpolation we then sample image patches using sliding window and thus obtain thousands of pairs of lr and hr image patches to be consistent with the setting of racnn using guassian initial weights the layers are trained with image patches by setting learning rates being and weight decays being for the first two sr layers and and both learning rate and weight decay being for the third sr layer finally we jointly learn both convolutional sr and classification layers in an learning manner with learning rates and weight decays for all classification layers except the last layer with both learning rate and weight decay set to iii e xperiments datasets and settings we evaluate racnn on three datasets the stanford cars the and the oxford category flower datasets the first one was released by krause et al for categorisation and contains images from classes of cars and each class is typically at the level of brand model and year by following the standard evaluation protocol we split the data into images for training and for testing is another challenging finegrained image dataset aimed at subordinate category classification by providing a comprehensive set of benchmarks and annotation types for the domain of birds the dataset contains images of bird species among which there are images for training and for testing oxford category flower dataset consists of images which commonly appear in the united kingdom these images belong to categories and each category contains between to images in the standard evaluation protocol the whole dataset is divided into images for training for validation and for testing in our experiments the training and validation data are merged together to train the networks images from these datasets are first cropped with provided bounding boxes to remove the background cropped images are to lr images of the size pixels and then to pixels 
by bicubic interpolation to fit the conventional cnn which follows the settings in sample lr images from the both benchmarks are illustrated in fig which verify our motivation to mitigate the suffering from low visual discrimination due to we compare our racnn with multiple methods the corresponding cnn model for classification alexnet and googlenet and stagedtraining cnn proposed by the proposed racnn is implemented on caffe we adopt the average accuracy for the both datasets the higher value denotes the better performance in our experiments we used a lenovo desktop with one intel cpu and one nvidia gpu the proposed racnn has deeper structure than the competing networks alexnet vggnet googlenet which requires longer training times as indicated in table i table i training times of racnns and competing cnns seconds epoch methods cars birds flowers alexnet racnnalexnet vggnet racnnvggnet googlenet racnngooglenet b comparative evaluation in fig we compare our results with alexnet and alexnet for classification in images it is evident that our racnnalexnet consistently achieves the best performance on both benchmarks precisely alexnet achieves and evaluation of convolutional sr layers fig comparison to two methods for classification average accuracies table ii evaluation on effect of convolutional sr layers to recover high resolution details we fix all convolutional and layers except the last layer extracted features correspond to those with high resolution images and denote the proposed racnn with weights initialized with gaussian and pretrained weights for the convolutional sr layers methods cars birds flowers alexnet googlenet in this experiment we employ all layers in the alexnet and googlenet as categorisation layers in racnn note that different from the previous experiments we freeze all categorisation layers by setting learning rates and weights decays to besides the last layers of the baseline cnns and our racnn is then with data such setting treats categorisation layers in racnn as an identical classifier for evaluating the effect of adding convolutional sr layers racnn with initial gaussian and weights are called as gracnn and respectively comparative results are shown in table ii and fig both and consistently outperform the baseline cnns in all experiments with the same experimental setting except different initial weights for convolutional layers the results of and are reported test set accuracies in table ii and fig show that is superior to gracnn and share the same network structure but differ only in network weights initialisation of convolutional sr layers in this sense better performance of is credited to the knowledge about refining lowresolution images weights which verifies our motivation to boost image classification via image it is noteworthy that since the feature extraction layers are frozen the networks are not to specific features but all performance boost are owing to recovered details important for classifcation by the layers evaluation on varying resolution table iii comparison with varying resolution level res level on the birds dataset res level curacies collected from for the stanford cars and birds datasets respectively knowledge transfer between varying resolution images alexnet can improve classification accuracy that is for the stanford cars and for the birds however the alexnet relies on the strong assumption that images are available for training which limits to its usage to other tasks note that our method is more generic and transforms knowledge of super 
resolution across datasets which indicates that our method can be readily applied to other image classification tasks the proposed racnnalexnet significantly beats its direct competitor alexnet on the stanford cars dataset and on the caltechucsd birds dataset with the same settings and training samples the performance gap can only be explained by the novel network structure of racnn alexnet we further evaluate our proposed racnn method with respect to varying resolutions on the birds dataset all images are first to the input image size before training models the better performance of racnnalexnet over conventional alexnet is achieved for image classification which is shown in table iii we observe that our method performs much better for lower resolution images than relatively high resolution images in details increases the accuracy by above for pixel images but less than improvement on resolution images the reason is that the sr layers of racnn play a significant role in introducing texture details especially when missing more visual cues of object classification in lower quality images which further demonstrates our observation and motivation a alexnet b c googlenet fig training process of alexnet vggnet and googlenet on the birds dataset in the weights for convolutional sr layers are only with imagenet images but our racnn is applied to varying resolution levels and further improvement on classification performance shows the generalisation of weights for varying resolution levels which demonstrates the generalisation ability of racnn with sr weights iv c onclusion we propose and verify a simple yet effective resolutionaware convolutional neural network racnn for image classification of images the results from extensive experiments indicate that the introduction of convolutional layers to conventional cnns can indeed recover fine details for images and clearly boost performance in classification this result can be explained by the fact that the layers learn to recover high resolution details that are important for classification when trained manner together with the classification layers the concept of our paper is generic and the existing convolutional superresolution and classification networks can be readily combined to cope with image classification r eferences krause stark deng object representations for categorization in international conference on computer vision workshops pp wah branson welinder perona belongie the caltechucsd dataset nilsback zisserman automated flower classification over a large number of classes in indian conference on computer vision graphics image processing ieee pp khosla jayadevaprakash yao li novel dataset for finegrained image categorization stanford dogs krizhevsky sutskever hinton imagenet classification with deep convolutional neural networks in advance in neural information processing systems zhang donahue girshick darrell for category detection in european conference on computer vision lin roychowdhury maji bilinear cnn models for finegrained visual recognition in ieee international conference on computer vision pp krause jin yang recognition without part annotations in ieee conference on computer vision and pattern recognition pp chen zhang learning to classify categories with privileged misalignment ieee transactions on big data akata reed walter lee schiele evaluation of output embeddings for image classification in ieee conference on computer vision and pattern recognition branson horn belongie perona bird species categorization using pose 
normalized deep convolutional nets in british machine vision conference chevalier thome cord fournier henaff dusch for classification with varying resolution in ieee international conference of image processing pp liu qian chen huttunen fan saarinen incremental convolutional neural network training in international conference of pattern recognition workshop on deep learning for pattern recognition zeyde elad protter on single image using sparserepresentations in international conference on curves and surfaces pp yang wright huang ma image via sparse representation ieee transactions on image processing chang yeung xiong through neighbor embedding in ieee conference on computer vision and pattern recognition glasner bagon irani from a single image in ieee international conference on computer vision pp dong loy he tang image using deep convolutional networks ieee transactions on pattern analysis and machine intelligence dai wang chen van gool is image helpful for other vision tasks in ieee winter conference on applications of computer vision pp kim kwon lee mu lee accurate image using very deep convolutional networks in ieee conference on computer vision and pattern recognition pp simonyan zisserman very deep convolutional networks for largescale image recognition keys cubic convolution interpolation for digital image processing ieee transactions on acoustics speech and signal processing angelova zhu efficient object detection and segmentation for recognition in ieee conference on computer vision and pattern recognition pp stark krause pepik meger j little schiele koller categorization for scene understanding international journal of robotics research maji rahtu kannala blaschko vedaldi visual classification of aircraft arxiv preprint zhang farrell iandola darrell deformable part descriptors for recognition and attribute prediction in international conference on computer vision chai lempitsky zisserman symbiotic segmentation and part localization for categorization in international conference on computer vision gavves fernando snoek smeulders tuytelaars local alignments for categorization international journal of computer vision shotton johnson cipolla semantic texton forests for image categorization and segmentation in ieee conference on computer vision and pattern recognition hwang grauman sha semantic kernel forests from multiple taxonomies in advances in neural information processing systems mittal blaschko zisserman torr taxonomic multiclass prediction and person layout using efficient structured ranking in european conference on computer vision deng ding jia frome murphy bengio li neven adam object classification using label relation graphs in european conference on computer vision zhang paluri ranzato darrell bourdev panda pose aligned networks for deep attribute modeling in ieee conference on computer vision and pattern recognition fu hospedales xiang fu gong transductive multiview embedding for recognition and annotation in european conference on computer vision peng hoffman stella saenko knowledge transfer for image classification in ieee international conference of image processing pp wang chang yang liu huang studying very low resolution recognition using deep networks in proceedings of the ieee conference on computer vision and pattern recognition pp yang ma yang a benchmark in european conference on computer vision pp irani peleg improving resolution by image registration computer vision graphics and image processing graphical models and image processing fattal image upsampling 
via imposed edge statistics in acm transactions on graphics vol huang mumford statistics of natural images and models in ieee conference on computer vision and pattern recognition pp huang singh ahuja single image from transformed in ieee conference on computer vision and pattern recognition pp yang lin cohen fast image based on inplace example regression in ieee conference on computer vision and pattern recognition pp freedman fattal image and video upscaling from local selfexamples acm transactions on graphics dai timofte van gool jointly optimized regressors for image in computer graphics forum vol pp schulter leistner bischof fast and accurate image upscaling with forests in ieee conference on computer vision and pattern recognition pp kim kwon lee mu lee convolutional network for image in ieee conference on computer vision and pattern recognition pp dong loy tang accelerating the convolutional neural network in european conference on computer vision pp szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper with convolutions in proceedings of the ieee conference on computer vision and pattern recognition pp he zhang ren j sun deep residual learning for image recognition in ieee conference on computer vision and pattern recognition pp deng dong socher li li imagenet a hierarchical image database in ieee conference on computer vision and pattern recognition pp russakovsky deng su krause satheesh ma huang karpathy khosla bernstein berg feifei imagenet large scale visual recognition challenge international journal of computer vision jia shelhamer donahue karayev j long girshick guadarrama darrell caffe convolutional architecture for fast feature embedding in acm international conference on multimedia pp
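As a companion to the RACNN experiments reported above, the following is a minimal PyTorch-style sketch of the overall idea: convolutional super-resolution layers prepended to a conventional classification CNN, with the categorisation layers frozen so that only the SR head (and, optionally, the last classifier layer) is trained. It is not the authors' Caffe implementation; the SRCNN-style 9-1-5 kernel sizes, the use of torchvision's AlexNet, the 196-class output (as in Stanford Cars) and the optimiser settings are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): SR conv layers + an AlexNet-style
# classifier whose layers are frozen, so that only the SR head is trained.
import torch
import torch.nn as nn
from torchvision import models

class RACNNSketch(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # SRCNN-style convolutional super-resolution head (assumed 9-1-5 kernels).
        self.sr = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )
        # Conventional categorisation network; in practice one would load
        # pretrained weights here instead of a randomly initialised AlexNet.
        self.cls_net = models.alexnet(num_classes=num_classes)

    def forward(self, x):
        return self.cls_net(self.sr(x))

model = RACNNSketch(num_classes=196)            # e.g. Stanford Cars has 196 classes
# Freeze the categorisation layers except the final fully connected layer,
# mimicking the "identical classifier" setting used to isolate the SR layers.
for p in model.cls_net.parameters():
    p.requires_grad = False
for p in model.cls_net.classifier[-1].parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-3)

out = model(torch.randn(2, 3, 224, 224))        # sanity check of the forward pass
print(out.shape)                                # torch.Size([2, 196])
```

Keeping the categorisation layers fixed in this way mirrors the ablation described above, where any accuracy gain must come from the high-resolution details recovered by the SR layers rather than from refitting the classifier.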
| 1 |
preprint institute of statistics rwth aachen university asymptotics for covariance matrices and quadratic forms with applications to the trace functional and shrinkage nov ansgar and rainer von sachs institute of statistics rwth aachen university aachen germany steland and institut de statistique biostatistique et sciences actuarielles isba catholique de louvain voie du roman pays belgium july we establish large sample approximations for an arbitray number of bilinear forms of the sample matrix of a vector time series using weighting vectors estimation of the asymptotic covariance structure is also discussed the results hold true without any constraint on the dimension the number of forms and the sample size or their ratios concrete and potential applications are widespread and cover highdimensional data science problems such as projections onto sparse principal components or more general spanning sets as frequently considered in classification and dictionary learning as two specific applications of our results we study in greater detail the asymptotics of the trace functional and shrinkage estimation of the covariance matrices in shrinkage estimation it turns out that the asymptotics differs for weighting vectors bounded away from orthogonaliy and nearly orthogonal ones in the sense that their inner product converges to ams subject classifications primary secondary keywords brownian motion linear process long memory strong approximation quadratic form trace introduction a large number of procedures studied to analyze vector time series of dimension dn depending on the sample size n relies on projections by projecting the observed random vector onto a spanning set of a lower dimensional subspace of dimension ln examples include sparse principal component analysis see in order to reduce dimensionality of data sparse portfolio replication and index tracking as studied by or dictionary learning see where one aims at representing input data by a sparse linear combination of the elements of a dictionary frequently obtained as the union of several bases historical data b n wn when studying projections it is natural to study the associated bilinear form d vn wn r n representing the dependence structure in terms of the projections covariances b n is the uncentered sample matrix here and throughout the paper in order to conduct inference large sample distributional approximations are needed for a vector time series model given by correlated linear processes we established in a strong steland von sachs approximation by a brownian motion for a single quadratic form provided the weighting vectors are uniformly bounded in the it turned out that the result does not require any condition on the ratio of the dimension and the sample size contrary to many asymptotic results in highdimensional statistics and probability in the present article we study the more general case of an increasing number of quadratic forms as arising when projecting onto a sequence of subspaces whose dimension converges to noting that the analysis of autocovariances of a stationary linear time series appears as a special case of our approach there are a few recent results related to our work established a central limit theorem for a finite number of autocovariances whereas in the case of long memory series has been studied has studied the asymptotic theory for detecting a change in mean of a vector time series with growing dimension to treat the case of an increasing number of bilinear forms we consider two related but different 
frameworks the first framework uses a sequence of euclidean spaces rdn equipped with the usual euclidean norm the second framework embeds those spaces in the sequence space equipped with the it is shown that in both frameworks an increasing number of say ln quadratic forms can be approximated by brownian motions without any constraints on ln dn and n apart from n one of our main results asserts that for the assumed time series models one can define on a new probability space equivalent versions and a gaussian process gn taking values in c rln such that b e b wn j gn t op vn j sup nln as n almost surely without any constraints on ln dn we believe that those results have many applications in diverse areas as indicated above in this paper we study in some detail two direct applications the first application considers the trace operator which equals the trace matrix norm k ktr when applied to covariance matrices we show that the trace of the sample covariance matrix appropriately centered can be approximated by a brownian motion on a new probability space which also establishes the convergence rate b n ktr ke b n ktr op dn the second application elaborated in this paper is shrinkage estimation of a covariance matrix as studied in depth for sequences of random vectors as well as dependent vector time series see by and amongst others in order to regularize the sample matrix the shrinkage estimator considers a convex combination with a target that usually corresponds to a simple regular model we consider the identity target a multiple of the identity matrix in of dimension dn to the best of our knowledge large sample approximations for those estimators have not yet been studied we show that uniformly in the shrinkage weight for the convex combination a bilinear form given by the shrinkage estimator can be approximated by a gaussian process when it is centered at the shrunken true covariance matrix using the same shrinkage weight by uniformity the result also holds for the widely used estimator of the optimal shrinkage weight for this estimated optimal weight the convergence rate under quite general conditions is known it turns out that when comparing the matrices in terms of a natural pseudodistance induced by bilinear forms the convergence rate carries over from the optimal weight s inference for the trace estimator we also compare the shrinkage estimator using the estimated optimal weight with an oracle estimator using the unknown optimal weight last we study the case of nearly asymptotically orthogonal vectors as a consequence of the bound see this property allows to place much more unit vectors on the unit sphere it turns out that for nearly orthogonal vectors the nonparametric part dominates in large samples contrary to the situation for vectors bounded away from orthogonality the time series model of the paper is as follows at time n n n we observe a d dn dimensional mean zero vector time series d yni yni yni n i n defined on a common probability space f p whose coordinates are causal linear processes yni x i dn cnj where are independent mean zero error p terms possibly not identically distributed such that for some and e r converges the coefficients cnj may depend on n and are therefore also allowed to depend on the dimension dn we impose the following growth condition assumption a the coefficients cnj of the linear processes satisfy sup max j for some it is well known that assumption a covers common classes of weakly dependent time series such as arma p q as well as a wide range of long 
memory processes we refer to for a discussion define the centered bilinear form qn vn wn where b n wn vn wn rdn n x bn yni yni n and b e the class of proper sequences of weighting vectors wn wndn n studied throughout the paper is the set w of those sequences wn n wn rdn n which have uniformly bounded in the sense that sup kwn sup dn x steland von sachs vectors naturally arise in various applications such as sparse principal component analysis as see or sparse financial portfolio selection as studied by for a more detailed discussion we refer to it is worth mentioning that our results easily carry over to weighting vectors with uniformly bounded provided one relies on standardized versions of the bilinear form first notice that conditions and allow us to control the linear process coefficients of a projected time series yni i n which are then o j and therefore decay at the same rate as the original time series the assumption v u dn ux sup t leads to the estimate dn x cnj u dn ux o nj for bounded dimension yields the estimate j for the latter expression but for p growing dimension this does not hold in general assuming for all j is however not reasonable for a setting since then cnj o as for example if cnj for the latter assumption would rule out the case of observing dn autoregressive time series of order with autoregressive parameters bounded away from zero on the other hand if wmin for and cnj cj min for pn cnj dn wmin cj min for each lag j this can be and n then e yni instead of yni where w e n fixed by considering w n wn then d d n n x x j w cnj o j dn next observe that by jensen s inequality v dn p x x u t dn d n dn x e n n w and w e yni is a linear time hence and the imply w series with coefficients decaying at the same rate j as the original time series clearly for any sequences vn wn of weighting vectors with uniformly bounded we have the scaling property e n qn vn wn qn e vn w en e n where v n vn and w n wn have uniformly bounded but if one standardizes qn the factor dn cancels hence in this sense several of our theoretical results can be also applied to study projection onto vectors with uniformly bounded the rest of the paper is organized as follows in section we introduce the partial sums and partial sum processes associated to an increasing number of bilinear forms and establish the strong and weak approximation theorems for those bilinear forms the application to the trace functional is discussed in section the large sample approximations for shrinkage estimators of covariance matrices are studied in depth in section inference for the trace large sample approximations for bilinear forms definitions and review let us define the partial sums k x b nk yi yi k x eyi yi for n k and put b nk wn dnk vn wn n k for two sequences of weighting vectors vn wn the associated processes will be denoted by b n wn dn t vn wn dn vn wn t n especially we have b n wn n dn dn vn wn n b n pn e yni yni with e n for some sequence of standard brownian motions bn t t n and any constant n we can introduce the rescaled version n bn tn s called the of bn in the following result on the asymptotics of a single bilinear form for a uniformly bounded is shown theorem suppose yni i n n is a vector time series according to model that satisfies assumption a let vn and wn be weighting vectors with uniformly bounded in the sense of then for each n n there exists equivalent versions of dnk vn wn and dn t vn wn t again denoted by dnk vn wn and dn t vn wn and a standard brownian motion wn t t which depends on vn wn wn t 
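Everything entering the centered bilinear form is directly computable, which the following self-contained NumPy sketch illustrates: it simulates a d_n-dimensional causal linear process with polynomially decaying coefficients (in the spirit of Assumption A, driven here by a single innovation sequence so that the coordinates are correlated), forms the uncentered sample covariance matrix, and evaluates the scaled centered form Q_n(v_n, w_n) together with the associated partial-sum path. The decay exponent, the lag truncation, and the particular sparse ℓ1-bounded weighting vectors are illustrative choices, not quantities taken from the paper.

```python
# Minimal simulation sketch (illustrative coefficient choices, not from the paper):
# a d-dimensional causal linear process, its sample covariance matrix, and the
# scaled centered bilinear form Q_n(v, w) = sqrt(n) * v'(Sigma_hat - Sigma) w.
import numpy as np

rng = np.random.default_rng(0)
n, d, J = 500, 50, 200                     # sample size, dimension, lag truncation
theta = 1.5                                # assumed polynomial decay exponent
C = rng.uniform(0.5, 1.0, size=(d, J + 1)) / (1.0 + np.arange(J + 1)) ** theta
eps = rng.standard_normal(n + J)           # i.i.d. innovations with unit variance

# y_i = sum_{j=0}^{J} C[:, j] * eps_{i-j}  (causal linear process, common innovations)
Y = np.stack([C @ eps[i + J::-1][:J + 1] for i in range(n)])   # shape (n, d)

Sigma_hat = Y.T @ Y / n                    # uncentered sample covariance matrix
Sigma = C @ C.T                            # true covariance of the simulated model

# l1-bounded weighting vectors (sparse unit-l1 vectors, an illustrative choice)
v = np.zeros(d); v[:5] = 0.2
w = np.zeros(d); w[-5:] = 0.2

Q_n = np.sqrt(n) * (v @ Sigma_hat @ w - v @ Sigma @ w)   # centered bilinear form

# partial-sum path: (v' S_k w - k * v' Sigma w) / sqrt(n) for k = 1, ..., n
partial = np.cumsum((Y @ v) * (Y @ w)) / np.sqrt(n)
D_path = partial - np.sqrt(n) * np.linspace(1 / n, 1.0, n) * (v @ Sigma @ w)
print(Q_n, D_path[-1])                     # the endpoint of the path reproduces Q_n
```

The endpoint of the partial-sum path coincides with Q_n(v_n, w_n); it is this path, suitably rescaled, that the strong approximation results of this section compare with a Brownian motion.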
wn t vn wn both defined on some probability space fn pn such that for some and a constant cn vn wn vn wn wn t cn for all t where vn wn is defined in if cn o as n t this implies the strong approximation sup t vn wn vn wn wn o as n for the of wn as well as the clt vn wn vn wn wn o as n dn is asymptotically n steland von sachs a multivariate version for l n bilinear forms which approximates dn vn j wn j by a brownian motion has been shown in th this result allows to consider the dependence structure which arises when mapping yn ynn onto the subspace j pl span wn j l spanned by l weighting vectors wn j w j we have the canonical mapping called projection onto pl in the sequel yn pn yn pn wn wn l l which represents the orthogonal projection onto pl if wn wn are orthonormal the associated matrix is cov pn yn cov wn j yn wn k yn wn j wn k j if the wn s are eigenvectors of then cov pn yn is a diagonal matrix but that property is lost for more general spanning vectors given the sample ynn of dn random vectors the canonical nonparametric statistical estimators of and cov pn yn b n as defined in and are d pn yn w j b n w k cov n n d pn yn consist of bilinear forms as studied in theorem and for fixed the entries of cov l its multivariate extension suffices to study the dependence structure of the projection onto pl this no longer holds if l is allowed to grow as the sample size increases when studying the case l ln as n indeed the treatment of that situation is much more involved as we shall see it requires a different scaling and a more involved mathematical framework the strong approximations we establish in this paper take place in the euclidean space rln of growing dimension and the hilbert space respectively thus to go beyond the case of a finite number of bilinear forms we now consider b n w j qn vn j wn j vn j n j ln dnj dn vn j wn j n j ln j j where w w j ln are ln pairs of uniformly sequences of weighting vectors and ln may tend to infinity as n we are interested in the joint asymptotics of the centered and scaled versions of the corresponding statistics given by and the associated sequential processes dnj t dn t vn j wn j n t j ln inference for the trace cf the additional factor ln anticipates the right scaling to obtain a large sample approximation further we are interested in studying weighted averages where averaging takes place over all ln forms and all sample sizes n let be the weight for sample size n and the weight for the quadratic form associated to a pair of sequences of weighting vectors vn wn n for ln n define for k dk vn wn x ln x lm x n where b nmk x b nmk wm vn yni ymi b nmk e for n m notice the relations b nnk b nk b nn bn between and dk vn wn depends on all weights dn ln n but is measurable gk yni n n i k now for any sample size m we may consider the associated process associated to dm t vn wn m vn wn t preliminaries before proceeding recall the following facts on the hilbert space and strong approximations in hilbert spaces we shall denote the inner product of an arbitray hilbert space by the induced norm and the operator of an operator t h h by kt kop our results take place in the hilbert space of all sequences p f fj j with j which is a separable hilbert space when equipped with the inner p p product f g j fj gj f fj j g gj j and the induced norm kf f f the associated operator norm of an operator t is simply denoted by kt for two random variables x y defined on f p with e x e y we denote the inner product by x y where f p sufficient conditions for a strong approximation of 
partial sums of dependent random elements taking values in a separable hilbert space require the control of the associated conditional covariance operator denote the underlying probability space by f p let x p j be a random element defined on f p taking values in with j e xj the covariance operator cx associated to x is defined by x fj e xj xk cx f e f x x j k f fj j steland von sachs to any a we may associate the conditional covariance operator of x given a x cx f e f x fj e xj xk f fj j j k covariance operators are symmetric positive linear operators with operator norm kcx k sup f kf k cx f f for further properties and discussion see a strong invariance principle in deals with the approximation of partial sums of random elements by a brownian motion in recall that a random element b b t t with values in c is called brownian motion in if i b ii for all tn the increments b b ti i n are independent and iii for all s t the increment b t b s is gaussian with mean and covariance operator p min s t k for some nonnegative linear and operator k on such that kei ei where ei is some orthonormal system for if k cx for some random element x b is the brownian motion generated by x the definition for a general separable hilbert space is analogous a strong invariance principle or strong approximation for a sequence of random elements taking values in an arbitrary separable hilbert space h with inner product and induced norm asserts that they can be redefined on a rich enough probability space such that there exists a brownian motion b with values in h and covariance operator such that x b t for constants and c if the dimension of h is finite and x p b t o t log log t as t if h is infinite dimensional throughout the paper we write for two arrays and of real numbers if there exists a constant c such that for all large sample approximations we aim at showing a strong approximation for the d processes n dn dnj l n inference for the trace where the coordinate processes dnj are given by dnj t j dn vn j wn j nln j j t n j b nk wn for j ln n cf and with dnk vn wn vn the above processes can be expressed as partial sums lemma we have the representation dnk k x n for k n n leading to dn t x n nln i n where the random elements t n are defined in to introduce the conditional covariance operators associated to dn denote the filtration fm i m m z and define c n f e f dn dn f n let us also introduce the unconditional covariance operator c n f e f zn zn f n where zn znj l n with random variables znj j ln satisfying e znj and e znj znk n j k for j k ln here j k vn j wn j vn k wn k are the quantities introduced in the asymptotic covariance parameters of the bilinear j j k k forms corresponding to the pairs vn wn and vn wn j k ln the following technical but crucial result establishes the convergence of c n c n in the operator in expectation and provides us with a convergence rate theorem j j suppose vn wn j ln have uniformly bounded sup max max kvn j kwn j c for some constant let n n ln mx n n steland von sachs n with defined by define n n n f e f for f then n c n k where k k denotes the operator norm defined in we are now in a position to formulate the first main result on the large sample approximations of ln bilinear forms when ln converges to infinity in terms of the as well as the the results holds true under the weak assumption that the weighting vectors have uniformly bounded norm theorem let yni i n be a vector time series following model and sat l ln wn n isfying assumption a suppose that vn w w n have uniformly 
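The statements that follow treat a growing number ℓ_n of such forms jointly; a rough Monte Carlo sanity check of that joint behaviour (approximately centred, with a stable covariance across replications, as the Gaussian approximation suggests) can be sketched as follows. The model, the number of pairs L, and the sparse weighting vectors are illustrative assumptions, and the check probes empirical behaviour only; it does not implement the paper's covariance parameters explicitly.

```python
# Self-contained Monte Carlo sanity check (illustrative parameters): for L pairs of
# l1-bounded weighting vectors, the vector of scaled centered bilinear forms should
# be roughly mean zero with a stable covariance across replications.
import numpy as np

rng = np.random.default_rng(1)
n, d, J, L, reps = 400, 40, 150, 6, 300
C = rng.uniform(0.5, 1.0, (d, J + 1)) / (1.0 + np.arange(J + 1)) ** 1.5
Sigma = C @ C.T                                    # true covariance of the model

V = np.zeros((L, d)); W = np.zeros((L, d))
for j in range(L):                                 # assumed sparse unit-l1 pairs
    V[j, 5 * j:5 * j + 5] = 0.2
    W[j, 5 * j + 2:5 * j + 7] = 0.2

def draw_forms():
    """One replication: sqrt(n) * (v_j' Sigma_hat w_j - v_j' Sigma w_j), j = 1..L."""
    eps = rng.standard_normal(n + J)
    Y = np.stack([C @ eps[i + J::-1][:J + 1] for i in range(n)])
    Sh = Y.T @ Y / n
    return np.sqrt(n) * np.array([V[j] @ (Sh - Sigma) @ W[j] for j in range(L)])

Q = np.stack([draw_forms() for _ in range(reps)])  # shape (reps, L)
print(np.round(Q.mean(axis=0), 2))                 # approximately zero
print(np.round(np.cov(Q.T), 2))                    # empirical covariance of the L forms
```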
bounded sup max max kvn j kwn j c for some constant c then all processes can be redefined on a rich enough probability space such that there exists for each n a brownian motion of dimension ln bn t bn t vn j wn j j ln t with coordinates bn t j j ln and covariance function given by e bn s j bn t k min s t n j k for j k ln and s t such that the following assertions hold true i in the euclidean space rln we have the strong approximation dnt n bn t krln t x n n bn t rln cn for constants cn and where depends only on ln and provided cn o as n the following assertions hold ii with respect to the we have sup ln x dnj t b n n o j as n for the b n of bn inference for the trace iii with respect to the we have ln x dnj t b n o sup n j ln as n for the b n of bn and with respect to the maximum norm sup max dnj t b n n o j as n iv let n n and n be then there exist constants and c and vn wn n such that for equivalent versions and a standard brownian motion b on defined on a new probability space vn wn vn wn n b t ct for all t further for any sample size m sup t vn wn vn wn n b m t cm for the b m of b remark the brownian motions can be constructed such that j j j j vn wn o ln vn wn wn sup b n n j n j j as n if holds where wn vn wn is as in theorem for j ln due to assertion iv of the above theorem we may conjecture that holds cf the discussion in but we have neither a proof nor a counterexample the following result studies the relevant processes in the space and yields an approximation in probability taking into account the additional factor log log n theorem suppose that the assumptions of theorem hold in the hilbert space we have the strong approximation dnt bn t t x n bn t n p o t log log t as t there exists a sequence n n such that with n log log n max k x k n b n n n ln op steland von sachs n for the b n t n bn tn in other words r k n k dn op bn max n n n or equivalently sup n r n dn t b n n n op as n the above result eliminates the condition but we have no detailed information about the sequence the question arises whether the above results are limited to linear processes as the main arguments deal with approximating martingales we have the following result which suggests that the class of vector time series to which the main results of this paper apply is larger l ln n theorem let wn vn projection vectors with uniformly bounded w w n be sup max max kvn j kwn j c for some constant c let yni i n be a dn vector time series such n that dnk vn wn can be approximated by the martingales mk vn wn k defined in in with rate dn for certain sequences of coefficients cnj j dn satisfying assumption a and a sequence of independent mean zero random variables k with e if sup sup max max j ynk j ynk for some then the results of this section still hold true proofs proof of lemma we argue as in the proof of theorem given in where it was shown that the partial sum associated to a single bilinear form q vn wn attains the representation x n dnk vn wn vn wn with gaussian random variables n vn wn yni vn yni wn e yni vn yni wn i n n for linear processes yni vn x v cnj yni wn x w cnj i z n n inference for the trace with coefficients v cnj dn x cnj w cnj dn x cnj j j for j and n for ln pairs of weighting vectors vn wn j ln we consider the corresponding partial sum process where the summands are the ln vectors n ln n j n n j vn j wn j j ln for i which we however also interpret as random elements taking values in this completes the proof th asserts that and respectively hold if the following conditions for the scaled partial sums 
k m n are satisfied i for some ii for some iii there exists a covariance operator c such that the conditional covariance operators f e f f h converge in the operator k k to c f in expectation with rate c k for some for a discussion of this result and extensions see as shown by the strong invariance principle also holds true for strictly stationary sequences taking values in a separable hilbert space which possess a finite moment of order and are strong mixing with mixing coefficients satisfying k o for some the above conditions are however more convenient when studying linear processes has studied strong invariance principles for a univariate nonlinear time series using the physical dependence measure which is easy to verify for linear processes extensions to time series of fixed dimension have been provided by we rely on the conditions of since they allow to study time series of growing dimension and taking values in the space in a relatively straightforward way as a preparation for the proof of theorem we need the following lemma dealing with the uniform convergence of unconditional and conditional covariances of the approximating martingales defined by x x n n n fe fe m m k k n n where for brevity fel i fel i vn wn l i dn steland von sachs lemma under assumption a we have n n n n sup sup e which implies n n sup sup sup e further n n n n sup sup e which implies n n sup sup sup e proof a direct calculation leads to n n n n cm n e cm n cm n where cm n mx x n n e k k cm n mx cm n n n mx x cm n mx x n n l n n for let us first estimate cm n cm n we have sup sup n cm n see next we show that sup e sup n inference for the trace recall that e and assume in what follows the schwarz inequality yields cm n m x x x n n n v v u u mx u mx x x u n n t n n using k k and jensen s inequality we obtain v u t mx x n v u u n v u u n mx x mx x mx x n n sup n sup where the upper bound does not depend on hence v u x u mx n t e sup n cm n e sup n v u x u mx n t sup n n using o kvn kvn uniformly in n and the o follows lastly consider mentary fact that l cm n since the indices satisfy k is whereas are independent from hence cm n mx x k k n n e clearly for k the summands vanish such that cm n mx x n n e steland von sachs if e for all k cm n otherwise put we have the estimate n n m x x e n n n v ux x u t fe n fe n n v u mx ux t n hence follows the above arguments also imply that sup sup e n since cm n n where the first term is finite since its is and the second one is uniformly in such that sup sup cm n n which in turn implies to verify one first conditions on and then argues simi larly in order to estimate ecm n observe that with max e n cm n x x n sup n v u x u mx n sup t n using k k and jensen s inequality which verifies and in turn n introduce for and each coordinate ln the partial sums vn wn n vn wn and denote the appropriately scaled versions by n n vn wn ln n for ln the corresponding martingale approximations are given by n n n vn wn vn wn ln n inference for the trace we need to study the approximation error n n n the next result improves upon lemma by showing that firstly the error is of order in terms of the when conditioning on the past and secondly that the result is uniform over weighting vectors lemma we have n sup e n n proof consider as in the decomposition n n n n where nx n x n fe ln l m n n n x x e n n fel f ln l n nx x n fe ln n is the projection of onto the subspace spanned by r s and therefore hence with i l n ln e x x n n fel fel i l x x n n fel i l x x n n fel fel i l x x n fel sup sup such 
that due to n e sup e n n n fel n n is the projection of onto the subspace spanned by n r s and thus independent from such that e steland von sachs n e ln last by fatou n ln e lim n nx n x i k k n n e nx n x n n sup lim n k k sup lim n v u nx u e n uniformly in f l i t n x k k n n n x x n sup lim n where we estimated the by the hence nx x n n sup fel i e sup e ln n sup n n by virtue of this completes the proof proof of theorem for a sequence of conditional covariance operators cn e xn xn a with xn xnj j e xn n say we have convergence in the operator defined as kt k supf kf f t f for an operator t acting on to some unconditional covariance operator c e z z z zj j e z in expectation if e sup f kf f cn f c f e sup x f kf j fj fk e xnj xnk e zj zk converges to as n define the random elements n n n n n n n where and for ln recall that n n n f e f f and let n n n c f e f f inference for the trace be the conditional covariance operator associated to the martingale approximations obviously n sup c n k where n n sup c k n sup ekc c n we shall estimate both terms separately to simplify notation let n n ct n e n n cm n e for ln to estimate we shall show that n n is o n n uniformly in n by an application of the inequality we have e sup n cm n n n n n e sup e e n n n e sup e n n n e sup e where n n n e sup e r n n n e sup e sup n n n n n since is independent from and the decomposition n n n leads to by virtue of and lemma further n n n e r n n n e e n n steland von sachs by and lemma uniformly in ln consequently sup ln e sup n cm n hence using the inequality p ln ln n n ln we obtain sup c k sup e ln x ln x sup t m f kf t m sup e sup n n ln x ln x ln x sup f kf n by lemma see and the scaling of the martingale approximations ln by the factor ln sup ln max n n ke n n therefore n sup ekc c n k sup e sup kf sup e sup ln x ln x sup kf m n m n sup sup ln n n kf ln x ln x proof of theorem by virtue of lemma equation we have the representations x n x n dnt j dn t j nln n and therefore we check conditions i iii of discussed above for ln cf the summands can be seen as attaining values in the euclidean space rln of finite but inference for the trace increasing in n dimension ln or as random elements taking values in the infinite dimensional hilbert space to show i observe that by the cr for each j ln n j e vn j ynk wn j vn j ynk wn j vn j ynk wn j such that n j q j vn q j wn repeating the arguments of we obtain for with vn j sup k x v x v v sup e sup e k k v j uniformly in k but kvn and p vj vj due to assumption a imply and in turn noting that the above bounds hold uniformly in k and n we obtain n sup sup max j by virtue of jensen s inequality we may now conclude that n ln ln x x n n j j n ln which establishes i introduce the partial sums n n ln mx n n n condition ii can be shown as follows denote the coordinates of by j and n n j denote the corresponding notice that they are given by j ln n n n n martingale approximations by and j respectively and let n n be the remainder with coordinates j j ln cf the preparations above n n clearly the martingale property implies e j e j j ln lemma asserts that n sup ln e sup e j steland von sachs such that two applications of jensen s inequality lead to v u ln h ux n n sup e e sup t e e j v u ln h ux n e e j sup t v u u n t sup ln e sup e j which shows ii condition iii follows from theorem consequently we may conclude that we may redefine all processes on a rich enough probability space where a brownian motion bn t bn t j j with covariance operator c n with covariances e bn t j 
bn t k n j k exists such that for constants and cn dn t vn wn bn t krln cn for all t therefore sup kdn t b n krln cn for the b n of bn which implies assertions i and ii to show iii recall that the vector of rln can be bounded by ln k krln n such that ln x t b n j n kdn t b n o ln as n further using qp j we have sup t vn j wn j b n j sup kdn t b n o as n for j ln it remains to prove iv we may argue as in to obtain x b nmk w vn vn yni wm ymi e vn yni wm ymi m x yni vn ymi wm e yni vn ymi wm where yni vn dn x x cnj and ymi wm dn x x cmj inference for the trace for ln and lm n m therefore for k we obtain the representation dk vn wn xxx yni vn ymi wm e yni vn ymi wm n m x yi cj yi dj e yi cj yi dj for the linear processes yi cj cj x dj ln x dn x cj cnj dn x and yi dj dj with coefficients j x ln x cnj j hence the result follows from proof of remark ability space by theorem we may and will assume that on the same t vn j wn j vn j wn j wn vn j wn j o j j as n for ln standard brownian motions wn vn wn by virtue of j j orem but then since dnj t ln dn t vn wn vn j wn j wn vn j wn j sup b n j n sup n j dnj t sup t vn j wn j vn j wn j wn vn j wn j n o as n for each j ln which verifies the remark proof of theorem observe that the conditions i iii of theorem hold in the hilbert space as well since for any x rdn the euclidean vector norm coincides with the therefore we obtain the strong approximation k x n n bn k p k log log k as k for sequences o k n put n log log let be given then for each n n we may find n such that p hence op as n now we may conclude that for k n k x n n bn k n steland von sachs such that k x n ln bn k op max n as n which verifies asymptotics for the trace norm the trace plays an important role in multivariate analysis and also arises when studying shrinkage estimation before providing the large sample approximation by a brownian motion we shall briefly review its relation to several matrix norms the trace and related matrix norms there are various matrix norms that can be used to measure the size of covariance matrices here we shall use the trace norm defined as the of the eigenvalues a of a dn matrix a x kaktr a i also notice that the trace norm is a linear mapping on the subspace of definite matrices and satisfies kaktr tr a for any covariance matrix a it induces the frobenius norm via tr further it is worth mentioning that the trace norm is also related to the frobenius norm via the fact x a kaktr i in this way our results formulated in terms of scaled trace norms can be interpreted in terms of scaled squared frobenius norms of square roots too there is a third interesting direct link to another family of norms namely the norms kaks p p of a n m matrix a of rank r which is defined as the of its singular values a a a a of the eigenvalues of x kakps p a p i b n if a yn the norm is also called nuclear norm since b n yn we have the identity such that x b n ktr b n kyn i between the trace norm of the sample covariance matrix and the norm of the scaled data matrix for a sequence an of matrices of growing dimension dn dn it makes sense to attach a scalar weight depending on the dimension to a given norm such that simple matrices such as inference for the trace the identity matrix receive bounded norms having in mind that the squared frobenius norm of an is the trace of an it is natural to attach a scalar weight f dn to the trace operator leading to the scalar weight f dn for the frobenius norm as proposed by one may select f dn such that tr f dn for some simple benchmark matrix such as 
the dn identity matrix in since tr idn dn we choose f dn n and therefore define the scaled trace operator by tr a n dn x a for a square matrix a aij i j of dimension dn dn the scaled trace operator induces the scaled trace norm n kaktr for a square matrix a which is given by n tr a for a covariance matrix and averages the modulus of the eigenvalues and the scaled frobenius matrix norm given by f tr aa dn kakf trace asymptotics let us now turn to the trace asymptotics if the dimension is fixed it is well known that the eigenvalues of a sample covariance matrix and thus their sum as well have convergence rate op and are asymptotically normal see and for the case the situation is more involved the sample covariance matrix is not consistent to the frobenius norm even in the presence of a dimension reducing factor model see remark the following result provides a large sample normal approximation for the scaled trace b n for arbitrarily growing dimension dn when properly normalized the result also norm of shows that the trace norm has convergence rate b n ktr ktr op dn as n introduce for t x b n t yni yni n and notice that bn b n and b n t t e we are interested in studying the scaled trace norm process tn t b n t t n tr tr t steland von sachs theorem let yni i n be a vector time series following model and satisfying assumption a if holds then under the construction of theorem sup tn t n dn x b n j o as n here b n denotes the of the brownian motion bn arising in j j theorem when choosing the dn pairs vn wn ej ej j dn where ej denotes the jth unit vector and satisfies properties ii and iii of theorem suppose that in addition to the assumptions of theorem ynn is strictly stationary since the weighting vectors used in theorem are the first dn unit vectors the covariance of b n i and b n j which is associated to the asymptotic covariance of dni i and dnj j is given by n i j where i j ei ei ej ej i j dn cf we have the asymptotic representations b n ei b n ej o i j cov j n n x x i j cov ynk ynk o n n n x j i e ynk o n k k therefore up to negligible terms we may express i j as a variance parameter i j i j x where n i j o n i j i j cov n i j is the lag of the series ynk k and ynk k i j dn those can be estimated by i j b ej bn j bn i yk ei n pn with bn i n yk ei where yk ei ei ynk for k n i dn n the associated estimator for i j is then given by b n i j i j m x b n i j where m mn is a sequence of lag truncation constants and wmh a sequence of window weights typically defined by a kernel function w a bartlett kernel via w for some bandwidth parameter b bm inference for the trace by theorem var tn n o with n dn x cov b n j b n k dn j and using the canonical estimator btr n dn x i j j an asymptotic confidence interval with nominal coverage probability for is given by b n b n lemma assume as m for all z and w for some constant w for all m z further suppose that cnj cj n satisfy the decay condition sup j for some and are with e if m mn with o as n then lim tr tr remark it is worth comparing our result with the following result obtained by for a factor model suppose that the generic random vector yn ydn satisfies a factor model yn bn f with k k dn dn observable factors f fk errors and a dn k factor loading matrix bn then the sample covariance matrix of an sample yn fn has the convergence rate op dn k b n kf op dn k if ekyn maxi e and maxi e are bounded see theorem this means compared to the rate for fixed dimension the frobenius norm is inflated by the factor dn steland von sachs proofs b b proof of theorem 
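In practice the normal approximation for the scaled trace is combined with a lag-window estimate of the limiting variance. The sketch below follows the Bartlett-type weighting indicated in this section, applied to the averaged squared-coordinate series x_t = d^{-1}‖y_t‖², whose sample mean equals the scaled trace of the (uncentered) sample covariance matrix; because autocovariances of an average are averages of cross-covariances, this is the same as averaging the pairwise lag-window estimates over all coordinate pairs with the same kernel and bandwidth. The truncation rule m_n = ⌊n^{1/3}⌋, the 95% level, and the placeholder data are assumptions of the sketch.

```python
# Sketch of a data-driven confidence interval for the scaled trace: a Bartlett
# (Newey-West type) long-run variance estimate for x_t = ||y_t||^2 / d.
import numpy as np

def scaled_trace_ci(Y, level_z=1.96):
    n, d = Y.shape
    x = (Y ** 2).sum(axis=1) / d           # x_t; its mean is the scaled trace of Sigma_hat
    xc = x - x.mean()
    m = int(np.floor(n ** (1 / 3)))        # assumed lag truncation m_n
    lrv = (xc * xc).sum() / n              # lag-0 autocovariance
    for h in range(1, m + 1):
        w = 1.0 - h / (m + 1.0)            # Bartlett weights
        lrv += 2.0 * w * (xc[h:] * xc[:-h]).sum() / n
    half = level_z * np.sqrt(max(lrv, 0.0) / n)
    return x.mean(), (x.mean() - half, x.mean() + half)

# toy usage with placeholder data; any (n, d) array of observations works
rng = np.random.default_rng(3)
Y = rng.standard_normal((500, 100))
est, ci = scaled_trace_ci(Y)
print(est, ci)                             # scaled trace of Sigma_hat and its interval
```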
clearly p is for all k n and thus t as well the fact that tr a i ei aei leads to b n t ktr t ktr tr b n t tr t dn x b n b n t t ej n let dn dnj with dnj t dn dn t ej ej for j dn we shall apply theorem with ln dn therefore when redefining all processes on a new probability space together with a dn brownian motion b n with covariances as described in theorem we may argue as follows since k n tr we have dn x tn t dnj t dn now we can conclude that the process en t satisfies b n t t n tr tr dn x b n j n dn x dnj t b n j t dn dn x dnj t b n j dn o as n by theorem iv proof of lemma the proof follows easily from theorem by noting that the covariances of the coordinates of the brownian motion are given by n i j for i j dn shrinkage estimation shrinkage is a approach to regularize the sample matrix and we shall review in section the results obtained for settings when shrinking towards the identity matrix in terms of a convex combination with the sample matrix the optimal weight depends on the trace of the true covariance matrix which can be estimated canonically by the trace of the sample covariance matrix as a consequence we can apply the results obtained in the previous section to obtain large sample approximations for shrinkage matrix estimators recall that the approximations deal with the norm of the difference between partial sums and inference for the trace a brownian motion both attaining values in a vector space in order to compare covariance matrices we shall work with the following pseudometric define an bn vn wn an bn wn for sequences of matrices an and bn of dimension dn indeed for fixed vn wn the mapping an bn an bn an bn vn wn is symmetric semidefinite an bn implies an bn and satisfies the triangle inequality hence defines a pseudometric on the space of dn dn matrices for each we establish three main results for regular weighting vectors vn wn that are bounded away from orthogonality we establish a large sample approximation which holds uniformly in the shrinkage weight and therefore also when using the common estimator for the optimal weight further we compare the shrinkage estimator using the estimated optimal weight with an oracle estimator using the unknown optimal weight in both cases it turns out that the convergence rate of the estimated optimal shrinkage weight carries over to the shrinkage covariance estimator lastly we study the case of orthogonal and nearly orthogonal vectors the latter case is of particular interest since then one may place more unit vectors on the unit sphere corresponding to overcomplete bases as studied in areas such as dictionary learning shrinkage of covariance matrix estimators the results of the previous chapters show that under general conditions inference relying on inner products of series can be based on the sample covariance b n is singular however from a statistical point of view the matrix even if dn n such that use of this classical estimator is not recommended in such situations of high dimensionality important criteria such as its error or its condition number defined to be the ratio of the largest to the smallest eigenvalue deteriorate and it is advisable to regularise b n in order to improve its performance both asymptotically and for finite sample sizes with respect to these criteria obviously a particular interest lies in based approaches using an invertible estimator of as in the approach of on shrinkage estimation in multivariate hidden markov models one b n without needing to impose any structural assumptions on possibility to 
regularise in particular avoiding sparsity is the following approach of shrinkage consider a b n with a shrinkage estimator defined by a linear or convex combination of target matrix tn b n wn t n wn wn where wn are the shrinkage weights of this convex combination to be chosen in an optimal way to minimise the error between and see below the role of the target tn is similar to ridge regression to reduce a potentially large condition number of the highdimensional matrix by adding a highly regular well conditioned matrix a popular choice for the target is to take a multiple of the dn identity matrix in t n n in with tr in order to respect the scale of both matrices in the convex combination this choice of the target reduces the dispersion of the eigenvalues of around its steland von sachs grand mean tr as large eigenvalues are pulled down towards and small eigenvalues are lifted up to and in particular lifted up away from zero although a bias is b n the gain in variance reduction in parintroduced in estimating by compared to ticular in helps to considerably reduce the error in estimating in order to develop the correct asymptotic framework of the behaviour of large covariance matrices the authors of propose to use the scaled frobenius norm given by to measure the distance between two matrices of asymptotically growing dimension dn to be used also and in particular to define the error between and to become the expected normalised frobenius loss e f furthermore with this scaling tr dn is the appropriate choice of the factor in front of the identity matrix in in the definition of the target tn in equation b n by in practice needs to be estimated from the trace of bn b n tr dn similarly the theoretical shrinkage weight wn need to be replaced by its sample cn thus the fully expression for the shrinkage estimator of writes as analog w follows bn w cn w cn cn w bn in which shrinks the sample covariance matrix towards the estimated shrinkage target bn in it cn with remains to optimally choose the shrinkage weights wn and its analogue w the purpose of balancing between a good fit and good regularisation for this a prominent possibility is indeed to choose the shrinkage weights wn such that the error mse between and is minimised argminwn e wn f which leads to the shrunken matrix a closed form solution or proposition can be derived as b n e f b e in f this choice leads to the interesting property that b e f e kf showing the actual relative gain of the shrunken estimator compared to the classical unshrunken sample covariance in terms of the error moreover it can be shown that this property continues to hold even if one replaces the in practice yet unknown optimal c which is constructed by replacing the population quantities weights by an estimator w n inference for the trace in numerator and denominator of by sample analogs whereas the denominator can be b n it is slightly less straightforward to estimate the nuessentially estimated by kb n in f b n one possibility suggested by and further developed by for merator e f b our is based on the estimation of the variance of note that dn x b n i j b kf var ndn i where under stationarity for i j dn i j n x i j i j b n i j ynk ynk e ynk ynk n x i j b n i j var i j i j with of the i j i j cov ynk ynk yn yn n n optimal weights can now be obtained as follows let a consistent estimator x i j j i b n i j bn i j bn i j yn yn ynt ynt n pn i j where bn i j ynt ynt i j dn then similar as in the previous section the variances i j can be estimated by b i j i j n m x b i j 
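The shrinkage-towards-identity construction reviewed in this section can be sketched compactly. The version below uses the simple i.i.d.-style plug-in weight of the Ledoit–Wolf type as a stand-in; the estimator discussed here instead replaces the entrywise variance estimates by Bartlett-weighted long-run variances to account for serial dependence, and such an estimate would slot into the same numerator. All concrete sizes are placeholders.

```python
# Sketch of shrinkage towards nu_hat * I with a plug-in weight (Ledoit-Wolf-style
# i.i.d. version; not the serial-dependence-adjusted weight of this section).
import numpy as np

def shrink_to_identity(Y):
    n, d = Y.shape
    S = Y.T @ Y / n                               # uncentered sample covariance
    nu = np.trace(S) / d                          # nu_hat: scaled trace, target nu * I
    delta2 = np.linalg.norm(S - nu * np.eye(d), 'fro') ** 2 / d   # dispersion around target
    beta2 = 0.0
    for t in range(n):                            # squared distance of y_t y_t' to S
        beta2 += np.linalg.norm(np.outer(Y[t], Y[t]) - S, 'fro') ** 2 / d
    beta2 = min(beta2 / n ** 2, delta2)
    W = beta2 / delta2 if delta2 > 0 else 1.0     # estimated shrinkage weight in [0, 1]
    return W * nu * np.eye(d) + (1.0 - W) * S, W

rng = np.random.default_rng(4)
Y = rng.standard_normal((60, 200))                # d > n: the sample covariance is singular
S_shrunk, W_hat = shrink_to_identity(Y)
print(W_hat, np.linalg.eigvalsh(S_shrunk).min() > 0)   # shrunken matrix is invertible
```

For d > n the sample covariance matrix is singular, whereas the shrunken estimate has smallest eigenvalue at least Ŵ·ν̂ and is therefore well conditioned, which is the regularisation effect described above.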
n i j dn consistency of a more general version has been shown in equation under similar assumptions as stated in lemma for dn as n we are led to the estimator p i j ndn c w n b n kb n in f for also studied in depth in a rate of consistency in an asymptotic framework with growing dimensionality dn can be achieved again following for the specific shrinkage target in also considered in let be such that n the larger the faster is dn allowed to grow with n and that as n n in kf c recalling that n tr we observe that measures the closeness of the target to the true covariance matrix then theorem and theorem show that n wn op dn in order to apply the results of the previous sections onto the fully shrinkage one needs to study the convergence of the estimated shrinkage weight estimator w normalised by as will become clear from the proof of theorem to be stated below we already observe here that implies c w op n n if o thus for close to the dimension dn may even grow faster than steland von sachs asymptotics for regular projections our interest is now in deriving the asymptotics for bilinear forms based on the shrinkage estimator of the covariance matrix we can and will assume that the uniformly weighting vectors vn and wn are it turns out that due to the shrinkage target the inner product wn the angle between the vectors vn and wn appears in the approximating brownian functional the inner product is bounded but may converge to as n tends to the latter case requires special treatment and will be studied separately we shall call a pair vn wn of projections regular if it has uniformly bounded and satisfies wn c for all n for some constant c if it is in addition bounded away from orthogonality let w be an arbitrary shrinkage weight and consider the associated shrinkage estimator b s w w bn b n in n b s w estimates the unobservable shrunken variance matrix notice that n w w w tr n in define for w an w b s w w wn nvn n we shall apply the trace asymptotics obtained in theorem dn x b n t t n b n j o tr tr n as n the variance of the approximating linear functional of the dn brownian motion is given by n dn dn x x i j cov b n i b n j dn dn i i where i j are parameters see since typically parameters have positive limits it is natural to assume that inf n theorem let vn wn be a regular pair of projections under the assumptions of theorem and condition there exists on a new probability space which carries an equivalent n on such that version of the vector time series a brownian motion b n t b n t j sup w bn w o w inference for the trace as n where bn w w b n w wn n dn x b n j the covariance structure of b n t is given by var b n dn vn wn cov b n b n i dn vn wn ei ei for i dn and cov b n i b n j dn i j for i j dn especially for any deterministic or random sequence of shrinkage weights wn we have the large sample approximation for the corresponding shrinkage estimator b s wn wn wn bn wn o n as n notice that var bn w dn w wn x w vn wn i j dn dn dn i dn w w wn x vn wn ej ej o p dn dn as n hence under assumption the variance of the approximating wiener process adressing the nonparametric part of the shrinkage estimator is of the order o n whereas the variance of the term approximating the target is of the order o this is due to the fact that we need a dn brownian motion from which dn coordinates are used to approximate the estimated target this requires to scale all coordinates dn cf theorem the following theorem resolves that issue by approximating the shrinkage estimator by two brownian motions one in dimension 
for the nonparametric part and one in dimension dn for the target those brownian motions are constructed separately such that a priori nothing can be said about their exact covariance structure it turns out however that the covariances converge properly we shall see that for this alternative construction the terms of the resulting decomposition are of the same order theorem let vn wn be a regular pair of projections suppose that the underlying probability space f p is rich enough to carry in addition to the vector time series yni i n n a uniform random variable then there exist on f p a univariate brownian motion b n t t with mean zero and cov b n s b n t min s t vn wn steland von sachs n for s t and a mean zero brownian motion b n t j t in dimension dn with covariance function cov b n s i b n t j min s t n i j for s t and i j dn such that b s w w wn bn w o n pdn b n j as n with w w b n w wn dn further max b n b n j vn wn ej ej o n as n observe that var w w vn wn dn w wn x i j i w w wn dn dn x vn wn ej ej o as n where all three terms are o the above result shows that the nonparametric part namely the sample covariance matrix b n as well as the shrinkage target bn in contribute to the asymptotics in this sense shrinking with respect to the chosen scaled norms provides us with a large sample approximation that mimics the finite sample situation comparisons with oracle estimators recall that an oracle estimator is an estimator that depends on quantities unknown to us such as the optimal shrinkage weight of course it is of interest to study the distance b s w c with estimated optimal weight and the associated between the shrinkage estimator n n oracle using wn in particular the question arises how the rate of convergence affects the difference between the fully data adaptive estimator and an oracle b sn b sn w that uses the estimated the next theorem compares the shrinkage estimator c and the oracle estimator optimal shrinkage weight w n b s w w b n w n n n n b n in which shrinks the sample covariance matrix towards the target using the optimal shrinkage weight in terms of the pseudometric vn wn and thus considers the quantity b s w b s w vn wn b s w b s w wn n n n n n n n the following result shows that even now the rate of convergence is equal to the rate of c convergence of the estimator w n inference for the trace theorem under the assumptions of theorem and the construction described there we have on the new probability space b s w c b s w vn wn n n n n dn x c wn o o b n n n dn b n i o n as n the next result investigates the difference between the shrinkage estimator and the oracle type estimator tr n in using the oracle shrinkage weight and assuming knowledge of in terms of the pseudodistance c w wn b s w c w vn wn b s w n n n n n n n theorem under the assumptions of theorem and the construction described there we have on the new probability space s b n w vn wn bn w w o o as n the above result is remarkable in that it shows that it is optimal in the sense that c of b s w c w vn wn inherits the rate of convergence from the estimator w n n n n the optimal shrinkage weight wn cf nearly orthogonal projections i let vn i ln be unit vectors in rdn on which we may project yn in order to determine the best approximating direction recall that the true covariance between two i j projections vn yn and vn yn is cov vn i yn vn j yn vn i vn j and the corresponding shrinkage estimator is d v i yn v j yn v i b s w c v j cov n n n n n n i b s w as clearly those covariances vanish for i j if the vn are 
chosen as eigenvectors of n in a classical principal component analysis pca applied to the shrinkage covariance matrix estimator but when analyzing data it is common to rely on procedures such as sparse pca see which yield sparse principal components then analyzing the i covariances of the projections vn yn is of interest steland von sachs j if oln vn j ln is an orthogonal system and ln dn then oln spans a ln subspace of rdn of course there are at most ln orthogonal vectors ln can not be larger than dn however if one relaxes the orthogonality condition vn i vn j i j then one can place much more unit vectors in the euclidean space rdn in such a way that their pairwise angles are small indeed provides an elegant proof of the following kabatjanskiilevenstein bound theorem cheap version of the bound tao for some a dn let xm be unit vectors in rdn such that xj adn n ca for some universal constant then we have m cd theorem motivates to study the case of nearly orthogonal weighting vectors defined as a pair vn wn w w satisfying wn o n now the asymptotics of the shrinkage estimator is as follows theorem let vn and wn be unit vectors satisfying the nearly orthogonal condition and suppose that the conditions of theorem hold then b s w b s w wn bn w op n n as n where bn w w b n w wn n dn x b n i pdn observe that for asymptotically orthogonal weighting vectors the term w wn dn b n i corresponding to the parametric shrinkage target is op and thus vanishes asymptotically in this situation the nonparametric part dominates in large samples proofs proof of theorem first notice that ensures that the second term in does not converge to in probability since vn wn is a regular projection and condition pdn ensures that the gaussian random variable vn dn b n j is not op since inf p inf p n for any hence w wn vn op iff wn o which is excluded by n we argue similarly as in the proof of theorem put dn dnj with t ln dn t vn wn and dnj t ln dn t ej ej j dn where ln dn since the weighting vectors are uniformly theorem yields on a new probability space where a process equivalent to dn can be defined and will be denoted again by dn the inference for the trace n existence of a brownian motion b n t t as characterized in theorem such that b n wn wn b n o n n and b n n tr tr n dn x b n i o b n ktr and the fact that wn kvn kwn as n using these results bn n n o we have for any w w bn w b n wn w b n w dn x b n wn w wn b n i w tr tr n n b n wn b n w b wn wn n tr tr n n b n wn b n w w wn o dn x b n i dn x b n ktr ktr dn b n i as n which shows proof of theorem by theorem there exist on a new probability space f p d e ni i en t t an equivalent process y yni i and a brownian motion b on such that e n t t wn b en t o sup e n t y e ni y e by billingsley s lemma section as n p where ni d en t t n defined lemma there exist brownian motions b n t t n b on the original probability space f p such that b n t t wn b t o sup as n indeed recall that the infinite product of a complete and separabe metric space is complete and separable in our case d equipped with the usual metric see induced by th skorohod metric making d separable and complete then steland von sachs e n wn b en and apply sec lemma with l e nvn wn to conclude the existence of b n a function of and such that holds where the convergence the supnorm follows from the continuity of b n further by theorem there exist on a new probability space f p an equivalent d en t j dn t vector time series i yni i and brownian motions b on in dimension dn characterized as in the theorem such 
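The near-orthogonality condition discussed in this section is not restrictive in high dimension: random unit vectors are already pairwise nearly orthogonal, with inner products of order d^{-1/2}, so many more than d such directions fit on the unit sphere. The small NumPy check below illustrates this at an arbitrary scale (M = 10·d); it is only a mild illustration and does not reach the exponential regime of the Kabatjanskii–Levenstein-type bound quoted above.

```python
# Numerical illustration of near-orthogonality: M >> d random unit vectors in R^d
# have small pairwise inner products (the sizes below are arbitrary placeholders).
import numpy as np

rng = np.random.default_rng(5)
d, M = 200, 2000                               # M = 10 * d unit vectors in R^d
X = rng.standard_normal((M, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # project onto the unit sphere

G = np.abs(X @ X.T)                            # |<x_i, x_j>| for all pairs
np.fill_diagonal(G, 0.0)
print(G.max(), np.sqrt(np.log(M) / d))         # max pairwise |inner product| vs. a
                                               # benchmark of the same order, sqrt(log M / d)
```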
that n t t n as n p where t dn x en j o b again an application of billingsley s d n en t j dn t n b lemma shows the existence of brownian motions b n t j t n such that b n t t n tr tr n dn x b n j o as n on the original probability space a priori we have no information on the exact second order structure of the two brownian n motions but b n j is close to the associated process dnj t j cf and to the n corresponding martingale approximation j defined in which allows us to study the convergence of the covariances cov b n b n j b n b n j j dn first observe that n max kb n j j o n see lemma and lemma because of and since b n t j satisfies pdn n t j dnj t o as n see theorem ii also notice n that kb n j and j are o uniformly in j now use the decomposition x y x y x y y x x y for x y x y to conclude that n n n n cov b n b n j j b n j j n n b n j where the last two terms are o uniformly in j n n n n max b n j sup kb n j o as n combining these estimates with lemma yields n n max b n b n j cov mn j n o inference for the trace which establishes since the covariances of the approximating martingales equal dn vn wn ej ej o as n the factor dn is due to the additional scaling of dnj t to approximate dn bilinear forms by theorem proof of theorem recall that since cn and the elements of are uniformly bounded in n such that wn kvn kwn o this in turn implies that o and tr n o put c wn w wn c w w rn w n n n n n n and notice that w wn w tr rn w n vn wn using we obtain the bound c w w c w o rn w n n n n observe that b s w b s w w w b n w b n in tr n n n n n is equal to the difference w w tr w n in b n we have when replacing by b s w b s w n n n n c tr wn w n in b n w tr b n tr w n in using we therefore obtain for the associated bilinear form b n w b s w wn n n n c rn wn wn b n wn w wn b n w c w o w n n b n o wn w dn x c n b n i o n v wn w d n n n n as n steland von sachs proof of theorem recall that an w b s w w wn we have n b sn w wn c w c wn w c w wn b s w n n n n n n c c an w rn w w n n n w o further using where again rn w b s w w wn bn w o nvn n we arrive at b s w c w wn n n n c w an w rn w n n n o o w bn w as n which completes the proof proof of theorem theorem we obtain let an w be defined as in arguing as in the proof of b n wn vn wn b n w bn w w dn x b n wn b n i w wn n n the first summand is o by theorem under assumption the second term can be bounded by rn b n wn n tr w wn o b n wn tr n dn x b n i b n b n n n dn x b n i as n which completes the proof appendix a notation and formulas we denote e e the approximating martingales used to obtain the strong approximations require to control the following quantities for the reader s convenience we reproduce them here from as well as some related formulas and results let n n j j vn wn dn x cj cj j inference for the trace n n fl j fl j vn wn dn x cj cj l j and n n fel i fel i vn wn x n fl j dn x x cj cj l i lemma and definition suppose that vn wn have uniformly bounded in the sense of equation then assumption a implies sup x x n n fel i fel c for all n x x n c sup for all n x x n sup fel k c for all and there exist vn wn n such that n x n x x n n c n n for all en w e n n have uniformly bounded then there exist further if vn wn v with en w e n vn wn v n n x n x x n n e n vn w vn wn e n n e n vn wn e vn w en w e n vn wn v acknowledgments part of this work has been supported by a grant of the first author from deutsche forschungsgemeinschaft dfg grant no ste which he gratefully acknowledges rainer von sachs gratefully acknowledges funding by contract projet d 
actions de recherche no of the de belgique and by iap research network grant of the belgian government belgian science policy steland von sachs references andrew barron albert cohen wolfgang dahmen and ronald devore approximation and learning by greedy algorithms ann patrick billingsley convergence of probability measures wiley series in probability and statistics probability and statistics john wiley sons new york second edition a publication bosq linear processes in function spaces volume of lecture notes in statistics new york theory and applications joshua brodie ingrid daubechies christine de mol domenico giannone and ignace loris sparse and stable markowitz portfolios proceedings the national academy of sciences of the united states of america herold dehling and walter philipp almost sure invariance principles for weakly dependent random variables ann jianqing fan yingying fan and jinchi lv high dimensional covariance matrix estimation using a factor model econometrics mark fiecas franke rainer von sachs and joseph tadjuidje shrinkage estimation for multivariate hidden markov models amer statist to appear moritz jirak analysis in increasing dimension multivariate kollo and heinz neudecker asymptotics of eigenvalues and eigenvectors of sample variance and correlation matrices multivariate kollo and heinz neudecker corrigendum asymptotics of eigenvalues and unitlength eigenvectors of sample variance and correlation matrices multivariate anal no multivariate michael kouritzin strong approximation for of linear variables with dependence stochastic process oliver ledoit and michael wolf improved estimation of the covariance matrix of stock returns with an application to portfolio selection journal of empirical finance olivier ledoit and michael wolf a estimator for covariance matrices multivariate weidong liu and zhengyan lin strong approximation for a class of stationary processes stochastic process walter philipp a note on the almost sure approximation of weakly dependent random variables monatsh alessio sancetta sample covariance shrinkage for high dimensional dependent data multivariate steland and von sachs approximations for matrices of time series bernoulli in press terence tao a cheap version of the bound for almost orthogonal vectors daniela witten and robert tibshirani testing significance of features by lassoed principal components ann appl inference for the trace daniela witten robert tibshirani and trevor hastie a penalized decomposition with applications to sparse principal components and canonical correlation analysis biostatistics wei biao wu strong invariance principles for dependent random variables ann wei biao wu yinxiao huang and wei zheng covariances estimation for processes adv in appl wei biao wu and wanli min on linear processes with dependent innovations stochastic process zhang strong approximations of martingale vectors and their applications in adaptive designs acta math appl sin engl
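The results above all concern linear shrinkage estimators of the form S(W) = W * (sample covariance) + (1 - W) * (target), together with the covariances of projections v'Y and w'Y computed from such an estimator. The following minimal Python sketch makes that construction concrete for a fixed weight; the diagonal target built from the sample variances, the weight W = 0.7 and the synthetic data are illustrative assumptions only and do not reproduce the paper's data-adaptive optimal weight or any of its asymptotic theory.

import numpy as np

def shrinkage_covariance(Y, W):
    # Linear shrinkage S(W) = W * S_sample + (1 - W) * T, with T a diagonal
    # target built from the sample variances; this mirrors only the generic
    # form of the estimators discussed above, not the authors' weight choice.
    n, d = Y.shape
    Yc = Y - Y.mean(axis=0)
    S_sample = Yc.T @ Yc / n              # nonparametric part
    T = np.diag(np.diag(S_sample))        # parametric (diagonal) target
    return W * S_sample + (1.0 - W) * T

def projected_covariance(S, v, w):
    # Covariance of the projections v'Y and w'Y implied by an estimate S.
    return float(v @ S @ w)

rng = np.random.default_rng(0)
n, d = 200, 50
Y = rng.standard_normal((n, d)) @ (0.1 * rng.standard_normal((d, d)))
S_hat = shrinkage_covariance(Y, W=0.7)    # W would normally be estimated
v, w = np.eye(d)[0], np.eye(d)[1]
print(projected_covariance(S_hat, v, w))

In the paper the weight is itself estimated from the data so as to be optimal with respect to the scaled norms introduced earlier; the sketch fixes it only to make the algebra of the combined estimator and of the projected covariances explicit.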
| 10 |
genetic algorithm for solving simple mathematical equality problem denny hermawanto indonesian institute of sciences lipi indonesia mail abstract this paper explains genetic algorithm for novice in this field basic philosophy of genetic algorithm and its flowchart are described step by step numerical computation of genetic algorithm for solving simple mathematical equality problem will be briefly explained basic philosophy genetic algorithm developed by goldberg was inspired by darwin theory of evolution which states that the survival of an organism is affected by rule the strongest species that survives darwin also stated that the survival of an organism can be maintained through the process of reproduction crossover and mutation darwin concept of evolution is then adapted to computational algorithm to find solution to a problem called objective function in natural fashion a solution generated by genetic algorithm is called a chromosome while collection of chromosome is referred as a population a chromosome is composed from genes and its value can be either numerical binary symbols or characters depending on the problem want to be solved these chromosomes will undergo a process called fitness function to measure the suitability of solution generated by ga with problem some chromosomes in population will mate through process called crossover thus producing new chromosomes named offspring which its genes composition are the combination of their parent in a generation a few chromosomes will also mutation in their gene the number of chromosomes which will undergo crossover and mutation is controlled by crossover rate and mutation rate value chromosome in the population that will maintain for the next generation will be selected based on darwinian evolution rule the chromosome which has higher fitness value will have greater probability of being selected again in the next generation after several generations the chromosome value will converges to a certain value which is the best solution for the problem the algorithm in the genetic algorithm process is as follows step determine the number of chromosomes generation and mutation rate and crossover rate value step generate number of the population and the initialization value of the genes with a random value step process steps until the number of generations is met step evaluation of fitness value of chromosomes by calculating objective function step chromosomes selection step crossover step mutation step solution best chromosomes the flowchart of algorithm can be seen in figure ith population chromosome chromosome solutions encoding chromosome chromosome evaluation selection next generation roulette wheel crossover mutation end n y best chromosome decoding best solution figure genetic algorithm flowchart numerical example here are examples of applications that use genetic algorithms to solve the problem of combination suppose there is equality a genetic algorithm will be used to find the value of a b c and d that satisfy the above equation first we should formulate the objective function for this problem the objective is minimizing the value of function f x where f x a since there are four variables in the equation namely a b c and d we can compose the chromosome as follow to speed up the computation we can restrict that the values of variables a b c and d are integers between and a b c d step initialization for example we define the number of chromosomes in population are then we generate random value of gene a b c d for chromosomes chromosome 
a b c d chromosome a b c d chromosome a b c d chromosome a b c d chromosome a b c d chromosome a b c d step evaluation we compute the objective function value for each chromosome produced in initialization step abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs step selection the fittest chromosomes have higher probability to be selected for the next generation to compute fitness probability we must compute the fitness of each chromosome to avoid divide by zero problem the value of is added by fitness fitness fitness fitness fitness fitness total the probability for each chromosomes is formulated by p i fitness i total p p p p p p from the probabilities above we can see that chromosome that has the highest fitness this chromosome has highest probability to be selected for next generation chromosomes for the selection process we use roulette wheel for that we should compute the cumulative probability values c c c c c c having calculated the cumulative probability of selection process using can be done the process is to generate random number r in the range as follows r r r r r r if random number r is greater than c and smaller than c then select chromosome as a chromosome in the new population for next generation newchromosome chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome chromosomes in the population thus became chromosome chromosome chromosome chromosome chromosome chromosome in this example we use point randomly select a position in the parent chromosome then exchanging parent chromosome which will mate is randomly selected and the number of mate chromosomes is controlled using parameters for the crossover process is as follows begin while k population do r k random if r k then select chromosome k as parent end k k end end chromosome k will be selected as a parent if r k suppose we set that the crossover rate is then chromosome number k will be selected for crossover if random generated value for chromosome k below the process is as follows first we generate a random number r as the number of population r r r r r r for random number r above parents are chromosome chromosome and chromosome will be selected for crossover chromosome chromosome chromosome chromosome chromosome chromosome after chromosome selection the next process is determining the position of the crossover point this is done by generating random numbers between to length of chromosome in this case generated random numbers should be between and after we get the crossover point parents chromosome will be cut at crossover point and its gens will be interchanged for example we generated random number and we get c c c then for first crossover second crossover and third crossover parent s gens will be cut at gen number gen number and gen number respectively chromosome chromosome chromosome chromosome chromosome chromosome chromosome chromosome chromosome thus chromosome population after experiencing a crossover process chromosome chromosome chromosome chromosome chromosome chromosome step mutation number of chromosomes that have mutations in a population is determined by the parameter mutation process is done by replacing the gen at random position with a new value the process is as follows first we must calculate the total length of gen in the population in this case the total length of gen is number of population mutation process is done by generating a random integer between and to if generated random number is 
smaller than variable then marked the position of gen in chromosomes suppose we define it is expected that of in the population that will be mutated number of mutations suppose generation of random number yield and then the chromosome which have mutation are chromosome number gen number and chromosome gen number the value of mutated gens at mutation point is replaced by random number between suppose generated random number are and then chromosome composition after mutation are chromosome chromosome chromosome chromosome chromosome chromosome finishing mutation process then we have one iteration or one generation of the genetic algorithm we can now evaluate the objective function after one generation chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs from the evaluation of new chromosome we can see that the objective function is decreasing this means that we have better chromosome or solution compared with previous chromosome generation new chromosomes for next iteration are chromosome chromosome chromosome chromosome chromosome chromosome these new chromosomes will undergo the same process as the previous generation of chromosomes such as evaluation selection crossover and mutation and at the end it produce new generation of chromosome for the next iteration this process will be repeated until a predetermined number of generations for this example after running generations best chromosome is obtained chromosome this means that a b c d if we use the number in the problem equation a we can see that the value of variable a b c and d generated by genetic algorithm can satisfy that equality reference mitsuo gen runwei cheng genetic algorithms and engineering design john wiley sons
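The numerical example above traces one full generation by hand; the short Python sketch below mirrors the same steps in order: random initialization, evaluation of the objective, fitness 1/(1 + f), roulette-wheel selection via cumulative probabilities, one-point crossover on parents picked with the crossover rate, and gene-wise mutation applied to the total gene count. Because the target equation and the exact rate values are not fully legible in the text above, the objective a + 2b + 3c + 4d = 30, the gene range 0..30, the crossover rate 0.25 and the mutation rate 0.1 used here are assumptions chosen to match the apparent setup of the tutorial.

import random

# Assumed setup (the exact numbers are not legible above): minimize
# f = |(a + 2b + 3c + 4d) - 30| with integer genes in 0..30, a population
# of 6 chromosomes, crossover rate 0.25 and mutation rate 0.1.
POP_SIZE, N_GENES = 6, 4
CROSSOVER_RATE, MUTATION_RATE = 0.25, 0.10

def objective(ch):
    a, b, c, d = ch
    return abs((a + 2 * b + 3 * c + 4 * d) - 30)

def fitness(ch):
    return 1.0 / (1.0 + objective(ch))        # +1 avoids division by zero

def roulette_select(pop):
    fits = [fitness(ch) for ch in pop]
    total = sum(fits)
    probs = [f / total for f in fits]
    selected = []
    for _ in pop:                             # one spin of the wheel per slot
        r, cum = random.random(), 0.0
        for ch, p in zip(pop, probs):
            cum += p
            if r <= cum:
                selected.append(list(ch))
                break
        else:
            selected.append(list(pop[-1]))    # guard against rounding drift
    return selected

def crossover(pop):
    parents = [i for i in range(len(pop)) if random.random() < CROSSOVER_RATE]
    originals = [list(ch) for ch in pop]
    for k, i in enumerate(parents):
        if len(parents) < 2:
            break
        j = parents[(k + 1) % len(parents)]   # mate with the next selected parent
        cut = random.randint(1, N_GENES - 1)  # one-point crossover position
        pop[i] = originals[i][:cut] + originals[j][cut:]
    return pop

def mutate(pop):
    total_genes = POP_SIZE * N_GENES
    for _ in range(int(MUTATION_RATE * total_genes)):
        pos = random.randrange(total_genes)   # pick a gene position at random
        pop[pos // N_GENES][pos % N_GENES] = random.randint(0, 30)
    return pop

population = [[random.randint(0, 30) for _ in range(N_GENES)] for _ in range(POP_SIZE)]
for _ in range(50):
    population = mutate(crossover(roulette_select(population)))
best = min(population, key=objective)
print("best chromosome:", best, "objective:", objective(best))

As in the hand computation, one pass of roulette_select, crossover and mutate constitutes one generation, and repeating it drives the objective toward zero, at which point the best chromosome satisfies the assumed equality.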
| 9 |
sep a family of two generator groups donghi lee and makoto sakuma abstract we construct groups gm m where each gm has a specific presentation gm ha b which satisfies small cancellation conditions c and t here urm i is the single relator of the upper presentation of the link group of slope rm i where m m m and rm i m m i hmi m m in continued fraction expansion for every integer i introduction recall that a group g is called hopfian if every epimorphism g g is an automorphism the property of finitely generated groups has a close connection with the finiteness in fact the classical work due to mal cev shows that every finitely generated group is finite one of the hardest open problems about hyperbolic groups is whether or not every hyperbolic group is residually finite an important progress on this problem was given by sela asserting that every hyperbolic group is hopfian in osin proved that this problem is equivalent to the question on whether or not a group g is residually finite if g is hyperbolic relative to a finite collection of residually finite subgroups the notion of relatively hyperbolic groups is an important generalization of hyperbolic groups in geometric group theory originally introduced by gromov cf motivating examples for this generalization include the fundamental groups of hyperbolic manifolds of finite volume in particular every link complement except for a torus link is a hyperbolic manifold with cusps so its fundamental group that is the link group is hyperbolic relative to its peripheral subgroups although it is not a hyperbolic group it is known by groves that a finitely generated group is hopfian if it is hyperbolic relative to free abelian subgroups it is also proved by reinfeldt and weidmann that every hyperbolic group possibly with torsion is hopfian in addition based on this result coulon and guirardel mathematics subject classification primary the first author was supported by basic science research program through the national research foundation of korea nrf funded by the ministry of education science and technology the second author was supported by jsps proved that every lacunary hyperbolic group which is characterized as a direct limit of hyperbolic groups with a certain radii condition is also hopfian as for small cancellation groups it is known that if a group has a finite presentation which satisfies small cancellation conditions either c or both c and t then it is hyperbolic see wise also proved that every finite c cancellation presentation defines a residually finite group historically not many have been known examples of finitely generated nonhopfian groups with specific presentations the earliest such example was found by neumann in as follows ha b where ei bi bi for every integer i soon after the first group with finite presentation was discovered by higman as follows ha s t as at i also a group with the simplest presentation up to now was produced by baumslag and solitar as follows ha t t i many other groups with specific finite presentations have been obtained by generalizing higman s group or s group see for instance another notable group was obtained by ivanov and storozhev in they constructed a family of finitely generated but not finitely presented relatively free groups with direct limits of hyperbolic groups although the defining relations of their group presentations are not explicitly described in terms of generators motivated by this background we construct groups by using hyperbolic link groups in more detail we construct a family of 
groups each of which has the form g ha b satisfying small cancellation conditions c and t where uri is the single relator of the upper presentation of the link group of the link of slope ri for every i here the rational numbers ri may be parametrized by i and there is an explicit formula to express uri in terms of a and b to parametrize the rational numbers ri we express ri in continued fraction expansion note that every rational number s has a unique continued fraction expansion such that mk mk where k mk k and mk unless k the main result of the present paper is the following whose proof is contained in in section theorem let and let ri i for every integer i then the group presentation g ha b satisfies small cancellation conditions c and t and g is here the symbol i represents i successive s if i whereas means that does not occur in that place so that remark once we allow the components of a continued fraction expansion to be meaning that the two integers immediately before and after are added to form one component ri s in theorem can be parametrized including i as ri for every i if we express the rational number ri in theorem as qi where pi and qi are relatively prime positive integers then see section a simple computation shows that the inequality holds for every i so that the length of the word uri satisfies the inequality c c for every integer i where c by looking at the proof of theorem in section it is not hard to see that a similar result holds not only for but also for m m m with m being any integer greater than thus we only state its general form without a detailed proof theorem suppose that m is an integer with m let m m m and let ri m m i hmi m m for every integer i then the group presentation g ha b satisfies small cancellation conditions c and t and g is the present paper is organized as follows in section we recall the upper presentation of a link group and basic facts established in concerning the upper presentations we also recall key facts from obtained by applying small cancellation theory to the upper presentations section is devoted to the proof of the main result theorem preliminaries upper presentations of link groups we recall some notation in the conway sphere s is the punctured sphere which is obtained as the quotient of by the group generated by the around the points in for each s q let be the simple loop in s obtained as the projection of a line in of slope we call s the slope of the simple loop for each r the link k r of slope r is the sum of the rational tangle b t of slope and the rational tangle b t r of slope recall that b t and b t r are identified with s so that and bound disks in b t and b t r respectively by s theorem the link group g k r s k r is obtained as follows g k r s k r s ii b t ii let a b be the standard meridian generator pair of b t as described in section then b t is identified with the free group f a b with basis a b for the rational number r where p and q are relatively prime positive integers let ur be the word in a b obtained as follows set where is the greatest integer not exceeding x if p is odd then q b where if p is even then where b t is represented by the simple loop and we then ur f a b obtain the following and presentation of a link groups g k r b t ii ha b ur i this presentation is called the upper presentation of a link group basic facts concerning the upper presentations throughout this paper a cyclic word is defined to be the set of all cyclic permutations of a cyclically reduced word by v we denote the cyclic word 
associated with a cyclically reduced word also the symbol denotes the equality between two words or between two cyclic words now we recall definitions and basic facts from which are needed in the proof of theorem in section definition let v be a reduced word in a b decompose v into v vt where for each i t vi is a positive negative subword that is all letters in vi have positive negative exponents and is a negative positive subword then the sequence of positive integers s v is called the of let v be a cyclically reduced word in a b decompose the cyclic word v into v vt where vi is a positive negative subword and is a negative positive subword taking subindices modulo t then the cyclic sequence of positive integers cs v is called the of v here the double parentheses denote that the sequence is considered modulo cyclic permutations definition for a rational number r with r let ur be the word defined in the beginning of this section then the symbol cs r denotes the cs ur of ur which is called the of slope a reduced word w in a b is said to be alternating if and appear in w alternately to be precise neither nor appears in also a cyclically reduced word w in a b is said to be cyclically alternating all the cyclic permutations of w are alternating in particular ur is a cyclically alternating word in a b note that every alternating word w in a b is determined by the sequence s w and the initial letter with exponent of note also that if w is a cyclically alternating word in a b such that cs w cs r then either w ur or w r as cyclic words in the remainder of this section we suppose that r is a rational number with r and write r as a continued fraction expansion r mk where k mk k and mk unless k note from that if k then some properties of cs r differ according to or for brevity we write m for lemma proposition for the rational number r mk satisfying that if k the following hold suppose k r then cs r m m suppose k then each term of cs r is either m or m moreover no two consecutive terms of cs r can be m m so there is a cyclic sequence of positive integers ts such that cs r m hmi m hmi m ts hmi here the symbol ti hmi represents ti successive m s definition if k the symbol ct r denotes the cyclic sequence ts in lemma which is called the ct of slope lemma proposition and corollary for the rational number r mk with k and let r be the rational number defined as r mk then we have ct r cs r lemma proposition for the rational number r mk the cyclic sequence cs r has a decomposition which satisfies the following each si is symmetric the sequence obtained from si by reversing the order is equal to si here is empty if k each si occurs only twice in the cyclic sequence cs r the subsequence begins and ends with m the subsequence begins and ends with lemma proof of proposition for the rational number r mk with k and let r be the rational number defined as in lemma also let cs r and cs r be the decompositions described in lemma then the following hold if k then and m hmi if k then and m hmi m m hmi m hmi m hmi hmi m hmi the following lemma is useful in the proof of lemma lemma for two distinct rational numbers r mk and s lt assume that i m is a positive integer ii mi and lj are integers greater than for every i and j iii k t and k t and iv if k t then while if k t then let r and be the rational numbers defined as in lemma also let cs r and cs r be the decompositions described in lemma suppose that cs s contains or as a subsequence then cs contains or as a subsequence in the above lemma and throughout this paper we 
mean by a subsequence a subsequence without leap namely a sequence al is called a subsequence of a cyclic sequence if there is a sequence bn representing the cyclic sequence such that l n and ai bi for i proof first suppose that cs s contains as a subsequence by lemma cs s contains m hmi m m hmi m as a subsequence where then clearly cs ct s contains that is as a subsequence so we are done next suppose that cs s contains as a subsequence again by lemma cs s contains hmi m hmi hmi m hmi as a subsequence where then cs ct s contains as a subsequence where in the reminder of the proof we show that so that cs ct s contains as a subsequence to this end note that since r mk by lemma also since lt cs ct s consists of and by lemma hence each of and is either or suppose first that k then by the assumption iv and thus the only possibility is thus we have suppose next that k then again by the assumption iv note that k because t by the assumption iii thus we can see by using lemma and the assumption mi for every i that contains m hmi m as a subsequence this implies that cs ct s contains a term since the only possibility thus we again have completing the proof of lemma small cancellation theory applied to the upper presentations a subset r of the free group f a b is called symmetrized if all elements of r are cyclically reduced and for each w r all cyclic permutations of w and also belong to definition suppose that r is a symmetrized subset of f a b a nonempty word v is called a piece with respect to r if there exist distinct r such that and the small cancellation conditions c p and t q where p and q are integers such that p and q are defined as follows see condition c p if w r is a product of n pieces then n condition t q for wn r with no successive elements wi an inverse pair i mod n if n q then at least one of the products wn wn is freely reduced without cancellation the following proposition enables us to apply small cancellation theory to the upper presentation ha b ur i of g k r proposition theorem let r be a rational number such that r and let r be the symmetrized subset of f a b generated by the single relator ur of the group presentation g k r ha b ur i then r satisfies c and t this proposition follows from the following characterization of pieces which in turn is proved by using lemma lemma corollary let r and r be as in proposition then a subword w of the cyclic word r is a piece with respect to r if and only if s w contains neither nor with as a subsequence proof of theorem in this section for brevity of notation we sometimes write for for a letter or a word x for a quotient group h of the free group f a b and two elements and of f a b the symbol means the equality in the group for we have by using lemma cs cs let ha b also let x a be the alternating word in a b such that s x and let f f a b f a b be the homomorphism defined by f a and f b lemma under the foregoing notation let f a b be the composition of f and the canonical surjection f a b then is onto proof since b it suffices to show that a is contained in the image of let w a be the alternating word in a b such that s w then f w here since x a and a are alternating words in a b we see that f w a is also an alternating word in a b with s f w s s s bx s s s bx s bx s s x s bx since s x and s we have s s bx s and s bx so that s f w letting a a and b be the cyclically alternating words in a b such that s s s we see that f w moreover for each i since cs vi cs vi as cyclic words by lemma which implies that vi hence f w and thus a is contained 
in the image of as required at this point we set up the following notation which will be used at the end of the proofs of lemmas and notation suppose that v is an alternating word in a b such that there is a sequence ts of positive integers satisfying s v ts where is or for i then the symbol t v denotes the sequence ts suppose that v is a cyclically alternating word in a b such that there is a cyclic sequence ts of positive integers satisfying cs v ts then the symbol ct v denotes the cyclic sequence ts in particular by lemma if v ur for some r mk with then ct ur ct r cs r where r mk suppose that v is an alternating word in a b such that there is a sequence hp of positive integers satisfying t v hp where t v is defined as in and is or for i then the symbol v v denotes the sequence hp suppose that v is a cyclically alternating word in a b such that there is a cyclic sequence hp of positive integers satisfying ct v hp where ct v is defined as in then the symbol cv v denotes the cyclic sequence hp in particular by lemma if v ur for some r mk with then cv ur ct r cs r where r mk and r mk lemma under the foregoing notation f proof recall that cs cs clearly the cyclic word has six positive or negative subwords of length cutting in the middle of such subwords we may write the cyclic word as a product where put wn f vn for every n namely bx and it then follows that f f claim where a is an alternating word in a b with s proof of claim recall that x a and a are alternating words in a b such that s x and s it is not hard to see that s s s s bx s bx letting a and a be alternating words in a b such that s and s clearly here since cs cs and so we finally have as required claim where a is the alternating word in a b with s proof of claim as in the proof of claim we have s s s s x s bx letting a and b be alternating words in a b such that s and s clearly here since cs cs and so we finally have as required by claims and it follows that bx so that f moreover we see that a a b a a and b are alternating words in a b such that s s s s s s s s bx s s s s s s s s s this implies that cs s s following notation we also have t t t t t t and that ct t t we furthermore have v v v v v v and cv v v since is the corresponding to the rational number we see that r for some rational number r with r for this rational number r since cs r ct r ct consists of and we have r furthermore since cs r cs consists of and we finally have r which equals in the statement of the theorem this completes the proof of lemma lemma under the foregoing notation f uri for every i proof fix i then ri mk with by lemma cs ri consists of and without moreover since mk by lemmas and ct ri cs consists of and which implies that the number of occurrences of s between any two s is one or two claim by cutting the cyclic word uri in the middle of each positive or negative subwords of length we may write uri as a product vi ki where each vi j is one of the following proof of claim note that for every n vn is an alternating word in a b such that s vn kn tn where tn and kn consider the graph as in figure where the vertex set is equal to and each edge is endowed with one or two orientations observe that if vn and vm are the initial and terminal vertices respectively of an oriented edge of the graph then the word vn vm is an alternating word such that s vn vm kn tn tm namely the terminal subword of vn corresponding to the last component of s vn and the initial subword of vm corresponding to the first component km of s vm are amalgamated into a maximal positive or 
negative alternating subword of vn vm of length moreover the weight tn resp tm is or according to whether the vertex vn resp vm has valence or thus if vnp where vnj is a closed edge path in the graph which is compatible with the specified edge orientations a compatible closed edge path in brief namely if vnj and are the initial and terminal vertices of an oriented edge of the graph for each j p where the indices are considered modulo p then the cyclically reduced word vnp is a cyclically alternating word with cssequence tnp since the weight tnj is or according to whether the vertex vnj has valence or we see that for any compatible closed edge path the ct tnp of the corresponding cyclically alternating word consists of only and and that it has isolated s moreover for any such cyclic sequence we can construct a compatible closed edge path such that the ct of the corresponding cyclically alternating word is equal to the given cyclic sequence in particular we can find a compatible closed edge path such that the ct of the corresponding cyclically alternating word w is equal to ct uri this implies that cs w cs uri figure proof of claim in the proof of lemma hence w ri as cyclic words by lemma this completes the proof of claim putting bx and we obviously have f vn wn for every n so that f uri f vi ki wi ki where each wi j recall from claims and in the proof of lemma that where a is the alternating word in a b with s and that where a is the alternating word in a b with s it follows that bx then we have f uri wi k i w w moreover where each wi j b b a a b b a a are alternating words in a b such that s s s s s s s s s s bx s and s s s s s s and s s observe in the graph in figure that if vn and vm are the initial and terminal is an alternating word such vertices respectively of an oriented edge then wm that s wn wm s wn s wm which consists of and and moreover the components are isolated this observation yields that s s wi k cs wi k i i ct wi k t t wi k i i here t t t t t t t t this also yields that cv wi k v v wi k i i where v if n and v otherwise define n vn to be the number of positive or negative proper subwords of vn of length for each n here by a proper subword of vn we mean a subword which lies in the interior of vn then we see that v n vn for each n since vi ki is a product being cut in the middle of each positive or negative subwords of length we also see that n n vi ki ct ri cs n v for each j k with mk since v wi j i j i w n v n v cv i ki is the corresponding i ki to the rational number mk hence f uri wi k r i for some rational number r with r mk for this rational number w consists of and we have r r since cs r ct r ct i ki w consists of and mk furthermore since cs r cs i ki we finally have r mk which equals in the statement of the theorem this completes the proof of lemma since g ha b lemmas imply that f descends to an epimorphism g now we show that is not an isomorphism let s then cs us cs s so that us as in the proof of lemma letting w a be an alternating word in a b such that s w we have us lemma we have us proof clearly f us f w w bf b here since w from the proof of lemma we have f us where is a cyclically alternating word in a b such that cs s s s s bx s babab which equals cs this implies that f us namely us as required lemma under the foregoing notation let r be the symmetrized subset of f a b generated by the set of relators uri i of the upper presentation g ha b then r satisfies c and t proof since every element in r is cyclically alternating r clearly satisfies t to show that r 
satisfies c we begin by setting some notation recall from lemma that for every rational number r with r cs r has a decomposition depending on for clarity we write r r r r for this decomposition on the other hand if r is a rational number with r mk with k and mk then the symbol r n denotes the rational number with continued fraction expansion mk for each n k claim for any two integers i j with i j the cyclic word urj does not contain a subword corresponding to ri or ri with proof of claim suppose on the contrary that there are some i j such that the cyclic word urj contains a subword corresponding to ri or ri we first show that this assumption implies that cs rj contains ri or ri as a subsequence if urj contains a subword corresponding to ri then clearly cs rj contains ri as a subsequence so assume that urj contains a subword corresponding to ri then cs rj contains st as a subsequence where ri st since the continued fraction expansions of both ri and rj begin with we see that ri begins and ends with by lemma and that cs rj also consists of and by lemma hence we must have and therefore cs rj contains ri as a subsequence thus we have proved that cs rj contains ri or ri as a subsequence note that the lengths of the continued fraction expansions of ri and rj are and n j respectively hence we can apply lemma successively to see that cs rj n n contains ri or ri as a subsequence for every n min i j since i j there are two cases case j i recall that ri is equal to or ri i according to whether i or i so we have ri m here m if i and m otherwise since j i we can observe that rj has a continued fraction expansion of the form m nk where k and each nt is or consists of and m the cyclic sequence and cs rj since ri cs rj can not contain ri must contain ri m as a subsequence hence cs rj as a subsequence but since m m does not occur in cs rj m m by lemma this implies that contradiction rj m nk with by lemma since ri ri can not occur in case i j as in case we can observe that rj if j and m otherwise and that ri cs rj a m where m has a continued fraction expansion of the form m nk where k and each nt is or then both ri and ri contain a term by lemma but since cs rj consists of only m and m this is impossible by claim we see that the assertion in lemma holds even if r is for any i and the symmetrized subset r in the lemma is replaced by ri enlarged to be the set in the current setting namely r is the symmetrized subset of f a b generated by the set of relators uri i of the group presentation g ha b to be precise the following hold claim for each i a subword w of the cyclic word ri is a piece with respect to the symmetrized subset r in lemma if and only if s w contains neither ri nor ri with as a subsequence by using claim we can see as in proof of corollary that each cyclic word ri is not a product of less than pieces with respect to hence r satisfies c lemma under the foregoing notation us proof suppose on the contrary that us then there is a reduced van kampen diagram over g ha b such that us see since is a by lemma contains a subword of some ri which is a product of pieces with respect to the symmetrized subset r in lemma see section this implies that cs must contain a term which is a contradiction to the fact cs cs us cs s consists of only and lemma together with lemma shows that is an epimorphism of g but not an isomorphism of consequently g is and the proof of theorem is now completed references baumslag and solitar some groups bull amer math soc coulon and guirardel automorphisms and endomorphisms of 
lacunary hyperbolic groups bowditch relatively hyperbolic groups int algebra comput farb relatively hyperbolic groups geom funct anal gromov hyperbolic groups essays in group theory gersten ed msri publ springer groves limit groups for relatively hyperbolic groups ii diagrams geom topol higman a finitely related group with an isomorphic proper factor group london math soc ivanov and storozhev relatively free groups geom dedicata lee and sakuma epimorphisms between link groups homotopically trivial simple loops on spheres proc london math soc lee and sakuma homotopically equivalent simple loops on spheres in link complements i geom dedicata lyndon and schupp combinatorial group theory berlin mal cev on the faithful representation of infinite groups by matrices mat sb neumann a group isomorphic to a proper factor group london math soc osin peripheral fillings of relatively hyperbolic groups invent math no osin relatively hyperbolic groups intrinsic geometry algebraic properties and algorithmic problems memoirs amer math soc no pp reinfeldt limit groups and diagrams for hyperbolic groups phd thesis university reinfeldt and weidmann diagrams for hyperbolic groups preprint updated sapir and wise ascending hnn extensions of residually finite groups can be nonhopfian and can have very few finite quotients j pure appl algebra sela endomorphisms of hyperbolic groups i the hopf property topology strebel appendix small cancellation groups in sur les groupes hyperboliques d mikhael gromov papers from the swiss seminar on hyperbolic groups held in bern ghys and de la harpe editors progr vol boston boston ma wise a automatic group algebra wise research announcement the structure of groups with a quasiconvex hierarchy electron res announc math sci department of mathematics pusan national university pusan korea address donghi department of mathematics graduate school of science hiroshima university japan address sakuma
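A note on notation: the bracket expressions for the slopes in the statements above rely on a continued fraction convention whose displayed form has not survived in this text. The convention normally used in this line of work, and presumably the one intended here, reads the expansion of a slope r as

\[
  r \;=\; [m_1, m_2, \ldots, m_k] \;:=\;
  \cfrac{1}{m_1 + \cfrac{1}{m_2 + \cfrac{1}{\ddots \; + \cfrac{1}{m_k}}}} ,
\]

so that, for instance, [2, 3] = 1/(2 + 1/3) = 3/7. Whether the paper uses exactly this normalization should be checked against the original source, since the surviving text does not show the displayed formulas.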
| 4 |
transponder configuration for elastic optical networks may mohammad hadi member ieee and mohammad reza pakravan member ieee propose an procedure for transponder configuration in elastic optical networks in which quality of service and physical constraints are guaranteed and joint optimization of transmit optical power temporal spatial and spectral variables are addressed we use geometric convexification techniques to provide convex representations for quality of service transponder power consumption and transponder configuration problem simulation results show that our convex formulation is considerably faster than its nonlinear counterpart and its ability to optimize transmit optical power reduces total transponder power consumption up to we also analyze the effect of mode coupling and number of available modes on power consumption of different network elements optimization green communication elastic optical networks fibers mode coupling i ntroduction d temporally spectrally and spatially elastic optical network eon has been widely acknowledged as the next generation high capacity transport system and the optical society has focused on its architecture and network resource allocation techniques eons can provide an network configuration by adaptive resource allocation according to the communication demands and physical conditions higher energy efficiency of orthogonal frequency division multiplex ofdm signaling has been reported in which nominates ofdm as the main technology for resource provisioning over resources of time and spectrum on the other hand enabling technologies such as fibers fmfs and fibers mcfs have been used to increase network capacity and efficiency through resource allocation over spatial dimension although many variants of algorithms have been proposed for resource allocation in eons joint assignment of temporal spectral and spatial resources in eons needs much research and study among the available works on eons a few of them have focused on which is a fundamental requirement of the future optical networks moreover the available approaches do not consider transmit optical power as an optimization variable which results in inefficient network provisioning flexible resource allocation is an problem and it is usually decomposed into several with lower complexity following this approach we decompose the resource allocation problem into routing and ordering ros and transponder configuration subproblem tcs and mainly focus on tcs which is more complex and we consider fmf because it has simple amplifier structure easier fusion process lower nonlinear effects and lower manufacturing cost compared to other multiplexed sdm optical fibers in tcs we optimally configure transponder parameters such as modulation level number of coding rate transmit optical power number of active modes and central frequency such that total transponders power consumption is minimized while quality of service qos and physical constraints are met unlike the conventional approach we provide convex expressions for transponder power consumption and optical signal to noise ratio osnr as an indicator of qos we then use the results to formulate tcs as a convex optimization problem which can efficiently be solved using fast convex optimization algorithms we consider transmit optical power as an optimization variable and show that it has an important impact on total transponder power consumption simulation results show that our convex formulation can be solved almost times faster than its nonlinear program 
minlp counterpart optimizing transmit optical power also improves total transponder power consumption by a factor of for european optical network with aggregate traffic tbps we analyze the effect of mode coupling on power consumption of the different network elements as simulation results show total network power consumption can be reduced more than using stronglycoupled fmfs rather than ones numerical outcomes also demonstrate that increasing the number of available modes in fmfs provides a between fft and dsp power consumption such that the overall transponder power consumption is a descending function of the number of available modes ii s ystem m odel consider a coherent optical communication network characterized by topology graph g v l where v and l are the sets of optical nodes and directional optical fmf links respectively the optical fmfs have m modes and gridless bandwidth q is the set of connection requests and ql shows the set of requests sharing fmf l on their routes each request q is assigned a contiguous bandwidth around carrier frequency and modulates mq modes of its available m modes the assigned contiguous bandwidth includes ofdm with space of f so to have a feasible mimo processing the remaining unused modes of a request can not be shared among others we assume that the assigned bandwidths are continuous over their routes to remove the high cost of conversion request q passes nq fiber spans along its path and has nq i shared spans with request i q each fmf span has fixed length of lspn and an mode demux mimo pd adc fft eq sm dec pd adc fft eq sm dec pd adc fft eq sm dec mode mux lo optical mixer lo optical mixer ld dac ifft sm enc ld dac ifft sm enc ld dac ifft sm enc fig block diagram of a pair of transmit and receive transponders with available modes optical amplifier to compensate for its attenuation there are modulation levels c and coding rates r where each pair of c r requires minimum osnr c r to get a ber value of each transponder is given modulation level cq coding rate rq and injects optical power pq to each active mode of each polarization chromatic dispersion and mode coupling signal broadenings p are respectively proportional to nq and q p nq with coefficients flspn and l lspn where is chromatic dispersion factor and lsec is the product of rms uncoupled group delay spread and section length transponders add a sufficient cyclic prefix to each ofmd symbol to resolve the signal broadening induced by mode coupling and chromatic dispersion transponders have maximum information bit rate there is also a guard band g between any two adjacent requests on a link considering the architecture of fig the power consumption of each pair of transmit and receive transponders pq can be calculated as follows pq ptrb mq bq pf f t pdsp where ptrb is transmit and receive transponder bias term pedc is the scaling coefficient of encoder and decoder power consumptions pf f t denotes the power consumption for a two point fft operation and pdsp is the power consumption scaling coefficient of the receiver dsp and mimo operations to have a green eon we need a resource allocation algorithm to determine the values of system model variables such that the transponders consume the minimum power while physical constrains are satisfied and desired levels of osnr are guaranteed in general such a problem is modeled as an nphard minlp optimization problem to simplify the problem and provide a solution the resource allocation problem is usually decomposed into two ros where the routing and ordering 
of requests on each link are defined and tcs where transponders are configured usually the search for a near optimal solution involves iterations between these two to save this iteration time it is of great interest to hold the running time of each at its minimum value in this work we mainly focus on tcs which is the most and formulate it as a convex problem to benefit from fast convex optimization algorithms for a complete study of ros one can refer to iii t ransponder c onfiguartion p roblem a minlp formulation for tcs is as follows min c b r p m x pq q j g l l j b q mq rq cq p q rq f nq q where c b r p m and are variable vectors of transponder configuration parameters modulation level number of subcarriers coding rate transmit optical power number of active modes and central frequency mba shows the set of integer numbers a the goal is to minimize the total transponder power consumption where pq is obtained using constraint is the qos constraint that forces osnr to be greater than its required minimum threshold is a nonlinear function of b p m and while the value of is related to rq and cq constraint is nonoverlappingguard constraint that prevents two requests from sharing the same frequency spectrum j is a function that shows which request occupies assigned spectrum bandwidth on link l and its values are determined by solving ros constraint holds all assigned central frequencies within the acceptable range of the fiber spectrum the last constraint guarantees that the transponder can convey the input traffic rate rq in which wasted cyclic prefix times are considered generally this problem is a complex minlp which is and can not easily be solved in a reasonable time therefore we use geometric convexification techniques to convert this minlp to a convex optimization problem and then use relaxation method to solve it to have a convex problem we first provide a generalized posynomial expression for the optimization and then define a variable change to convexify the problem a posynomial expression for osnr of a request in eons has been proposed in we simply consider each active mode as an independent source of nonlinearity and incoherently add all the interferences therefore the extended version of the posynomial osnr expression is p mqq pq p q pi nq i i mi m i where and nsp is the spontaneous emission factor is the light frequency h is planck s constant is attenuation coefficient is dispersion factor and is nonlinear constant furthermore dq i is the distance between carrier frequencies and and equals to dq i we use cq for posynomial curve fitting of osnr threshold values where following the same approach as we arrive at this new representation of the optimization problem x min c b r p m t d pq k x q i q nq i h bq nq mq q x nq i i i q i q j l j g l l l bq b q q f rq q mq q rq c q p nq rq rq cq mq q cq tq q i j j i l i j i j l l l ignoring constraints and the penalty term of the goal function the above formulation is equivalent geometric program of the previous minlp in which expressions and the mentioned posynomial curve fitting have been used for qos constraint constraints and and the penalty term are added to guarantee the implicit equality of dq i constraint is also needed to convert the generalized posynomial qos constraint to a valid geometric expression as explained in now consider the following variable change x ex x r x r applying this variable change to the goal function which is the most difficult part of the variable change we have x ptrb emq pf f t pdsp x i q nq i clearly i emq 
and are convex over variable domain we use expression to provide a convex approximation for the remaining term bq the approximation relative error is less than for practical values of mq and bq consequently function which is a nonnegative weighted sum of convex functions is also convex the same statement without any approximation can be applied to show the convexity of the constraints under variable change of for some constraints we need to apply an extra log to both sides of the inequality to solve this problem a relaxed continuous version of the proposed convex formulation is iteratively optimized in a loop at each epoch the continuous convex optimization is solved and obtained values for relaxed integer variables are rounded by a given precision then we fix the acceptable rounded variables and solve the relaxed continuous convex problem again the loop continues untill all the integer variables have valid values the number of iterations is at most equal to in practice is usually less than the number of integer variables furthermore a simpler problem should be solved as the number of iteration increases because some of the integer variables are fixed during each loop iv n umerical r esults in this section we use simulation results to demonstrate the performance of the convex formulation for tcs the european optical network is considered with the topology and traffic matrix given in simulation constant parameters are lspn km thz nsp f mhz ps fs g ghz b thz ptrb w pedc w pf f t mw pdsp mw we use matlab yalmip and cvx software packages for programming modeling and optimization the total power consumption of different network elements in terms of aggregate traffic with and without adaptive transmit optical power assignment has been reported in fig we have used the proposed approach of for fixed assignment of transmit optical power clearly for all the elements the total power consumption is approximately a linear function of aggregate traffic but the slope of the lines are lower when transmit optical powers are adaptively assigned as an example adaptive transmit optical power assignment improves total transponder power consumption by a factor of for aggregate traffic of tbps fig shows total power consumption of different network elements versus number of available modes m in fmfs the power consumption values are normalized to their corresponding values for the scenario with single mode fibers m as m increases the amount of transponder power consumption decreases but there is no considerable gain for m moreover there is a tradeoff between dsp and fft power consumption such that the overall transponder power consumption is a decreasing function of the number of available modes fig shows power consumption of different network elements in terms of aggregate traffic for and fmfs obviously total transponder power consumption is considerably reduced for fmfs in which group delay spread is proportional to square root of path lengths in comparison to fmfs in which group delay spread is proportional to path lengths this is the same as the results published in as an example improvement can be more than for aggregate traffic of tbps numerical outcomes also show that our convex formulation can be more than times faster than its nonlinear counterpart which is compatible with the results reported in c onclusion resource allocation and quality of service provisioning is the fundamental problem of green fmfbased elastic optical networks in this paper we decompose the resource allocation problem into two for 
routing and traffic ordering and transponder configuration we mainly focus on transponder configuration and provide a convex formulation in which joint optimization total total total total total total total total of temporal spectral and spatial resources along with optical transmit power are considered simulation results show that our formulation is considerably faster than its nonlinear counterpart and its ability to optimize transmit optical power can improve total transponder power consumption up to we demonstrate that there is a tradeoff between dsp and fft power consumptions as the number of modes in fmfs increases but the overall transponder power consumption is a descending function of the number of available modes we also calculate the power consumption of different network elements and show that fmfs reduce the power consumption of these elements power consumption fixed transmit power power consumption fixed transmit power dsp power consumption fixed transmit power transponder power consumption fixed transmit power power consumption adaptive transmit power power consumption adaptive transmit power dsp power consumption adaptive transmit power transponder power consumption adaptive transmit power kw r eferences aggregate tbps fig total power consumption of different network elements in terms of aggregate traffic with and without adaptive transmit optical power assignment normalized normalized normalized normalized total total total total transponder power consumption power consumption power consumption dsp power consumption number of available modes in fmfs fig normalized total power consumption of different network elements in terms of the number of available modes in fmfs total total total total total total total total power consumption fmf power consumption fmf dsp power consumption fmf transponder power consumption fmf power consumption fmf power consumption fmf dsp power consumption fmf transponder power consumption fmf kw aggregate tbps fig total power consumption of different network elements in terms of aggregate traffic for and fmfs proietti et elastic optical networking in the temporal spectral and spatial domains ieee communications magazine vol no pp khodakarami et flexible optical networks an energy efficiency perspective journal of lightwave technology vol no pp saridis et survey and evaluation of space division multiplexing from technologies to optical networks ieee communications surveys tutorials vol no pp chatterjee sarma and oki routing and spectrum allocation in elastic optical networks a tutorial ieee communications surveys tutorials vol no pp muhammad et resource allocation for multiplexing optical white box versus optical black box networking journal of lightwave technology vol no pp winzer optical transport capacity scaling through spatial multiplexing ieee photonics technology letters vol no pp yan et joint assignment of power routing and spectrum in static networks journal of lightwave technology no hadi and pakravan resource allocation for elastic optical networks using convex optimization arxiv preprint khodakarami pillai and shieh quality of service provisioning and energy minimized scheduling in software defined flexible optical networks journal of optical communications and networking vol no pp yan et resource allocation for optical networks with nonlinear channel model journal of optical communications and networking vol no pp hadi and pakravan resource allocation for elastic optical networks using geometric optimization arxiv preprint ho and kahn 
mode coupling and its impact on spatially multiplexed systems optical fiber telecommunications vi vol pp hadi and pakravan bvwxc placement in elastic optical networks ieee photonics journal no pp askarov and kahn adaptive equalization in multiplexing systems journal of lightwave technology vol no pp boyd et a tutorial on geometric programming optimization and engineering vol no gao et analytical expressions for nonlinear transmission performance of coherent optical ofdm systems with frequency guard band journal of lightwave technology vol no pp
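The iterative relax-round-fix loop described in the numerical-results section above can be sketched compactly. The snippet below is a minimal illustration only: the paper's actual transponder-configuration objective and constraints are not reproduced, so a toy convex objective stands in for them, cvxpy is used in place of the MATLAB/YALMIP/CVX toolchain mentioned in the text, and the problem size, bounds, and rounding tolerance are assumptions made for the example.

```python
import numpy as np
import cvxpy as cp

# Toy stand-in for the relaxed convex problem: minimize a convex objective over
# n variables that are meant to be integer, relax them to continuous values,
# then round acceptable values, fix them, and re-solve (as described above).
rng = np.random.default_rng(0)
n = 8                                  # number of relaxed integer variables (illustrative)
target = rng.uniform(0, 5, size=n)     # toy data; the real model uses network/traffic inputs

fixed = {}                             # index -> accepted integer value
tol = 0.05                             # rounding precision for accepting a value as integer

for _ in range(n):                     # at most n passes, as noted in the text
    x = cp.Variable(n, nonneg=True)
    constraints = [x <= 5]
    constraints += [x[i] == v for i, v in fixed.items()]   # keep earlier decisions fixed
    problem = cp.Problem(cp.Minimize(cp.sum_squares(x - target)), constraints)
    problem.solve()

    progressed = False
    for i in range(n):                 # accept relaxed values already close to an integer
        if i in fixed:
            continue
        v = float(x.value[i])
        if abs(v - round(v)) <= tol:
            fixed[i] = int(round(v))
            progressed = True
    if len(fixed) == n:
        break
    if not progressed:                 # force progress: fix the variable closest to an integer
        i = min((k for k in range(n) if k not in fixed),
                key=lambda k: abs(float(x.value[k]) - round(float(x.value[k]))))
        fixed[i] = int(round(float(x.value[i])))

print("rounded integer solution:", [fixed[i] for i in range(n)])
```

Each pass either accepts all relaxed variables that are within the rounding tolerance or forces one variable to be fixed, so the loop terminates after at most as many passes as there are integer variables, and the remaining continuous problem shrinks at every iteration, which is the behavior the text attributes to the heuristic.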
| 7 |
sep the annals of statistics vol no doi c institute of mathematical statistics moderate deviations for studentized u with applications by jinyuan and southwestern university of finance and economics university of melbourne chinese university of hong and princeton u are widely used in a broad range of applications including those in the fields of biostatistics and econometrics in this paper we establish sharp moderate deviation theorems for studentized u in a general framework including the and studentized test statistic as prototypical examples in particular a refined moderate deviation theorem with accuracy is established for the these results extend the applicability of the existing statistical methodologies from the onesample to more general nonlinear statistics applications to tribute peter was a brilliant and prolific researcher who has made enormously influential contributions to mathematical statistics and probability theory peter had extraordinary knowledge of analytic techniques that he often applied with ingenious simplicity to tackle complex statistical problems his work and service have had a profound impact on statistics and the statistical community peter was a generous mentor and friend with a warm heart and keen to help the young generation jinyuan chang and zhou are extremely grateful for the opportunity to learn from and work with peter in the last two years at the university of melbourne even in his final year he had afforded time to guide us we will always treasure the time we spent with him shao is so grateful for all the helps and supports that peter had provided during the various stages of his career peter will be dearly missed and forever remembered as our mentor and friend received june supported in part by the fundamental research funds for the central universities grant no nsfc grant no the center of statistical research at swufe and the australian research council supported by hong kong research grants council grf and supported by nih grant and a grant from the australian research council ams subject classifications primary secondary key words and phrases bootstrap false discovery rate u test multiple hypothesis testing moderate deviation studentized statistics twosample u this is an electronic reprint of the original article published by the institute of mathematical statistics in the annals of statistics vol no this reprint differs from the original in pagination and typographic detail chang shao and zhou multiple testing problems with false discovery rate control and the regularized bootstrap method are also discussed introduction the u is one of the most commonly used nonlinear and nonparametric statistics and its asymptotic theory has been well studied since the seminal paper of hoeffding u extend the scope of parametric estimation to more complex nonparametric problems and provide a general theoretical framework for statistical inference we refer to koroljuk and borovskich for a systematic presentation of the theory of u and to kowalski and tu for more recently discovered methods and contemporary applications of u applications of u can also be found in high dimensional statistical inference and estimation including the simultaneous testing of many different hypotheses feature selection and ranking the estimation of high dimensional graphical models and sparse high dimensional signal detection in the context of high dimensional hypothesis testing for example several new methods based on u have been proposed and studied in chen and qin chen zhang and zhong 
and zhong and chen moreover li et al and li zhong and zhu employed u to construct independence feature screening procedures for analyzing ultrahigh dimensional data due to heteroscedasticity the measurements across disparate subjects may differ significantly in scale for each feature to standardize for scale unknown nuisance parameters are always involved and a natural approach is to use studentized or statistics the noteworthy advantage of studentization is that compared to standardized statistics studentized ratios take heteroscedasticity into account and are more robust against data the theoretical and numerical studies in delaigle hall and jin and chang tang and wu evidence the importance of using studentized statistics in high dimensional data analysis as noted in delaigle hall and jin a careful study of the moderate deviations in the studentized ratios is indispensable to understanding the common statistical procedures used in analyzing high dimensional data further it is now known that the theory of moderate deviations for studentized statistics quantifies the accuracy of the estimated which is crucial in the study of multiple tests for controlling the false discovery rate fan hall and yao liu and shao in particular moderate deviation results can be used to investigate the robustness and accuracy properties of and critical values in multiple testing procedures however thus far most applications have been confined to fan hall and yao wang and hall delaigle hall and jin cao and kosorok it is conjectured in fan hall and yao that analogues of the theoretical properties studentized u of these statistical methodologies remain valid for other resampling methods based on studentized statistics motivated by the above applications we are attempting to develop a unified theory on moderate deviations for more general studentized nonlinear statistics in particular for u the asymptotic properties of the standardized u are extensively studied in the literature whereas significant developments are achieved in the past decade for studentized u we refer to wang jing and zhao and the references therein for bounds and edgeworth expansions the results for moderate deviations can be found in vandemaele and veraverbeke lai shao and wang and shao and zhou the results in shao and zhou paved the way for further applications of statistical methodologies using studentized u in high dimensional data analysis u are also commonly used to compare the different treatment effects of two groups such as an experimental group and a control group in scientifically controlled experiments however due to the structural complexities the theoretical properties of the u statistics have not been well studied in this paper we establish a moderate deviation theorem in a general framework for studentized u especially the and the studentized test in particular a refined moderate deviation theorem with accuracy is established for the tstatistic the paper is organized as follows in section we present the main results on moderate deviations for studentized u statistics as well as a refined result for the in section we investigate statistical applications of our theoretical results to the problem of simultaneously testing many different hypotheses based particularly on the and studentized tests section shows numerical studies a discussion is given in section all the proofs are relegated to the supplementary material chang shao and zhou moderate deviations for studentized u we use the following notation throughout this paper for two 
sequences of real numbers an and bn we write an bn if there exist two positive constants such that an for all n we write an o bn if there is a constant c such that holds for all sufficiently large n and we write an bn and an o bn respectively if an and an moreover for two real numbers a and b we write for ease of presentation that a b max a b and a b min a b chang shao and zhou a review of studentized u we start with a brief review of moderate deviation for studentized u for an integer s and for n let xn be independent and identically distributed random variables taking values in a metric space x g and let h xd r be a symmetric borel measurable function hoeffding s u with a kernel h of degree s is defined as un x s h xis is which is an unbiased estimate of e h xs in particular we focus on the case where x is the euclidean space rr for some integer r when r write xi xir t for i let x e h xs x for any x xr t rr and var var h xs assume that then the standardized nondegenerate u is given by zn un because is usually unknown we are interested in the following studentized u bn n un u sb where denotes the jackknife estimator of given by n n x qi un b n s with qi x h xi for each shao and zhou established a general moderate deviation theorem for studentized nonlinear statistics in particular for studentized u studentized u theorem assume that vp e for some p suppose that there are constants and such that for all xs r s x xi h xs then there exist constants c c depending only on d such that bn x p u o vp p x p vh x x holds uniformly for x c min where c and as max s in particular we have bn x p u x holds uniformly in x o condition is satisfied for a large class of u below are some examples statistic sample variance gini s mean difference wilcoxon s statistic kendall s kernel function h h h h i h studentized u let x and y be two independent random samples where x is drawn from a probability distribution p and y is drawn from another probability distribution q with and being two positive integers let h be a kernel function of order which is real and symmetric both in its first variates and in its last variates it is known that a nonsymmetric kernel can always be replaced with a symmetrized version by averaging across all possible rearrangements of the indices set e h and let x x h is js chang shao and zhou be the u where to lighten the notation we write jk yjk such that h jk h yjk and define x e h x y e h y also let var h var xi var yj and s n for the standardized u of the form a uniform bound of order o was obtained by helmers and janssen and borovskich using a concentration inequality approach chen and shao proved a refined uniform bound and also established an optimal nonuniform bound for large deviation asymptotics of u we refer to nikitin and ponikarov and the references therein here we are interested in the following studentized u where u n n x x qi qi and with n n x x pj pj x x x x qi h xi pj h yj jk note that and are jackknife estimators of and respectively studentized u for p let moderate deviations for u p e and p e moreover put s n and vh with var h given the following result gives a moderate deviation for u in under mild assumptions a proof can be found in the supplementary material chang shao and zhou assume that there are constants and such theorem that h x y x x yj xi for all x and y where is given in assume that p and p are finite for some p then there exist constants c c independent of and such that x p u x o holds uniformly for p x p x p x c min p ad x p s where c and as max s in 
particular as n x p u x holds uniformly in x o theorem exhibits the dependence between the range of uniform convergence of the relative error in the central limit theorem and the optimal moment conditions in particular if p the region becomes x chang shao and zhou o see theorem in jing shao and wang for similar results on sums under higher order moment conditions it is not clear if our technique can be adapted to provide a better approximation for x for x lying between and in order the tail probability p u it is also worth noticing that many commonly used kernels in nonparametric statistics turn out to be linear combinations of the indicator functions and therefore satisfy condition immediately as a prototypical example of u the is of significant interest due to its wide applicability the advantage of using either or twosample is their high degree of robustness against data in which the sampling distribution has only a finite third or fourth moment the robustness of the is useful in high dimensional data analysis under the sparsity assumption on the signal of interest when dealing with two experimental groups which are typically independent in scientifically controlled experiments the is one of the most commonly used statistics for hypothesis testing and constructing confidence intervals for the difference between the means of the two groups let x be a random sample from a population with mean and variance and let y be a random sample from another population with mean and variance independent of x the is defined as q b where yj and xi n x xi n x yj the following result is a direct consequence of theorem theorem assume that and e e for some p then there exist absolute constants c c such that x p x o x p p p x holds uniformly for x c p where c p p and p e p e studentized u motivated by a series of recent studies on the effectiveness and accuracy of testing using we investigate whether a higher order expansion of the relative error as in theorem of wang for sums holds for the so that one can use bootstrap calibration to correct skewness fan hall and yao delaigle hall and jin or study power properties against sparse alternatives wang and hall the following theorem gives a refined moderate deviation result for whose proof is placed in the supplementary material chang shao and zhou theorem assume that let e and e be the third central moment of and respectively moreover assume that e e for some p then p x x exp x n p x x p x p p o n holds uniformly for x c min min p where c and for every q q e q e a refined moderate deviation theorem for the tstatistic was established in wang which to our knowledge is the best result for the known up to date or equivalently sums more examples of u beyond the we enumerate three more u and refer to nikitin and ponikarov for more examples let x and y be two independent random samples from population distributions p and q respectively example the test statistic order defined as h x y i x y the kernel h is of with p chang shao and zhou and in view of x g x y f y in particular if f g we have example the lehmann statistic defined as the kernel h is of order h i with p then under e h and x g x g x y f y f y in particular if f g then example the kochar statistic the kochar statistic was constructed by kochar to test if the two hazard failure rates are different denote by f the class of all absolutely continuous cumulative distribution functions cdf f satisfying f for two arbitrary cdf s f g f and let f f g be their densities thus the hazard failure rates are defined by rf 
t f t f t rg t g t g t as long as both f t and g t are positive kochar considered the problem of testing the null hypothesis rf t rg t against the alternative rf t rg t t with strict inequality over a set of nonzero measures observe that holds if and only if s t s t t s for s t with strict inequality over a set of nonzero measures where f for any f f recall that and are two independent samples drawn respectively from f and following nikitin and ponikarov we see that f g e x y x y p p p p under f g while under f g the u with the kernel of order is given by h i yyxx or xyyx i xxyy or yxxy studentized u here the term yyxx refers to and similar treatments apply to xyyx xxyy and yxxy under rf t rg t we have x x x x y y y y in particular if f g then multiple testing via studentized tests testing occurs in a wide range of applications including dna microarray experiments functional magnetic resonance imaging analysis fmri and astronomical surveys we refer to dudoit and van der laan for a systematic study of the existing multiple testing procedures in this section we consider testing based on studentized tests and show how the theoretical results in the previous section can be applied to these problems a typical application of testing in high dimensions is the analysis of gene expression microarray data to see whether each gene in isolation behaves differently in a control group versus an experimental group we can apply the assume that the statistical model is given by xi k k i yj k k j for k m where index k denotes the kth gene i and j indicate the ith and jth array and the constants and respectively represent the mean effects for the kth gene from the first and the second groups for each k k k k k are independent random variables with for the kth marginal test mean zero and variance and are unequal the twhen the population variances statistic is most commonly used to carry out hypothesis testing for the null against the alternative since the seminal work of benjamini and hochberg the benjamini and hochberg procedure has become a popular technique in microarray data analysis for gene selection which along with many other procedures depend on that often need to be estimated to control certain simultaneous errors it has been shown that using approximated is asymptotically equivalent to using the true for controlling the kfamilywise error rate and false discovery rate fdr see for example kosorok and ma fan hall and yao and liu and shao for tests cao and kosorok proposed an alternative method to control and fdr in both chang shao and zhou and a common thread among the aforementioned literature is that theoretically for the methods to work in controlling fdr at a given level the number of features m and the sample size n should satisfy log m o recently liu and shao proposed a regularized bootstrap correction method for multiple so that the constraint on m may be relaxed to log m o under less stringent moment conditions as assumed in fan hall and yao and delaigle hall and jin using theorem we show that the constraint on m in large scale can be relaxed to log m o as well this provides theoretical justification of the effectiveness of the bootstrap method which is frequently used for skewness correction to illustrate the main idea here we restrict our attention to the special case in which the observations are independent indeed when test statistics are correlated false discovery control becomes very challenging under arbitrary dependence various dependence structures have been considered in the 
literature see for example benjamini and yekutieli storey taylor and siegmund ferreira and zwinderman leek and storey friguet kloareg and causeur and fan han and gu among others for completeness we generalize the results to the dependent case in section normal calibration and phase transition consider the significance testing problem versus k let v and r denote respectively the number of false rejections and the number of total rejections the false discovery proportion fdp is defined as the ratio fdp v max r and fdr is the expected fdp that is e v max r benjamini and hochberg proposed a method for choosing a threshold that controls the fdr at a prespecified level where for k m let pk be the marginal of the kth test and let p p m be the order statistics of pm for a predetermined control level the procedure rejects hypotheses for which pk p where max k m p k m with p in microarray analysis are often used to identify differentially expressed genes between two groups let tk q k m studentized u where pn xi k n x xi k pn yj k and n x yj k here and below xi m and yj m are independent random samples from xm and ym respectively generated according to model which are usually in practice moreover assume that the sample sizes of the two samples are of the same order that is before stating the main results we first introduce a number of notation set k m let denote the number of true null hypotheses and m both m m and are allowed to grow as n increases we assume that lim m in line with the notation used in section set var xk var yk e xk e yk throughout this subsection we focus on the and k normal calibration and let pbk where is the standard normal distribution function indeed the exact null distribution of tk and thus the true are unknown without the normality assumption theorem assume that xm ym are independent nondegenerate random variables m m and log m o as n for independent random samples xi m and yj m suppose that max max e e c min min c for some constants c and c where xk and yk moreover assume that k m log m k as n and let lim inf n x k chang shao and zhou i suppose that log m o then as n and ii suppose that log m for some and that log o n then there exists some constant such that lim p and lim inf log m iii suppose that and log o then as n and here and denote respectively the fdr and the fdp of the procedure with pk replaced by pbk in together conclusions i and ii of theorem indicate that the number of simultaneous tests can be as large as exp o before the normal calibration becomes inaccurate in particular when n the skewness parameter given in reduces to x lim inf as noted in liu and shao the limiting behavior of the varies in different regimes and exhibits interesting phase transition phenomena as the dimension m grows as a function of the average of skewness plays a crucial role it is also worth noting that conclusions ii and iii hold under the scenario that is o m this corresponds to the sparse settings in applications such as gene detections under finite moments of xk and yk the robustness of and the accuracy of normal calibration in the control have been investigated in cao and kosorok when this corresponds to the relatively dense setting and the sparse case that we considered above is not covered bootstrap calibration and regularized bootstrap correction in this subsection we first use the conventional bootstrap calibration to improve the accuracy of fdr control based on the fact that the bootstrap approximation removes the skewness term that determines inaccuracies of the standard 
normal approximation however the validity of bootstrap approximation requires the underlying distribution to be very light tailed which does not seem realistic in real data applications as pointed in the literature of gene study many gene data are commonly recognized to have heavy tails which violates the assumption on underlying distribution used to make conventional bootstrap approximation work recently liu and shao proposed a regularized bootstrap method that is shown to be more robust against the heavy tailedness of the underlying distribution and the dimension m is allowed to be as large as exp o studentized u let xk b k b k b yk b k b k b b b denote bootstrap samples drawn independently and uniformly with replacement from xk k k and yk k k respectively let tk b be the constructed from k b k b and k b k b following liu and shao we use the following empirical distribution m fm b t b xx i b t mb to approximate the null distribution and thus the estimated are given by pbk b fm b respectively fdpb and fdrb denote the fdp and the fdr of the procedure with pk replaced by pbk b in the following result shows that the bootstrap calibration is accurate provided log m increases at a strictly slower rate than and the underlying distribution has tails theorem assume the conditions in theorem hold and that max max e e c for some constants c i suppose that log m o then as n fdpb and fdrb ii suppose that log m o and for some then as n fdpb and fdrb the condition in theorem is quite stringent in practice whereas it can hardly be weakened in general when the bootstrap method is applied in the context of error rate control fan hall and yao proved that the bootstrap calibration is accurate if the observed data are bounded and log m o the regularized bootstrap method however adopts the very similar idea of the trimmed estimators and is a twostep procedure that combines the truncation technique and the bootstrap method first define the trimmed samples bi k xi k i k x ybj k yi k i k for i j where and are regularized parameters x to be specified let xbk b x k b k b and yk b k b k b b b be the corresponding bootstrap samples drawn by sampling randomly with replacement from k x bn k and ybk k ybn k xbk x chang shao and zhou respectively next let tbk b be the statistic constructed from p n b b b x xi k and k b xi k k b k b pn b b yj k as in the previous procedure define yj k k b the estimated by m pbk rb fbm rb b xx with fbm rb t i b t mb let fdprb and fdrrb denote the fdp and the fdr respectively of the procedure with pk replaced by pbk rb in theorem assume the conditions in theorem hold and that max max e e c the regularized parameters are such that and log m log m i suppose that log m o then as n fdprb and fdrrb ii suppose that log m o and for some then as n fdprb and fdrrb in view of theorem the regularized bootstrap approximation is valid under mild moment conditions that are significantly weaker than those required for the bootstrap method to work theoretically the numerical performance will be investigated in section to highlight the main idea a proof of theorem is given in the supplementary material chang shao and zhou the proofs of theorems and are based on straightforward extensions of theorems and in liu and shao and thus are omitted fdr control under dependence in this section we generalize the results in previous sections to the dependence case write for and define every k m let cov xk cov yk which characterizes the dependence between xk yk and we see that r corr x x ularly when and k corr yk in 
this subsection we impose the following conditions on the dependence structure of x xm t and y ym t studentized u there exist constants r r r and such that max r and max sk m where for k m sk m m corr xk log m or corr yk log m for some there exist constants r r r and such that r and for each xk the number of variables that are dependent of xk is less than the assumption r for some r imposes a constraint on the magnitudes of the correlations which is natural in the sense that the correlation matrix r is singular if under condition each xk yk is allowed to be moderately correlated with at most as many as o other vectors condition enforces a local dependence structure on the data saying that each vector is dependent with at most as many as o other random vectors and independent of the remaining ones the following theorem extends the results in previous sections to the dependence case its proof is placed in the supplementary material chang shao and zhou theorem assume that either condition holds with log m o or condition holds with log m o i suppose that and are satisfied then as n and ii suppose that and are satisfied then as n fdprb and fdrrb in particular assume that condition holds with log m o and mc for some c then as n fdprb and fdrrb studentized test let x and y be two independent random samples from distributions f and g respectively let p x y consider the null hypothesis against the alternative this problem arises in many applications including testing whether the physiological performance of an active drug is better than that under the control treatment and testing the effects of a policy such as unemployment insurance or a vocational training program on the level of unemployment chang shao and zhou the test mann and whitney also known as the wilcoxon test wilcoxon is prevalently used for testing equality of means or medians and serves as a nonparametric alternative to the the corresponding test statistic is given by n n x x i xi yj the test is widely used in a wide range of fields including statistics economics and biomedicine due to its good efficiency and robustness against parametric assumptions over of the articles published in experimental economics use the test and okeh reported that thirty percent of the articles in five biomedical journals published in used the test for example using the u test charness and gneezy developed an experiment to test the conjecture that financial incentives help to foster good habits they recorded seven biometric measures weight body fat percentage waist size etc of each participant before and after the experiment to assess the improvements across treatments although the test was originally introduced as a rank statistic to test if the distributions of two related samples are identical it has been prevalently used for testing equality of medians or means sometimes as an alternative to the it was argued and formally examined recently in chung and romano that the test has generally been misused across disciplines in fact the test is only valid if the underlying distributions of the two groups are identical nevertheless when the purpose is to test the equality of distributions it is recommended to use a statistic such as the smirnov or the mises statistic that captures the discrepancies of the entire distributions rather than an individual parameter more specifically because the test only recognizes deviation from it does not have much power in detecting overall distributional discrepancies alternatively the test is frequently used to test the 
equality of medians however chung and romano presented evidence that this is another improper application of the test and suggested to use the studentized median test even when the test is appropriately applied for testing the asymptotic variance depends on the underlying distributions unless the two population distributions are identical as hall and wilson pointed out the application of resampling to pivotal statistics has better asymptotic properties in the sense that the rate of convergence of the actual significance level to the nominal significance level is more rapid when the studentized u pivotal statistics are resampled therefore it is natural to use the studentized test which is asymptotic pivotal let u denote the studentized test statistic for as in where x x x x qi pj qi pj with qi i xi yj i yj xi and pj when dealing with samples from a large number of geographical regions suburbs states health service areas etc one may need to make many statistical inferences simultaneously suppose we observe a family of paired groups that is for k m xk k k yk k k where the index k denotes the kth site assume that xk is drawn from fk and independently yk is drawn from gk for each k m we test the null hypothesis p k k against the alternative if is rejected we conclude that the treatment effect of a drug or a policy is acting within the kth area define the test statistic k u k k k is constructed from the kth paired samples according to where u let k k k t p u and t p z t where z is the standard normal random variable then the true k and pbk u k denote the estimated are pk k u based on normal calibration to identify areas where the treatment effect is acting we can use the method to control the fdr at level by rejecting the null hypotheses indexed by s k m pbk pb where max k m pb k and b p k denote the ordered values of b pk as before let be the fdr of the method based on normal calibration alternative to normal calibration we can also consider bootstrap tion recall that xk b k b k b and yk b k b k b b b are two bootstrap samples drawn independently and uniformly with replacement from xk k k and yk k k spectively for each k m let u k b be the bootstrapped test statistic chang shao and zhou constructed from xk b and yk b that is x x b i xi k yj k u k b k b k b where k b and k b are the analogues of given in and specified below via replacing xi and yj by xi k b and yj k b respectively using the empirical distribution function m b xx b t t i g k b m b mb u b for a predewe estimate the unknown by pbk b g m b k b termined the null hypotheses indexed by sb k m pbk b pb b are rejected where max k m pbk b denote by fdrb the fdr of the method based on bootstrap calibration applying the general moderate deviation result to studentized k leads to the following result the proof is based on whitney statistics u a straightforward adaptation of the arguments we used in the proof of theorem and hence is omitted theorem assume that xm ym are independent random variables with continuous distribution functions xk fk and yj gk the triplet m is such that m m log m o and k m as n for independent samples xi m and yj m suppose that min c for some constant c and as n k m log m k var g x var f y and where k k k k k then as n fdpb and fdrb attractive properties of the bootstrap for testing were first noted by hall in the case of the mean rather than its studentized counterpart now it has been rigorously proved that bootstrap methods are particularly effective in relieving skewness in the extreme tails which leads to 
accuracy fan hall and yao delaigle hall and jin it is interesting and challenging to investigate whether these advantages of the bootstrap can be inherited by multiple u in either the standardized or the studentized case studentized u numerical study in this section we present numerical investigations for various calibration methods described in section when they are applied to multiple testing problems we refer to the simulation for and studentized test as and respectively assume that we observe two groups of dimensional gene expression data xi and yj where and are independent random samples drawn from the distributions of x and y respectively for let x and y be such that x e and y e where m t and m t are two sets of random variables the components of noise vectors and follow two types of distributions i the exponential distribution exp with density function ii student t k with k degrees of freedom the exponential distribution has nonzero skewness while the is symmetric and for each type of error distribution both cases of homogeneity and heteroscedasticity were considered detailed settings for the error distributions are specified in table for we assume that x and y satisfy x and y where m t and m t are two sets of random variables we consider several distributions for the error terms k and k standard normal distribution n t k uniform distribution u a b and beta distribution beta a b table reports four settings of k k used in our simulation in either setting we know p k k holds hence the power against the null hypothesis p xk yk will generate from the magnitude of the difference between the kth components of and in both and we set and assume that the first and components of are equal to c log m the rest are zero here and denote the variance of k and k and table distribution settings in exponential distributions student homogeneous case heteroscedastic case k exp k exp k exp k exp k t k t k t k t chang shao and zhou table distribution settings in identical distributions nonidentical distributions case k n k n k n k t case k u k u k u k beta c is a parameter employed to characterize the location discrepancy between the distributions of x and y the sample size was set to be and and the discrepancy parameter c took values in the significance level in the procedure was specified as and and the dimension m was set to be and in we compared three different methods to calculate the in the procedure normal calibration given in section bootstrap calibration and regularized bootstrap calibration proposed in section for regularized bootstrap calibration we used a approach as in section of liu and shao to choose regularized parameters and in we compared the performance of normal calibration and bootstrap calibration proposed in section for each compared method we evaluated its performance via two indices the empirical fdr and the proportion among the true alternative hypotheses was rejected we call the latter correct rejection proportion if the empirical fdr is low the proposed procedure has good fdr control if the correct rejection proportion is high the proposed procedure has fairly good performance in identifying the true signals for ease of exposition we only report the simulation results for and m in figures and the results for and m are similar which can be found in the supplementary material chang shao and zhou each curve corresponds to the performance of a certain method and the line types are specified in the caption below the horizontal ordinates of the four points on each curve depict the 
empirical fdr of the specified method when the level in the procedure was taken to be and respectively and the vertical ordinates indicate the corresponding empirical correct rejection proportion we say that a method has good fdr control if the horizontal ordinates of the four points on its performance curve are less than the prescribed levels in general as shown in figures and the procedure based on regularized bootstrap calibration has better fdr control than that based on normal calibration in where the errors are symmetric k and k follow the student the panels in the first row of figure show that the procedures using all the three calibration methods studentized u fig performance comparison of procedures based on three calibration methods in with and m the first and second rows show the results when the components of noise vectors and follow and exponential distributions respectively left and right panels show the results for homogeneous and heteroscedastic cases respectively horizontal and vertical axes depict empirical false discovery rate and empirical correct rejection proportion respectively and the prescribed levels and are indicated by unbroken horizontal black lines in each panel dashed lines and unbroken lines represent the results for the discrepancy parameter c and respectively and different colors express different methods employed to calculate in the procedure where blue line green line and red line correspond to the procedures based on normal conventional and regularized bootstrap calibrations respectively are able to control or approximately control the fdr at given levels while the procedures based on bootstrap and regularized bootstrap calibrations outperform that based on normal calibration in controlling the fdr when the errors are asymmetric in the performances of the three procedures are different from those in the symmetric cases from the second row of figure we see that the procedure based on normal calibration is distorted in controlling the fdr while the procedure based on regularized bootstrap calibration is still able to control the fdr at given levels this chang shao and zhou fig performance comparison of procedures based on two different calibration methods in with and m the first and second rows show the results when the components of noise vectors and follow the distributions specified in cases and of table respectively left and right panels show the results for the cases of identical distributions and nonidentical distributions respectively horizontal and vertical axes depict empirical false discovery rate and empirical correct rejection proportion respectively and the prescribed levels and are indicated by unbroken horizontal black lines in each panel dashed lines and unbroken lines represent the results for the discrepancy parameter c and respectively and different colors express different methods employed to calculate in the procedure where blue line and red line correspond to the procedures based on normal and bootstrap calibrations respectively phenomenon is further evidenced by figure for comparing the procedures based on conventional and regularized bootstrap calibrations we find that the former approach is uniformly more conservative than the latter in controlling the fdr in other words the procedure based on regularized bootstrap can identify more true alternative hypotheses than that using conventional bootstrap calibration this phenomenon is also revealed in the heteroscedastic case as the discrepancy parameter c gets larger so that the signal 
is stronger the correct rejection proportion of the studentized u cedures based on all the three calibrations increase and the empirical fdr is closer to the prescribed level discussion in this paper we established moderate deviations for studentized u of arbitrary order in a general framework where the kernel is not necessarily bounded u typified by the test statistic have been widely used in a broad range of scientific research many of these applications rely on a misunderstanding of what is being tested and the implicit underlying assumptions that were not explicitly considered until relatively recently by chung and romano more importantly they provided evidence for the advantage of using the studentized statistics both theoretically and empirically unlike the conventional and u the asymptotic behavior of their studentized counterparts has barely been studied in the literature particularly in the case recently shao and zhou proved a moderate deviation theorem for general studentized nonlinear statistics which leads to a sharp moderate deviation result for studentized u however extension from onesample to in the studentized case is totally nonstraightforward and requires a more delicate analysis on the studentizing quantities further for the we proved moderate deviation with secondorder accuracy under a finite moment condition see theorem which is of independent interest in contrast to the case the can not be reduced to a sum of independent random variables and thus the existing results on ratios jing shao and wang wang can not be directly applied instead we modify theorem in shao and zhou to obtain a more precise expansion that can be used to derive a refined result for the finally we show that the obtained moderate deviation theorems provide theoretical guarantees for the validity including robustness and accuracy of normal conventional bootstrap and regularized bootstrap calibration methods in multiple testing with control the dependence case is also covered these results represent a useful complement to those obtained by fan hall and yao delaigle hall and jin and liu and shao in the case acknowledgements the authors would like to thank peter hall and aurore delaigle for helpful discussions and encouragement the authors sincerely thank the editor associate editor and three referees for their very constructive suggestions and comments that led to substantial improvement of the paper chang shao and zhou supplementary material supplement to moderate deviations for studentized twosample u with applications doi this supplemental material contains proofs for all the theoretical results in the main text including theorems and and additional numerical results references benjamini and hochberg y controlling the false discovery rate a practical and powerful approach to multiple testing stat soc ser stat methodol benjamini and yekutieli the control of the false discovery rate in multiple testing under dependency ann statist borovskich asymptotics of u and von mises functionals soviet math dokl cao and kosorok simultaneous critical values for in very high dimensions bernoulli chang shao and zhou supplement to moderate deviations for studentized u with chang tang and wu y marginal empirical likelihood and sure independence feature screening ann statist chang tang and wu y local independence feature screening for nonparametric and semiparametric models by marginal empirical likelihood ann statist charness and gneezy u incentives to exercise econometrica chen and qin a test for data with 
applications to testing ann statist chen and shao normal approximation for nonlinear statistics using a concentration inequality approach bernoulli chen zhang and zhong tests for covariance matrices amer statist assoc chung and romano exact and asymptotically robust permutation tests ann statist chung and romano j asymptotically valid and exact permutation tests based on u statist plann inference delaigle hall and jin j robustness and accuracy of methods for high dimensional data analysis based on student s stat soc ser stat methodol dudoit and van der laan j multiple testing procedures with applications to genomics springer new york fan hall and yao q to how many simultaneous hypothesis tests can normal student s t or bootstrap calibration be applied amer statist assoc fan han and gu estimating false discovery proportion under arbitrary covariance dependence amer statist assoc ferreira j and zwinderman on the method ann statist studentized u friguet kloareg and causeur a factor model approach to multiple testing under dependence amer statist assoc hall on the relative performance of bootstrap and edgeworth approximations of a distribution function multivariate anal hall and wilson two guidelines for bootstrap hypothesis testing biometrics helmers and janssen on the theorem for multivariate u in math cent sw mathematisch centrum amsterdam hoeffding a class of statistics with asymptotically normal distribution ann math statistics jing shao and wang q large deviations for independent random variables ann probab kochar comparison of two probability distributions with reference to their hazard rates biometrika koroljuk and borovskich theory of u mathematics and its applications kluwer academic dordrecht kosorok and ma marginal asymptotics for the large p small n paradigm with applications to microarray data ann statist kowalski and tu modern applied u wiley hoboken nj lai shao and wang q type moderate deviations for studentized u esaim probab stat leek and storey a general framework for multiple testing dependence proc natl acad sci usa li zhong and zhu feature screening via distance correlation learning amer statist assoc li peng zhang and zhu robust rank correlation based screening ann statist liu and shao moderate deviation for the maximum of the periodogram with application to simultaneous tests in gene expression time series ann statist liu and shao phase transition and regularized bootstrap in largescale with false discovery rate control ann statist mann and whitney on a test of whether one of two random variables is stochastically larger than the other ann math statistics nikitin and ponikarov on large deviations of nondegenerate u and v with applications to bahadur efficiency math methods statist okeh statistical analysis of the application of wilcoxon and whitney u test in medical research studies biotechnol molec biol rev shao and zhou type moderate deviation theorems for processes bernoulli storey taylor and siegmund strong control conservative point estimation and simultaneous conservative consistency of false discovery rates a unified approach stat soc ser stat methodol vandemaele and veraverbeke type large deviations for studentized u metrika wang q limit theorems for large deviation electron probab electronic chang shao and zhou wang q refined large deviations for independent random variables theoret probab wang and hall relative errors in central limit theorems for student s t statistic with applications statist sinica wang jing and zhao the bound for studentized statistics ann 
probab wilcoxon individual comparisons by ranking methods biometrics zhong and chen x tests for regression coefficients with factorial designs amer statist assoc chang school of statistics southwestern university of finance and economics chengdu sichuan china and school of mathematics and statistics university of melbourne parkville victoria australia shao department of statistics chinese university of hong kong shatin nt hong kong qmshao zhou department of operations research and financial engineering princeton university princeton new jersey usa and school of mathematics and statistics university of melbourne parkville victoria australia wenxinz
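As a concrete illustration of the multiple-testing pipeline discussed above, the following sketch computes studentized two-sample t-statistics feature by feature, converts them to p-values by normal calibration, and applies the Benjamini-Hochberg step-up rule. It is a generic implementation written from the definitions given in the text, not the authors' code; the bootstrap and regularized-bootstrap calibrations are omitted, and the data, sample sizes, and significance level are illustrative.

```python
import numpy as np
from scipy.stats import norm

def two_sample_t(x, y):
    """Studentized two-sample t-statistic (xbar - ybar) / sqrt(s1^2/n1 + s2^2/n2)."""
    n1, n2 = len(x), len(y)
    s1 = np.var(x, ddof=1)
    s2 = np.var(y, ddof=1)
    return (np.mean(x) - np.mean(y)) / np.sqrt(s1 / n1 + s2 / n2)

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up rule: reject all hypotheses with p-value <= p_(k_hat),
    where k_hat = max{k : p_(k) <= alpha * k / m}."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    below = np.nonzero(sorted_p <= alpha * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        reject[order[: below[-1] + 1]] = True
    return reject

# toy data: m features, the first 10 carry a mean shift (all sizes illustrative)
rng = np.random.default_rng(1)
m, n1, n2 = 200, 30, 30
X = rng.standard_exponential((m, n1))          # skewed noise, as in the simulations above
Y = rng.standard_exponential((m, n2))
X[:10] += 1.0                                  # true alternatives

t_stats = np.array([two_sample_t(X[k], Y[k]) for k in range(m)])
pvals = 2.0 * (1.0 - norm.cdf(np.abs(t_stats)))   # normal calibration
rejected = bh_reject(pvals, alpha=0.1)
print("rejections:", int(rejected.sum()), "true discoveries:", int(rejected[:10].sum()))
```

Switching from normal calibration to conventional or regularized bootstrap calibration only changes how the p-values are produced (the empirical tail of the bootstrapped statistics replaces the normal tail); the rejection rule itself is unchanged.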
| 10 |
a unified method for first and third person action recognition ali javidani ahmad department of computer science and engineering shahid beheshti university tehran iran cyberspace research center shahid beheshti university tehran iran classification human action recognition deep learning convolutional neural network cnn optical flow motion and recognizing activities in them is highly challenging due to the fact that circumstances of camera and recording actions are completely different from each other in the first and third person videos there exist two main approaches for classifying each group and to the best of our knowledge there does not exist a unified method that works perfectly for both the main motivation of our work is to provide a unified framework which can classify both first and videos toward this goal two complementary streams are designed to capture motion and appearance features of video data the motion stream is based on calculating optical flow images to estimate motion in video and by following them over time using pot representation method with different pooling operators motion dynamics are extracted efficiently the appearance stream is obtained via describing the middle frame of the input video utilizing networks our method is evaluated on two different datasets and dogcentric it is demonstrated that the proposed method achieves high accuracy for both datasets i introduction ii related works video recognition is one of the popular fields in artificial intelligence that aims to detect and recognize ongoing events from videos this can help humans to inject vision to robots in order to assist them in different situations for instance one of the most prominent applications of video classification is cars which are going to become available in the market totally there are two main categories of videos that researchers conduct their experiments on them and videos in videos most of the times camera is located in a specific place without any movement or scarcely with a slight movement and records the actions of humans while in videos the person wears the camera and involves directly in events this is the reason that videos are full of in general there are two major approaches for classifying videos traditional and modern approaches traditional ones are based on descriptors which try to detect different aspects of each action at the first step features of video segments are extracted these features can be interest points or dense points obtained from raw input frames is one of the ways to obtain corner points from video then feature points are described by handcrafted descriptors such as hog hof and mbh to describe features more effectively some of these descriptors have been extended to dimensions to incorporate temporal information in their calculations and are two of the most popular ones this paper a new video classification methodology is proposed which can be applied in both first and third person videos the main idea behind the proposed strategy is to capture complementary information of appearance and motion efficiently by performing two independent streams on the videos the first stream is aimed to capture motions from shorter ones by keeping track of how elements in optical flow images have changed over time optical flow images are described by networks that have been trained on large scale image datasets a set of time series are obtained by aligning descriptions beside each other for extracting motion features from these time series pot representation method plus a 
novel pooling operator is followed due to several advantages the second stream is accomplished to extract appearance features which are vital in the case of video classification the proposed method has been evaluated on both first and datasets and results present that the proposed methodology reaches the state of the art successfully raw frames optical flow motion feature vector description description final feature vector pot of time series gradientvariance svm description middle frame description appearance feature vector figure general pipeline of the proposed methodology our framework has two streams for obtaining motion and appearance features the top stream extracts motion while the bottom extracts appearance features following feature extraction and description phases in order to obtain a feature vector and becoming independent of some variables such as number of frames for each video or number of interest points an encoding step is required for so doing encoding methods like bag of visual words bovw fisher kernel have been used till now the experiments illustrate that fisher kernel is more accurate than the former one however recently pot encoding method was proposed by ryoo et al and it could reach results in the case of videos modern approaches are mostly based on deep learning convolutional neural networks cnns could succeed in giving the best results on image recognition image segmentation and so forth although one problem in the case of video domain is that these networks are designed for input images to address this problem some researches have been conducted as a case in point karpathy et al introduced four models of but cnns and in their models time dimension was incorporated in different channels of the network zisserman et al proposed two stream cnn model to classify videos however their method suffers from the problem that the number of stacked optical flow frames given to cnn is limited due to the problem of overfitting of the network furthermore for better estimation of motions in video a convolutional neural network was devised all convolution and pooling layers in operate and the depth of time dimension convolution is a small number due to the vast amount of convolution calculations hence it can only capture the motion dynamics and longer ones would be lost by using this network a recent work used stacked to obtain tractable dimensionality for the case of videos also another work designed a deep fusion framework in which by the aid of lstms representative temporal information have been extracted and could reach results on three widely used datasets iii proposed method in this section we introduce our proposed method pipeline generally videos either or consist of two different aspects appearance and motion appearance is related to detecting and recognizing existing objects in each frame while motion is their following over time hence motion information is highly correlated with temporal dimension as it is depicted in fig in order to capture two mentioned aspects in video our proposed framework has two streams independent of each other the upper stream is for extracting motion and the bottom is for appearance features in the following motion feature extraction is explained in more detail firstly the images of optical flow between consecutive frames are calculated this helps to estimate motion through nearby frames for each video however estimating motion is strictly challenging and is an open area of research here the idea is to keeping track of how motion elements 
vary over time to estimate longer changes therefore optical flow images should be described by a specific set of features to be pursued over the time dimension we found that the best way for doing so is utilizing networks that already have been trained on large scale image datasets by doing this not only training of network which is a highly time consuming process is not needed but also a strong representation would be obtained since these networks could reach results for image recognition aligning representation of sequential frames beside each other leads to obtaining a set of time series there are various ways to extract features from time series for doing so pot representation plus a novel pooling operator is chosen due to several prominent reasons firstly thanks to the temporal filters time series are break down to which assists to represent activity from lower levels furthermore pot is benefited from extracting different features from time series that each of them can represent different aspects of data the resulted time series especially those coming from first person videos which are full of are more sophisticated than a specific feature max can represent them as a result pot framework can be beneficial for extracting motion features from time series pot representation method extracts different features max sum and histogram of time series gradient from time series the final feature vector representation which is designed to be motion features in our framework is the concatenation of all of these features together for each time series we add variance as another pooling operator to the pooling set and demonstrate that this feature can also extracts useful information below is the definition of each pooling operator in the time domain ts te fi t is the value of ith time series in time the max and sum pooling operators are defined as and the histograms of time series gradient pooling operators are defined as where we proposed variance as a new pooling operator as follows figure some sample frames from different classes of two different datasets left and dogcentric right vectors that are expected to represent complementary information as the last step a svm is trained on the final feature vector iv experimental results we conducted our experiments on two public datasets and dogcentric fig represents some sample frames of them is a type which contains different human activities playing basketball volleyball and it consists of about human activity videos in this dataset camera is not usually located in a specific place and it has large amounts of movement to evaluate our method we performed leave one out cross validation loocv on this dataset as in the original work dogcentric is a activity dataset in which a camera is located on the back of dogs thus it has large amounts of which makes it highly challenging it has class activities consisting activities of dogs such as walking and drinking as well as interacting with humans the total number of videos in this dataset is like other previous methods half of the total number of videos per class is used as training and the other half is used as testing for classes with odd number of clips the number table comparison of the encoding methods on the dogcentric dataset per class and final classification accuracy activity class besides in order to concentrate better on the resulted time series temporal pyramid filters are applied on each series hence the resultant time series are the whole time domain of the whole time domain parts and so forth 
IV. Experimental results

We conducted our experiments on two public datasets, a third-person human-activity benchmark and DogCentric; the figure below shows some sample frames from them.

[Figure: some sample frames from different classes of the two datasets — the third-person dataset (left) and DogCentric (right).]

The former is a third-person dataset which contains different human activities (playing basketball, volleyball, etc.); in this dataset the camera is not usually located in a specific place and exhibits large amounts of movement. To evaluate our method we performed leave-one-out cross-validation (LOOCV) on this dataset, as in the original work. DogCentric is a first-person activity dataset in which a camera is mounted on the back of dogs; it therefore contains large amounts of camera motion, which makes it highly challenging. Its activity classes consist of activities of the dogs themselves, such as walking and drinking, as well as interactions with humans. Like other previous methods, half of the total number of videos per class is used for training and the other half for testing; for classes with an odd number of clips, the number of test instances is one more than the number of training instances. We ran our algorithm multiple times with different permutations of the train and test sets, and the mean classification accuracy is reported in the table below.

[Table: comparison of the encoding methods on the DogCentric dataset — per-class and final classification accuracy. Rows: BoVW, IFV, PoT, proposed. Columns: ball play, car, drink, feed, turn head left, turn head right, pet, body shake, sniff, walk, final accuracy.]

It is clear that the proposed method achieves a significant improvement in classification accuracy compared with the two traditional representation methods, bag of visual words (BoVW) and improved Fisher vector (IFV). In addition, the proposed method outperforms the baseline PoT method in most classes of the DogCentric dataset and also in the final accuracy.

For obtaining the optical flow between consecutive frames we used the popular method of Horn and Schunck; to convert the flow into color images that can be fed to the networks, the flow-visualization code of Baker et al. was followed in our implementation. This was applied to all frames of each video, and we did not subsample the existing frames. GoogLeNet is utilized as the pre-trained network to describe both the optical-flow images in the motion stream and the middle frame in the appearance stream; this was made feasible by omitting the softmax and output layers of the network and using the activations of the remaining last layer as the descriptor.

[Table: comparison of classification accuracy on the DogCentric dataset according to the number of temporal pyramid levels.]

Furthermore, different experiments have been conducted on the number of temporal pyramid levels, with results illustrated in the table above. It can be seen that increasing the number of temporal pyramid levels up to four improves the classification accuracy, while increasing it to five levels decreases accuracy compared with four levels. We believe this phenomenon is due to the fact that increasing the temporal pyramid levels increases the dimensionality dramatically, while there is not enough training data for learning the classifier; this is the reason that increasing the number of temporal pyramids cannot always improve the performance of the system.

The proposed method was also evaluated on the third-person video dataset; a fixed number of temporal pyramid levels was used for this dataset, and sampling between frames was not performed. A comparison of our method to previous results on this dataset is reported in the table below.

[Table: comparison of our results to prior approaches on the third-person dataset — Hasan et al., Liu et al., dense trajectories, soft attention, Cho et al., snippets, two-stream LSTM variants, the proposed method with a linear SVM, and the proposed method with a kernel SVM.]

As can be seen, the proposed method with the kernel SVM reaches the best results on this dataset. In all our experiments, SVM classifiers with a linear and a non-linear kernel were used, and the latter showed better performance.

Conclusion

In this paper a new approach for video classification was proposed, which can be employed for two different categories of video, first-person and third-person. Motion changes are captured by extracting discriminant features from motion time series following the PoT
representation method with a novel pooling operator final feature vector is resulted from concatenating two complementary feature vectors of appearance and motion to perform the classification by evaluating the proposed method on two different types of datasets and comparing the obtained results to the state of the art it is concluded that the proposed method not only works perfectly for both groups but also increases the accuracy references liu luo and shah recognizing realistic actions from videos in the wild in computer vision and pattern recognition cvpr ieee conference on pp iwashita takamine kurazume and ryoo animal activity recognition from egocentric videos in pattern recognition icpr international conference on pp ryoo rothrock and matthies pooled motion features for videos in proceedings of the ieee conference on computer vision and pattern recognition pp liu shao zheng and li realistic action recognition via gaussian processes pattern recognition vol pp uijlings duta rostamzadeh and sebe realtime video classification using dense in proceedings of international conference on multimedia retrieval laptev on interest points international journal of computer vision vol pp dalal and triggs histograms of oriented gradients for human detection in ieee computer society conference on computer vision and pattern recognition pp dalal triggs and schmid human detection using oriented histograms of flow and appearance in european conference on computer vision pp wang and schmid action recognition with improved trajectories in proceedings of the ieee international conference on computer vision pp klaser and schmid a descriptor based on in bmvc british machine vision conference pp scovanner ali and shah a sift descriptor and its application to action recognition in proceedings of the acm international conference on multimedia pp csurka dance fan willamowski and bray visual categorization with bags of keypoints in workshop on statistical learning in computer vision eccv pp csurka and perronnin fisher vectors beyond image representations in international conference on computer vision imaging and computer graphics pp karpathy toderici shetty leung sukthankar and video classification with convolutional neural networks in proceedings of the ieee conference on computer vision and pattern recognition pp simonyan and zisserman convolutional networks for action recognition in videos in advances in neural information processing systems pp tran bourdev fergus torresani and paluri generic features for video analysis corr vol wang gao j song zhen sebe and shen deep appearance and motion learning for egocentric activity recognition neurocomputing vol pp gammulle denman sridharan and fookes two stream lstm a deep fusion framework for human action recognition in applications of computer vision wacv ieee winter conference on pp horn and schunck determining optical flow artificial intelligence vol pp baker scharstein lewis roth j black and szeliski a database and evaluation methodology for optical flow international journal of computer vision vol pp hasan and incremental activity modeling and recognition in streaming videos in proceedings of the ieee conference on computer vision and pattern recognition pp and sclaroff object scene and actions combining multiple features for human action recognition computer pp wang schmid and liu action recognition by dense trajectories in computer vision and pattern recognition cvpr ieee conference on pp sharma kiros and salakhutdinov action recognition using visual attention 
arxiv preprint cho lee chang and oh robust action recognition using local motion and group sparsity pattern recognition vol pp ng hausknecht vijayanarasimhan vinyals monga and toderici beyond short snippets deep networks for video classification in proceedings of the ieee conference on computer vision and pattern recognition pp
| 1 |
attacks on uas networkschallenges and open research problems vahid behzadan feb dept of computer science and engineering university of nevada reno usa vbehzadan of critical missions to unmanned aerial vehicles uav is bound to widen the grounds for adversarial intentions in the cyber domain potentially ranging from disruption of command and control links to capture and use of airborne nodes for kinetic attacks ensuring the security of electronic and communications in systems is of paramount importance for their safe and reliable integration with military and civilian airspaces over the past decade this active field of research has produced many notable studies and novel proposals for attacks and mitigation techniques in uav networks yet the generic modeling of such networks as typical manets and isolated systems has left various vulnerabilities out of the investigative focus of the research community this paper aims to emphasize on some of the critical challenges in securing uav networks against attacks targeting vulnerabilities specific to such systems and their aspects index security vulnerabilities i ntroduction the century is scene to a rapid revolution in our civilization s approach to interactions advancement of communication technologies combined with an unprecedentedly increasing trust and interest in autonomy are pushing mankind through an evolutionary jump towards delegation of challenging tasks to agents from mars rovers to search and rescue robots we have witnessed this trend of overcoming the limitations inherent to us through replacement of personnel with systems capable of performing tasks that are risky repetitive physically difficult or simply economically infeasible for human actors unmanned aerial vehicles or uavs are notable examples of this revolution since the early military and intelligence theaters have seen an explosive growth in the deployment of tactical uavs for surveillance transport and combat operations in the meantime civilian use of uavs has gained traction as the manufacturing and operations costs of small and uavs are undergoing a steady decline the cheaper cost of such uavs has also led to a growing interest in collaborative deployment of multiple uavs to perform specific tasks such as monitoring the conditions of farms and patrolling national borders yet there are a multitude of challenges associated with this vision solving which are crucial for safe and reliable employment of such systems in civilian and military scenarios one such challenge is ensuring the security of systems that comprise uavs as their remote operational conditions leave the burden of command and control reliant on the onboard gnss telemetry satellite relay mobile ground unit link satellite link atg link ground control station fig communication links in a uas network components the body of literature on this issue has seen an accelerated growth in recent years which is partly due to major cyber attacks on uavs the overwhelming number of potential vulnerabilities in uavs indicates the need for vigorous standards and frameworks for assurance of reliability and resilience to malicious manipulations in all aspects of uavs from the mechanical components to the information processing units and communications systems in operations links are necessary for exchange of situational and operational commands which are the basis of essential functions such as formation control and task optimization as for the architecture of these uav networks the current consensus in the research community is 
biased towards decentralized and ad hoc solutions which allow dynamic deployment of unmanned aerial systems uas with minimal time and financial expenditure on preparations structure of a typical uas network is shown in figure by considering the various types of links and interfaces depicted in this figure it can be deduced that such networks are inherently of a complex nature integration of multiple subsystems not only aggregates their individual vulnerabilities but may result in new ones that are rooted in the interactions between those subsystems hence uas present the research community with a novel interdisciplinary challenge the aim of this paper is to emphasize on some of the critical vulnerabilities specific to network and communications aspects of uavs and provide the research community with a list of open problems in ensuring the safety and security of this growing technology ii u niqueness of uas n etworks accurate analysis of vulnerabilities in uas networks necessitates an understanding of how an airborne network differs from traditional computer networks much of recent studies in this area compare uas networks to mobile ad hoc networks manets and wireless sensor networks wsn as uas communications and protocols may initially seem similar to those of generic distributed and mobile networks yet differences in mobility and mechanical degrees of freedom as well as their operational conditions build the grounds for separate classification of uas networks one such distinguishing factor is the velocity of airborne vehicles which may range up to several hundreds of miles per hour the high mobility of airborne platforms increases the complexity of requirements for the communications subsystem and many aspects of the uas network in the link layer management of links and adaptation of access control has to be fast enough to accommodate tasks such as neighbor discovery and resource allocation in an extremely dynamic environment likewise the network layer must be able to provide fast route discovery and path calculation while preserving the reliability of the information flow in the physical layer not only communications but the kinetic aspects of the uas give rise to unique requirements as the span of a uas network may vary from clusters to far and sparse distributions the transmission power of uav radios must be adjustable for efficient power consumption and sustained communications also since the geography and environment of the mission may vary rapidly channel availability in uas links is subject to change a potential solution is for the uas to be equipped with dynamic spectrum access dsa and adaptive radios to provide the required agility furthermore the conventional antenna arrangement on airborne platforms is such that changes in orientation and attitude of the aircraft affect the gain of onboard radios this problem is further intensified in unmanned aircraft as the elimination of risk to human pilot allows longer unconventional maneuvers these considerations clarify the demand for a fresh vantage point for analyzing the problem of security in uas networks the reliability of today s uavs need to be studied with models that adopt a more inclusive view of such systems and the impact of seemingly benign deficiencies on the overall vulnerability of uavs iii a natomy of a uav uavs are systems meaning that their operations are reliant on the interaction between physical and computational elements of the system consequently security of a uav is dependent not only on the computation and 
communications elements and protocols but also on the physical components of the system this heavy entanglement of traditionally independent components requires a thorough framework for analysis of security issues in uavs to be inclusive of the entire airframe one obstacle in developing such a framework is the variety of uav architectures and capabilities which makes the design of a generic model iff antenna satcom antenna nav optical and ir sensors multiband data link radar antenna antenna uhf antenna fig sensing and communication components of a uav difficult yet the similarity of fundamental requirements of such systems allows for generation of a high level system model for conventional types of uavs figure depicts a breakdown of components in a conventional uav most uavs contain multiple communication antennas including air to ground atg air to air ata satellite data link and navigation antennas along with a set of sensors the positioning and navigation of a uav is typically consisted of a global navigation satellite system gnss receiver for accurate positioning and an inertial measurement unit imu for relative positioning based on readings from kinetic sensors this subsystem can be further extended to include air traffic monitors such as and collision avoidance systems inside the fuselage one or more processors supervise the operation and navigation of the uav using the output of various radios and sensors for adjustment of electronic and mechanical parameters this process is performed by adaptive control mechanisms many of which are dependent on feedback loops each of the elements mentioned in this section may become the subject of malicious exploitation leading the uav into undesirable states and critical malfunctions iv overview of p otential attacks table i lists some of the uninvestigated attacks on uas networks categorized according to both network functionalities and factors the table emphasizes on the criticality of the security problem as the potential for vulnerability exists in every major component ranging from the outer fuselage and antennas to network layers and application stack this section provides an overview on the attacks listed in table i and presents preliminary ideas on potential mitigating approaches and areas of research a sensors and navigation absence of a human pilot from the airframe of uavs puts the burden of observing the environment on the set of sensors onboard the aircraft whether autonomous or remotely piloted sensors are the eyes and ears of the flight controller and provide the environmental measurements necessary for safe and successful completion of the mission however malicious exploitation of sensors in critical systems is widely neglected in vulnerability assessment of table i attacks on uas networks component attacks sensors visual navigation and spoofing physical layer adaptive radios deceptive attacks on spectrum sensing jamming antennas disruption and deception of direction of arrival estimator beamnullinduced jamming orientation by induction of defensive maneuvers link layer topology inference topological vulnerability of formation to adaptive jamming routing attacks network layer traffic analysis disruption of convergence air traffic control spoofing induced collisions fault handling manipulation of fault detection such systems an attacker may manipulate or misuse sensory input or functions to trigger or transfer malware misguide the processes dependent on such sensors or simply disable them to cause denial of service attacks and trigger 
undesired failsafe mechanisms for navigational measurements gnss and imu units are traditionally used in tandem to provide accurate positioning of the aircraft it is that gnss signals such as gps are highly susceptible to spoofing attacks the report in demonstrates that uavs that only rely on commercial gps receivers for positioning are vulnerable to relatively simple jamming and spoofing attacks which may lead to crash or capture of the uav by adversaries since the establishment of gps various countermeasures against gnss spoofing have been proposed ranging from exploitation of direction and polarization of the received gps signal for attack detection to beamforming and statistical signal processing methods for elimination of spoofing signals however the speed and spatial freedom of uavs render many of the basic assumptions and criteria of such techniques inapplicable in the authors propose the of variations in imu and gps readings for detection of spoofing attacks from anomalies in fused measurements while theoretically attractive practical deployment of this technique requires highly reliable imus and adaptive threshold control for an efficient performance which are economically undesirable for the small uavs industry such practical limitations in accuracy and implementation leave this detection technique ineffective to advanced spoofing attacks demonstrating the insufficiency of current civilian gnss technology for applications fusion of imu and gnss systems with other sensors such as video camera may lessen the possibility of spoofing yet navigation is also subject to attacks the simplest of which is blinding the camera by saturating its receptive sensors with high intensity laser beams a more sophisticated attack may aim for deception of the visual navigation system in smaller areas homogenizing or periodically modifying the texture of the terrain beneath a uav may cause miscalculations of movement and orientation investigating the effect of such attacks on the control loop of a fused positioning system may determine the feasibility of such attacks and potential mitigation techniques detection of attacks on the navigation subsystem is the basis of reactive countermeasures such as triggering of hovering or mechanisms however as the following section demonstrates mechanisms are also potential subjects to malicious manipulation robustness of the sensory and navigational subsystem against spoofing attacks may be further improved by implementation of proactive mechanisms through elimination of spoofing signals applicability of which to uavs is yet to be investigated b fault handling mechanisms even with the stringent reliability requirements of uavs mechanical and electronic subsystems of uavs remain prone to faults due to physical damage and unpredicted state transitions therefore critical uav systems must consider the possibility of faults and implement fault handling mechanisms to reduce the impact of such events on the system typical examples of fault handling mechanisms are entering a hovering pattern when temporary faults occur for persistent faults and in the event of fatal faults such as capture or crash in remotely operated systems fault handling mechanisms may be triggered automatically once a certain fault is detected this process adds yet another attack surface to uas networks as the fault detection mechanisms may be subject to manipulation for instance if a temporary disruption of communications triggers the hovering pattern of a uav an adversary can jam the link to bind the 
motion of the aircraft and thus simplify its kinetic destruction or physical capture. A more severe case is when sensory manipulation allows the induction of capture conditions on a tactical UAV, thereby triggering its auto-destruction mechanism.

C. Air traffic control (ATC) and collision avoidance

Integration of unmanned vehicles with national and international airspaces requires guarantees on the safety and reliability of UAV operations. One major consideration in the safety of all airborne operations is situation awareness and collision avoidance. Modern manned aircraft in the major civilian airspaces are equipped with secondary surveillance technologies such as Automatic Dependent Surveillance-Broadcast (ADS-B), which allow each aircraft to monitor the air traffic in its vicinity. This information, along with other available means of traffic monitoring, provides situation awareness to the traffic advisory and collision avoidance system (TCAS), which monitors the risk of collision with other aircraft and generates advisories on how to prevent collisions. With the growing interest in deployment of UAVs, implementation of similar technologies in UAS is crucial, and the recent literature contains several proposals for TCAS and ATC solutions for UAVs, many of which are based on adaptations of ADS-B and commercial TCAS protocols. From a security point of view, this approach suffers from several critical vulnerabilities, rendering it unfeasible for mission-critical UAS applications. Firstly, ADS-B is an insecure protocol by design: the lack of authentication and the unencrypted broadcast nature of this protocol make room for relatively simple attacks, ranging from eavesdropping to manipulation of air traffic data by jamming or injection of false data. Consequently, a TCAS system relying on ADS-B can produce erroneous results and advisories, leading to unwanted changes in the flight path or, in the worst scenario, collisions. Also, TCAS is shown to be susceptible to a flaw known as induced collisions: common implementations of TCAS are not equipped with prediction capabilities to foresee the effect of an advisory that they produce, and in dense traffic conditions certain scenarios may cause the TCAS to generate advisories that lead to a state where avoidance of collision is no longer possible. Hence, an adversary capable of manipulating the traffic data can intentionally orchestrate conditions leading to collisions. The authors of the cited work provide an example of this flaw for a three-airplane scenario, as illustrated in the figure below. In this scenario, two of the aircraft are initially on a collision path, so the TCAS in each generates a collision avoidance advisory, to descend and climb respectively. At a lower altitude the same situation holds for a second pair, causing one aircraft to climb, which puts it on a collision path with a third. Even though TCAS does not fail to generate new corrective advisories in both UAVs, the advisories are no longer practical, as there is not enough time before the collision to implement the new path.

[Figure: example of an induced collision in a three-airplane scenario; the aircraft detect one another and exchange TCAS advisories (descend/climb) at two altitudes.]
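The advisory logic in a real TCAS is considerably more involved, but the induced-collision flaw hinges on the fact that conflicts are detected and resolved pairwise from broadcast state data. The toy sketch below, with function names and the horizon/separation thresholds chosen by us purely for illustration, shows the kind of pairwise closest-point-of-approach check that underlies such advisories.

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time (s) and distance (m) of closest approach for two aircraft flying
    with constant velocity; p*, v* are 3-D position and velocity arrays."""
    dp, dv = p1 - p2, v1 - v2
    denom = np.dot(dv, dv)
    t_cpa = 0.0 if denom == 0 else max(0.0, -np.dot(dp, dv) / denom)
    d_cpa = np.linalg.norm(dp + t_cpa * dv)
    return t_cpa, d_cpa

def pairwise_conflicts(states, horizon_s=40.0, sep_m=150.0):
    """Flag every pair whose predicted miss distance violates the separation
    threshold within the look-ahead horizon; `states` is a list of (p, v)."""
    conflicts = []
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            t, d = closest_approach(*states[i], *states[j])
            if t <= horizon_s and d < sep_m:
                conflicts.append((i, j, t, d))
    return conflicts
```

Because each conflict is resolved independently of the others, a climb issued to resolve one pair's conflict can itself create a new conflict with a third aircraft that only becomes apparent after the maneuver begins, which is precisely the situation an adversary orchestrates in the scenario above.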
D. Physical layer

Typical UAVs require multiple radio interfaces to retain continuous connectivity over essential links to satellite relays, ground control stations, and other UAVs. This degree of complexity, along with the physical and mechanical characteristics of UAVs, widens the scope of potential vulnerabilities and enables multiple attacks that are specific to UAS networks. This section presents a discussion of some such attacks on the physical layer of UAV nodes.

1) Adaptive radios: As the operational environment of UAS networks is highly dynamic, sustained and reliable communications necessitate radios that are capable of adjusting to changes in propagation and link conditions. Depending on the operational requirements, this adaptability may apply to any of the physical-layer parameters, such as transmit power, frequency, modulation, and antenna configuration. The procedure responsible for controlling these parameters must essentially rely on environmental inputs, which can be manipulated by adversaries to force undesirable configurations. This issue is analogous to deceptive attacks on the spectrum-sensing process of cognitive radio networks, for which various mitigation techniques have been proposed based on anomaly detection and fusion of distributed measurements. However, the rapid variation of conditions in a UAS network may lead to situations where determining a baseline for anomaly detection is not practical. The same consideration also creates a need for rapid adjustments, which limits the acceptable amount of redundancy and overhead. Similarly, deployment of airborne nodes in hostile environments further reduces the feasibility of relying on collaboration between distributed sensors. Therefore, such countermeasures will not be sufficient for agile UAS radios, and novel solutions must be tailored to the unique requirements of airborne networks.

2) Antennas: The current trend in antenna selection for UAV radios favors omnidirectional antennas, defined by their relatively homogeneous reception and transmission in all directions of the horizontal or vertical planes. This feature simplifies communications between mobile nodes, as the homogeneity of gain eliminates the need to consider the direction of transmissions. On the other hand, the indiscriminate nature of omnidirectional antennas extends the attack surface for eavesdroppers and jammers, since they likewise do not need to tune towards the exact direction of the target radios to implement their attacks. A countermeasure against this class of attack is the utilization of directional antennas, which can only communicate in certain directions and are blind to others. Besides their higher security, other advantages of directional antennas include longer transmission ranges and spatial reuse, thus providing a higher network capacity. One downside of this approach is the inevitable escalation of overhead: maintaining directional communications in highly mobile networks is a complex and costly task, as it requires knowledge of the other nodes' positions as well as antennas capable of reconfiguring their beam patterns. To overcome the disadvantages of these two approaches, a midway solution combining the simplicity of omnidirectional radios with the spatial selectivity of directional antennas can be actualized in the form of beamforming antenna arrays. Such antennas are capable of detecting the direction of arrival (DoA) of individual signals; this measurement, along with other system parameters, is then used to electronically reconfigure the radiation pattern and directionality of the antenna array. Beamforming has been studied as a mitigation technique against jamming attacks, as it allows spatial filtering of the jammer's signals by adjusting the antenna pattern such that a null is placed towards the direction of the jammer. The accuracy and efficiency of this technique depend on correct detection of the jamming signal as well as the resolution of the beamformer's DoA estimations. An adversary
may attack the doa estimator by shaping its jamming signals to mimic waveforms of a nearby legitimate node thus avoiding detection or causing false detections another attack scenario exploits the process of beamnulling itself in an ad hoc uas network beamnulling must be implemented in a distributed fashion to allow targeted nodes to retain or regain connectivity with the network independently due to lack of coordination nulls created by one node towards a jammer may also null the direction of legitimate signals depending on the mobility model and formation of the network an adversary may deploy multiple mobile jammers with strategically controlled trajectories to manipulate the doa measurements and eventually cause the network to null more of its legitimate links than is necessary in certain conditions the adversary can maximize the efficiency of jamming attacks by persistently manipulating the distributed beamnulling mechanism in such a way that its solution converges towards a maximally disconnected state analytical studies into feasibility criteria of this attack may produce insights into possible countermeasures and mitigation techniques orientation as depicted in figure a conventional uav employs multiple fixed antennas on different sides each of which is dedicated to a certain application consider the atg antenna which is placed on the lower side of the uav as discussed previously if the uav performs a maneuver or ascends with a steep climb angle the atg antenna is no longer capable of communicating with the ground antenna and therefore the atg link is lost this issue can be exploited for jamming in uas networks that employ the spatial retreat as a mitigation technique by observing the reaction of the nodes to jamming attacks an adversary may infer their reformation strategy and adapt its attack such that the defensive reformation of certain nodes leads to the loss of some links due to the new orientation of antennas link layer and formation similar to generic multihop wireless networks the topology of a uas network is determined based on the location of uavs relative to each other uavs closer than a threshold can directly communicate with each other while those that are farther must utilize relay nodes to reach their destination knowledge of the topology of a network allows adversaries to optimize attacks by analyzing the structure of their target and determine the most vulnerable regions by identifying nodes whose disconnection incur the maximum loss of connectivity in the network even though the effect of topology on the resilience of the network is widely studied the proposed mitigation techniques fail to provide practical solutions for uas networks a class of such solutions are based on a security by obscurity approach suggesting the employment of covert communications between nodes to hide the topology of the network from adversaries besides the undesirable overhead of this approach in terms of decreased network throughput and increased processing costs it has been shown that the topology of such networks can be estimated with a high degree of accuracy via timing analysis attacks therefore hiding the topology may not serve as a reliable solution in mission critical scenarios an alternative mitigation technique is adaptive control of the topology in this approach detection of a jamming attack triggers a reformation process during which the nodes of a uas network change their positions to retain connectivity a fundamental assumption of this approach is the ability of the nodes to 
detect and localize attacks which may not always be practical a promising area of further investigation is the problem of minimizing the topological vulnerability to targeted jamming attacks development of and distributed formation control techniques that consider this optimization problem may lead to highly efficient techniques for ensuring dynamic resilience of uas networks a mitigation technique against topology inference attacks is randomization of transmission delays it is expected that introducing randomness in forwarding delays weakens the observed correlation between connected hops and therefore reduces the accuracy of timing analysis attacks however the high mobility of uas networks and the consequent requirement for minimal latency limit the maximum amount of delay permissible in such networks this constraint limits the randomness of the forwarding delays which may neutralize the effect of mitigation technique a potential alternative for delay randomization is transmission of decoy signals to perturb the adversary s correlation analysis this proposal may be extended by incorporating it in topology control such that the resultant formation is optimized for decoy transmissions in a way that spatial distribution of traffic in the network appears homogeneous to an outside observer thereby inducing an artificial correlation between all nodes in the network to the extent of authors knowledge the feasibility overhead and optimal implementation of this approach are yet to be analytically and experimentally studied network layer the impact of high mobility in uas networks is greatly accentuated in the network layer speed and frequency of changes in the topology of a uas network give rise to many challenges that are still active subjects of research yet studies on security of routing mechanisms tend to follow the tradition of equating uas networks with manets indeed the unique features of unmanned airborne networks generate a set of challenges in the network layer that do not match the criteria of conventional manets the highly dynamic nature of uas networks as well as stringent requirements on latency necessitate novel routing mechanisms capable of calculating paths in rapidly changing topologies a survey of the state of the art in this area is presented in the proposed methods may be prone to potential vulnerabilities and the demand for a detailed technical analysis and comparison of these proposals in terms of their security is yet to be fulfilled similar to the link layer the routing layer of uas networks is also vulnerable to traffic analysis attacks aiming to infer individual flows as well as pairs of connections various mitigation techniques against such attacks have been proposed many of which rely on traditional approaches such as mixing and decoy transmissions as such techniques require addition of redundancies and overhead to the uas networks a comprehensive feasibility analysis and optimal design of the corresponding defense strategies is vital but not yet available to the research community mobile routing in uas networks is a surface for attacks on convergence of the network as discussed the topology of unmanned airborne networks is subject to manipulation by adversarial actions such as exploitation of adaptive formation control and jamming attacks also many of the recently proposed routing mechanisms for airborne networks rely on global knowledge of the geographical positions of every node in the network which may also be prone to manipulation a sophisticated adversary may be 
able to design a strategic combination of topological perturbation and sensor manipulations to prevent or slow the convergence of routing in the network investigation of this attack in terms of feasibility as well as potential countermeasures may prove to be valuable for efficient protection of uas networks operating in hostile environments c onclusions the nature of uavs demand an extension to the scope of ordinary vulnerability analysis for such systems in addition to threats in the electronic and computational components a largely overlooked class of vulnerabilities is fostered by the interactions between the mechanical elements and the computational subsystems pondering on the list of critical attacks presented in this paper an alarming conclusion can be drawn serious threats still remain unmitigated not only in every networking component of uas communications but also in the interdependency of the network and other components including sensors and physical elements of uavs considering the seriousness of open issues in the aspects of uavs a successful move towards the age of mainstream unmanned aviation can not be envisioned without remedying the void of effective solutions for such critical challenges r eferences kim wampler goppert hwang and aldridge cyber attack vulnerabilities analysis for unmanned aerial vehicles infotech aerospace javaid sun devabhaktuni and alam cyber security threat analysis and modeling of an unmanned aerial vehicle system in homeland security hst ieee conference on technologies for pp ieee banerjee venkatasubramanian mukherjee and gupta ensuring safety security and sustainability of systems proceedings of the ieee vol no pp subramanian beyah et sensory channel threats to cyber physical systems a call in communications and network security cns ieee conference on pp ieee wesson and humphreys hacking drones scientific american vol no pp broumandan nielsen and lachapelle gps vulnerability to spoofing threats and a review of antispoofing techniques international journal of navigation and observation vol humphreys ledvina psiaki ohanlon and kintner jr assessing the spoofing threat development of a portable gps civilian spoofer in proceedings of the ion gnss international technical meeting of the satellite division vol hartmann and steup the vulnerability of uavs to cyber attacksan approach to the risk assessment in cyber conflict cycon international conference on pp ieee tang causal models for analysis of collisions phd thesis universitat de barcelona bhattacharjee sengupta and chatterjee vulnerabilities in cognitive radio networks a survey computer communications vol no pp bhunia behzadan regis and sengupta performance of adaptive beam nulling in multihop ad hoc networks under jamming ieee international symposium on cyberspace safety and security css new york behzadan and sengupta inference of topological structure and vulnerabilities for adaptive jamming against tactical ad hoc networks under review in elsevier journal of computer and system sciences zhu and distributed formation control via online adaptation in decision and control and european control conference ieee conference on pp ieee bekmezci sahingoz and temel flying networks fanets a survey ad hoc networks vol no pp kong hong and gerla an and routing scheme against anonymity threats in mobile ad hoc networks mobile computing ieee transactions on vol no pp
| 3 |
a construction of linear codes with two ziling henga b c qin yuea b c a department of mathematics nanjing university of aeronautics and astronautics nanjing pr china b state key laboratory of cryptology o box beijing pr china c state key laboratory of information security institute of information engineering chinese academy of sciences beijing pr china abstract jul linear codes with a few weights are very important in coding theory and have attracted a lot of attention in this paper we present a construction of linear codes from trace and norm functions over finite fields the weight distributions of the linear codes are determined in some cases based on gauss sums it is interesting that our construction can produce optimal or almost optimal codes furthermore we show that our codes can be used to construct secret sharing schemes with interesting access structures and strongly regular graphs with new parameters keywords linear codes secret sharing schemes strongly regular graphs gauss sums msc introduction let fq denote the finite field with q elements an n k d linear code c over fq is a subspace of fnq with minimum hamming distance an n k d code is called optimal if no n k d code exists let ai denote the number of codewords with hamming weight i in a code c with length the weight enumerator of c is defined by z an z n the sequence an is called the weight distribution of the code c is said to be if the number of nonzero aj j n in the sequence an equals the weight distribution is an interesting topic which was investigated in and many other papers in particular a survey of cyclic codes and their weight distributions were provided in weight distribution gives the minimum distance and the error correcting capability of a code in addition it contains important information on the computation of the probability of error detection and correction with respect to some error detection and correction algorithms recently ding et al proposed a very effective construction of linear codes in as follows let d dn fr where r is a power of q a linear code of length n over fq is defined by cd xdn x fr where x x xq xq denotes the trace function from fr to fq and r q s the set d is called the defining set of if the set d is well chosen the code c may have good parameters by using this the paper is supported by foundation of science and technology on information assurance laboratory no email addresses zilingheng ziling heng yueqin qin yue preprint submitted to journal of latex templates july construction and selecting proper defining sets many good codes were found in let f be a function over fr then this construction can be equivalently written as cd xf xf xf dn x fr let m be positive integers such that and gcd let trqmi be the trace function from fqmi to fq i let nqm be the norm function from fqm to fqmi then for x fqm nqm x mi m qmi mi qm x qmi i in this paper we present a construction of a linear code as cd xnqm xnqm dn x where the defining set d is given as d x nqm x a for a fq since the norm function nqm is surjective there exists an element c such that nqm c a for a if a then d x nqm x a x nqm cx x nqm x this implies that we only need to consider a we remark that this construction is a further generalization of that in when m in the authors determined a lower bound of the minimum hamming distance of cd and gave its weight distributions for a and a respectively the purpose of this paper is to determine the weight distribution of cd defined in equation in some cases our main mathematical tools used in this 
paper are gauss sums consequently we obtain four classes of linear codes with very flexible parameters examples given by us show that some codes are optimal or almost optimal as some applications our codes are used to construct secret sharing schemes with interesting access structures and strongly regular graphs with new parameters the following notations will be used in this paper canonical additive characters of fq respectively generators of multiplicative character groups of fq respectively g g g gauss sums over fq respectively primitive element of e e gcd l l gcd q gauss sums in this section we recall some basic results of gauss sums which are important tools in this paper let fq be a finite field with q elements where q is a power of a prime the canonical additive character of fq is defined as follows x where e p fq x denotes the primitive root of complex unity and is the trace function from fq to fp the orthogonal property of additive characters see is given by q if a x ax otherwise i let be a multiplicative character of for for some i q the trivial multiplicative character is defined by x for all x it is known from that all the which is isomorphic to the orthogonal property multiplicative characters form a multiplication group f q q of a multiplicative character see is given by q if x x otherwise q the gauss sum over fq is defined by g x x x q it is easy to see that g and g g if we have gauss sums can be viewed as the fourier coefficients in the fourier expansion of the restriction of to in terms of the multiplicative characters of fq x x g x x q in this paper gauss sum is an important tool to compute exponential sums in general the explicit determination of gauss sums is a difficult problem in some cases gauss sums are explicitly determined in in the following we state the gauss sums in the case lemma case gauss sums let be a multiplicative character of order n of assume that n and there exists a least positive integer j such that pj mod n let r for some integer then the gauss sums of order n over fr are given by if p g pj n r if p furthermore for s n the gauss sums g are given by s if n is even p and s g r otherwise pj n are odd the quadratic gauss sums are the following lemma theorem suppose that q pt and is the quadratic multiplicative character of fq where p is an odd prime then if p mod g t t q if p mod where exponential sums in this section we investigate two exponential sums which will be used to calculate the weight distribution of cd let be the canonical additive character of fq let be the canonical additive character of fqmi i respectively denote b x x y q qm and b x qm qm ybx zx z b x qm y q qm qm ybx zx b firstly we begin to compute the exponential sum b lemma let m be positive integers such that and gcd q l where e f b i ti gcd let f q q b qmi qe i for b we have q m q x g s g s s b g e s q q e q where s l j j l qm qm proof for let and then we have i and i this implies that m b x qx qm i qm i z y q m x qx y q z using the fourier expansion of additive characters see equation we have m b q qx x z q y x q g x q g q x z m q y x g g yb z m qx q j since mi we obtain ord q m where i therefore we have q m and m qx i m qx q m if otherwise b i such that where i assume that and for u let f q q and v q if then which is equivalent to q u q v mod q q this implies that q u q v mod q mi i therefore q u mod q and q v mod q it is known that gcd q q q gcd q e then we have u mod e q where qe qe and v mod qe qe denote u and v for substituting u v into equation we have q e hence e b q qx x 
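Before turning to the proofs, a quick numerical sanity check of the classical Gauss sum facts recalled above can be helpful. The sketch below is a minimal illustration over a prime field F_p (i.e., the t = 1 case of the quadratic Gauss sum lemma); the function names and the brute-force primitive-root search are ours, and the check only covers the modulus and quadratic-sign facts over F_p, not the semiprimitive evaluations used in the lemmas of this section.

```python
import cmath
import math

def primitive_root(p):
    """Smallest generator of the multiplicative group of F_p (p an odd prime)."""
    prime_factors = {q for q in range(2, p) if (p - 1) % q == 0
                     and all(q % r for r in range(2, int(q ** 0.5) + 1))}
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors):
            return g
    raise ValueError("no primitive root found; is p an odd prime?")

def gauss_sum(p, a):
    """G(psi_a, chi) = sum over x in F_p^* of psi_a(x) * chi(x), with the
    canonical additive character chi(x) = e^{2 pi i x / p} and the
    multiplicative character psi_a given by psi_a(g^k) = e^{2 pi i a k/(p-1)}."""
    g = primitive_root(p)
    total = 0
    for k in range(p - 1):
        x = pow(g, k, p)
        total += cmath.exp(2j * cmath.pi * a * k / (p - 1)) * \
                 cmath.exp(2j * cmath.pi * x / p)
    return total

p = 23
for a in range(1, p - 1):          # every nontrivial multiplicative character
    assert abs(abs(gauss_sum(p, a)) - math.sqrt(p)) < 1e-9
# quadratic character (a = (p-1)/2); since 23 = 3 (mod 4), G(eta) = i * sqrt(23)
print(gauss_sum(p, (p - 1) // 2))  # ~ 4.7958j up to rounding error
```

The assertions confirm that every nontrivial Gauss sum over F_p has absolute value sqrt(p), and the printed value matches the sign prescribed by the quadratic Gauss sum lemma for p congruent to 3 modulo 4.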
qm g g yb z z q s y e q qx x x qm g g b z z y m q s assume that where hence since gcd q l we have x y q denote s mod that b l x i x s e i q if s mod otherwise l b since z z for z fq we have q e let f q x q m q x g g b z z m m q q x q m q x g g b z e z m m q q x g g b g m q q q q e the proof is completed we remark that the fourier expansion of additive characters used in lemma is an effective technique in computing exponential sums it was also employed in to determine the weight distribution of cyclic codes by li and yue by lemma we know that the value distribution of b can be determined if the gauss sums are known in the following we mainly consider some special cases to give the value distribution of b lemma let l and other notations and hypothesises be the same as those of lemma then the value distribution of b b is the following if e then b q m q b q q if e then b qm qm q proof if l by lemma we have that g b e s q times q times and q m q x g s g s s b q q e where s q j j in the following we discuss the value distribution of the exponential sum b for e respectively assume that e it is clear that s then b q m q b q q assume that e then we have s q j j q hence q b q q q q m q x j j j g b g m m q q q x q m q j j j g g b q q m m m note that ord ord q now we give the value distribution of b in several cases if q is even by lemma we have g j g j q j q then b q x q m q j q b q q let b s q then we have q x j b q x js q if s mod q otherwise hence the value distribution of b is m q q b m q q times q times if q is odd and mod we have mod due to gcd q since and is even by lemma we have g j g j q j q for b s q b q x q m q j q b q q q x q m q js q q q for s mod q we have q x q m q js and b q m q q for s mod q we have q x js and b q m q q q q mod q one can see that for s qs s and s this implies that q x q m q js and b q m q q hence the value distribution of b is qm m q q b m q m q m q q times q times is odd if q is odd and mod we have mod due to gcd q in this case the value distribution of b can be obtained in a similar way we omit the details here the value distribution of b is given as m q q b m q q m m q q times q times note that the value distribution of b can be represented in a unified form for e the proof is completed lemma let l e other notations and hypothesises be the same as those of lemma then the value distribution of b b is given as follows qm m q q b m q m q q times times proof since l e by lemma we have that b q m q x g s g s s b g s b q q where s it is clear that is even and is odd hence by lemma b q m q g g b g q q q m q g b g g m m q q t q m q q b m m q q qm m q times q q qm m q q times m q q for l e the value distribution of b can t be given because the gauss sums of order q are unknown in general however for e and l we can easily obtain the value distributions of b because the cubic and quartic gauss sums are known we omit the details here in the following we begin to investigate the exponential sum b b lemma let m be positive integers such that denote e gcd let ti qmi qe i for b we have b q m q x g s g s s b q q e i i and f where s q j j q qm qm proof for let and then we have i and i this implies that b x m qx x m qx qm qm i i y q y q using the fourier expansion of additive characters see equation we have m b let ti qmi qe q q x qx q y x q x g g q q x x g g yb z m qx y q i from the proof of lemma we know that m qx q m if otherwise and if and only if and where q e and q e hence e b q x qx qm g g yb z q s y e q qx x x qm g g b z y m q s assume that where hence this implies that x y 
x i x s e i q q if e otherwise and x z i x s e i q since gcd the system x mod q q if mod q e otherwise e e mod q mod q is equivalent to mod q where q e denote s mod q q e then we have that b q m q x g g b q q for e the value distribution of b can be given as follows lemma let the notations be the same as those of lemma then the value distribution of b b is given as follows if e then b qm for all b if e then b qm qm q q times q times proof the proof is similar to that of lemma we omit the details here the weight distribution of cd in this section we give the weight distribution of cd defined in equation in some special cases the griesmer bound of linear codes is the following lemma griesmer bound for an n k d code over fq we have x i where denotes the smallest integer which is larger than or equal to x the case a in the following we determine the weight distribution of cd for a denote n x nqm x since the norm function qm nqm x x is an epimorphism of two multiplicative groups and the trace function fq is an epimorphism of two additive groups we have n ker nqm ker q m q q note that n when hence we always assume that in this section for b we denote n b x nqm x and bnqm x by the basic facts of additive characters we have that n b x x y bnqm x z nqm x x x ybx zx y qm qm y q qm qm x x x x ybx zx m x q x ybx qm zx qm y q qm qm x x qm x x b ybx zx q q note that the norm function nqm is an epimorphism hence x x qm zx qm x x zx q q q m q x x zx q q q qm m similarly x x qm ybx qm x x ybx q q q m q x x ybx q q q qm m from the discussions above we obtain that n b qm b q q q for any b the weight of a codeword c b bnqm bnqm dn equals wh c b n n b q q m q q q b q q q q by equations and hence by lemma the parameters of cd for e are q m q q q m q q m m q q q then cd is an optimal linear code with respect to the griesmer bound however any linear code is not new because it is equivalent to a concatenated version of a simplex code for e the weight distribution of cd is given in the following theorem let m be positive integers such that and denote gcd let cd be the linear code defined in equation for a if e and then cd is a m linear code with parameters and its weight enumerator is given by table i q table i weight distribution of the code in theorem weight frequency q q m q q q q q q q m q q q q q q proof for e the weight distributions of cd can be obtained by lemma and equation it is easy to verify that wh cb for all b if then the dimension equals example let m if q then cd in theorem is an optimal linear code according to the griesmer bound and has weight enumerator z if q then cd in theorem is an almost optimal linear code according to the griesmer bound and has weight enumerator example let m and q then cd in theorem is a linear code its weight enumerator is given by the case a in the following we determine the weight distribution of cd for a denote n x nqm x it is clear that n ker nqm ker q q m q for b we denote n b x nqm x and bnqm x by the basic facts of additive characters we have that n b x x y bnqm x z nqm x z x x ybx zx z y qm qm y q m qm qm x x x x z ybx zx x x ybx qm zx qm z y q qm qm qm x x x x b qm z ybx zx q q q note that x x qm x x zx z q qm zx z q qm x q x qm z zx q q m from section above we have x x qm ybx q qm q q m q from the discussions above we obtain that n b qm b q q q q for any b the weight of a codeword c b bnqm bnqm dn equals wh c b n n b q q m q b m m q q q q by equations and hence by lemma the parameters of cd for e l is q q m q q q m m q q q then cd is an optimal linear code with 
respect to the griesmer bound and is not new as mentioned above for e and l the weight distribution of cd is given in the following theorem let m be positive integers such that and gcd q l where gcd let cd be the linear code defined in equation for a if e then cd is a linear code with parameters q qm and its weight enumerator is given by table ii table ii weight distribution of the code in theorem weight frequency q m q q q q q m q q q q q q q q q proof for e the weight distributions of cd can be obtained by lemma and equation note that wh cb for all b then the dimension equals example let m if q then cd in theorem is an almost optimal linear code according to the griesmer bound and has weight enumerator z if q then cd in theorem is an nearly optimal linear code while the corresponding optimal linear codes have parameters example let m and q then cd in theorem is a linear code its weight enumerator is given by theorem let m be positive integers such that and gcd q l where gcd let cd be the linear code defined in equation for a if e then cd is a linear code with parameters q q m q q q q m m m q q q q and its weight distribution is given in table iii table iii weight distribution of the code in theorem weight frequency q m q q q q q q m q q q q q proof the proof is completed by lemma and equation example let m and q then cd in theorem is a linear code its weight enumerator is given by its dual is a code with parameters shortened linear codes of cd it is observed that the weights of the code in theorems have a common divisor q this indicates that the code cd may be punctured into a shorter one assume that a note that x d implies that ux d for any u fq hence the defining set of cd in equation can be expressed as d uv u and v where di for every pair of distinct elements di dj in then we obtain a shortened linear code of cd by theorem we directly obtain the following result corollary let m be positive integers such that and denote gcd let be the linear code and its defining set is given in equation for a if e and m and its weight then is a linear code with parameters q q enumerator is given by table iv table iv weight distribution of the code in corollary weight frequency q q q q m m q q q q q m q q q m q q q example let m if q then in corollary is an optimal linear code according to the griesmer bound and has weight enumerator its dual has parameters which is optimal according to applications in this section we apply our linear codes to construct secret sharing schemes and strongly regular graphs we denote by c the dual code of a code secret sharing schemes from linear codes secret sharing schemes were introduced by shamir and blakley for the first time in secret sharing schemes are used in banking systems cryptographic protocols electronic voting systems and the control of nuclear weapons it was shown in that any linear code over fq can be employed to construct secret sharing schemes in order to describe the secret sharing scheme of a linear code see we need to introduce the covering problem of linear codes the support of a vector c fnq is defined as i n ci a codeword covers a codeword if the support of contains that of a minimal codeword of a linear code c is a nonzero codeword that does not cover any other nonzero codeword of the covering problem of a linear code is to determine all the minimal codewords of from theorem we know that secret sharing scheme with interesting access structure can be derived from c provided that each nonzero codeword of a linear code c is minimal if the weights of 
a linear code c are close enough to each other then all nonzero codewords of c are minimal as described as follows lemma let wmin and wmax denote the minimum and maximum nonzero hamming weights of a linear code c respectively if wmin q then every nonzero codeword of c is minimal for the codes in theorem and corollary we have q q q wmin m wmax q q q q if mod and q q q wmin wmax q q q q if mod for the code in theorem we have wmin q q wmax q q q if mod and q q wmin wmax q q q if mod for the code in theorem we have wmin q q wmax q q q if from the discussions above the linear codes obtained in this paper can be used to construct secret sharing schemes with interesting access structures using the framework in strongly regular graphs from linear codes a connected graph with n vertices is called a strongly regular graph with parameters n k if it is regular of valency k and the number of vertices joined to two given vertices is or according as the two given vertices are adjacent or the theory of strongly regular graphs was introduced by bose in for the first time a code c is said to be projective if the minimum distance of its dual code c is at least the following lemma gives a connection between projective linear codes and strongly regular graphs lemma if c is a projective n k linear code over fq with two nonzero weights then it is equivalent to a strongly regular graph with the following parameters n qk k n q k q kq q q qk due to lemma new projective linear codes yield new strongly regular graphs examples in section show that our codes are not always projective in particular we find two classes of projective codes in the following lemma let m and other notations be the same as those of theorem then the linear m code cd in theorem is a projective q m linear code with weight enumerator m q m qm z m q m m q q q m qm z proof the weight enumerator can be directly obtained by theorem we now prove that cd is projective let ai bi denote the numbers of codewords with hamming weight i in cd and cd respectively denote m m m m q m q qm q q m q m q by the first three pless power moments see we have a q m n q q a a n q n q b q n q q note that n q qm solving the above system we have hence the minimum distance of cd is at least the proof is completed lemma let m and other notations be the same as those of corollary then the m linear code in corollary is a projective m linear code with weight enumerator m m m q q q m q m z z m q proof the weight enumerator can be directly obtained by corollary we now prove that is projective let ai bi denote the numbers of codewords with hamming weight i in and cd respectively denote m m m m q q qm q q m q q by the first three pless power moments see we have a q m n q q a a n q n q b q n q q note that n qm solving the above system we have hence the minimum distance of cd is at least the proof is completed lemmas and yield the following theorem theorem let gcd m q and then there exists a strongly regular graph with the following parameters qm q q m k m m q q q q m m m m q q q q n the following theorem can be directly obtained by lemmas and theorem let and m then there exists a strongly regular graph with the following parameters n k qm qm m m q m q q q q q m m m m q q we remark that the parameters of the strongly regular graphs in theorems and are probably new after comparing with known ones in the literature concluding remarks in this paper we presented a construction of linear codes and determined the weight distributions in some cases based on gauss sums four classes of linear 
codes were obtained note that these linear codes have very flexible parameters and are probably new after comparing with known linear codes in the literature see for some known linear codes it is interesting that our construction can produce optimal or almost optimal codes what s more our codes can be used to construct secret sharing schemes with interesting access structures and strongly regular graphs acknowledgments the authors are very grateful to the reviewers and the editor for their valuable comments that improved the quality of this paper special thanks go to one of the reviewers for pointing out some knowledge of linear codes references references ashikhmin barg minimal vectors in linear codes ieee trans inf theory anderson ding helleseth and how to build robust shared control systems des codes cryptogr blakley safeguarding cryptographic keys proc nat comput conf bose strongly regular graphs partial geometries and partially balanced designs pacific j math baumert and mceliece weights of irreducible cyclic codes inf contr berndt evans williams gauss and jacobi sums wiley and sons company new york calderbank kantor the geometry of codes bull london math soc carlet ding and yuan linear codes from perfect nonlinear mappings and their secret sharing schemes ieee trans inf theory ding linear codes from some ieee trans inf theory ding wang a coding theory construction of new systematic authentication codes theoretical computer science ding li li and zhou cyclic codes and their weight distributions discrete math ding ding a class of and codes and their applications in secret sharing ieee trans inf theory de clerk delanote codes partial geometries and steiner systems des codes cryptogr delsarte weights of linear codes and strongly regular normed spaces discrete math grassl bounds on the parameters of various types of codes avaliable at http heng yue a class of binary linear codes with at most three weights ieee commun letters heng yue two classes of linear codes finite fields appli heng yue evaluation of the hamming weights of a class of linear codes based on gauss sums des codes cryptogr huffman pless fundamentals of codes cambridge cambridge univ press codes for error detection singapore world scientific li yue a class of cyclic codes from two distinct finite fields finite fields appli li yue and li hamming weights of the duals of cyclic codes with two zeros ieee trans inform theory lidl niederreiter finite fields cambridge univ press cambridge macwilliams and j sloane the theory of error correcting codes ii amsterdam the netherlands shamir how to share a secret commun assoc comp mach xu cao xu two classes of bent functions and linear codes with three or four weights cryptogr commun yuan ding secret sharing schemes from three classes of linear codes ieee trans inf theory yang yao complete weight enumerators of a family of linear codes des codes cryptogr zhou li fan et linear codes with two or three weights from quadratic bent functions des codes cryptogr zhou c ding a class of cyclic codes finite fields appli
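To make the weight-distribution and minimality machinery above concrete, the following minimal sketch (Python, assuming numpy is available) enumerates a small binary code by brute force and tests the Ashikhmin-Barg sufficient condition w_min / w_max > (q-1)/q used in the secret-sharing discussion. The [7,3] binary simplex code used here is an illustrative stand-in, not one of the codes constructed in the paper; for the actual constructions the weights come from the Gauss-sum evaluations in the theorems above.

```python
import itertools
import numpy as np

# Generator matrix of the binary [7, 3] simplex code: its columns are the seven
# nonzero vectors of F_2^3, so every nonzero codeword has weight 2^(k-1) = 4.
G = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1]])
k, n = G.shape
q = 2

# Brute-force weight distribution: encode every message and tally Hamming weights.
weights = {}
for msg in itertools.product(range(q), repeat=k):
    codeword = np.mod(np.array(msg) @ G, q)
    w = int(codeword.sum())
    weights[w] = weights.get(w, 0) + 1
print("weight distribution:", dict(sorted(weights.items())))

# Sufficient condition for all nonzero codewords to be minimal:
# w_min / w_max > (q - 1) / q over the nonzero weights.
nonzero = [w for w in weights if w > 0]
w_min, w_max = min(nonzero), max(nonzero)
print("all nonzero codewords minimal:", w_min / w_max > (q - 1) / q)
```

Since every nonzero codeword of the simplex code has weight 4, the printed condition holds (4/4 > 1/2), so all nonzero codewords are minimal and the code supports a secret sharing scheme of the kind described above.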
| 7 |
an efficient counting method for the colored triad census jeffrey lienerta b c laura koehlya felix christopher steven marcuma c feb a national human genome research institute national institutes of health b business school university of oxford c corresponding author abstract the triad census is an important approach to understand local structure in network science providing comprehensive assessments of the observed relational configurations between triples of actors in a network however researchers are often interested in combinations of relational and categorical nodal attributes in this case it is desirable to account for the label or color of the nodes in the triad census in this paper we describe an efficient algorithm for constructing the colored triad census based in part on existing methods for the classic triad census we evaluate the performance of the algorithm using empirical and simulated data for both undirected and directed graphs the results of the simulation demonstrate that the proposed algorithm reduces computational time by approximately over the approach we also apply the colored triad census to the zachary karate club network dataset we simultaneously show the efficiency of the algorithm and a way to conduct a statistical test on the census by forming a null distribution from realizations of a conditioned graph and comparing the observed colored triad counts to the expected from this we demonstrate the method s utility in our discussion of results about homophily heterophily and bridging simultaneously gained via the colored triad census in sum the proposed algorithm for the colored triad census brings novel utility to social network analysis in an efficient package keywords triad census labeled graphs simulation introduction the triad census is an important approach towards understanding local network structure first presented the isomorphism classes of structurally unique triads preprint submitted to xxx february possible in a directed network to conduct a triad census one simply counts each occurrence of these structures without respect to the labeling of the nodes here we use node label color characteristic and attribute interchangeably this is useful insofar as specific triads or combinations thereof may relate to underlying social processes giving rise to an observed network for example bridges triads with one null dyad and two dyads may be important in navigating social networks and certain triads may be more or less favorable based on structural balance theory the is balanced but the is not see figure moreover a variant of the triad census motif analysis investigates the statistics of various triad configurations motifs and has found wide application in biology also important to network structure are nodal characteristics and how they relate to tie formation or dissolution this has been the subject of research on homophily individuals having similar attributes with those to whom they are connected however homophily is an observed phenomenon not a process the processes giving rise to homophily are varied often confound the relationship between networks and outcomes and are difficult to tease apart methodological advances such as stochastic models can disentangle these effects to some some extent other analyses have attempted to disentangle the processes leading to homophily from structural processes such as triadic closure additionally the coloring of nodes in a network has been an important question for many graph theorists and indeed represents a major topic in 
this field although nodal characteristics and the triad census are important they have rarely been examined fully in conjunction yet there are a few cases where specific colored triads have been studied for example study brokerage based on triad structure and group membership simultaneously this same approach has been used to study brokerage in dynamic networks as well a study by examined specific colored triads based on generational membership within families in this work the authors showed that ties were observed in different quantities than expected based on the underlying null model none of the past research evaluated the full census of colored triads rather researchers have focused instead on specific colored triads that were a priori expected to be relevant to the processes at hand as a result these foundational works were not exhaustive with respect to all alternatives in other words previous research examining a subset of colored triads likely had an amount of false negatives due to not examining every colored triad this could be addressed by censusing the colored triads the examination of node characteristics together with local structure is important as it provides opportunity to simultaneously study the occurrence of triadic structure nodal attributes and the interactions between them for instance certain colored triads may be impermissible such as between strict heterosexuals in sexual contact networks impermissible triads would be categorized the same as those that were not observed due to chance in a triad census potentially missing important social processes or constraints at play in this type of network only by incorporating node coloring into the triad census can this pattern be fully elucidated based on this methodological gap in the literature we develop a method to census the colored triads for any binary network with arbitrary number of colors due to the large numbers of unique isomorphism classes as the number of colors increases this method requires computational efficiency in addition to mathematical accuracy as well one is often interested in forming a null distribution with which to compare observed colored triad counts if the null distribution can not be analytically solved one would likely census the colored triads of many simulated networks further increasing the need for the algorithm to be computationally efficient current efficient methods for the triad census exploit the sparseness of networks and scale as the number of edges increases the time to run the algorithm is faster than the number of edges squared however methods that exploit network sparseness by inferring the number of null triads do not work in the colored case because they do not explicitly interrogate every triad and there are variations within the null triads due to the coloring therefore we extend the methodology of which is based on matrix algebra and interrogates every triad his method scales with the number of nodes this paper presents the colored triad census and its computational complexity shows that this approach can be used on large networks tested for up to nodes with up to colors in relatively efficient time and uses the method many times to create null distributions of colored triad censuses to form the basis of conditional uniform graph tests we illustrate the benefits of an analysis incorporating the colored triad census using a dataset zachary s karate club algorithm since the original appearance of the triad census in a number of papers have explored how to compute the triad 
census of a network in an efficient manner although methods exist for calculating the triad census we use the quadratic algorithm presented by here this is because the more efficient methods avoid interrogating null triads directly by taking advantage of the sparseness of graphs the subsequent large number of null triads and the known number of total triads instead they interrogate all triads with at least one edge and then subtract that count from the total number of triads in the network to arrive at the number of null triads this is insufficient in the colored triad census as there are null triads and their number can not be algebraically determined moody s algorithm does not employ this limiting shortcut and we therefore use it as a basis for our colored triad census algorithm additionally because many networks are sparse we can leverage computational techniques for increasing the efficiency of sparse matrix operations further reducing the computational complexity of our method showed that the count of each of the triad isomorphism classes could be derived by using matrix algebra on the adjacency matrix of the graph and its derivatives to review let a be the adjacency matrix of a network and aij when a tie exists from node i to node j let e be the symmetrized matrix a formed by making any edge in a reciprocal via eij max aij aji the complement of e is formed by subtracting the complete network adjacency matrix from e so that eij if and only if there is neither a tie from i to j nor a tie from j to i next we have m the mutual matrix of a and is made by removing any asymmetric edges from a or mij mji aij aji finally c is the matrix of only asymmetric edges and is calculated by c a m therefore cij aij aji based on these matrices moody demonstrates how to calculate the number of each of the isomorphism classes for the case of unlabeled graphs or equivalently for a graph consisting of nodes of the same single color generally this was done by multiplying either through or multiplication the three matrices corresponding to the relevant edges in the triad of interest there were two triads and that were not directly amenable to this process and were calculated via addition and subtraction of other triad types respectively to extend this work to the case of multiple colors we introduce the and matrices k r and k r respectively where r is the focal color of matrix here the matrix is the transpose of the matrix the matrix is calculated by evaluating the color of the nodes such that rows indexing nodes of the focal color are composed in the following way r if r i r if r i r where r i is a function returning the color of node i as above the matrix is the transpose of the matrix in eq our algorithm works by using the and matrices to evaluate and switch on edges that have nodes of the focal colors at the ends or tails of edges in the adjacency matrix a of the network we adapt the triad census nomenclature of by appending the colors after the name of the triad the colors are ordered from the top node proceeding clockwise in figure we have arbitrarily adapted the orientation of the triads from the triad census figure in for computational reasons the orientation is important here because triads with the same orientation may no longer be isomorphic when color is introduced figure makes it possible to count unambiguously and name only unique colored triads therefore is the triad consisting of symmetric dyad and null dyads where the top node is of color the node is of color and the node is of color this is 
distinct from the triad because the coloring of the nodes is not identical from the previous triad following this the general formula for an arbitrary triad t with an arbitrary coloring triplet is t t r k h t k k h t k k h t k in the above refers to multiplication and tr is the trace function for an arbitrary triad t has a color triplet h t i j is a function returning the matrix specific to the type of edge between nodes i and j in triad t for example in a triad the first edge from the top node going clockwise is a symmetric edge from node one to node two figure h in this case would be the matrix e for the symmetric matrix and the sandwiching color matrices would turn the proper edges on and off if nodes one and two were of the specified colors if the edge is an asymmetric one and the direction of the edge in figure is then c is used instead of c to force the edge to go in the proper direction at this point there are redundant triads due to certain colored triads being isomorphic for instance the is isomorphic with and and would be these are removed by checking for isomorphisms based on matrix row and column permutations of the triad if two colored matrices are identical after such row column permutations then they are isomorphic and one is removed we arbitrarily decide to discard the triad whose coloring triplet name comes second alphanumerically it should be noted that removing in this way is computationally expensive particularly as the number of colors and nodes grows large we therefore shorten this process by performing it once for to colors and storing the unique isomorphism classes this leaves only unique isomorphism classes of colored triads which can then be accessed in linear time the number of unique isomophism classes for a given number of colors can be shown for each of the ismorphism classes in the triad census the classes separate into four types of colored triads depending on how many structurallydistinct positions there are in the triad the two ends of the edge in a triad are not from one another but are distinct from the node with no edges the calculation for the number of each isomorphism class for arbitrary number of colors k is shown in table each combinatoric term in each row together with their respective leading permutation coefficients counts the number of colored triads when there are three two or one unique color s respectively for example in a network with three colors the and classes have only one accessible permutation when there are three colors present in the triad six ways when there are two colors and one way when there is one color in the triad isomorphism classes of colored k k k and k and and table expression for the number of isomorphism classes within a triad class k is the number of colors if these numbers are summed over the isomorphism classes the total number of colored isomorphism classes of triads for k colors is returned similarly the same can be done for undirected triads solely summing over the triads observed in the undirected case table reports the total number of colored triads for undirected and directed networks over a range of clearly the number of isomorphism classes grows quite quickly as k increases the algorithm implemented as an r package is publicly available and is linked to this paper via github https algorithmic performance theoretically if basic matrix multiplication is used this algorithm runs with computational complexity o n it scales with the number of nodes squared n because of the matrix multiplication involved in the 
algorithm the scaling with the number of colors cubed comes from the number of distinct colored triads number of colors number of directed colored triads number of undirected colored triads table the number of colored triad isomorphism classes for directed and undirected networks for k ranging from to the algorithm needs to evaluate by taking advantage of methods for matrix multiplication using sparse matrices as appropriate due to the sparse nature of most social networks this complexity is reduced to something closer to o n log n to test the efficiency of the algorithm we apply it to networks ranging in size from n to n with the number of colors ranging from k to k all holding the average density constant at by creating graphs with those parameters the runtime of the algorithm with these parameters can be seen in figure in general increasing k results in constant increases in log runtime which is what we expect based on the theoretical computational complexity as expected we also observe a super linear increase in log runtime as n increases although it is super linear it is still well below the linear curve that would exist if we used matrix multiplication not optimized for sparse matrices finally we observe changes in the and decreases in runtime going from to nodes this is also due to the computational time involved in initializing the sparse matrices and storing and operating on sparse matrices and as such is not unexpected to be perfectly optimized therefore the algorithm would use standard matrix multiplication for small networks and switch to sparse methods for larger networks however the gains would be minimal generally under seconds and would require additional logical steps to check for network size further minimizing the gain we therefore use sparse matrix methods for all network sizes empirical use and example to show the empirical value of this algorithm we use the zachary karate club social network this is a historical network that describes the social relationships between members of a university karate club ties exist between members if they overlapped in at least one of eight contexts representing undirected relations these relations varied in terms of likely strength of the association likely at the weak end of the spectrum is being enrolled in the same class at the university while likely at the strong end is being a at the studio additionally three ties are specific to activities with a instructor member factions were identified as a node attribute taking one of five mutually exclusive values strongly associated with the president weakly associated with the president neutral weakly associated with the instructor or strongly associated with the parttime instructor these are labeled zs zw n hw and hs respectively these labels can be placed on an ordinal scale from zs to hs to quantify members direction and strength of alignment this undirected network with five colors represents a case that is rich in the number of colored triads for detailed conclusions to be drawn using the proposed algorithm which is general to both undirected and directed networks we initially ran the colored triad census on the social network using the faction as the nodal attribute this gave our empirical observed colored triad census to determine whether these triads were observed more or less often than expected by chance we construct a null model as the choice of null model can have important ramifications for the null distribution of triads we chose a model where edge formation is a function of 
the probability of ties between nodes of specific attributes the null model is a conditioned uniform random graph distribution based on probabilities of edges between nodes of particular color combinations this matrix comprises empirical probabilities of ties between groups with the diagonal representing tie probabilities this means that significantly or colored triads are observed as such due to network effects beyond homophily and heterophily networks are then generated from this matrix via a bernoulli random graph process la here then this null model therefore conditions on graph size the distribution of node factions and the probability of ties within and between factions by generating networks from the null model we can observe whether colored triad counts deviate from that expected based on the marginal distribution of faction mixing because we condition on the above parameters if we observe statistical deviations in our colored triad census it indicates that the structure of the network is dependent on parameters other than those on which we conditioned moreover for any triad the expected number and variance can be calculated assuming each tie follows a binomial distribution which is a reasonable assumption for most binary social network data the observed number can then be compared to these numerical results and a extracted from an exact binomial test this equates to the following probability expectation and variance for an example colored triad p t aij i r j p aij i r j aij i r j l t e t p t y r s t r v t e t p t the probability of t p t in equation is based on the of the three colors r involved in the triad t as is standard for the approach this continues to assume that all edges in the graph are independent for the expected value of a specific triad we multiply the probability of a single one of those triads by the total number of colored triplets that exist in the graph in equation the expectation of the triad l t returns the number of unique colors in p r t and is the number of nodes of color r in the graph also we take the nodes one two or three at a time depending on how many times that color repeats in t represented by s t r this expectation therefore follows a binomial distribution and it s variance follows accordingly in equation however to show that this method also works for null distributions that are not analytically solvable we construct a null distribution based on simulated draws from the null model as the number of trials increases the simulated null distribution of the colored triad census should asymptotically approach the analytical solution shown above for each of trials we draw random networks from the null distribution and run the triad census on all these networks comparing our observed count to the null distribution then allows us to get an approximate for a conditional uniform graph test and test the or of each colored triad we now turn to these results results figure is a heatmap of the approximate associated with each binomial exact test against the null for each triad clustered by the triad and the colored triplet as returned by the proposed algorithm we use a clustering algorithm to group color triplets with similar profiles across the types of triads this assists with identifying trends across different colored triads leading to conclusions that would likely be missed if all the colored triads were individually examined we find particular importance in three branch cutpoints in the clustering algorithm on the color triplets the first branch in the 
clustering algorithm a in figure separates four color triplets comprising colored triads with a pattern of and triads and and triads these results show that these color triplets are those that are less clustered than expected by chance the color triplets all contain nodes of two factions with the first two nodes being hs that is those strongly aligned with the instructor this indicates that those who are so aligned are likely to form ties to one another but not to members of other factions the only exception in this group is that two hs nodes are more likely to form a tie from one of the hs members to a hw member but even in this case the complete triad is still observed less than expected by chance this particular result is perhaps unsurprising since hs and hw members are close in alignment more so than with those aligning with the president therefore given the tendency towards homophily they are likely to overlap though less strongly than members of the same faction hence the figure the isomorphism classes of triads and their orientation used here with respect to the color numbering when colors are added to these triads they are labeled starting from the top node and proceeding clockwise algorithmic runtime log running time colors colors colors colors colors colors colors colors log nodes figure runtime of the algorithm on networks ranging from size to nodes in orders of magnitude and from one to ten colors these runtimes were generated using a pc running windows with an intel ghz chip and of ram count a b value c color key and histogram h szszs zsn n zszszs zsn zs zsh w zs zszw zs zsn h w h sh sh s n h sh w h szw n h szw h w h sn n h w zszs zszw h w h szw zw zszw n h sn h w n zszs h sh w zs zsh sn zszw zw n h sh s zsh sh s zsh sh w zw zszs n h szs h w h sh s h w h szs zsh szw zsh szs zw h szs h sn zs zw h sh s h sh sn h sh szs h sh szw h sh sh w h w h sh w zw h szw n h sn h sh w h w h szw zs zsh w h w zwzwn zwzwzw zwzwhw zwnn zwnhw zwhwhw nnn nnhw nhwhw hwhwhw nzwzw nzwn nzwhw nzwzs hwzwzw hwzwn hwzwhw hwzwzs hwnn hwnhw hwnzs zwnzs zwzwzs zwhwzs nnzs nhwzs hwhwzs zw h sh w zw h sn n h szw h w h szw h w h sn figure heatmap of colored triads and their corresponding of how often they were observed in the empirical networks relative to the null distribution the columns separate triads based on the man configuration and the rows separate triads based on the triplet of colors standard clustering algorithms were used to create the dendrograms white space indicates redundant isomorphism classes gray boxes are either those with triads observed in the network or in any of the networks of the null distribution and therefore have an undefined pseudo or those with a pseudo of the three labels correspond to three breakpoints in the clustering that separate meaningful groups a is a group of four color triplets exhibiting homophily between hs nodes b is a group of colored triplets exhibiting low clustering between heterogeneous nodes c is a group of colored triplets that show potential significant amounts of bridging the second branching point in the clustering b in figure separates the group of color triplets that are for the triad for the and triads and observed about as much as expected for the triads all the triplets in question have nodes of different factions in the first and second position because the edge in the triad is between the first and second node in the triplet figure this means that these are all triplets where the first edge is less likely than expected by chance and the lack of 
formation of the first edge subsequently hampers the formation of the edge between the second and third nodes in the triplet triad the first two nodes of these triplets are often out of two factions at least a distance of two away n and hs indicating members of a faction are not likely to overlap with members who are too disparate from their faction put another way this pattern of triads shows a lack of faction heterophily the third branch point unlabeled is primarily singling out the group of color triplets that were not observed in the network and we can not draw conclusions about their prevalence the fourth branch point c in figure however distinguishes a group of five triplets that are for the triad and for the triad this means that the edge between the first two nodes is less likely than expected by chance but once that edge does occur the second edge occurs more often than expected by chance all these triplets begin with a zs member and the triad in this case is effectively a bridging tie between it and another interestingly the bridging node is anything other than an hs whom are primarily consigned to this role in branch a as discussed above the third node was another zs member in four of five triplets this indicates that zs members of the karate club did not often overlap members of other factions but when they did provided it was not with an hs that second person also often overlapped with another zs although the above examples show homophily and bridging analyzing the full colored triad census allows us to draw further conclusions by looking at other colored triads in particular the homophily has mostly been a story of the hs nodes and the bridging primarily about the zs nodes the triad of both of these factions when comprising three nodes of the same faction are observed more often than expected by chance in both cases which has different implications on the results for the hs nodes homophily is strengthened as not only do hs nodes not often overlap with members of other factions they also very strongly overlap with one another this may partially be an artifact of the types of overlap as stated before three of the overlap activities involve direct participation in the instructor s studio but there are no corresponding groups for the president this means that those who are hs or hw may have more opportunity to overlap with one another due solely to the structure of the data on the other hand the triplet of all zs members also has an triad although there are other triads that seem to indicate bridging between zs members c in figure given that zs members are also densely connected to the practical effect of these potential bridging ties is reduced observing this joint effect of homophily and bridging ties was possible only through the complete colored triad census neither a standard triad census nor a brokerage analysis would have revealed the intricacies of these results in sum it is clear from these results that the colored triad census allows one to examine multiple trends simultaneously that are often done in isolated analyses including homophily heterophily and brokerage importantly it also allows for generalizations based on the clustering of various triads or color triplets as well as specific results based on individual triads in this manner the colored triad census can yield results on multiple structural levels simultaneously all while examining local structure nodal attributes and their is net of all alternatives involving mixtures of node coloring and triadic 
configurations limitations there are some limitations to this method first it is only computationally efficient relative to existing methods including brute force counting networks of nodes or more will take over a day to run using the proposed algorithm for the colored triad census however this is an easily paralellizable process by partitioning the separate algebraic steps for example and so the real time necessary to run the analysis can be greatly reduced by taking advantage of this feature the time needed for the parallelized colored triad census is approximately inversely proportional to the number of computational cores used in the calculation plus some overhead second the complete explication of all colored triads has both benefits and potential pitfalls examining all of the triads simultaneously eliminates the possibility of missing interesting results because a specific colored triad was excluded however the sheer number of colored triads means that making complete sense of results can be difficult due to information overload even if the results are carefully examined for all colored triads it is conceivable that one might miss an important result out of the colored triads in a directed network no matter how meticulous the examiner s eye however use of standard clustering algorithms and heatmaps as here may help to ease interpretation of the results at both a coarse general groups of triads or individual colored triads perspective conclusions in this paper we have extended the matrix algebra methods of to calculate the colored triad census for any network directed or undirected with an arbitrary number of colors in a relatively computationally efficient manner we have shown a number of mathematical results regarding the colored triad census including a generalized equation for an arbitrary colored triad the number of isomorphism classes for arbitrary numbers of colors and the expectation and variances for colored triads we analyzed an empirical social network using our algorithm and calculated approximate for each colored triad based on an analytic exact binomial test for less complex null distributions or approximately through simulation for more complex null distributions we have also shown the type of conclusions that can be drawn from these results observing results that would not be feasible with many other currently available methods one additional benefit of this method is that it can be directly used as a counting tool for sufficient statistics in network inference models such as exponential random graphs ergm the colored triad census essentially allows one to simultaneously evaluate the effect of local structure and node attribute on network structure in an ergm building off previous work where researchers explicated the ergms capacity for including the triad census we believe that the colored triad census is a useful technique with an efficient implementation that can be in social networks research showing the continued importance of the triad census even in this era of stochastic models for complex networks acknowledgements references appendix a variable and functional definitions variable or function notation a e m c kr r i h t i j t l t s t r p t e t v t description of variable or function adjacency matrix symmetrized adjacency matrix complement of symmetrized adjacency matrix adjacency matrix including only mutual ties adjacency matrix including only asymmetric ties coloring matrix for color r function returning the color of node i function returning the matrix of 
the edge in triad t between nodes i and j an arbitrary colored triad with a man configuration and colored triplet a function returning the number of unique colors for a given colored triad function returning the number of times color r appears in colored triad t the probability of observing triad t the expectation of triad t under a binomial model the variance of triad t under a binomial model table list of variables constants and functions defined in this manuscript
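As a concrete illustration of the matrix-algebra counting idea, the sketch below (Python with numpy and networkx; a stand-in for the authors' R package, which is not reproduced here) counts the colored versions of the complete undirected triad, the triangle, on the Zachary karate club network. It uses the two-group "club" attribute that ships with networkx rather than the five faction labels analyzed in the paper, and it sandwiches the adjacency matrix between diagonal color-indicator matrices in the spirit of the coloring matrices defined above; a brute-force enumeration is included as a cross-check.

```python
import itertools
from collections import Counter
from math import factorial, prod

import networkx as nx
import numpy as np

# Zachary karate club with the two-group "club" labels bundled with networkx.
G = nx.karate_club_graph()
nodes = list(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes, weight=None)   # unweighted 0/1 adjacency
color_of = {v: G.nodes[v]["club"] for v in nodes}
palette = sorted(set(color_of.values()))

# Diagonal color-indicator matrix for each color: D[c][i, i] = 1 iff node i has color c.
D = {c: np.diag([1.0 if color_of[v] == c else 0.0 for v in nodes]) for c in palette}

def colored_triangles(r, s, t):
    """Count triangles whose color multiset is {r, s, t} via a trace of sandwiched
    matrices; the factorial divisor removes orderings of same-colored positions."""
    trace = np.trace(D[r] @ A @ D[s] @ A @ D[t] @ A)
    divisor = prod(factorial(m) for m in Counter([r, s, t]).values())
    return int(round(trace / divisor))

# Cross-check against direct enumeration of all node triples.
for triplet in itertools.combinations_with_replacement(palette, 3):
    brute = sum(
        1
        for a, b, c in itertools.combinations(nodes, 3)
        if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c)
        and Counter([color_of[a], color_of[b], color_of[c]]) == Counter(triplet)
    )
    print(triplet, colored_triangles(*triplet), brute)
```

The same trace pattern extends to the directed triad classes by substituting the mutual, asymmetric, and null-tie matrices for the symmetrized adjacency matrix, which is essentially how the full colored census is assembled in the paper.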
| 8 |
compressive sampling of ensembles of correlated signals ali ahmed and justin dec draft december abstract we propose several sampling architectures for the efficient acquisition of an ensemble of correlated signals we show that without prior knowledge of the correlation structure each of our architectures under different sets of assumptions can acquire the ensemble at a rate prior to sampling the analog signals are diversified using simple implementable components the diversification is achieved by injecting types of structured randomness into the ensemble the result of which is subsampled for reconstruction the ensemble is modeled as a matrix that we have observed through an undetermined set of linear equations our main results show that this matrix can be recovered using a convex program when the total number of samples is on the order of the intrinsic degree of freedom of the ensemble the more heavily correlated the ensemble the fewer samples are needed to motivate this study we discuss how such ensembles arise in the context of array processing introduction this paper considers the exact reconstruction of correlated signals from the samples collected at a subnyquist rate we propose several implementable architectures and derive a sampling theorem that relates the bandwidth and the a priori unknown correlation structure to the sufficient sampling rate for successful signal reconstruction we consider ensembles of signals output from m sensors each of which is bandlimited to frequencies below see figure the entire ensemble can be acquired by taking w uniformly spaced samples per second in each channel leading to a combined sampling rate of m w we will show that if the signals are correlated meaning that the ensemble can be written as or closely approximated by distinct linear combinations of r m latent signals then this net sampling rate can be reduced to approximately rw using coded acquisition the sampling architectures we propose are blind to the correlation structure of the signals this structure is discovered as the signals are reconstructed each architecture involves a different type of analog diversification which ensures that the signals are sufficiently spread out so each point sample captures information about the ensemble ultimately what is measured are not actual samples of the individual signals but rather are different linear combinations that combine multiple signals and capture information over an interval of time later we will show that these samples can be expressed as linear measurements of a matrix over the course of one second we aim to acquire an m w matrix comprised of samples of the ensemble taken at the nyquist rate the proposed sampling architecture produces a series of linear combinations of entries of this matrix conditions under which a matrix can be effectively recovered from an set of linear measurements have been the object of intense study in the recent literature the mathematical contributions in this paper show how these conditions are met by systems with clear implementation potential a and are with the school of electrical and computer engineering at georgia tech in atlanta georgia email alikhan jrom this work was supported by nsf grant onr grant and a grant from the packard foundation draft by ahmed and romberg december our motivation for studying these architectures comes from classical problems in array signal processing in these applications one or more narrowband signals are measured at multiple sensors at different spatial locations while 
narrowband signals can have significant bandwidth they are modulated up to a high carrier frequency making them very heavily spatially correlated as they arrive at the array this correlation which we review in more detail in section can be systematically exploited for spatial filtering beamforming interference removal estimation and multiple source separation these activities all depend on estimates of the correlation matrix and the rank of this matrix can typically be related to the number of sources that are present compressive sampling has been used in array processing in the past sparse regularization was used for direction of arrival estimation long before any of the sampling theorems started to make the theoretical guarantees concrete these results along with more recent works including show how exploiting the structure of the array response in free space for narrowband signals this consists of samples of a superposition of a small number of sinusoids can be used to either the doa estimate or reduce the number of array elements required to locate a certain number of sources a single sample is associated with each sensor and the acquisition complexity scales with the number of array elements in this paper we exploit this structure in a different way our goal is to completely reconstruct the timevarying signals at all the array elements the structure imposed on this ensemble is more general than the spatial spectral sparsity in the previous work we ask that the signals are correlated in some a priori unknown manner our ensemble sampling theorems remain applicable even when the array response depends on the position of the source in a complicated way moreover our reconstruction algorithms are indifferent to what the spatial array response actually is as long as the narrowband signals remain sufficiently correlated the paper is organized as follows in sections and we describe the signal model and its motivation from problems in array processing in section we introduce the components and their corresponding mathematical models that we will use in our sampling architectures in section we present the sampling architectures show how the measurements taken correspond to generalized measurements of a matrix and state the relevant sampling theorems numerical simulations illustrating our theoretical results are presented in section finally section and section provide the derivation of the theoretical results notation we use upper and lower case bold letters for matrices and vectors respectively scalars are represented by upper and lower case letters the notation denotes a row vector formed by taking the hermitian transpose of a column vector x linear operators and sets are represented using script letters we use n to denote the set n the notation i w denotes a w w for a set b w i b denotes a w w matrix with ones at diagonal positions indexed by b and zeros elsewhere given two matrices a and b we denote by a b the matrix vec a vec b t where vec a and vec b are the column vectors formed by stretching the columns of a and b respectively and t denotes the transpose we will use a b is the usual kronecker product of a and b we will use to denote a p vector of all ones lastly the operator e refers to the expectation operator and p represents the probability measure signal model our signal model is illustrated in figure we denote a signal ensemble by x c t a set of m individual signals t xm t conceptually we may think of x c t as a matrix with a finite number m of rows with each row containing a 
bandlimited signal our underlying assumption is that every signal in the ensemble can be approximated as the linear combination of underlying r independent signals in a smaller ensemble s c t we write x c t as c t draft by ahmed and romberg december x c t a x s c t a s m w a b figure a our model is that an ensemble of signals are correlated meaning the m signals can be closely approximated by a linear combination of r underlying signals we can write the m signals in x c t as a tall matrix capturing the correlation structure multiplied by an ensemble of r latent signals b the matrix of texpoint fonts used in emf samples inherits the structure of the ensemble read the texpoint manual before you delete this box a where a is an m r matrix with entries a m r we will use the convention that fixed matrices operating to the left of the signal ensembles simply mix the signals and so is equivalent to xm t r x a m r sr t the only structure we impose on the individual signals is that they are and bandlimited to keep the mathematics clean we take the signals to be periodic for now however the results can be extended to signals as will be discussed shortly we begin with a natural way to discretize the problem that is what exists in x c t for t is all there is to know and each signal can be captured exactly with w samples each bandlimited periodic signal in the ensemble can be written as xm t b x where the are complex but are symmetric to ensure that xm t is real we can capture xm t perfectly by taking w equally spaced samples per row we will call this the m w matrix of samples x knowing every entry in this matrix is the same as knowing the entire signal ensemble we can write x cf where c is an m w matrix whose rows contain fourier series coefficients for the signals in x c t and f is a w w normalized discrete fourier matrix with entries f w b w observe that both x and hence c inherit the correlation structure of the ensemble x c t before moving on observe that and impose an r and b dimensional subspace structure on x where rank x min r b if r b then we can take r b with the underlying independent signals in s c t being the known sinusoids at frequencies b however we are interested in the more pertinent and challenging case of r b in this case the underlying independent signals in s c t are not known in advance and the main contribution of this paper is to leverage this unknown correlation structure in x c t to reduce the sampling rate lastly in the interest of readability of our technical results we assume without loss of generality that w m that is the bandwidth of the signals is greater than the number of signals same correlated signal model was considered in for compressive sampling of multiplexed signals two multiplexing architectures were proposed and for each a sampling theorem was proved that dictated minimum number of samples for exact recovery of the signal ensemble this paper presents sampling architectures where we use a separate adc for each channel and rigorously prove that adcs can operate at roughly draft by ahmed and romberg december the optimal sampling rate to guarantee signal recovery other types of correlated signal models have been exploited previously to achieve gains in the sampling rate for example shows that two signals related by a sparse convolution kernel can be reconstructed jointly at a reduced sampling rate the signal model in considers multiple signals residing in a fixed subspace spanned by a subset of the basis functions of a known basis and shows that the sampling rate 
to successfully recover the signals scales with the number of basis functions used in the construction of the signals in this paper we also show that the sampling rate scales with the number of independent latent signals but we do this without the knowledge of the basis for a more applied treatment of the results with similar flavor as in we refer the reader to as will be shown later we observe the signal ensemble x c t through a limited set of random projections and signal recovery is achieved by a nuclear norm minimization program a related work considers the case when given a few random projections of a signal we find out the subspace to which it belongs by solving a series of programs extension to signals we end this section by noting that their are many ways this problem might be discretized using fourier series is convenient in two ways we can easily tie together the notion of a signal being bandlimited with having a limited support in fourier space and our sampling operators have representations in fourier space that make them more straightforward to analyze in practice however the recovery technique can be extended to signals by windowing the input and representing each finite interval using any one of a number of basis expansions the low rank structure is preserved under any linear representation it is also possible that we are interested in performing the ensemble recovery over multiple time frames and would like the recovery to transition smoothly between these frames for this we might consider a windowed fourier series representations the lapped orthogonal transform in that are carefully designed so that the basis functions are tapered sinusoids so we again get something close to bandlimited signals by truncating the representation to a certain depth but remain orthonormal it is also possible to adjust our recovery techniques to allow for measurements which span consecutive frames yielding another natural way to tie the reconstructions together a framework similar to this for sparse recovery is described in detail in applications in array signal processing one application area where ensembles of signals play a central role is array processing of narrowband signals in this section we briefly review how these ensembles arise the central idea is that sampling a wavefront at multiple locations in space as well as in time leads to redundancies which can be exploited for spatial processing these concepts are very general and are common to applications as diverse as surveillance radars underwater acoustic source localization and imaging seismic exploration wireless communications the essential scenario is that multiple signals are emitted from different locations each of the signals occupies the same bandwidth of size w which has been modulated up to a carrier frequency the signals observed by receivers in the array are to a rough approximation complex multiples of one another to a very close approximation the observed signals lie in a subspace with dimension close to one this subspace is determined by the location of the source this redundancy between the observations at the array elements is precisely what causes the ensemble of signals to be low rank the rank of the ensemble is determined by the number of emitters the only conceptual departure from the discussion in previous sections as we will see below is that each emitter may be responsible for a subspace spanned by a number of latent signals that is greater than one but still small having an array with a large number of 
appropriately spaced elements can be very advantageous even when there only a relatively small number of emitters present observing multiple delayed versions of a signal draft by ahmed and romberg december allows us to perform spatial processing we can beamform to enhance or null out emitters at certain angles and separate signals coming from different emitters the resolution to which we can perform this spatial processing depends on the number of elements in the array and their spacing the main results of this paper do not give any guarantees about how well these spatial processing tasks can be performed rather they say that the same correlation structure that makes these tasks possible can be used to lower the net sampling rate over time the entire signal ensemble can be reconstructed from this reduced set of samples and spatial processing can follow we now discuss in more detail how these low rank ensembles come about for simplicity this discussion will center on linear arrays in free space as we just need the signal ensemble to lie in a low dimensional subspace and do not need to know what this subspace may be beforehand the essential aspects of the model extend to general array geometries channel responses and channels suppose that a signal is incident on the array as a plane wave at an angle each array element observes a different shift of this signal if we denote what is seen at the array center the origin in figure a by s t then an element m at distance dm from the center sees xm t s t dm sin if the signal consists of a single complex sinusoid s t then these delays translated into different complex linear multiples of the same signal xm t sin in this case the signal ensemble has we can write x c t a where a is an m steering vector of complex weights given above this decomposition of the signal ensemble makes it clear how spatial information is coded into the array observations for instance standard techniques for estimating the direction of arrival involve forming the spatial correlation matrix by averaging in time l rxx x t x t l as the column space of rxx should be a we can correlate the steering vector for every direction to see which one comes closest to matching the principal eigenvector of rxx the ensemble remains low rank when the emitter has a small amount of bandwidth relative to a larger carrier frequency if we take s t sb t t where sb t is bandlimited to then when w the a for will be very closely correlated with one another in the standard scenario where the array elements are uniformly spaced along a line we can make this statement more precise using classical results on spectral concentration in this case the steering vectors a for are equivalent to integer spaced samples of a signal whose fourier transform is bandlimited to frequencies in sin for a bandwidth less than thus the dimension of the subspace spanned by a is to within a very good approximation m figure b illustrates a particular example the plot shows the normalized eigenvalues of the matrix z raa a a for the fixed values of ghz w mhz c equals the speed of light m and we have m and only of the eigenvalues are within a factor of of the largest one it is fair then to say that the rank of the signal ensemble is a small constant times the number of narrow band emitters we are using complex numbers here to make the discussion go smoothly the real part of the signal ensemble is rank having a cos and a sin term draft by ahmed and romberg december kth largest eigenvalue dm sin dm a b figure a a plane wave impinges 
on a linear array in free space when the wave is a pure tone in time then the responses at each element will simply be phase shifts of one another b eigenvalues for raa on a scale and normalized so that the largest eigenvalue is defined in for an electromagnetic signal with a bandwidth of mhz and a carrier frequency of ghz the array elements are spaced half a apart even when the signal has an appreciable bandwidth the signals at each of the array elements are heavily correlated the effective dimension in this case is r or architectural components in addition to converters our proposed architectures will use three standard components analog multipliers modulators and linear filters the signal ensemble is passed through these devices and the result is sampled using an converter adc taking either uniformly or spaced samples these samples are the final outputs of our acquisition architectures dc t x t avmm x t dc t b x t lti filter hc t x t hc t c m x t a adc x tk d figure a the analog multiplier avmm takes random linear combinations of m input signals to produce n output signals the action of avmm can be thought of as the left multiplication of random matrix a to ensemble x c t intuitively this operation amounts to distributing energy in the ensemble equally across channels b modulators multiply a signal in analog with a random binary waveform that disperses energy in the fourier transform of the signal c random lti filters randomize the phase information in the fourier transform of a given signal by convolving it with hc t in analog which distributes energy in time d finally adcs convert an analog stream of information in discrete form we use both uniform and sampling devices in our architectures the analog multiplier avmm produces an output signal ensemble ax c t when we input it with signal ensemble x c t where a is an n m matrix whose elements are fixed since the matrix operates pointwise on the ensemble of signals sampling output ax c t is the same as applying a to matrix draft by ahmed and romberg december x of the samples sampling commutes with the application of a recently avmm blocks have been built with hundreds of inputs and outputs and with bandwidths in the of megahertz we will use the avmm block to ensure that energy disperses more or less evenly throughout the channels if a is a random orthogonal transform it is highly probable that each signal in ax c t will contain about the same amount of energy regardless of how the energy is distributed among the signals in x c t formalized in lemma below allowing us to deploy equal sampling resources in each channel while ensuring that resources on quiet channels are not being wasted the second component of the proposed architecture is the modulators which simply take a single signal x t and multiply it by fixed and known signal dc t we will take dc t to be a binary waveform that is constant over time intervals of a certain length that is the waveform alternates at the nyquist sampling rate if we take w samples of dc t x t on then we can write the vector of samples y as y dx where x is the w containing the samples of x t on and d is an w w diagonal matrix whose entries are samples d rw of dc t we will choose a binary sequence that randomly generates dc t which amounts to d being a random matrix of the following form d d where d n with probability d w and the d n are independent conceptually the modulator disperses the information in the entire band of x t this allows us to acquire the information at a smaller rate by filtering a as will 
be shown in section compressive sampling architectures based on the random modulator have been analyzed previously in the literature the principal finding is that if the input signal is spectrally sparse meaning the total size of the support of its fourier transform is a small percentage of the entire band then the modulator can be followed by a filter and an adc that takes samples at a rate comparable to the size of the active band this architecture has been implemented in hardware in multiple applications the third type of component we will use to preprocess the signal ensemble is a linear lti filter that takes an input x t and convolves it with a fixed and known impulse response hc t we will assume that we have complete control over hc t even though this brushes aside admittedly important implementation questions because x t is periodic and bandlimited we can write the action of the lti filter as a w w circular matrix h operating on samples x the first row of h consists of samples h of hc t that is y hx where y is the vector of w samples in t of the signal obtained at the output of the filter we will make repeated use of the fact that h is diagonalized by the discrete fourier transform h f where f is the w normalized discrete fourier matrix with entries and is a diagonal matrix whose entries are w f the vector is a scaled version of the fourier series coefficients of hc t to generate the impulse response we will use a random sequence in the fourier domain in particular we will take w draft by ahmed and romberg december where with prob with uniform w w w w these symmetry constraints are imposed so that h and hence hc t is conceptually convolution with hc t disperses a signal over time while maintaining fixed energy note that h is an orthonormal matrix convolution with a random pulse followed by has also been analyzed in the compressed sensing literature if the random filter is created in the fourier domain as above then following the filter with an adc that samples at random locations produces a universally efficient compressive sampling architecture the number of samples that we need to recover a signal with only s active terms at unknown locations in any fixed basis scales linearly in s and logarithmically in w main results sampling architectures the main contribution of the paper is a design and theoretical analysis of a sampling architecture in section that enables the acquisition of correlated signals we state a sampling theorem that claims exact reconstruction of the signal ensemble using a much fewer samples compared to those dictated by sampling theorem the proof of the theorem involves the construction of a dual certificate via golfing scheme to show that minimization recovers the signal ensemble theorem is also of an independent interest as it is matrix recovery result form measurement ensemble we begin with a straightforward architecture in section that minimizes the sample rate when the correlation structure is known we then combine our components from the last section in a specific way to create architectures that are provably effective under different assumptions on the signal ensemble the main sampling architecture in section uses random modulators prior to the adcs this architecture is effective when the energy in the ensemble is approximately uniformly dispersed across time moreover we expect the signal energy to be dispersed across array elements when the avmm upfront does not mix the signals in section we present a variation of the above architecture in which ensembles 
are not required to be dispersed a priori instead the ensemble is preprocessed with lti filters and avmm to ensure dispersion of energy across time and array elements fixed projections for known correlation structure if the mixing matrix a for ensemble x c t is known then a straightforward way exists to sample the ensemble efficiently let a u be the singular value decomposition of a where u is m r matrix with orthogonal columns is r diagonal matrix and v is w with orthogonal columns an efficient way is to whiten ensemble a with u and sample the resulting r signals each at rate w this scheme is shown in figure x can be written as a multiplication of matrix u and r w matrix y containing the nyquist samples of signals t xr t respectively in its r rows the discretized signal ensemble x is then simply x uy knowing the correlation structure u the ensemble x and hence x c t using sinc interpolation of samples in x can be recovered using only the rw samples in y observe that rw is the optimal sampling rate as it only scales linearly with r and not with m in many interesting applications the correlation structure of the ensemble x c t is not known at the time of acquisition in this paper we design sampling strategies that are blind to the correlation structure u but are able achieve signal reconstruction at a near optimal sampling rate nonetheless by introducing avmms filters draft by ahmed and romberg december adc avmm adc figure known correlation structure u optimal sampling strategy is to whiten the ensemble x c t with u and then sample and then sample each of the resultant r signal at rate w total rw samples per second is optimal as it is the actual number of degrees of freedom in underlying r independent signals each bandlimited to and modulators intuitively the randomness introduced by these components disperses limited information in the correlated ensemble across time and array elements resultantly the adcs collect more generalized samples that in turn enable the reconstruction algorithm to operate successfully in the regime architecture random sampling of correlated signals the architecture presented in this section shown in figure consists of one sampling nus adc per channel each adc takes samples at randomly selected locations and these locations are chosen independently from channel to channel over the time interval t a nus adc takes input signal xm t and returns the samples xm tk tk tm the average sampling rate in each channel is collectively m nus adcs return m random samples of the input signal ensemble on a uniform grid nus adc nus adc nus adc figure m signals recorded by the sensors are sampled separately by the independent random sampling adcs each of which samples on a uniform grid at an average rate of samples per second this sampling scheme takes on the average a total of m samples per second and is equivalent to observing m entries of the matrix of samples x in at random sampling model is equivalent to observing m randomly chosen entries of the matrix of samples x defined in this problem is exactly the same as the problem where given a few randomly chosen entries of a matrix enable us to fill in the missing entries under some incoherence assumptions on the matrix x since x is its svd is x u draft by ahmed and romberg december where u rm and v rw the coherence is then defined as m w mw u v max max ku em max kv e ku v r m r w r for brevity we will sometime drop the dependence on u and v in u v in the interest of readability we assume without loss of generality here and in the 
rest of the write up that bandwidth w of the signal is larger or at least equal to their number m that is w m now the result in the noiseless case asserts that if w w then the solution of the minimization in with a rm rm such that a maps x to randomly chosen entries of x exactly equals x with high probability the result indicates that the sampling rate scales within some log factors with the number r of independent signals rather than with the total number m of signals in the ensemble when the measurements y are contaminated with additive measurement noise as in then the result in suggest that the solution to a modified minimization satisfies m where is a constant that depends on the coherence defined in as discussed before the number of samples for matrix completion scale linearly with the coherence parameter quantifies the distribution of energy across the entries of x and is small for matrices with even distribution of energy among their entries see for details in the signal reconstruction application under investigation here this means that for successful recovery a smaller sampling rate would suffice if the signals are across time and array elements one can avoid this dispersion requirement by preprocessing the signals with avmm and filters we will adopt this strategy in the construction of the main sampling architecture of this paper architecture the random modulator for correlated signals to efficiently acquire the correlated signal ensemble the architecture shown in figure follows a approach in the first step the avmm takes m input to produce n output signals where p meaning that the output signals are more than the inputs for now we take n signals at the output to be just p replicas of m input signals without any this amounts to an n m mixing matrix a im p the normalization by p ensures that a i m we will take a more general random orthogonal a in our next sampling architecture in the second step each of the n output signals t t t undergo analog preprocessing which involves modulation and filtering the modulator takes an input signal t and multiplies it by a fixed and known dn t we will take dn t to be a binary waveform that is constant over an interval of length intuitively the modulation results in the diversification of the signal information over the frequency band of width w the diversified analog signals are then processed by an filter implemented using an integrator see for details each of the resultant signals is then acquired using w uniformly spaced samples per second our sampling theorem will show later that it suffices to take the ratio p of the number of output to the input to be reasonably small however as will be suggested by our simulations it seems p is always enough and we believe p merely a technical requirement arising due to the proof method draft by ahmed and romberg december our main sampling result in theorem shows that exact signal reconstruction is achieved in the regime w in particular we only roughly require to be a factor of the nyquist rate w intuitively the acquisition is possible as the signals are diversified across frequency using random demodulators and therefore every sample provides a generalized or global information lti low pass lti low pass adc adc lti low pass adc figure architecture randomly modulated sampling m correlated signals in x c t are replicated p times to produce n output signals this amounts to choosing a i m rn as the mixing matrix in practice p suffices signals are then preprocessed in analog using a bank of modulators and 
filters the resultant signal is then sampled uniformly by an adc in each channel operating at rate samples per second the net sampling rate is samples per second system model this section the measured samples as the linear measurements of an unknown matrix we will show that signal reconstruction in t from samples in the regime corresponds to recovering a m w approximately matrix from an set of linear equations the input signal ensemble x c t is mixed using avmm to produce an ensemble of n signals ax c t let us denote the individual n signals at the output of avmm by t t t since mixing is a linear operation every signal in the ensemble ax c t is bandlimited just as was the case with x c t in therefore the dft coefficients of the mixed signals are simply e ac c each signal t at the output of avmm is then multiplied by a corresponding binary sequence dn t alternating at rate w each of the binary sequences t t dn t will be generated randomly and independently the output after modulation in the nth channel is yn t t dn t n n and t the modulated outputs yn t are then filtered using an integrator which integrates yn t over an interval of width and the result is then sampled at rate using an adc the th sample acquired by the adc in the nth channel is z yn yn t dt the integration operation commutes with the modulation process hence we can equivalently integrate the signals zn t over the interval of width and treat them as samples z rm of the ensemble z c t some of the initial development in this section may resemble with but it is to be noted that compared to the signal structure to be exploited here is correlations among the signals and not the sparsity this leads to a completely different development towards the end of this section draft by ahmed and romberg december the entries n of the matrix z are z n t dt b x e e e c n e defined in and the bracketed term representing the e are the entries of the matrix c where c n filter e l b where w as defined in we will denote by l as a diagonal matrix containing l along the diagonal it is important to note that l is invertible as l does not vanish on any b in view of it is clear that e z clf aclf ax where x clf inherits its structure from x c t since we have already carried out integration over intervals of length the action of modulator followed by integration over now simply reduces to randomly and independently flipping every entry of z and adding consecutive such entries in a given row to produce the value of the sample acquired by the adc mathematically we can write this concisely by defining a vector dn supported on an index set b of size where we are for simplicity that is a factor of w on the support set b the entries of the vector dn are independent binary random variables and are zeros on b c moreover assume that are the rows of a with these notations in place we can concisely write the th sample in t in the nth branch as yn x dn n n all this shows is that the samples taken by the adc in the sampling architecture in figure are linear measurements of an underlying matrix x rm defined in the rank of x does not exceed r recalling from section that r constitutes the number of linearly independent signals in the ensemble x c t our objective is to recover x from a a few linear measurements yn which amounts to reconstructing x c t at a rate sampling matrix recovery define a linear map a x y where y is a length n vector containing linear measurements yn in as its entries formally a x x dn n n we are mainly interested in the scenario when the linear map 
a is under determined that is the number of measurements n is much smaller than the number of unknowns m w therefore to uniquely determine the true solution x we solve a penalized optimization program argmin subject to y a x x where is the nuclear norm the sum of the singular values of x the nuclear norm penalty encourages the solution to be low rank and has concrete performance guarantees when the linear map a obeys certain properties in case of noisy measurements y a x slight modification of this can result in an argument when is not a factor of w for details see draft by ahmed and romberg december with bounded noise we solve the following quadratically constrained convex optimization program argmin subject to ky a x x this optimization program is also provably effective see for example for suitable a sampling theorem exact and stable recovery the unknown matrix x in is at most and assume x u is its reduced form svd where u rm and v rw are the matrices of left and right singular vectors respectively and is a diagonal matrix of singular values define coherences of x as u m max ku em v max kv i b r m r m max max u v i b u v r m m and where i b is a diagonal w w matrix containing ones at the diagonal positions indexed by b we may sometime just work with notations and and drop the dependence on u and v when it is clear from the context it can easily be verified that in a similar manner one can show that to see this notice that kv i b kv i b r r r r that is using the fact that kv i b kf kv ki b the upper bound also follows finally similar techniques also show that m one can attach meaning to the values of coherences in the context of sampling application under consideration for example the smallest value of is achieved the energy of x is roughly equally distributed among the columns indexed by in the context of the sampling problem this means that the energy in the signal ensemble x c t should be dispersed equally across time similarly the coherence quantifies the spread of signal energy across array elements and measures the dispersion of energy across both the time and array elements let us define u v max u v u v we are now ready to state our main result that dictates the minimum sampling rate at which each adcs needs to be operated to guarantee the reconstruction of signal ensemble x c t theorem correlated signal ensemble x c t in can be acquired using the sampling architecture in figure by operating each of the adcs at a rate c r w w m where is a universal constant that only depends on the fixed parameter in addition the ratio of the number of output to the input signals in avmm must satisfy c log w where c is a numerical constant the exact signal reconstruction can be achieved with probability at least o w by solving the minimization program in the result indicates that m well spread out correlated signals can be acquired by operating each adc in figure at a rate of times the w to within log factors moreover we also require the number n of output signals at the avmm to be larger than number m of input signals by a log factor however we believe this is merely an artifact of the proof technique and our experiments also corroborate that successful recovery is always obtained for satisfying even when n m or p draft by ahmed and romberg december also note that the result in theorem assumes without loss of generality that w m in the other case when m w the sufficient sampling rate at each acc can be obtained by replacing w in with m another important observation is that the sampling rate 
scales linearly with coherence implying that the sampling architecture is not as effective for correlated signals concentrated across time to remedy this shortcoming a preprocessing step using random filters a mixing avmm can be added to ensure signals are across time and array elements stable recovery in a realistic scenario the measurements are almost always contaminated with noise yn x dn n n compactly expressed using the vector equality in in the case when the noise is bounded p then following the template of the proof in it can be shown that under the conn n c of obeys ditions of theorem the solution x c x kf c m kx with high probability for more details on this p see a similar stability result in theorem in the upper bound above is suboptimal by a factor of min w m in theory we can improve this suboptimal result and show the effectiveness of the nuclear norm penalty by analyzing a different estimator argmin kx y x this estimator was proposed in and can be theoretically shown to obey essentially optimal stable c is the minimizer of if and only if kx y recovery results using the fact that x f c is a simple soft thresholding of the singular values of the matrix one can show that the estimate x y rm x y ur y v y r where max x in addition ur y and v r y are the left and right singular vectors of the matrix y respectively and y is the corresponding singular value in comparison to the estimator the matrix lasso in does not use the knowledge of the known distribution of a and instead minimizes the empirical risk ky a x a x i ka x knowing the distribution and the fact that e a i holds in our case we replace ka x by its expected value e ka x in the empirical risk to obtain the estimator in by completing the square although the klt estimator is easier to analyze and will be shown to give optimal stable recovery results in theory but it does not empirically perform as well as matrix lasso in we quantify the strength of the noise vector rn through its norm for a random vector z we define inf u e and for scaler random variables we simply take z in the above definition the norm is finite if the entries of z are subgaussain and is proportional to variance if the entries are gaussian we assume that the entries of the noise vector obey and with this the following result is in order draft by ahmed and romberg december theorem fix given measurements y of x in contaminated with additive noise with c to obeys statistics in the solution x c x c max kx f with probability at least w whenever w w where is a universal constant depending only on roughly speaking the stable recovery theorem states that the nuclear norm penalized estimators are stable in the presence of additive measurement noise the results in theorem are derived assuming that are random with statistics in in contrast the stable recovery results in the compressed sensing literature only assume that the noise is bounded where is the noise vector introduced earlier here we give a brief comparison of theorem with the stable recovery results in compare the result in with it follows that our results improve upon the results in by a factor of m we will also compare our stable recovery results against the stable recovery results derived in the result roughly states if the linear operator a satisfies the matrix rip and then the solution c to obeys x c x kf kx the above result is essentially optimal stable recovery result in comparison to the result in is also optimal however we prove it for a different estimator and under a statistical bound on the 
noise term in addition we also donot require the matrix rip for a which is generally required to prove optimal results of the form of architecture uniform sampling architecture the discussion in section and the result in theorem suggest that sampling rate sufficient for exact recovery using the architecture and scales linearly with the coherence parameter and respectively as discussed earlier the coherence parameters quantify the energy dispersion in the correlated signal ensemble x c t across time and array elements ideally we would like the sampling rate to only scale with factor of w and be independent of signal characteristics coherences to achieve this signals are preprocessed with random filters and avmm so that signal energy is evenly distributed across time and array elements the resultant signals are the randomly modulated filtered and sampled uniformly at a rate the modified sampling architectures are depicted in figure and nus adc nus adc nus adc figure architecture analog multiplier avmm takes random linear combinations of m input signals to produce m output signals this equalizes energy across channels the random lti filters convolve the signals with a diverse waveform that results in dispersion of signals across time the resultant signals are then sampled at locations selected randomly on a uniform grid at an average rate using a sampling nus adc in each channel draft by ahmed and romberg december lti low pass adc lti adc low pass lti low pass adc figure architecture random lti filters disperse each of the m signal across time an analog multiplier avmm takes random linear combinations of m input signals to produce n output signals this amounts to choosing as the mixing matrix where a is as in and is an m m dense randomorthogonal matrix the well dispersed signals across time and array elements are now randomly modulated filtered and sampled at rate recall that random lti filters are all pass and convolve the signals with a diverse impulse response hc t which disperses signal energy over time see lemma we will use the same random lti filter hc t in each channel the action of the random convolution of hc t with each signal in the ensemble can be modeled by the right multiplication of a circulant random orthogonal matrix h rw with the underlying x in the avmm takes the random linear combination of m input signals to produce n output signals which then equalizes the signal energy across array elements regardless of the initial energy distribution as discussed earlier that the action of avmm is left multiplication of a with the ensemble x c t in architecture the avmm is m and to ensure mixing of signals across array elements we take the mixing matrix to be an m m random orthonormal matrix thus the samples collected in architecture are not the subset of the entries of x defined in but of in architecture the avmm is modified from a i m p p rn in to where rm is a random orthonormal matrix this implies that unlike the samples y a x the architecture collects y a where h and a is same as defined in both and multiply the matrix of samples x and x with random orthogonal matrices on the left and right this multiplication results in modifying the singular vectors u rm and v rw of the e rm and ve hv rw note that matrix matrix of samples either x or x to u and are an isometry with and have the same rank as x and x respectively the new left and right draft by ahmed and romberg december singular vectors and ve of or are in some sense random orthogonal matrices and hence incoherent the following 
lemma shows the incoherence of matrix and lemma fix matrices u rm and v rw of the left and right singular vectors respectively e and ve hv and the create random orthonormal matrices rm and h rw let u e e e e coherences u v and u v be as defined in then for a the following conclusions log m r e ve log w max log m u r e ve log w max u and each holding with probability exceeding o w proof of lemma is presented in section in light of it is clear that samples collected using architecture are randomly selected subset of the entries of and using the result in and the sufficient sampling rate for the successful reconstruction of signals becomes max r log m w m in light of it is clear that the samples collected using architecture are the same as in with x with this observation combining the bound on in lemma with theorem immediately replaced by x provides with the following corollary that dictates the sampling rate sufficient for exact recovery using the uniform sampling architecture in figure corollary fix the correlated ensemble x c t in can be exactly reconstructed using the optimization program in with probability at least o w from the samples collected by each of the adc in figure at a rate max r log m w w m where is a universal constant depending only on in addition the ratio of the number of output to the input signals in avmm must satisfy c log w for a sufficiently large constant numerical experiments in this section we study the performance of the proposed sampling architectures with some numerical experiments we mainly show that a correlated ensemble x c t in can be acquired by only paying a small factor on top of the optimal sampling rate of roughly rw we then show the distributed nature of the sampling architecture in figure by showing that increasing the number of adcs or the array elements the sampling burden on each of the adc can be reduced as the net sampling rate is shared evenly among the adcs finally we show that the reconstruction algorithm is robust to additive noise sampling performance in all of the experiments in this section we generate the unknown matrix x synthetically by multiplying tall m r and fat r w gaussian matrices our objective is to recover a batch of m signals with w samples taken in a given window of time using the sampling architecture in figure we take p or n m for all these experiments and the results hint that p or n m in draft by ahmed and romberg december theorem is only a technical requirement due to the proof technique we will use the following parameters to evaluate the performance of the sampling architecture oversampling factor r w m r where the oversampling factor is the ratio between the cumulative sampling rate and the inherent unknowns in x the successful reconstruction is declared when the relative error obeys relative error c x kf kx kx kf the first experiment shows a graph in figure a between and each point marked with a black dot represents the minimum sampling rate required for the successful reconstruction of an x c t with a specific the probability of success for each point is and is computed empirically by averaging over independent iterations the blue line shows the fit of the black dots it is clear from the plot that the for reasonably large values of r the sampling rate is within a small constant of the optimal rate r w m r sampling rate oversampling in context of the application and under the assumption described in section the graph in figure b shows that for a fixed number of sources r the sufficient sampling rate is inversely 
proportional to number m of the receiver array elements each black dot represents the minimum sampling rate required for the successful reconstruction with probability the blue line is the fit of these marked points in other words figure b illustrates the relationship between the number m of adcs and the sampling rate for a fixed number of sources r importantly an increase in the receiver array elements reduces the sampling burden on each of the adcs number of source r rank r m number of adcs a b figure performance of sampling architecture in these experiments we take an ensemble of signals each bandlimited to the probability of success is computed over iterations a oversampling factor as a function of the number r of underlying independent signals in x c t the blue line is the fit of the data points b sampling rate versus the number m of recieving antennas the blue line is the fit of the data points stable recovery in the second set of experiments we study the performance of the the recovery algorithm when the measurements are contaminated with additive measurement noise as in we generate noise using the standard draft by ahmed and romberg december gaussian model n i we select l l a natural choice as the condition holds with high probability for the experiments in figure we solve the optimization program in the plot in figure a shows the relationship between the ratio snr kx snr db log and the realtive error db c x kx f kx relative error db log for a fixed oversampling factor the result shows that the relative error degrades gracefully with decreasing snr in the figure b the plot depicts relative error as a function of the oversampling factor for a fixed snr the relative error decrease with increasing sampling rate x relative error relative error db snr db a oversampling b figure recovery using matix lasso in the presence of noise the input ensemble to the simulated random demodulator consists of signals each bandlimited to with number r of latent independent signals a the snr in db versus the relative error in db the oversampling factor b relative error as a function of the sampling rate the snr is fixed at proof of lemma we start with the proof of lemma e au where we are taking a rm to be a random orthogonal matrix and proof recall that u ve hv where h was defined in let em denote the standard basis vectors in rm we begin the proof by noting a standard result see that reads e em max ku m max r log m m with probability at least o m before proving the lemma we prove an intermediate result max kve ek max r log w w w draft by ahmed and romberg december where ek are standard basis vectors in rw assuming w is even it will be clear how to extend the argument to w odd we can write h w where z w w cos w cos w w w w w w n q n w w w w sin w w sin w w w w w w and with equal probability and for are uniform on and all of these random variables are independent it is a fact that in for fixed a and uniform the random variables sign cos a and sign sin a are independent of one another thus h has the same probability distribution as w where z diag z and the entries of z are iid random variables in light of this we will replace h with w for a fixed k we can write e zwk ve ek v h ek q w x z wk q e v and wk w ek and q is the column of q e we will apply the following concentration where q inequality theorem let rn be a vector whose entries i are independent random variables with i and let s be a fixed m n matrix then for every t p e t where e kskf e w k where w k diag wk and z in this case we have we can apply 
the above theorem with s q e k q f w x e w kk and kq q w e kqk q w kq w x kq w w q q thus p kve ek t and using the union w w bound r r e p max kv ek w w w we can make this probability less than w by taking t c log w and follows now to prove and we can write h w let wk be the kth column of w and let be e for a fixed row index m and column index k we can write an entry of u e ve as the mth row of u h i h i h i e ve e w v eq e zw e zwk u u u q m k m k m k e v is a tall orthonormal matrix let q e since the z are iid random variables a where q m m standard applications of the hoeffding inequality tells us that p e ve u m k e wk z pm zwk where kpm w w thus with probability exceeding h i e ve u m k log w w draft by ahmed and romberg december taking the maximum over m m and k w on both sides and plugging in the bound in shows that mw max max r m w e ve ek u log w max log m r holds with probability at least o w m o w where the equality follows from the fact that w m this proves the first claim in lemma similarly implies that x e ve u m k log w log w w where b is defined in and the last equality follows from the fact that finally evaluating the maximum over m m and on both sides and using the bound in shows that e ve i b log w max log m max max u r m r which proves the second claim in lemma proof of theorem preliminaries recall from we obtain measurements yn of an unknown matrix x through a random m w measurement ensemble an n n where rm denote the rows of mixing matrix a rn and dn rw are random binary on support set b and zero elsewhere in addition the vectors n are independently generated for every n and in theorem the avmm simply replicates without mixing p copies of m input signals to produce n output signals this amounts to choosing p i m i m i m p where p from this construction we have kan and a i m also recall that using the definition of linear map a in the measurements are compactly expressed as y a x moreover the adjoint operator is x x x x y yn an an x dn n n where the second equality is the result of it will also be useful to visualize the linear operator a in a matrix form x x x x a an an an dn n n where denotes the tensor product in general the tensor product of matrices y y with xi rm and y i rw is given by the big matrix y y y y m y y y y y y m y y y y m y y m m y y draft by ahmed and romberg december with this definition it is easy to visualize that e a i let m and v w denote the rows of the matrices u and v respectively we begin by defining a subspace t rm associated with x with decomposition given by x u t x x u z z v z rw z rm the orthogonal projections onto t and its orthogonal complement t are defined as pt z u u z zv v u u zv v and pt z z pt z respectively in the proofs later we repeatedly make use of the following calculation kpt an kf hpt an an i hu an u an i han v an v i hu an v u an v i ku an kan v ku an v ku an kan v observe that ku an ku an w ku an and kan v kan v v p this leads us to w ku an v p finally we will also require a bound on the operator norm of the linear map a to this end note that the measurement matrices an are orthogonal for every in the standard inner product that is han an i whenever this directly implies a following bound on the operator of a r sx mw kak kan dn kf w kpt an n where in the last inequality we used the fact that m w and although a much tighter bound can be achieved using results from random matrix theory the loose bound is sufficient for our purposes sufficient condition for the uniqueness uniqueness of the minimizer to can be guaranteed 
by the sufficient condition given below proposition the matrix x is the unique minimizer to if range such that null a kpt y k kpt z ku v pt y kf kpt z kf in light of the proposition it is sufficient to show that range such that kpt y u v kf and for every z null a kpt y k kpt z kf kpt z kf holds this can be immediately shown as follows ka z kf ka pt z kf ka pt z kf ka pt z kf w z kf in addition for an arbitrary z we have ka pt z ha pt z a pt z i hz pt apt z i kpt apt pt k kpt z kpt z where the last inequality is obtained by plugging in kpt apt pt k which will be shown to be true under appropriate choice of with probability at least o w in corollary combining the last two inequalities gives us the result in draft by ahmed and romberg december golfing scheme for the random modulator for technical reasons we will work with partial linear maps ap rm rm p p modified from the linear map a in define p partitions p of the index set n as p m pm for every p p clearly and n we will take the number of partitions p the partial linear maps ap are defined as ap x xdn n using the definition of a in it is clear that an n em m m for every p p x an p im n the corresponding adjoint operator maps a vector z rm to an m w matrix x x z zn an it will also be useful to make a note of the following versions of the above definition x x x x ap x an xdn and ap an an where the second definition just emphasizes the fact that the linear map ap can be thought of as a big m w m w matrix that operates on a vectorized x with the linear operators defined on the subsets p above we write the iterative construction of the dual certificate y p y ap pt y u v where y p range where we take y projecting onto the subspace t on both sides results in pt y p pt y pt ap pt y u v define w p pt y p u v the iteration takes the equivalent form w p w pt ap pt w we will take y y p to be our candidate for the dual certificate and the rest of this section concerns showing that y p obeys the conditions in let s start by showing that kpt y p u v kf holds to this end note that from the iterative construction above the following bound immediately follows kw p kf kpt ap pt pt kkw kf from lemma we have kpt ap pt pt k for every p p this means that kw p kf cuts after every iteration giving us the following bound on the frobenius norm of the final iterate w p kw p kf ku v kf r when p using the union bound over p p the bound on kw p kf holds with probability at least o p w o w this proves that the candidate dual certificate y p obeys the first condition in since p this implies that the number n of output channels from the multiplier in figure must be a factor of roughly log w compared to the input channels n cm log we assume that is an can be ensured in the worst case by doubling n draft by ahmed and romberg december however we believe this requirement is merely an artifact of using golfing scheme as the proof strategy for theorem in practice all our simulations point to n m that is the number of channels at the output of the avmm are equal to the input channels pp from the iterative construction it is clear that y p ap w we will now converge on showing that y p satisfies the second condition in begin with kpt y p k p x pt ap w p x pt ap w w where the last equality follows from the fact that w t since kpt k we have p x pt ap w w p x ap w w p x the second last inequality above requires ap w w k for every p p which using lemma is only true when w w with probability at least o p w o w where the factor p comes from the union bound over every p p lemma combining 
sample complexities in and and using the definition of in gives us the proof of theorem key lemmas we now state the key lemmas to prove theorem lemma fix assume that max r w w m where is a universal constant only depending on then the linear operator ap obeys pt ap pt pt with probability at least o w proof of this lemma will be presented in section r corollary fix assume max m w w where is a universal constant that only depends on then the linear operator a defined in obeys kpt apt k with probability at least o w proof proof of this corollary follows exactly the same steps as the proof of lemma with only difference being that we take p lemma define coherence of the iterates w p as max max w p i b r m m then under the same conditions as in we have with probability at least o w proof the proof of this lemma follows similar techniques and matrix bernstein inequality as used in lemma similar results can be found in we skip the proof due to space constraints draft by ahmed and romberg december using the definition of in and the fact that w v we can see that invoking lemma for every p p we can iteratively conclude that with probability at least o p w o w lemma fix take r w w m for a sufficiently large constant let w be a fixed m w matrix defined in then ap w w with probability at least o w proof of this lemma will be presented in section references fazel matrix rank minimization with applications dissertation stanford university march recht fazel and parrilo guaranteed solutions of linear matrix equations via nuclear norm minimization siam review vol no pp and recht exact matrix completion via convex optimization found comput vol no pp gross recovering matrices from few coefficients in any basis ieee trans inform theory vol no pp gorodnitsky and rao sparse signal reconstruction from limited data using focuss a minimum norm algorithm ieee trans sig vol no pp fuchs multipath detection and estimation ieee trans sig vol pp on the application of the global matched filter to doa estimation with uniform circular arrays ieee trans signal vol no pp april romberg and tao robust uncertainty principles exact signal reconstruction from highly incomplete frequency information ieee trans inform theory vol no pp february kunis and rauhut random sampling of sparse trigonometric polynomials appl comp harmon analysis vol rudelson and vershynin on sparse reconstruction from fourier and gaussian measurements comm pure appl vol no pp duarte and baraniuk spectral compressive sensing appl comp harm analysis vol no pp july tang bhaskar shah and recht compressed sensing off the grid ieee trans inform theory vol no pp draft by ahmed and romberg december and towards a mathematical theory of comm pure appl vol no pp june ali ahmed and justin romberg compressive multiplexing of correlated signals ieee trans inform theory vol pp hormati roy lu and vetterli distributed sampling of signals linked by sparse filtering theory and applications ieee trans sig vol no pp baron duarte wakin sarvotham and baraniuk distributed compressive sensing arxiv preprint mishali and eldar and dounaevsky and shoshan xampling analog to digital at subnyquist rates iet circuits devices vol no pp mishali and eldar blind multiband signal reconstruction compressed sensing for analog signals ieee trans sig vol no pp mishali eldar and elron xampling signal acquisition and processing in union of subspaces ieee trans sig vol no pp mantzel and romberg compressed subspace matching on the continuum arxiv preprint malvar and staelin the lot transform coding 
without blocking effects ieee trans speech signal vol pp april asif and romberg sparse recovery of streaming signals using ieee trans sig vol no pp schmidt multiple emitter location and signal parameter estimation ieee trans antennas vol no pp roy and kailath of signal parameters via rotational invariance techniques ieee trans speech signal vol no pp slepian on bandwidth proceedings of the ieee vol no pp march prolate spheroidal wave functions fourier analysis and uncertainty v the discete case bell systems tech journal vol pp schlottmann and hasler a highly dense low power programmable analog multiplier the fpaa implementation ieee emerg sel topic circuits vol no pp chawla and bandyopadhyay and srinivasan and hasler a currentmode programmable analog multiplier with over two decades of linearity in proc ieee conf custom integr pp tropp and laska and duarte and romberg and baraniuk beyond nyquist efficient sampling of sparse bandlimited signals ieee trans inform theory vol no pp laska and kirilos and duarte and raghed and baraniuk and massoud theory and implementation of an converter using random demodulation in proc ieee int symp circuits pp yoo becker loh monge and a rate receiver in cmos in proc ieee radio freq integr circuits symp rfic draft by ahmed and romberg december yoo turnes nakamura le becker sovero wakin grant romberg and a compressed sensing parameter extraction platform for radar pulse signal acquisition submitted to ieee emerg sel topics circuits february murray pouliquen andreou and lauritzen design of a cmos data converter theory architecture and implementation in proc ieee annu conf inform sci syst ciss baltimore md pp romberg compressive sensing by random convolution siam imag vol no pp haupt and bajwa and raz and nowak toeplitz compressed sensing matrices with applications to sparse channel estimation ieee trans inform theory vol no pp rauhut and romberg and tropp restricted isometries for partial random circulant matrices appl comput harmonic vol no pp tropp and wakin and duarte and baron and baraniuk random filters for compressive sampling and reconstruction in proc ieee int conf speech signal process icassp toulouse france recht a simpler approach to matrix completion mach learn vol pp and y plan matrix completion with noise proc ieee vol no pp mohan and fazel new restricted isometry results for noisy recovery in proc ieee int symp inform theory isit austin texas june ahmed and recht and romberg blind deconvolution using convex programming ieee trans inform theory vol no pp koltchinskii lounici and tsybakov penalization and optimal rates for noisy matrix completion ann vol no pp fazel and and recht and parrilo compressed sensing and robust recovery of low rank matrices in proc ieee asilomar conf signals syst pacific grove ca pp laurent and massart adaptive estimation of a quadratic functional by model selection ann pp ledoux the concentration of measure phenomenon ams vol tropp tail bounds for sums of random matrices found comput vol no pp eldar and kutyniok compressed sensing theory and applications press draft by ahmed and romberg december cambridge university appendix proof of key lemmas proof of all the key lemmas mainly relies on using matrix bernstein inequality to control the operator norms of sums of random matrices matrix inequality we will use a specialized version of the matrix inequality that depends on the orlicz norms the orlicz norm of a random matrix z is defined as inf u e exp suppose that for some constant kz q u q q then the following proposition 
holds proposition let z z z q be iid random matrices with dimensions m n that satisfy e z q suppose that for some define q q x x e z z q max e z q z then a constant c such that for all t with probability at least p kz z q k c max t log m n t log m n proof of lemma we start by writing pt ap pt as a sum of independent random matrices using to obtain pt ap pt using and the fact that p x x p pt an pt an e dn i w the expectation of the quantity above evaluates to x x x x e pt ap pt pt p e an dn pt pt p an e dn pt pt the quantity pt ap pt pt can therefore be expressed as a sum of independent zero mean random matrices in the following form x x pt ap pt pt p pt an pt an e pt an pt an we will employ matrix bernstein inequality to control the operator norm of the above sum to proceed define the operator zn which maps z to hpt an zi pt an zn pt an pt an this operator is rank one therefore p p pthe operator norm kzn k kpt an dn kf to ease the notation we will use as a shorthand for we begin by computing the variance in as follows n p x n e zn e zn p x n e zn e zn p draft by ahmed and romberg december x n e zn where the last inequality follows from the fact that e zn and e zn are symmetric and semidefinite matrices the square of the matrices zn is simply given by zn kpt an zn now we develop the operator norm of the result simplified expression using w x e kpt an dn kf zn ku an kv dn zn p n n using the definition in and we can bound ku an m using this fact we have x x x rw rw e kv dn zn e zn p e kv dn zn n n n the second term in can be simplified as x x e kv dn an an pt e kv dn zn pt n n x n e kv dn an an where the last inequality follows form the fact that kpt k since an an an dn and a simple calculation reveals the expectation e kv dn dn kv i b i b b v v i b i b v v i b i b where for diag x is the diagonal matrix obtained by setting the entries of x to zero and i b denotes the w w identity matrix with ones only at the diagonal positions indexed by b this directly implies that x x i b kf i b an an e kv dn an dn an dn n r max kv i b p where the last equality follows from the definition of the coherence in plugging in we have the bound rw finally we calculate the orlicz norm the last ingredient to obtain the bernstein bound first it is important to see that p kzn e zn k kzn k kzn kf kpt an where the equality follows form the fact that zn is the operator using the last equation and we have w ku kv dn p rw r w cp max kv i b kf c pm p max max kpt an n max max n moreover a simple calculation and using the facts that and shows that log m c log w m using this together with and and using t log w m in the bernstein s inequality in proposition we have rw r p rw r kpt ap ap pt pt k c max log w m w m we can conclude now that choosing r r w m ensures that kpt ap pt k which proves the lemma after using the fact that w m draft by ahmed and romberg december proof of lemma just as in the proof of lemma we will start with writing the ap w as a sum of independent random matrices using as follows x x ap w p an w dn recall that dn are random binary defined earlier then the expectation of the random quantity above is x x x x e ap w p e an w d n n p an w i b w where the last two equalities follow from the fact that x an im p x and e dn x i b i w we bound the operator norm ap w w in light of discussion above ap w w can be expressed as a following sum of independent and zero mean random matrices x ap w w p an w dn e an w dn n where p n p p is a shorthand for define z n p an w dn e an w dn to compute the variance in we start 
with x w e z n z p e an w dn e an w dn an an e an an w dn dn n n where we used the fact that kdn since e z n z is a symmetric matrix that is e z n z this together with definition of z n implies that x w e an w dn e an w dn e an w dn an an n n and therefore x n e z n z p w x w x e w dn an p kan w i b an pw n n max max w i b n w max max w i b m r where the inequalities follow by using the definition of coherence in and for the second variance term in we skip through similar step as for the first term and land directly at x x e z n z x x p kan e w dn dn x x e w dn dn draft by ahmed and romberg december where the last equality is the result of one can show that for a fixed vector x rw and the fact that dn is a vector with independent rademacher random variables at locations indexed by b and zero elsewhere the following e dn dn kxb i b i b holds where xb is equal to x on b and zero elsewhere moreover i b is a diagonal matrix with ones at b and zero elsewhere using with w we have x x e z n z x x x w i b i b max w i b m max max w i b r where in the last inequality we use the definition of in combined with in light of the maximum of and accounts for the variance r r where in the last inequality follows from our assumption that w m finally we need to compute an upper bound on the orlicz norm of the random variable kz n begin by using similar simple facts above that r w kz n k kan an w dn dn k kan kdn w dn w dn n p using standard calculations see for example we can compute the following finite bound on the norm of the random variable w dn max max w dn max max m ke w dn p m c max max w i b p m r r where the last inequality follows from using p and then directly gives us max max kz n r moreover using a loose bound on variance r it is easy to see that log c log the results in and can be plugged in proposition to obtain r r p kap ap w w kf c max r log w m r log w m with t log w m which holds with probability at least o w m recall that w m the lemma now follows by using the bound on in and choosing w for a universal constant that only depends on a fixed parameter draft by ahmed and romberg december proof of theorem the first step in the proof is the following oracle inequality in that gives an upper bound on the deviation c in from the true solution x in the mean squared sense of x theorem oracle inequlaity in suppose we observe the noisy measurements y in of x with c rank x r and it is given that y e y k fro some scalar then the solution x c of the nuclear norm penalized estimator in obeys kx x kf min r all that is required is to bound the spectral norm y e y k a x x k k we begin by bounding the first term above a x x k using a corollary to lemma stated as follows corollary let x be a fixed m w matrix defined in then r r p log w log w ka a x x k ckx kf max with probability at least o w proof the proof of the corollary is very similar to the proof of lemma the main difference is that the number of partitions is p moreover we have x in place of w and in the proof development replace kw kf r with kx kf to obtain bound in which is understandably similar to lemma fix the for a sufficiently large constant c the following bound r p ka k log w holds with probability at least o w using corollary and lemma we can bound and obtain s q kx w with probability at least w taking kx kf without loss of generality and r w w qm where is a universal constant that depends on a fixed parameter allows us to choose with this an application of theorem proves theorem proof of lemma the proof of this lemma requires the p use of 
matrix bernstein s inequality as it is required to bound the spectral norm of the sum n an we start with the summands z n an because variables are zero mean it follows that e z n we start by computing the variance x n e z n z w x w e an n max n x e x n an draft by ahmed and romberg december w p where the last inequality follows from the facts that n an i m and that for n n p are independent and identically distributed implying n e similarly arguments lead to x n e z n z x n kan e e dn m x m e i b n n combining and and using gives p where p and we assume that w m the final quantity required is the orlicz norm of z n which is simply kz n kan c then kz n log m kz n mw w p m r m w p at the end using t log w in the bernstein s bound we have r r p ka k c max log m w log m w p and using the fact that p o log w and m w from proves the result draft by ahmed and romberg december
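The random-demodulator architecture can likewise be simulated directly from its description: multiply each channel by a random plus/minus-one chipping sequence at the Nyquist rate, integrate over windows of length W divided by the per-channel ADC rate, and sample the integrator output. The sketch below (sizes and rates are illustrative, and the number of output channels is taken equal to the number of inputs, i.e. p equal to one, which the simulations described above suggest is sufficient) only implements the forward measurement map and verifies that each ADC sample is a linear measurement of the m-by-W sample matrix X, as claimed in the system model; recovery would again proceed through the nuclear-norm program.

```python
# Minimal sketch of the Architecture-II (random demodulator) measurement model.
# Assumptions: sizes and rates are illustrative; we take n = m output channels (A = I).
import numpy as np

rng = np.random.default_rng(1)
m, W, r = 16, 256, 3
Omega = 32                                # per-channel ADC rate; L = W // Omega points per sample
L = W // Omega
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, W))   # rank-r sample matrix

D = rng.choice([-1.0, 1.0], size=(m, W))  # independent chipping sequences d_n(t)

def measure(X):
    # Modulate by the chipping sequences, then integrate-and-dump over blocks of length L.
    return (D * X).reshape(m, Omega, L).sum(axis=2)

Y = measure(X)                            # m x Omega samples per time window

# Sanity check: the (n, l) sample equals the inner product of X with the rank-one
# measurement matrix built from the n-th row of D restricted to the l-th block.
n, l = 4, 7
d_nl = np.zeros(W)
d_nl[l * L:(l + 1) * L] = D[n, l * L:(l + 1) * L]
assert np.isclose(Y[n, l], X[n] @ d_nl)
print("net rate:", m * Omega, "samples per window, versus Nyquist", m * W)
```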
White Matter Fiber Segmentation Using Functional Varifolds

Kuldeep, Pietro, Benjamin, Stanley, Olivier, and Christian

LIVIA, École de technologie supérieure, Montréal, Canada
ARAMIS, Inria Paris, Sorbonne Universités, UPMC Univ Paris 06, Inserm, CNRS, Institut du Cerveau et de la Moelle épinière (ICM), Boulevard de l'Hôpital, Paris, France
Departments of Neurology and Neuroradiology, Paris, France
LTCI Lab, IMAGES group, Télécom ParisTech, Paris, France
Université de Montpellier, France

Abstract. The extraction of fibers from dMRI data typically produces a large number of fibers, and it is common to group fibers into bundles. To this end, many specialized distance measures, such as MCP, have been used for fiber similarity. However, these distance-based approaches require correspondence and focus only on the geometry of the fibers. Recent publications have highlighted that using microstructure measures along fibers improves tractography analysis. Also, many neurodegenerative diseases impacting white matter require the study of microstructure measures as well as the white matter geometry. Motivated by these, we propose to use a novel computational model for fibers, called functional varifolds, characterized by a metric that considers both the geometry and the microstructure measure (GFA) along the fiber pathway. We use it to cluster fibers with a dictionary learning and sparse coding framework, and present a preliminary analysis using HCP data.

1 Introduction

Recent advances in diffusion magnetic resonance imaging (dMRI) analysis have led to the development of powerful techniques for the investigation of white matter connectivity in the human brain. By measuring the diffusion of water molecules along white matter fibers, dMRI can help identify connection pathways in the brain and better understand neurological diseases related to white matter. Since the extraction of fibers from dMRI data, known as tractography, typically produces a large number of fibers, it is common to group these fibers into larger clusters called bundles. Clustering fibers is also essential for the creation of white matter atlases, visualization, and statistical analysis of microstructure measures along tracts. Most fiber clustering methods use specialized distance measures such as the mean closest points (MCP) distance. However, these approaches require correspondence between fibers and only consider fiber geometry. Another important aspect of white matter characterization is the statistical analysis of microstructure measures. As highlighted in recent publications, using microstructure measures along fibers improves tractographic analysis. Motivated by these, we propose to use a novel computational model for fibers, called functional varifolds, characterized by a metric that considers both the geometry and a microstructure measure (generalized fractional anisotropy) along fiber pathways.

Motivation for this work comes from the fact that the integrity of white matter is an important factor underlying many cognitive and neurological disorders. In vivo tissue properties may vary along each tract for several reasons: different populations of axons enter and exit the tract, and disease can strike at local positions within the tract. Hence, understanding diffusion measures along each fiber tract (the tract profile) may reveal new insights into white matter organization, function, and disease that are not obvious from mean measures of that tract or from the tract geometry alone. Recently, many approaches have been proposed for tract-based morphometry, which perform statistical analysis of microstructure measures along major tracts after establishing fiber correspondences. While studies highlight the
importance of microstructure measures most approaches either consider the geometry or signal along tracts but not both the intuitive approach would be to consider microstructure signal during clustering also however this has been elusive due to lack of appropriate framework as a potential solution we explore a novel computational model for fibers called functional varifolds which is a generalization of the varifolds framework the advantages of using functional varifolds are as follows first functional varifolds can model the fiber geometry as well as signal along the fibers also it does not require pointwise correspondences between fibers lastly fibers do not need to have the same orientation as in the framework of currents we test the impact of this new computational model on a fiber clustering task and compare its performance against existing approaches for this task as clustering method we reformulate the dictionary learning and sparse coding based framework proposed in this choice of framework is driven by its ability to describe the entire of fibers in a compact dictionary of prototypes bundles are encoded as sparse combinations of multiple dictionary prototypes this alleviates the need for explicit representation of a bundle centroid which may not be defined or may not represent an actual object also sparse coding allows assigning single fibers to multiple bundles thus providing a soft clustering the contributions of this paper are threefold a novel computational model for modeling both fiber geometry and signal along fibers a generalized clustering framework based on dictionary learning and sparse coding adapted to the computational models and a comprehensive comparison of models for clustering fibers white matter fiber segmentation using functional varifolds modeling fibers using functional varifolds in the framework of functional varifolds a fiber x is assumed to be a polygonal line of p segments described by their center point xp and tangent vector centered at xp and of length cp respectively yq and dq for a fiber y with q segments let fp and gp be the signal values at center points xp and yq respectively and the vector field belonging to a reproducing kernel hilbert space rkhs w then the fibers x and y can be pp modeled based on functional varifolds as v x f xp fp cp and pq v y g yq gp dq more details can be found in the inner product metric between x and y is defined as hv x f v y g iw q p x x fp gq xp yq cp dq where and are gaussian kernels and is a kernel this can be as q p y t g x x p q p q p q exp cp dq hv x f v y g iw exp c p dq m w where and are kernel bandwidth parameters for varifolds a computational model using only fiber geometry and used for comparison in the experiments we drop the signal values at center pppoints thus the varifoldsbased representation of fibers will be vx xp cp and vy pq yq dq hence the inner product is defined as hvx vy iw q p x x exp y t p q p q cp dq cp dq fiber clustering using dictionary learning and sparse coding for fiber clustering we extend the dictionary learning and sparse coding based framework presented in let vt be the set of n fibers modeled using be the atom matrix representing the dictionary functional varifolds a coefficients for each fiber belonging to one of the m bundles and w be the cluster membership matrix containing the sparse codes for each fiber instead of explicitly representing bundle prototypes each bundle is expressed as a linear combination of all fibers the dictionary is then defined as d vt a since this operation 
is linear it is defined for functional varifolds the problem of dictionary learning using sparse coding can be expressed as finding the matrix a of m bundle prototypes and the assignment matrix w that minimize the following cost function arg min a w vt aw subject to smax parameter smax defines the maximum number of elements in wi the sparsity level and is provided by the user as input to the clustering method an important advantage of using the above formulation is that the reconstruction error term only requires inner product between the varifolds let q be the gram matrix denoting inner product between all pairs of training fibers qij hvxi fi vxj fj iw matrix q can be calculated once and stored for further computations the problem then reduces to linear algebra operations involving matrix multiplications the solution of eq is obtained by alternating between sparse coding and dictionary update the sparse codes of each fiber can be updated independently by solving the following arg min wi vt awi subject to smax which can be as q i i w arg min i a qaw i i aw i m wi smax the weights wi can be obtained using the kernelized orthogonal matching pursuit komp approach proposed in where the most positively correlated atom is selected at each iteration and the sparse weights ws are obtained by solving a regression problem note that since the size of ws is bounded by smax it can be otained rapidly also in case of a large number of fibers the nystrom method can be used for approximating the gram matrix for dictionary update a is recomputed by applying the following update scheme until convergence aij aij qw ij qaw w ij i n j experiments data we evaluate different computational models on the dmri data of unrelated subjects females and males age from the human connectome project hcp dsi studio was used for the signal reconstruction in mni space and streamline tracking employed to generate fibers per subject minimum length mm maximum length mm generalized fractional anisotropy gfa which extends standard fractional anisotropy to orientation distribution functions was considered as measure of microstructure while we report results obtained with gfa any other measure may have been used parameter impact we performed clustering and manually selected pairs of fibers from clusters most similar to major bundles we then modeled these fibers using different computational models and analyzed the impact of varying the kernel bandwidth parameters the range of these parameters were estimated by observing the values of distance between centers of fiber segments and difference between along tract gfa values for selected multiple pairs of fibers figure top left shows gfa fibers for pairs corresponding to a right corticospinal tract cst r b corpus callosum cc and c right inferior fasciculus ifof r cosine similarity in degrees is reported for the fiber pairs modeled using varifolds var and functional varifolds fvar for mm and figure top left shows gfa fiber pairs the visualization reflect the variation of fiber geometry microstructure measure gfa along fiber and difference in gfa along fiber for the select fiber pairs this visualization of variation and difference in gfa values along fibers support our hypothesis that modeling along tract signal along with geometry provides additional information the change in cosine similarity for cc from degrees fig gfa visualization and cosine similarity between pairs of fibers from three prominent bundles a cst r b cc c ifof r using framework of varifolds var and functional varifolds 
fvar top left and comparing variation of cosine similarity for the select fiber pairs over kernel bandwidth parameters and for the framework of functional varifolds top right cst r middle left cc middle right ifof r impact of on clustering consistency measured using average silhouette for m for functional varifolds vs varifolds bottom left and functional varifolds vs gfa only bottom right using varifolds to degrees using functional varifolds while for cst r from degrees to degrees reflect more drop in cosine similarity if along tract signal profiles are not similar this shows that functional varifolds imposes penalty for different along fiber signal profiles figure also compares the impact of varying the kernel bandwidth parameters for functional varifolds using similarity angle between pairs of these selected fibers top right cst r bottom left cc bottom right ifof r we show variation over and mm and and comparing the parameter variation images in figure we observe that the cosine similarity values over the parameter space show similar trends for all pairs of fibers this observation allows us to select a single pair of parameter model fvar var gfa mcp fig mean silhouette obtained with varifolds varifolds gfa and mcp computed for varying a number of clusters over subjects and seed values left detailed results obtained for subjects using right values for our experiments we have used mm and for our experiments based on the cosine similarity values in figure the smaller values for and will make the current fiber pairs orthogonal while for larger values we lose the discriminative power as all fiber pairs will have very high similarity quantitative analysis we report a quantitative evaluation of clusterings obtained using as functional varifolds fvar varifolds var mcp and gfa computational model the same dictionary learning and sparse coding framework is applied for all computational models for each of the hcp subjects we compute the gramian matrix using fibers randomly sampled over the full brain for seed values the mcp distance dij is calculated between each fiber pair i j as described in and the gramian matrix obtained using a radial basis function rbf kernel kij exp parameter was set empirically to in our experiments since our evaluation is performed in an unsupervised setting we use the silhouette measure to assess and comparing clustering consistency silhouette values which range from to measure how similar an object is to its own cluster cohesion compared to other clusters separation figure bottom row shows impact of on clustering consistency for functional varifolds varifolds and gfa only figure right gives the average silhouette for m and clusters computed over subjects and seed values the impact of using both geometry and microstructure measures along fibers is evaluated quantitatively by comparing clusterings based on functional varifolds with those obtained using only geometry varifolds mcp and only signal gfa as can be seen using gfa alone leads to poor clusterings as reflected by the negative silhouette values comparing functional varifolds with varifolds and gfa we observe a consistently improved performance for different numbers of clusters to further validate this hypothesis we also report the average silhouette over seed values obtained for subjects using m these results demonstrate that functional varifolds give consistently better clustering compared to other computational models using the same qualitative visualization figure top row shows the dictionary learned for a single 
subject m using functional varifolds fvar varifolds var and mcp distance for visualization purposes each fiber is assigned to a single cluster which is represented using a unique color the second and third rows of the silhouette analyzes only clustering consistency not the signal profile fvar var mcp fig full clustering visualization m top row single cluster visualization mid row and gfa based color coded visualization of the selected single cluster bottom row using following computational models for fibers functional varifolds left column varifolds middle column and mcp distance right column superior axial views note top row each figure has a unique color code figure depict a specific cluster and its corresponding gfa profiles we observe that all three computational models produce plausible clusterings from the gfa profiles of the selected cluster with correspondence across computational models we observe that functional varifolds enforce both geometric as well as signal profile similarity moreover the clustering produced with varifolds or mcp using only geometric properties of fibers are similar to one another and noticeably different from that of functional varifolds conclusion a novel computational model called functional varifolds was proposed to model both geometry and microstructure measure along fibers we considered the task of fiber clustering and integrated our functional varifolds model within framework based on dictionary learning and sparse coding the driving hypothesis that combining signal with fiber geometry helps tractography analysis was validated quantitatively and qualitatively using data from human connectome project results show functional varifolds to yield more consistent clusterings than gfa varifolds and mcp while this study considered a fully unsupervised setting further investigation would be required to assess whether functional varifolds augment or aid the reproducibility of results acknowledgements data were provided by the human connectome project references charlier charon the fshape framework for the variability analysis of functional shapes foundations of computational mathematics pp charon the varifold representation of nonoriented shapes for diffeomorphic registration siam journal on imaging sciences colby soderberg lebel dinov thompson sowell statistics allow for enhanced tractography analysis neuroimage corouge gouttard gerig towards a shape model of white matter fiber bundles using diffusion tensor mri in isbi pp ieee gori colliot worbe fallani chavez lecomte poupon hartmann ayache et al a prototype representation to approximate white matter bundles with weighted currents in miccai pp springer hagmann jonasson maeder thiran wedeen meuli understanding diffusion mr imaging techniques from scalar imaging to diffusion tensor imaging and beyond radiographics suppl kumar desrosiers a sparse coding approach for the efficient representation and segmentation of white matter fibers in isbi pp ieee kumar desrosiers siddiqi brain fiber clustering using kernelized matching pursuit in machine learning in medical imaging lncs vol pp kumar desrosiers siddiqi colliot toews fiberprint a subject fingerprint based on sparse code pooling for white matter fiber analysis neuroimage maddah grimson warfield wells a unified framework for clustering and quantitative analysis of white matter fiber tracts medical image analysis moberts vilanova van wijk evaluation of fiber clustering methods for diffusion tensor imaging in vis pp ieee o donnell westin golby morphometry for white 
matter group analysis neuroimage siless medina varoquaux thirion a comparison of metrics and algorithms for fiber clustering in prni pp ieee van essen smith barch behrens yacoub ugurbil consortium et al the human connectome project an overview neuroimage wang yap wu shen application of neuroanatomical features to tractography clustering human brain mapping wassermann bloy kanterakis verma deriche unsupervised white matter fiber clustering and tract probability map generation applications of a gaussian process framework for white matter fibers neuroimage yeatman dougherty myall wandell feldman tract profiles of white matter properties automating quantification plos one yeh tseng a high angular resolution brain atlas constructed by diffeomorphic reconstruction neuroimage
| 1 |
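The fiber-clustering row above (label 1) rests on two computable ingredients: a (functional) varifold inner product between fibers, and a Gram-matrix formulation of dictionary learning with sparse coding. The sketch below reconstructs the first ingredient; the Gaussian kernel on segment centers, the unoriented tangent kernel and the bandwidths `lam_w`, `lam_m` are plausible choices inferred from the text, not the paper's exact definitions or values.

```python
import numpy as np

def segments(points, signal):
    """Turn a polyline fiber into segment centers, tangent vectors and a
    per-segment signal value (e.g. GFA averaged over the segment endpoints)."""
    centers = 0.5 * (points[1:] + points[:-1])
    tangents = points[1:] - points[:-1]            # direction times segment length
    seg_signal = 0.5 * (signal[1:] + signal[:-1])
    return centers, tangents, seg_signal

def fvar_inner(fx, fy, lam_w=7.0, lam_m=0.2, use_signal=True):
    """Inner product of two fibers modelled as varifolds (use_signal=False) or
    functional varifolds (use_signal=True).  fx, fy are (points, signal) pairs."""
    xc, xt, xf = segments(*fx)
    yc, yt, yf = segments(*fy)
    d2 = np.sum((xc[:, None, :] - yc[None, :, :]) ** 2, axis=-1)
    k = np.exp(-d2 / lam_w ** 2)                   # Gaussian kernel on segment centers
    dots = xt @ yt.T                               # unoriented tangent kernel:
    norms = np.linalg.norm(xt, axis=1)[:, None] * np.linalg.norm(yt, axis=1)[None, :]
    k *= dots ** 2 / norms                         # (c_p . d_q)^2 / (|c_p| |d_q|)
    if use_signal:                                 # also compare the GFA profiles
        k *= np.exp(-(xf[:, None] - yf[None, :]) ** 2 / lam_m ** 2)
    return k.sum()

def cosine_similarity(fx, fy, **kw):
    return fvar_inner(fx, fy, **kw) / np.sqrt(fvar_inner(fx, fx, **kw) * fvar_inner(fy, fy, **kw))

# Toy example: two nearby fibers whose GFA profiles disagree.  The functional
# varifold similarity is lower, i.e. differing along-tract signal is penalised.
t = np.linspace(0, 1, 20)
f1 = (np.c_[50 * t, np.zeros(20), np.zeros(20)], 0.4 + 0.1 * t)
f2 = (np.c_[50 * t, 2 + np.zeros(20), np.zeros(20)], 0.6 - 0.1 * t)
print("varifolds           :", round(cosine_similarity(f1, f2, use_signal=False), 3))
print("functional varifolds:", round(cosine_similarity(f1, f2, use_signal=True), 3))
```

The second ingredient touches the fibers only through their pairwise inner products, so the clustering can be run entirely from a precomputed Gram matrix Q. The following sketch of that loop uses kernelized matching pursuit for the sparse codes and the multiplicative dictionary update A_ij <- A_ij (Q W^T)_ij / (Q A W W^T)_ij quoted in the text; the atom count, sparsity level, initialisation and the non-negativity assumption on Q are illustrative choices rather than the paper's settings.

```python
import numpy as np

def komp(Q, A, i, smax):
    """Kernelised matching pursuit for fiber i: greedily add the atom of
    D = V_T A most positively correlated with the residual, then re-fit the
    selected coefficients by least squares, all through the Gram matrix Q."""
    n, m = A.shape
    AtQ, AtQA = A.T @ Q, A.T @ Q @ A
    support, w = [], np.zeros(m)
    for _ in range(smax):
        corr = AtQ[:, i] - AtQA @ w                # <d_j, v_i - D w> for every atom j
        corr[support] = -np.inf
        j = int(np.argmax(corr))
        if corr[j] <= 0:
            break
        support.append(j)
        w_s = np.linalg.lstsq(AtQA[np.ix_(support, support)], AtQ[support, i], rcond=None)[0]
        w = np.zeros(m)
        w[support] = w_s
    return w

def kernel_dictionary_learning(Q, m=2, smax=1, n_iter=20, seed=0):
    """Alternate sparse coding and the multiplicative dictionary update.  The
    update keeps A non-negative, which assumes Q itself has non-negative
    entries (true for the kernels sketched above)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    idx = [int(rng.integers(n))]                   # farthest-point initialisation:
    while len(idx) < m:                            # atoms start as single fibers
        idx.append(int(np.argmin(Q[:, idx].max(axis=1))))
    A = np.full((n, m), 0.01)
    A[idx, np.arange(m)] = 1.0
    for _ in range(n_iter):
        W = np.column_stack([komp(Q, A, i, smax) for i in range(n)])   # m x n codes
        A *= (Q @ W.T) / (Q @ A @ W @ W.T + 1e-12)
    return A, W

# Toy Gram matrix with two well separated groups of "fibers".
n = 20
Q = np.full((n, n), 0.05)
Q[:10, :10] = Q[10:, 10:] = 1.0
A, W = kernel_dictionary_learning(Q)
print("soft cluster assignment (argmax):", W.argmax(axis=0))
```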
oct when slower is faster carlos and dirk instituto de investigaciones en aplicadas y en sistemas universidad nacional de cgg http centro de ciencias de la complejidad unam senseable city lab massachusetts institute of technology usa mobs lab northeastern university usa itmo university petersburg russian federation department of humanities social and political sciences gess eth http october abstract the slower is faster sif effect occurs when a system performs worse as its components try to do better thus a moderate individual efficiency actually leads to a better systemic performance the sif effect takes place in a variety of phenomena we review studies and examples of the sif effect in pedestrian dynamics vehicle traffic traffic light control logistics public transport social dynamics ecological systems and adaptation drawing on these examples we generalize common features of the sif effect and suggest possible future lines of research introduction how fast should an athlete run a race if she goes too fast she will burn out and become tired before finishing if she runs conservatively she will not get tired but will not make her best time to minimize her race time she has to go as fast as possible but without burning out if she goes faster she will actually race more slowly this is an example of the sif effect in order to run faster sometimes it is necessary to run slower not to burn out it is not trivial to calculate the running speed which will lead to the best race as this depends on the athlete race distance track temperature humidity and daily performance running dash should be done as fast as you can while running a marathon demands a carefully paced race how fast would an athlete run a marathon if she started with a speed for a to finish the marathon successfully she would obviously have to run more slowly there are several other examples of the sif effect which will be described in the next section we then generalize the common features of these phenomena to discuss potential causes and promising lines of research towards a unified explanation of the sif effect examples pedestrian evacuation perhaps the first formal study of the sif effect was related to pedestrian flows helbing et modelling crowds like particles with social forces interacting among them helbing and helbing et it has been shown that when individuals try to evacuate a room too quickly they lead to intermittent clogging and a reduced outflow as compared to a calmer evacuation in this context the sif effect is also known as freezing by heating stanley trying to exit fast makes pedestrians slower while calmer people manage to exit faster this has led people to suggest obstacles close to exits precisely to reduce friction helbing et et counterintuitively a slowdown of the evacuation can increase the outflow also in a related study of aircraft evaluation it was found that there is a critical door width which determines whether competitive evacuation will increase or decrease evacuation time kirchner et in other words pushy people will evacuate slower if there are narrow doors sif but will evacuate faster if the doors are wide enough fif pedestrians crossing a road another example concerns mixed pedestrian and vehicle traffic imagine pedestrians are trying to cross a road at a location where there is no traffic light and no pedestrian crossing is marked this is a typical situation along roads with a speed limit of or in shared spaces for use pedestrians would cross when the gap between two successive vehicles exceeds a 
certain critical separation that ensures a safe crossing of the road however there are two types of pedestrians patient and pushy ones pushy pedestrians might force a vehicle to slow down while patient pedestrians would not do this they would wait for a larger gap surprisingly if all pedestrians were of the patient type on average they would have to wait for a shorter time period jiang et how does this sif effect come about when a pushy pedestrian has slowed a vehicle down other arriving pedestrians will pass the road too and it takes a long time until no further pedestrians arrive and the stopped cars can accelerate again during the waiting time however a long vehicle queue has formed such that no large enough gap to cross the road occurs between vehicles until the entire vehicle queue has dissolved as a consequence pedestrians will have to wait for a long time until they can cross again altogether it is better if pedestrians wait for large enough gaps such that they don t force vehicles to slow down vehicle traffic sif effects are also known from vehicle traffic helbing and huberman helbing and treiber helbing helbing and nagel surprisingly speed limits can sometimes reduce travel times this is the case when the traffic density enters the metastable regime then traffic flow is sensitive to disruptions and may break down which causes largely increased travel times a speed limit can delay the breakdown of fluid traffic flows because it reduces the variability of vehicle speeds this homogenization avoids disturbances in the flow which are big enough to trigger a breakdown have a amplitude if vehicles go fast the safety distance between vehicles must be increased thus less vehicles will be able to use a road for example at a maximum capacity of about vehicles per km per lane is reached before free traffic flow breaks down at this capacity is reduced to about vehicles per km per lane once vehicles slow down due to an increased density traffic jams will propagate as a following car tends to brake more than the vehicle ahead this phase transition of stable to unstable flow in traffic depends on the desired speed thus to maximize flow the optimal speed of a highway will depend on the current density however the maximum flow lies at the tipping point and thus a small perturbation can trigger waves which can reduce the highway capacity by a similar consideration applies to maneuvers kesting et pushy drivers might force cars in the neighboring lane to slow down when changing lanes to overtake another car while patient drivers would not do this as a consequence pushy drivers may cause a disruption of metastable traffic flow which may trigger a breakdown capacity drop consequently patient drivers will avoid or delay a breakdown of traffic flow thereby managing to progress faster on average one may also formulate this in game theoretical terms when traffic flow is metastable drivers are faced with a social dilemma situation choosing a patient behavior will be beneficial for everyone while pushy behavior will produce small individual advantages at the cost of other drivers as a consequence a tragedy of the commons results pushy drivers undermine the stability of the metastable traffic flow causing congestion that forces everyone to spend more time on travel a complementary phenomenon is observed in braess s paradox braess et steinberg and zangw where adding roads can reduce the flow capacity of a road network traffic light control the sif effect is also found in further systems such as urban traffic 
light control helbing and mazloumian here a approach works only well at low traffic volumes otherwise forcing vehicles to wait for some time can speed up their overall progress the reason is that this will produce vehicle platoons such that a green light will efficiently serve many vehicles in a short time period gershenson gershenson and rosenblueth zubillaga et similarly it may be better to switch traffic lights less frequently because switching reduces service times due to time lost on amber lights a green wave a coordination of vehicle flows such that several successive traffic lights can be passed without stopping is another good example demonstrating that waiting at a red light may be rewarding altogether similarly interesting observations can be made for traffic light control which based on decentralized flow control distributed control et and helbing helbing if each intersection strictly minimizes the travel times of all vehicles approaching it according to the principle of a homo economicus this can create efficient traffic flows when these are low or moderate invisible hand phenomenon however vehicle queues might get out of hand when the intersection utilization increases therefore it is beneficial to interrupt travel time minimization in order to clear a vehicle queue when it has grown beyond a certain critical limit this avoids spillover effects which would block other intersections and cause a quick spreading of congestion over large parts of a city consequently waiting for a long queue to be cleared can speed up traffic altogether putting it differently can beat the selfish optimization la homo economicus who strictly does the best but neglects a coordination with neighbors logistics and supply chains similar phenomena as in urban traffic flows are found in logistic systems and supply chains helbing and helbing et seidel et peters et we have studied for example a case of harbor logistics using automated guided vehicles for container transport our proposal was to reduce the speed of these vehicles this reduced the required safety distances between vehicles such that less conflicts of movement occurred and the automatic guided vehicles had to wait less in this way transportation times could be overall reduced even though movement times obviously increased we made a similar observation in semiconductor production wet benches are used to etch structures into silicium wavers using particular chemical solutions to achieve good results the wavers should stay in the chemical baths longer than a minimum and shorter than a maximum time period therefore it might happen that several silicium wavers need to be moved around at about the same time while a moving gripper the handler must make sure to stay within the minimum and maximum times it turns out that slightly extending the exposure time in the chemical bathes enables much better coordination of the movement processes thereby reaching a percent higher throughput in a third logistics project the throughput of a packaging plant had to be increased one of the central production machines of this plant frequently broke down such that it was operated at full speed whenever it was operating well however this filled the buffer of the production plant to an extent that made its operation inefficient this effect can be understood with queuing theory according to which cycle times can dramatically increase as the capacity of a buffer is approached public transport in public transportation systems it is desirable to have equal headways between 
vehicles such as buses to reach regular time separations between vehicles however the equal headway configuration is unstable gershenson and pineda forcing equal headways minimizes waiting times at stations nevertheless the travel time is not independent of the waiting time as equal headways imply idling or leaving some passengers at stations this is because there is a different demand for each vehicle at each station still can be used to regulate the headways adaptively gershenson considering only local information vehicles are able to respond adaptively to the immediate demand of each station with this method there is also a sif effect as passengers wait more time at a station but reach their destination faster once they board a vehicle because there is no idling necessary to maintain equal headways social dynamics axelrod axelrod proposed an interesting model of opinion formation in this model agents may change their opinion depending on the opinion of their neighbors eventually the opinions converge to a stable state however if agents switch their opinion too fast this might delay convergence stark et b thus there is a sif effect because the fastest convergence will not necessarily be obtained with the fastest opinion change in this model there is also a phase transition which is probably related to the optimal opinion change rate vilone et there is also experimental evidence of the sif effect in group decisions while designing new buildings slowing down the deliberative process of teams accelerates the design and construction of buildings cross et extrapolating these results one may speculate that financial trading narang may also produce a sif effect in the sense that trading at the microseconds scale generates price and information fluctuations which could generate market instabilities leading to crashes and slower economic growth easley et in combinatorial game theory siegel sometimes the best possible move taking a queen in chess is not necessarily the best move in the long term in other words having the highest possible gain at each move does not give necessarily the best game result russell and norvig pp ecology if a predator consumes its prey too fast there will be no prey to consume and the predator population will decline thus a prudent predator slobodkin goodnight et will actually spread faster than a greedy one a similar sif effect applies to relationships where parasites taking too many resources from their host are causing their own demise dunne et over long timescales evolution will favor symbiotic over parasitic relationships promoting mechanisms for cooperation which can regulate the interaction between different individuals sachs et virgo et we can see that the same principle applies to natural resource management such as fisheries pauly et if catches are excessive there will not be enough fishes left to maintain their numbers and subsequent catches will be poor it is estimated that apart from its ecological impact overfishing has left a void of us billion per year due to reduced catches toppe et however regulating how much fish is caught per year is complicated the maximum sustainable yield varies from species to species maunder so the calculation of the optimal yields per year is not at all a trivial task adaptation evolution development and learning can be seen as different types of adaptation acting at different timescales aguilar et also adaptation can be seen as a type of search downing in computational searches it is known that there needs to be a balance 
between exploration and exploitation blum and roli an algorithm can explore different possible solutions or exploit solutions similar to those already found too much exploration or too much exploitation will lead to longer search times too much breadth exploration will only explore slightly different types of solution while too much depth exploitation might lead to local optima and data overfitting a key problem is that the precise balance between exploration diversification and exploitation intensification depends the precise search space wolpert and macready and timescale gershenson watson et an example of the sif was described in biological evolution sellis et haploid species with a single copy of their genome such as bacteria can adapt faster than diploid species with two copies of their genome such as most plants and animals still in a fastchanging environment haploids adapt too fast the population loses genome variation while diploids can maintain a diversity having such a diversity diploids can adapt faster to changes in their environment as they can begin an evolutionary search from many different states at once in principle it would be desirable to find a solution as fast as possible exploiting current solutions still as mentioned this might lead to suboptimality sif in evolving new features optimizing a multidimensional function or training a neural network to be efficient search should eventually slow down as it is known from simulated annealing as too much exploration would be suboptimal also the critical question is how to find the precise balance to speed up search as much as possible computationally it seems that this question is not reducible wolfram so we can only know a posteriori the precise balance for a given problem still finding this balance would be necessary for adiabatic quantum computation farhi et aharonov et as if the system evolves too fast the information is destroyed generalization what do all the above examples have in common they can be described as complex dynamical systems composed of many interacting components in the above cases the system can have at least two different states an efficient and an inefficient one unfortunately the efficient state can be unstable such that the system will tend to end up in the inefficient state in the case of freeway traffic for example it is well known that the most efficient state with the highest throughput is unstable thereby causing the traffic flow to break down sooner or later capacity drop to avoid the undesired outcome the system components must stay sufficiently away from the instability point which requires them to be somewhat slower than they could be but as a reward they will be able to sustain a relatively high speed for a long time if they go faster the efficient state will break down and trigger another one that is typically slower this situation might be characterized as a tragedy of the commons hardin even though it might be counterintuitive the sif effect occurs in a broad variety of systems for practical purposes many systems have a monotonic relation between inputs and outputs this is true for systems that break ashby for example if temperature is increased in a constrained gas with a constant volume pressure rises yet if temperature increases too much then the gas container will break leading to a pressure reduction still without breaking many physical and systems have thresholds where they become unstable and a phase transition to a different systems state occurs a typical situation of systems is 
that they may get overloaded and turn dysfunctional through a cascading effect to reduce the sif effect we can seek to adjust the interactions which cause a reduction in the system performance gershenson the vehicle traffic case offers an interesting example when vehicles go too fast and their density crosses a critical density their changes in speed will affect other vehicles generating an amplification of oscillations which lead to traffic and as a consequence to a reduced average speed if vehicles go slower then such oscillations can be avoided and the average speed will be higher the key here is that the critical speed where traffic flow changes from laminar where fif to unstable where sif changes with the density however suitably designed adaptive systems such as driver assistance systems can be used to drive systems towards their best possible performance in their respective context gershenson helbing discussion it could be argued that the sif effect is overly simplistic as there is only the requirement of having two dynamical phases where one comes with a reduced efficiency after crossing the phase transition point still as we have presented the sif effect shows up in a variety of interesting phenomena at different scales thus we can say that having a better understanding of the sif effect can be useful and potentially have a broad impact a challenge lies in characterizing the nature of the different types of interactions which will reduce efficiency gershenson we can identify the following necessary conditions for the sif effect there is an instability internal or external in the system the instability is amplified sometimes through cascading effects there is a transition from the unstable to a new stable state which leads to inefficiency such a state can be characterized as overloaded it is worth noting that in some cases single variables may be stable to perturbations but their interactions are the ones that trigger instability this implies that the sif in these cases has to be studied at two scales the scale of the components and the scale of the system as studying components in isolation will not provide enough information to reproduce the sif effect whether all phenomena with a sif effect can be described with the same mathematical framework remains to be seen we believe this is an avenue of research worth pursuing and with relevant implications for the understanding of complex systems acknowledgments we should like to thank luis de icaza jeni cross tom froese marios kyriazis gleb oshanin sui phang frank schweitzer diamantis sellis simone severini thomas wisdom zenil and two anonymous referees for useful comments was supported by conacyt projects and sni membership was supported by erc advanced grant momentum references aguilar bonfil froese and gershenson the past present and future of artificial life frontiers in robotics and ai url http aharonov van dam kempe landau lloyd and regev o adiabatic quantum computation is equivalent to standard quantum computation siam review url http ashby the nervous system as physical machine with special reference to the origin of adaptive behavior mind january url http axelrod the dissemination of culture a model with local convergence and global polarization journal of conflict resolution url http blum and roli a metaheuristics in combinatorial optimization overview and conceptual comparison acm comput surv url http braess nagurney and wakolbinger on a paradox of traffic planning transportation science november translated from the original 
german braess dietrich ein paradoxon aus der verkehrsplanung unternehmensforschung url http cross barr putnam dunbar and plaut the social network of integrative design tech institute for the built environment colorado state university fort collins co usa downing intelligence emerging adaptivity and search in evolving neural systems mit press cambridge ma usa dunne j lafferty dobson hechinger kuris martinez mclaughlin mouritsen poulin reise stouffer thieltges williams and zander parasites affect food web structure primarily through increased diversity and complexity plos biol url http easley de prado and o hara the microstructure of the flash crash flow toxicity liquidity crashes and the probability of informed trading the journal of portfolio management winter url http farhi goldstone gutmann and sipser tum computation by adiabatic evolution tech mit http quanurl gershenson traffic lights complex systems url http gershenson design and control of systems copit arxives mexico http url http gershenson computing networks a general framework to contrast neural and swarm cognitions paladyn journal of behavioral robotics url http gershenson leads to supraoptimal performance in public transportation systems plos one url http gershenson the sigma profile a formal tool to study organization and its evolution at multiple scales complexity url http gershenson the implications of interactions for science and philosophy foundations of science url http gershenson and pineda a why does public transport not arrive on time the pervasiveness of equal headway instability plos one url http gershenson and rosenblueth a traffic lights at intersections complexity url http goodnight rauch sayama de aguiar baranger and y evolution in spatial models and the prudent predator the inadequacy of organism fitness and the concept of individual and group selection http complexity url hardin the tragedy of the commons science url http helbing traffic and related systems reviews of modern physics helbing economics the natural step towards a participatory market society evolutionary and institutional economics review helbing thinking on big data digital revolution and participatory market society springer helbing buzna johansson and werner pedestrian crowd dynamics experiments simulations and design solutions transportation science helbing farkas and vicsek simulating dynamical features of escape panic nature helbing farkas and vicsek ing in a driven mesoscopic system phys rev lett http freezing by url helbing and huberman b a coherent moving states in highway traffic nature url http helbing and supply and production networks from the bullwhip effect to business cycles in networks of interacting machines production organization in complex industrial systems and biological cells armbruster mikhailov and kaneko world scientific singapore helbing and mazloumian a operation regimes and effect in the controlof traffic intersections the european physical journal b condensed matter and complex systems url http helbing and social force model for pedestrian dynamics physical review e helbing and nagel the physics of traffic gional development contemporary physics http and reurl helbing seidel and peters principles in supply networks and production systems in econophysics and sociophysics chakrabarti chakraborti and chatterjee wiley weinheim url http helbing and treiber jams waves and clusters science url http ehtamo helbing and korhonen patient and impatient pedestrians in a spatial game for egress congestion phys rev e url http jiang 
helbing shukla and wu inefficient emergent oscillations in intersecting driven flows physica a statistical mechanics and its applications url http kesting treiber and helbing general model mobil for models transportation research record journal of the transportation research board kirchner nishinari schadschneider and schreckenberg simulation of competitive egress behavior comparison with aircraft evacuation data physica a statistical mechanics and its applications url http and helbing of traffic lights and vehicle flows in urban road networks stat mech url http and helbing decentralized signal control of realistic saturated network traffic tech santa fe institute kori peters and helbing decentralised control of material or traffic flows in networks using physica a april url http maunder the relationship between fishing methods fisheries management and the estimation of maximum sustainable yield fish and fisheries url http narang trading in inside the black box a simple guide to quantitative and trading ed john wiley sons hoboken nj usa url http pauly christensen dalsgaard froese and torres fishing down marine food webs science url http peters seidel and helbing logistics networks coping with nonlinearity and complexity in managing complexity insights concepts applications helbing springer berlin heidelberg url http russell and norvig artificial intelligence a modern approach ed prentice hall upper saddle river new jersey sachs mueller wilcox and bull j the evolution of cooperation the quarterly review of biology pp url http seidel hartwig sanders and helbing an agentbased approach to production in swarm intelligence introduction and applications blum and merkle springer berlin url http sellis callahan petrov and messer heterozygote advantage as a natural consequence of adaptation in diploids proceedings of the national academy of sciences url http siegel combinatorial game theory american mathematical society slobodkin b growth and regulation of animal populations holt reinhart and winston new york stanley physics freezing by heating nature url http stark tessone and schweitzer decelerating microdynamics can accelerate macrodynamics in the voter model phys rev lett url http stark tessone and schweitzer slower is faster fostering consensus formation by heterogeneous inertia advances in complex systems steinberg and zangwill i the prevalence of braess paradox transportation science url http toppe hasan josupeit subasinghe halwart and james aquatic biodiversity for sustainable diets the role of aquatic foods in food and nutrition security in sustainable diets and biodiversity directions and solutions for policy research and action burlingame and dernini fao rome url http vilone vespignani and castellano ordering phase transition in the axelrod model the european physical journal b condensed matter and complex systems url http virgo froese and ikegami the positive role of parasites in the origins of life in artificial life alife ieee symposium on ieee pp url http watson mills and buckley global adaptation in networks of selfish components emergent associative memory at the system scale artificial life url http wolfram a new kind of sciene http wolfram media url wolpert and macready no free lunch theorems for search tech santa fe institute url http wolpert and macready no free lunch theorems for optimization ieee transactions on evolutionary computation zubillaga cruz aguilar aguilar rosenblueth and gershenson measuring the complexity of traffic lights entropy url http
| 9 |
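Several of the mechanisms collected in the slower-is-faster row above (label 9) can be reproduced in a few lines. The three sketches below are toy illustrations, not the cited models; all parameter values, update rules and initial conditions are illustrative assumptions.

First, the vehicle-traffic observation that capacity has an interior maximum in speed: if every driver keeps a speed-dependent safe gap, driving faster than the capacity-maximising speed moves fewer vehicles per hour.

```python
import numpy as np

# One-lane capacity when every driver keeps a speed-dependent safe gap
# (vehicle length + reaction-time headway + full braking distance).  The
# parameters are illustrative and the braking-distance rule is deliberately
# conservative; the point is only that flow has an interior maximum in speed.
length, tau, b = 7.5, 1.4, 4.0                 # m, s, m/s^2
v = np.linspace(1, 60, 600)                    # speed in m/s
spacing = length + tau * v + v ** 2 / (2 * b)  # metres of road per vehicle
flow = 3600 * v / spacing                      # vehicles per hour past a point
v_star = v[np.argmax(flow)]
print(f"capacity-maximising speed ~ {3.6 * v_star:.0f} km/h, "
      f"capacity ~ {flow.max():.0f} veh/h/lane")
i130 = np.argmin(np.abs(v - 130 / 3.6))
print(f"at 130 km/h the same lane moves only {flow[i130] / flow.max():.0%} of that")
```

Second, the equal-headway instability of buses: boarding time grows with the gap in front of a bus, so a small perturbation amplifies into bunching, while a bus that voluntarily idles out its full scheduled dwell ("slower") keeps headways regular and lowers the average passenger wait ("faster").

```python
import numpy as np

def simulate(hold=False, n_buses=4, n_stops=100, beta=0.15, h_target=10.0,
             max_dwell=3.0, eps=0.5):
    """Toy headway map for buses on a loop.  Without holding, dwell times are
    proportional to the headway in front, which destabilises equal headways;
    with hold=True every bus sits out the full scheduled dwell even when it
    could leave earlier."""
    h = np.full(n_buses, h_target)
    h[0] += eps                                        # small perturbation
    waits = []
    for _ in range(n_stops):
        dwell = np.minimum(beta * h, max_dwell)        # finite bus capacity
        if hold:
            dwell = np.full(n_buses, beta * h_target)  # fixed scheduled dwell
        h = np.maximum(h + dwell - np.roll(dwell, 1), 0.0)   # no overtaking
        waits.append(np.sum(h ** 2) / (2 * np.sum(h))) # renewal-theory mean wait
    return np.mean(waits), np.std(h)

for hold in (False, True):
    wait, spread = simulate(hold=hold)
    print(f"hold={hold!s:5}: mean passenger wait ~ {wait:6.1f}, final headway std {spread:6.1f}")
```

Third, the exploration-exploitation balance in adaptive search: a generic simulated-annealing run in which cooling too fast (essentially greedy descent) tends to get trapped, while slower cooling typically reaches a better value.

```python
import numpy as np

def anneal(cooling, steps=2000, t0=2.0, step=0.3, seed=1):
    """Minimise a multimodal 1-D function by simulated annealing.  cooling close
    to 1 keeps the temperature high for longer (more exploration, 'slower');
    cooling = 0 is essentially greedy descent ('as fast as possible')."""
    f = lambda x: 0.05 * x ** 2 + 2.0 * np.sin(3.0 * x)   # many local minima
    rng = np.random.default_rng(seed)
    x, t, best = 8.0, t0, np.inf
    for _ in range(steps):
        y = x + rng.normal(0.0, step)
        accept = f(y) < f(x) or rng.random() < np.exp(-(f(y) - f(x)) / max(t, 1e-12))
        if accept:
            x = y
        best = min(best, f(x))
        t *= cooling
    return best

# Greedy search tends to stay stuck near the starting basin (value around +1),
# while slow cooling typically gets close to the global minimum (about -2);
# exact numbers vary with the random seed.
for cooling in (0.0, 0.9, 0.999):
    print(f"cooling = {cooling:<6}: best value found {anneal(cooling):+.2f}")
```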
rank deformations of hyperbolic lattices feb samuel ballas julien paupert pierre will february abstract let x be a negatively curved symmetric space and a lattice in isom x we show that small deformations of into the isometry group of any negatively curved symmetric space containing x remain discrete and faithful the cocompact case is due to guichard this applies in particular to a version of bending deformations providing for all n infnitely many noncocompact lattices in so n which admit discrete and faithful deformations into su n we also produce deformations of the knot group into su not of bending type to which the result applies introduction this paper concerns an aspect of the deformation theory of discrete subgroups of lie groups namely that of lattices in rank semisimple lie groups more specifically we consider the following questions given a discrete subgroup of a rank lie group h does admit any deformations in h if so do these deformations have any nice properties remain discrete and faithful what if we replace h with a larger lie group g here we call deformation of in h any continuous family of representations h for t in some interval satisfying the inclusion of in h and not conjugate to for any t we say that is locally rigid in h if it does not admit any deformations into when h is a semisimple real lie group without compact factors there are a variety of general local rigidity results which we now outline weil proved in w that is locally rigid in h if is compact and h not locally isomorphic to sl r garland and raghunathan extended this result to the case where is a lattice in a rank semisimple group h not locally isomorphic to sl r or sl c theorem of gr the exclusion of sl r and sl c is necessary generically lattices in h sl r admit many deformations in the identification of psl r with allow us to relate lattices in sl r with hyperbolic structures which are in turn parameterized by the classical space when is a surface group the case of deformations of a discrete subgroup of h sl r into g sl c is also classical and well understood by the bers simultaneous uniformization theorem bers in this setting psl c can be identified with and the discrete group sl c gives rise to a hyperbolic structure on the manifold m r where is a hyperbolic surface deforming in sl c corresponds to deforming the hyperbolic structure on m such deformations are abundant and according the bers simultaneous uniformization can be parameterized by a cartesian product of two copies of the classical space of the surface notice that the existence of deformations into g does not violate weil s result as is not compact this situation can be generalized to the case where h n hnr and g so n isom hr in this setting a lattice in h gives rise to a hyperbolic structure on m hnr again regarding as a subgroup of n gives rise to a hyperbolic structure on m r and deformations of into g correspond to deforming this hyperbolic structure in this more general setting there is no general theorem that guarantees the existence of deformations of into however when n this deformation problem has been studied by scannell sc bsc and kapovich kap who prove some rigidity results returning to the case many lattices in h sl c are known to admit deformations into in particular when is thurston showed that for each cusp there exists a real family of deformations of into h called dehn surgery deformations see section of t geometrically in each of these families the commuting pair of parabolic isometries generating the correspdonding cusp 
group is deformed to a pair of loxodromic isometries sharing a common axis in particular these deformations are all or if is an orbifold then the existence of deformations depends more subtly on the topology of the cusp another case of interest in the context of deformations of geometric structures is that of projective deformations of hyperbolic lattices deformations of lattices of h so n into g sl n r when is a cocompact lattice in so n such that the hyperbolic manifold m hnr contains an embedded totally geodesic hypersurface johnson and millson showed in jm that admits a family of deformations into sl n r they obtained these deformations called bending deformations of m along by introducing an algebraic version of thurston s bending deformations of a hyperbolic along a totally geodesic surface this algebraic version is very versatile and can be generalized in a variety of ways for example the hypothesis that m is compact may be dropped see bm furthermore the construction can be applied to the setting of other lie groups and will provide us with a rich source of examples that are discussed in section in addition to deformations constructed via bending there are also instances of projective deformations that do not arise via the previously mentioned bending technique see bdl on the other hand despite the existence of these bending examples empirical evidence complied by cltii suggests that the existence of deformations into sl r is quite rare for closed hyperbolic in another direction complex hyperbolic deformations of fuchsian groups have also been extensively studied see the survey pp and references therein with the above notation this concerns deformations of discrete subgroups of h into g with h g so su or su su recall that the lie groups sl r so su are all isomorphic up to index it turns out that for any n by work of clt there is an intricate relationship between projective deformations and complex hyperbolic deformations of finitely generated subgroups of so n based on the fact that the lie algebras of sl n r and su n are isomorphic as modules over the so n group ring specifically they prove theorem clt let be a finitely generated group and let n be a smooth point of the representation variety hom sl n r then is also a smooth point of hom su n and near the real dimensions of hom sl n r and hom su n are equal the primary motivation for this article is to construct examples of complex hyperbolic deformations of real hyperbolic lattices that have nice algebraic and geometric properties our main result can be roughly described as providing a sufficient condition for a deformation of a lattice in h into g to continue to be faithful and have discrete image in what follows the condition of being roughly means that parabolic elements remain parabolic see definition for a more precise statement theorem let x be a negatively curved symmetric space s a totally geodesic subspace of x and denote g isom x h stabg s let be a lattice in h and let denote the inclusion of into then any representation g sufficiently close to is discrete and faithful remarks the dehn surgery deformations of lattices in so described above are either indiscrete or showing the necessity of the assumption in general it was pointed out to us by elisha falbel that theorem still holds with the same proof under the weaker hypothesis that is a subgroup of a lattice in h with no global fixed point in if is not itself a lattice this is equivalent in this context see cg to saying that is a thin subgroup of that lattice an subgroup 
with the same as the lattice when is a cocompact lattice in h the result is a consequence of the following result of guichard as is then in theorem gui let g be a semisimple lie group with finite center h a rank subgroup of g a finitely generated discrete subgroup of h and denote g the inclusion map if is then has a neighborhood in hom g consisting entirely of discrete and faithful representations we prove theorem in section then apply it in section to a family of deformations of the knot group so into su denoting s where is the knot and so its hyperbolic representation the holonomy representation of the complete hyperbolic structure on s we obtain theorem let be the knot group and so its hyperbolic representation then there exists a family of discrete faithful deformations of into su in section we apply theorem to a variation of the bending deformations to obtain the following result as above given a hyperbolic manifold m hnr we call hyperbolic representation of m the holonomy representation into so n of the complete hyperbolic structure on m this is up to conjugation by mostow rigidity theorem for any n there exist infinitely many cusped hyperbolic whose corresponding hyperbolic representation admits a family of discrete faithful deformations into su n here two groups h are commensurable in the wide sense if g has finite index in both and g for some g the incommensurability conclusion ensures that in each dimension n the manifolds in theorem are quite distinct in the sense that they are not obtained by taking covering spaces of a single example discreteness and faithfulness of deformations in this section we prove theorem stated in the introduction our strategy of proof in the case is to use invariant horospheres more precisely a variation of what schwartz called neutered space see definition below from cartan s classification of real semisimple lie groups any negatively curved symmetric space is a hyperbolic space hnk with k r c h or o and n if k r n if k o we refer the reader to cg for general properties of these spaces and their isometry groups in particular isometries of such spaces are roughly classified into the following types elliptic having a fixed point in x parabolic having no fixed point in x and exactly one on x or loxodromic having no fixed point in x and exactly two on x for our purposes we will need to distinguish between elliptic isometries with an isolated fixed point in x which we call elliptic and elliptic isometries having boundary fixed points which we call boundary elliptic definition let x be a negatively curved symmetric space g isom x and a subgroup of a representation g is called if for every parabolic resp boundary elliptic element is again parabolic resp boundary elliptic remark if is a parabolic subgroup of a subgroup fixing a point on x then any parabolicpreserving representation of is faithful on indeed all elements of id are parabolic or boundary elliptic lemma let be a discrete subgroup of g containing a parabolic element p denote fix p x and then for any representation g preserves each horosphere based at fix p proof it is well known that first parabolic and boundary elliptic isometries with fixed point x preserve each horosphere based at and secondly in a discrete group of hyperbolic isometries loxodromic and parabolic elements can not have a common fixed point therefore consists of parabolic and possibly boundary elliptic isometries and likewise for if is the only thing that remains to be seen is that for any q and representation g q fixes fix p this 
follows from the fact that pairs of isometries having a common fixed boundary point can be characterized algebraically namely by the assumption that p is parabolic p and q have a common fixed boundary point if and only if the group hp qi is virtually nilpotent this property is preserved by any representation of definition let x be a negatively curved symmetric space g isom x a subgroup of g and a subgroup of we say that a closed horoball in x is if the following conditions hold for all and for all definition given two disjoint horoballs in x we call orthogeodesic for the pair the unique geodesic segment with endpoints in the boundary horospheres and perpendicular to these horospheres note that it is is the unique geodesic segment between and we will call the set of points of which are endpoints of a geodesic ray perpendicular to and intersecting the shadow of on remark since the geodesics othogonal to are exactly those geodesics having the vertex of as an endpoint the shadow of on is the intersection with of the geodesic cone over from lemma given two disjoint horoballs with orthogeodesic the shadow of on is the intersection with of a closed ball centered at proof note that any isometry fixing the geodesic pointwise preserves and hence the shadow of on has rotational symmetry around the statement follows by observing that this shadow is closed bounded and has which is clear in the upper model of hnr and the related siegel domain models of the other hyperbolic spaces where horospheres based at the special point are horizontal slices of the domain and geodesics through are vertical lines see go for the complex case and kp for the quaternionic case proposition let x be a negatively curved symmetric space g isom x a discrete subgroup of g and a subgroup of assume that there exists a horoball in x such that acts cocompactly on the horosphere then the set of lengths of orthogeodesics for pairs with is discrete and each of its values is attained only finitely many times modulo the action of proof first note that for and d d and let k be a compact subset of whose covers the orbit is closed since the single cusp neighborhood p p is closed in denoting p the projection map x hence so is then the distance between the compact set k and the closed set is positive and attained say by some point k and some point in the horoball we claim that only finitely many of horospheres in realize this minimum to see this it suffices to show that any horosphere based at intersects finitely many of horospheres in fix a horosphere based at consider a horosphere h in that intersects and let bh be its shadow on by lemma bh is the intersection with of a closed ball there exists r depending only on such that the radius of bh is at least indeed r is the radius of the shadow of any horosphere tangent to now consider two such horospheres and and assume they are disjoint call and the centers of their shadows and on we claim that the distance between and is at least if not then and consider the geodesic connecting to since is the center of the intersection is the geodesic ray connecting the highest point on to the endpoint of which isn t as and are disjoint is a compact geodesic segment contained in permuting the roles of and gives the opposite situation on the geodesic connecting to now if we move continuously from to along a curve x t and consider the associated pencil of geodesics connecting to x t we see that there must be a value of t for which and intersect contradicting disjointness of and finally since k was a compact 
subset of whose cover for any horosphere h in we can apply an element of that maps the center of bh to a point of if meets an infinite number of classes in we obtain in this way a sequence of distinct points in k which must accumulate by compactness of but by consistency the corresponding horospheres are disjoint and the previous discussion tells us that the distance between the centers of their shadows is uniformly bounded from below a contradiction the result follows inductively repeating the argument after removing the first layer of closest horoballs remark the hypothesis that the cusp stabilizer acts cocompactly on any horosphere based at the cusp holds for any lattice in fact it holds more generally for any discrete group with a maximal rank parabolic subgroup proposition let x be a negatively curved symmetric space s a totally geodesic subspace of x and denote g isom x h stabg s if is a lattice in h then for any representation g sufficiently close to the inclusion g there exists a horoball where is any cusp stabilizer in proof since is it contains a parabolic isometry p let as above fix p s s and then there exists a horoball in s based at which is this can be seen by lifting to s an embedded horoball neighborhood of the image of in the quotient see s lemma of since s is totally geodesic is the intersection of s with a horoball in x which is now for any representation g and any by lemma which is condition of definition for the pair it follows from proposition that for sufficiently close to the horoballs with stay disjoint from as long as some finite subcollection of them do note that since s is totally geodesic and the horoballs convex the distance beween horoballs and based at points of is given by their distance in s hence condition of definition for the pair holds for sufficiently close to proposition let x be a negatively curved symmetric space denote g isom x and let be a subgroup of g without a global fixed point in x if there exists a horoball in x for some subgroup of then is discrete proof first assume for simplicity that does not preserve any proper totally geodesic subspace of x then is either discrete or dense in g corollary of cg if is dense in g then the orbit of any point of x is dense in x but if is a horoball the orbit of any point of is entirely contained in in which case it can not be dense in x as and x both have nonempty interior therefore must be discrete now if does preserve a strict totally geodesic subspace of x and if s is the minimal such subspace then by the same argument either is discrete or every orbit of a point of s is dense in but the consistent horoball must be based at a point of s since it is preserved by all elements of hence it intersects s along a horoball of s and we conclude as before lemma let x be a negatively curved symmetric space denote g isom x let be a subgroup of g and a subgroup of if g is a representation such that there exists a horoball then is faithful proof let id if then id by and if then id by condition of the definition of horoball now theorem follows immediately from propositions and and lemma deformations of the knot group into su in this section we construct a family of deformations of the hyperbolic representation of the knot group into su consider s where is the knot and denote so the holonomy of the complete hyperbolic structure on s recall that in the presence of a smoothness hypothesis on the relevant representation varieties theorem implies that the existence of deformations of into sl r guarantees the existence of 
deformations of into su work of bdl shows that the smoothness hypothesis is guaranteed in the presence of a cohomological condition specifically they prove the following theorem bdl let m be an orientable complete finite volume hyperbolic manifold with fundamental group and let so be the holonomy representation of the complete hyperbolic structure if m is infinitesimally projectively rigid rel boundary then is a smooth point of hom sl r and its conjugacy class is a smooth point of sl r roughly speaking infinitesimally projectively rigid rel boundary is a cohomological condition that says that a certain induced map from the twisted cohomology of m into the twisted cohomology of is an injection for a more precise definition see hp by work of hp it is known that the knot complement is infinitesimally rigid rel boundary and so we can apply theorems and to produce deformations of into su however there is no reason why these representations should be and in many cases the deformations will not have this property fortunately work of the first author see provides a family of deformations of into sl r whose corresponding deformations into su are parabolic preserving theorem let be the knot group then there exists a family of discrete faithful deformations of into sl r the construction of this family can be found in and ultimately constructs a curve of representations of into sl r containing the hyperbolic representation at t in fact allowing the parameter t to take complex values gives a parameter family of representations into sl c moreover it turns out that taking to be a unit complex number u gives a family of representations into su the reason for this choice of value of the parameter is that the eigenvalues of one of the peripheral elements in are and a power of see section of we now give explicit matrices for the generators and hermitian form for this family using the presentation and notation of section of there the following presentation of was used hm n mw wni where w n the family of representations sl c is defined by m mu and mu and nu u n nu where when the group preserves the hermitian form hu on given by h x y x t ju where u u u u u u ju u u u u u lemma the form hu has signature for all u with and signature when proof computing the determinant of ju gives det ju u cos the latter function of is negative for and positive for the result then follows by noting that hu has signature when u corresponding to the hyperbolic representation and for u lemma the representations are pairwise in sl c proof a straightforward computation gives tr mu nu u lemma the representations are proof the peripheral subgroup of is generated by m and l wwop n with the notation of the presentation see now mu m is unipotent for all u and a straightforward computation using eg maple shows that lu l is with eigenvalues u u u for all u hence parabolic since all elements of m l i will also remain parabolic for u in a neighborhood of the previous result along with theorem has the following immediate corollary corollary the representations are discrete and faithful for u in some neighborhood of in u it would be interesting to know how far u can get from before discreteness or faithfulness is lost bending deformations in this section we construct additional examples in arbitrary dimensions proving theorem stated in the introduction we start with a cusped hyperbolic manifold m hnr and so n the hyperbolic representation of m the holonomy representation of the complete hyperbolic structure on m we will construct a family of 
representations su n s such that using the bending procedure described by jm their construction is quite general and allows one to deform representations in a variety of lie groups we briefly outline how to use bending to produce families of representations in the complex hyperbolic setting define a hermitian form h on via the formula h x y x t where is the diagonal matrix diag with signature n using this form we produce a projective model for hnc given by hnc v cp n h v v using the splitting c cn we can embed hc into the cp corresponding to the second factor we will refer to this copy of hc as using this embedding we can identify u n with the intersection of su n and the stabilizer of the second factor and we will refer to this subgroup as n it is well known that all other copies of hc inside hnc are isometric to and similarly all copies of u n inside su n are conjugate to n let denote the identity component of the centralizer of n in su n is a onedimensional lie group isomorphic to s and can be written explicitly in block form as e in let be a lattice in so n su n then m hnr is a finite volume hyperbolic for simplicity we will assume that is and thus m will be a hyperbolic manifold suppose that m contains an embedded orientable totally geodesic hypersurface by applying a conjugacy of so n we can assume that hr where hr is thought of as the set of real points of and is a lattice in n so n the hypersurface provides a decomposition of into either an amalgamated free product or an hnn extension depending on whether or not is separating using this decomposition we can construct a family su n such that where is the inclusion of into su n as follows if is separating then m consists of two connected components and with fundamental groups and respectively in this case the group is generated by and we define on this generating set since centralizes we see that the relations coming from the amalgamated product decomposition are satisfied and so su n is well defined if is then m m is connected if we let be the fundamental group of m then we can arrive at the decomposition in this case is generated by t where t is a free letter and we define on generators as t again since centralizes we see that the relations for the hnn extension are satisfied and so su n is well defined the representations constructed above are called bending deformations of along or just bending deformations if and are clear from context by work of jm this path of representations is in fact a deformation of the are pairwise for small values of proof proof of theorem we proceed by constructing infinitely many commensurability classes of cusped hyperbolic manifolds containing totally geodesic hypersurfaces this is done via a well known arithmetic construction see ber the rough idea is to look at the group of integer points of the orthogonal groups of various carefully selected quadratic forms of signature n the quotient m hnr will be a cusped hyperbolic containing a totally geodesic hypersurface after passing to a carefully selected cover we can produce our parabolic preserving representations via the bending construction we now discuss the details for a specific form and observe that the proof is essentially unchanged if one n the group clearly selects a different form let sl n z so n and let contains unipotent elements and so we see that hnr is a cusped hyperbolic which contains an immersed totally geodesic suborbifold isomorphic to hr by combining work of bergeron of ber and proposition of mrs we can find finite and corresponding 
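The bending assignment itself did not survive extraction. The following is a minimal sketch in the standard Johnson–Millson conventions, consistent with the prose description above; the authors' exact choices (which factor is conjugated, and on which side the stable letter is multiplied by the centralizing element) may differ.

```latex
% Sketch of the bending deformation along a totally geodesic hypersurface
% \Sigma, with \epsilon_t in the one-parameter centralizer of
% \rho(\pi_1(\Sigma)).  Separating case,
% \pi_1(M) = \pi_1(M_1) *_{\pi_1(\Sigma)} \pi_1(M_2):
\rho_t(\gamma) =
\begin{cases}
  \rho(\gamma) & \gamma \in \pi_1(M_1),\\[2pt]
  \epsilon_t\, \rho(\gamma)\, \epsilon_t^{-1} & \gamma \in \pi_1(M_2).
\end{cases}
% Non-separating (HNN) case, with stable letter t:
\rho_t(\gamma) = \rho(\gamma) \quad \text{for } \gamma \in \pi_1(M \setminus \Sigma),
\qquad
\rho_t(t) = \rho(t)\, \epsilon_t .
% Both assignments respect the amalgam / HNN relations precisely because
% \epsilon_t commutes with every element of \rho(\pi_1(\Sigma)).
```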
manifolds m hn and with the index subgroups and r r following properties is is embedded in m m has only torus cusps each m contains the totally geodesic hypersurface along which we can bend to produce a family of representations from into su n we now show that the representatons are then by theorem the are discrete and faithful for small values of lemma the representations su n obtained by bending along are proof by construction we have arranged that is and so there are no elliptic elements to consider furthermore the only parabolic elements of correspond to loops in m that are freely homotopic to one of the torus cusps we now discuss how such an element is modified when one bends let be a parabolic element of and let be its fixed point on hnc there is a foliation of hnc by horospheres centered at and preserves this foliation leafwise furthermore leafwise preservation of this foliation characterizes parabolic isometries of hnc that fix thus it suffices to show that preserves this foliation regard as a loop in m based at and lift to a path in hnc based at let be the lift of that contains each time intersects a lift of to hnc counted with orientation the holonomy is modified by composing with a heisenberg rotation of angle centered at that acts as the identity on each of these modifications is by an element of su n that leafwise preserves the foliation of horospheres centered at and so also preserves this foliation leafwise and is thus parabolic more specifically if we pk let then there are two cases if then is a unipotent parabolic which is conjugate to if then is an isometry whose angle of rotation is see apanasov ap for a detailed description in the n case remark it is well known see t or more generally ht that the complement in s of the knot does not contain an embedded totally geodesic hypersurface therefore the deformations produced in theorem are distinct from those produced by theorem references ap apanasov bending deformations of complex hyperbolic surfaces reine angew math ballas deformations of noncompact projective manifolds algebr geom topol no ballas finite volume properly convex deformations of the knot geom dedicata bdl ballas danciger lee convex projective structures on preprint arxiv bl ballas long constructing thin subgroups commensurable with the knot group algebr geom topol no bm ballas marquis properly convex bending of hyperbolic manifolds preprint arxiv bsc bart and scannell the generalized cuspidal cohomology problem canad j math no ber bergeron premier nombre de betti et spectre du laplacien de certaines hyperboliques enseign math bers bers simultaneous uniformization bull amer math soc cg chen greenberg hyperbolic spaces in contributions to analysis academic press new york clt cooper long and thistlethwaite flexing closed hyperbolic manifolds geom topol cltii cooper long and thistlethwaite computing varieties of representations of hyperbolic into sl r of experimental math vol pp gr garland raghunathan fundamental domains for lattices in rank semisimple lie groups ann of math go goldman complex hyperbolic geometry oxford mathematical monographs oxford university press gps gromov groups in lobachevsky spaces publ math ihes gui guichard groupes dans un goupe de lie math ann no ht hatcher and thurston incompressible surfaces in knot complements invent math no hp heusener and porti infinitesimal projective rigidity under dehn filling geom topol no jm johnson and millson deformation spaces associated to compact hyperbolic manifolds in papers in honor of mostow on his 
sixtieth birthday roger howe progress in mathematics kap kapovich deformations of representations of discrete subgroups of so math annalen kp kim parker geometry of quaternionic hyperbolic manifolds math proc camb phil soc pp parker platis complex hyperbolic groups in geometry of riemann surfaces london mathematical society lecture notes mrs mcreynolds and reid and stover collisions at infinity in hyperbolic manifolds math proc cambridge philos soc sc scannell local rigidity of hyperbolic after dehn surgery duke math j no schwartz the classification of rank one lattices publ math ihes schwartz complex hyperbolic triangle groups from proceedings of the international congress of mathematicians vol ii beijing higher ed press beijing t thurston the geometry and topology of electronic version available at http w weil discrete subgroups of lie groups ii ann of math samuel ballas department of mathematics florida state university ballas julien paupert school of mathematical and statistical sciences arizona state university paupert pierre will institut fourier de grenoble i
published as a conference paper at iclr on the importance of single directions for generalization mar ari david barrett neil rabinowitz matthew botvinick deepmind london uk arimorcos barrettdavid ncr botvinick a bstract despite their ability to memorize large datasets deep neural networks often achieve good generalization performance however the differences between the learned solutions of networks which generalize and those which do not remain unclear additionally the tuning properties of single directions defined as the activation of a single unit or some linear combination of units in response to some input have been highlighted but their importance has not been evaluated here we connect these lines of inquiry to demonstrate that a network s reliance on single directions is a good predictor of its generalization performance across networks trained on datasets with different fractions of corrupted labels across ensembles of networks trained on datasets with unmodified labels across different hyperparameters and over the course of training while dropout only regularizes this quantity up to a point batch normalization implicitly discourages single direction reliance in part by decreasing the class selectivity of individual units finally we find that class selectivity is a poor predictor of task importance suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity but also that individually selective units may not be necessary for strong network performance i ntroduction recent work has demonstrated that deep neural networks dnns are capable of memorizing extremely large datasets such as imagenet zhang et despite this capability dnns in practice achieve low generalization error on tasks ranging from image classification he et to language translation wu et these observations raise a key question why do some networks generalize while others do not answers to these questions have taken a variety of forms a variety of studies have related generalization performance to the flatness of minima and bounds hochreiter schmidhuber keskar et neyshabur et dziugaite roy though recent work has demonstrated that sharp minima can also generalize dinh et others have focused on the information content stored in network weights achille soatto while still others have demonstrated that stochastic gradient descent itself encourages generalization bousquet elisseeff smith le wilson et here we use ablation analyses to measure the reliance of trained networks on single directions we define a single direction in activation space as the activation of a single unit or feature map or some linear combination of units in response to some input we find that networks which memorize the training set are substantially more dependent on single directions than those which do not and that this difference is preserved even across sets of networks with identical topology trained on identical data but with different generalization performance moreover we found that as networks begin to overfit they become more reliant on single directions suggesting that this metric could be used as a signal for early stopping corresponding author arimorcos published as a conference paper at iclr we also show that networks trained with batch normalization are more robust to cumulative ablations than networks trained without batch normalization and that batch normalization decreases the class selectivity of individual feature maps suggesting an alternative mechanism by which 
batch normalization may encourage good generalization performance finally we show that despite the focus on selective single units in the analysis of dnns and in neuroscience le et zhou et radford et britten et the class selectivity of single units is a poor predictor of their importance to the network s output a pproach in this study we will use a set of perturbation analyses to examine the relationship between a network s generalization performance and its reliance upon single directions in activation space we will then use a measure of class selectivity to compare the selectivity of individual directions across networks with variable generalization performance and examine the relationship between class selectivity and importance s ummary of models and datasets analyzed we analyzed three models a layer mlp trained on mnist an convolutional network trained on and a residual network trained on imagenet in all experiments relu nonlinearities were applied to all layers but the output unless otherwise noted batch normalization was used for all convolutional networks ioffe szegedy for the imagenet resnet accuracy was used in all cases partially corrupted labels as in zhang et al we used datasets with differing fractions of randomized labels to ensure varying degrees of memorization to create these datasets a given fraction of labels was randomly shuffled and assigned to images such that the distribution of labels was maintained but any true patterns were broken p erturbation analyses ablations we measured the importance of a single direction to the network s computation by asking how the network s performance degrades once the influence of that direction was removed to remove a single direction we clamped the activity of that direction to a fixed value ablating the direction ablations were performed either on single units in mlps or an entire feature map in convolutional networks for brevity we will refer to both of these as critically all ablations were performed in activation space rather than weight space more generally to evaluate a network s reliance upon sets of single directions we asked how the network s performance degrades as the influence of increasing subsets of single directions was removed by clamping them to a fixed value analogous to removing increasingly large subspaces within activation space this analysis generates curves of accuracy as a function of the number of directions ablated the more reliant a network is on activation subspaces the more quickly the accuracy will drop as single directions are ablated interestingly we found that clamping the activation of a unit to the empirical mean activation across the training or testing set was more damaging to the network s performance than clamping the activation to zero see appendix we therefore clamped activity to zero for all ablation experiments addition of noise as the above analyses perturb units individually they only measure the influence of single directions to test networks reliance upon random single directions we added gaussian noise to all units with zero mean and progressively increasing variance to scale the variance appropriately for each unit the variance of the noise added was normalized by the empirical variance of the unit s activations across the training set q uantifying class selectivity to quantify the class selectivity of individual units we used a metric inspired by the selectivity indices commonly used in systems neuroscience de valois et britten et freedman published as a conference paper at iclr 
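The cumulative-ablation procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: `model`, `layer`, and `loader` are hypothetical stand-ins for a PyTorch-style network (in eval mode), the module whose feature maps are ablated, and a labeled data loader. The clamp value is left as a parameter because, as discussed in the appendix, clamping to zero versus to the empirical mean gives different results.

```python
# Minimal sketch of cumulative ablation in activation space (not the authors' code).
import torch

def clamp_channels(layer, channels, value=0.0):
    """Attach a forward hook that clamps the given feature maps / units of
    `layer` to `value`, i.e. ablates those directions in activation space."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, channels] = value
        return output                      # returning a value replaces the output
    return layer.register_forward_hook(hook)

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def cumulative_ablation_curve(model, layer, n_channels, loader, value=0.0, seed=0):
    """Accuracy as progressively larger random subsets of single directions are
    clamped to `value`; repeat over several seeds to average ablation orderings."""
    order = torch.randperm(n_channels,
                           generator=torch.Generator().manual_seed(seed))
    curve = [accuracy(model, loader)]      # zero directions ablated
    for k in range(1, n_channels + 1):
        handle = clamp_channels(layer, order[:k], value)
        curve.append(accuracy(model, loader))
        handle.remove()
    return curve
```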
figure memorizing networks are more sensitive to cumulative ablations networks were trained on mnist layer mlp a convolutional network b and imagenet resnet c in a all units in all layers were ablated while in b and c only feature maps in the last three layers were ablated error bars represent standard deviation across random orderings of units to ablate assad the mean activity was first calculated across the test set and the selectivity index was then calculated as follows selectivity with representing the highest mean activity and representing the mean activity across all other classes for convolutional feature maps activity was first averaged across all elements of the feature map this metric varies from to with meaning that a unit s average activity was identical for all classes and meaning that a unit was only active for inputs of a single class we note that this metric is not a perfect measure of information content in single units for example a unit with a little information about every class would have a low class selectivity index however it does measure the discriminability of classes along a given direction the selectivity index also identifies units with the same class tuning properties which have been highlighted in the analysis of dnns le et zeiler fergus coates et zhou et radford et however in addition to class selectivity we replicate all of our results using mutual information which in contrast to class selectivity should highlight units with information about multiple classes and we find qualitively similar outcomes appendix we also note that while a class can be viewed as a highly abstract feature implying that our results may generalize to feature selectivity we do not examine feature selectivity in this work e xperiments g eneralization here we provide a rough intuition for why a network s reliance upon single directions might be related to generalization performance consider two networks trained on a large labeled dataset with some underlying structure one of the networks simply memorizes the labels for each input example and will by definition generalize poorly memorizing network while the other learns the structure present in the data and generalizes well network the minimal description length of the model should be larger for the memorizing network than for the structurefinding network as a result the memorizing network should use more of its capacity than the network and by extension more single directions therefore if a random single direction is perturbed the probability that this perturbation will interfere with the representation of the data should be higher for the memorizing network than for the assuming that the memorizing network uses a fraction of its capacity published as a conference paper at iclr a b figure memorizing networks are more sensitive to random noise networks were trained on mnist layer mlp a and convolutional network b noise was scaled by the empirical variance of each unit on the training set error bars represent standard deviation across runs is on a log scale a b figure networks which generalize poorly are more reliant on single directions networks with identical topology were trained on unmodified a cumulative ablation curves for the best and worst networks by generalization error error bars represent standard deviation across models and random orderings of feature maps per model b area under cumulative ablation curve normalized as a function of generalization error to test whether memorization leads to greater reliance on single 
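The displayed selectivity formula did not survive extraction; the version used below, (mu_max − mu_not_max) / (mu_max + mu_not_max), is reconstructed from the surrounding description (it equals 0 when all class-conditional mean activities are identical and 1 when only a single class activates the unit). The sketch is illustrative rather than the authors' implementation; `activations` and `labels` are hypothetical arrays, and convolutional feature maps should be averaged over spatial positions before being passed in.

```python
# Sketch of the per-unit class-selectivity index described above.
import numpy as np

def class_selectivity(activations, labels, n_classes, eps=1e-12):
    """activations: (n_examples, n_units) array of (non-negative) activities,
    labels: (n_examples,) integer class labels.  Returns one index per unit."""
    class_means = np.stack([activations[labels == c].mean(axis=0)
                            for c in range(n_classes)])      # (n_classes, n_units)
    sorted_means = np.sort(class_means, axis=0)
    mu_max = sorted_means[-1]                 # highest class-conditional mean activity
    mu_not_max = sorted_means[:-1].mean(axis=0)   # mean activity over all other classes
    return (mu_max - mu_not_max) / (mu_max + mu_not_max + eps)
```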
directions we trained a variety of network types on datasets with differing fractions of randomized labels and evaluated their performance as progressively larger fractions of units were ablated see sections and by definition these curves must begin at the network s training accuracy approximately for all networks tested and fall to chance levels when all directions have been ablated to rule out variance due to the specific order of unit ablation all experiments were performed with mutliple random ablation orderings of units as many of the models were trained on datasets with corrupted labels and by definition can not generalize training accuracy was used to evaluate model performance consistent with our intuition we found that networks trained on varying fractions of corrupted labels were significantly more sensitive to cumulative ablations than those trained on datasets comprised of true labels though curves were not always perfectly ordered by the fraction of corrupted labels fig we next asked whether this effect was present if networks were perturbed along random bases to test this we added noise to each unit see section again we found that networks trained on corrupted labels were substantially and consistently more sensitive to noise added along random bases than those trained on true labels fig the above results apply to networks which are forced to memorize at least a portion of the training set there is no other way to solve the task however it is unclear whether these results would apply to networks trained on uncorrupted data in other words do the solutions found by networks with the same topology and data but different generalization performance exhibit differing reliance upon single directions to test this we trained networks on and evaluated their generalization error and reliance on single directions all networks had the same topology and published as a conference paper at iclr were trained on the same dataset unmodified individual networks only differed in their random initialization drawn from identical distributions and the data order used during training we found that the networks with the best generalization performance were more robust to the ablation of single directions than the networks with the worst generalization performance fig to quantify this further we measured the area under the ablation curve for each of the networks and plotted it as a function of generalization error fig interestingly networks appeared to undergo a discrete regime shift in their reliance upon single directions however this effect might have been caused by degeneracy in the set of solutions found by the optimization procedure and we note that there was also a negative correlation present within clusters top left cluster these results demonstrate that the relationship between generalization performance and single direction reliance is not merely a of training with corrupted labels but is instead present even among sets networks with identical training data r eliance on single directions as a signal for model selection this relationship raises an intriguing question can single direction reliance be used to estimate generalization performance without the need for a test set and if so might it be used as a signal for early stopping or hyperpameter selection as a experiment for early stopping we trained an mlp on mnist and measured the area under the cumulative ablation curve auc over the course of training along with the train and test loss interestingly we found that the point in training at 
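The random-basis perturbation used in the noise experiments above can be sketched in the same hook-based style. This is not the authors' code; `unit_std` is a hypothetical per-unit standard deviation precomputed from training-set activations (shaped so that it broadcasts against the layer's output, e.g. `(C, 1, 1)` for a convolutional layer).

```python
# Sketch of variance-scaled Gaussian noise injection (not the authors' code).
import torch

def noise_hook(layer, unit_std, scale):
    """Forward hook adding zero-mean Gaussian noise whose per-unit variance is
    (scale * unit_std)**2, i.e. noise scaled by each unit's empirical variance."""
    def hook(module, inputs, output):
        return output + scale * unit_std * torch.randn_like(output)
    return layer.register_forward_hook(hook)

# Usage: sweep `scale` over increasing (e.g. log-spaced) values, attaching the
# hook, evaluating accuracy, and removing the hook at each step.
```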
which the auc began to drop was the same point that the train and test loss started to diverge fig furthermore we found that auc and test loss were negatively correlated spearman s correlation fig as a experiment for hyperparameter selection we trained models with different hyperparemeter settings hyperparameters with repeats each see appendix we found that auc and test accuracy were highly correlated spearman s correlation fig and by performing random subselections of hyperparameter settings auc selected one of the top and settings and of the time respectively with an average difference in test accuracy between the best model selected by auc and the optimal model of only mean std these results suggest that single direction reliance may serve as a good proxy for hyperparameter selection and early stopping but further work will be necessary to evaluate whether these results hold in more complicated datasets r elationship to dropout and batch normalization dropout our experiments are reminiscent of using dropout at training time and upon first inspection dropout may appear to discourage networks reliance on single directions srivastava et however while dropout encourages networks to be robust to cumulative ablations up until the dropout fraction used in training it should not discourage reliance on single directions past that point given enough capacity a memorizing network could effectively guard against dropout by merely copying the information stored in a given direction to several other directions however the network will only be encouraged to make the minimum number of copies necessary to guard against the dropout fraction used in training and no more in such a case the network would be robust to dropout so long as all redundant directions were not simultaneously removed yet still be highly reliant on single directions past the dropout fraction used in training to test whether this intuition holds we trained mlps on mnist with dropout probabilities on both corrupted and unmodified labels consistent with the observation in arpit et al we found that networks with dropout trained on randomized labels required more epochs to converge and converged to worse solutions at higher dropout probabilities suggesting that dropout does indeed discourage memorization however while networks trained on both corrupted and unmodified labels exhibited minimal loss in training accuracy as single directions were removed up to the dropout fraction used in training past this point networks trained on randomized labels were much more sensitive to cumulative ablations than those trained on unmodified labels fig interestingly networks trained on unmodified labels with different dropout fractions were all similarly robust to cumulative ablations these results suggest that while dropout may serve as an effective regularizer to prevent memorization of randomized labels it does not prevent on single directions past the dropout fraction used in training published as a conference paper at iclr a b c figure single direction reliance as a signal for hyperparameter selection and early stopping a train blue and test purple loss along with the normalized area under the cumulative ablation curve auc green over the course of training for an mnist mlp loss has been cropped to make divergence visible b auc and test loss for a convnet are negatively correlated over the course of training c auc and test accuracy are positively corrleated across a hyperparameter sweep hyperparameters with repeats for each a b figure impact of 
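A lightweight way to track the area-under-the-ablation-curve (AUC) signal described above is sketched below. It is not the authors' code: the normalization is one natural choice consistent with the text, the curve can be sparsely sampled to keep evaluation cheap, and `should_stop` is a purely hypothetical early-stopping rule built on the observation that AUC begins to drop when train and test loss diverge.

```python
# Sketch of the normalized ablation-curve AUC and a hypothetical stopping rule.
import numpy as np

def normalized_auc(ablation_curve):
    """Trapezoidal area under accuracy vs. fraction of directions ablated,
    normalized so that a perfectly flat curve at the initial accuracy gives 1."""
    acc = np.asarray(ablation_curve, dtype=float)
    frac = np.linspace(0.0, 1.0, len(acc))
    return np.trapz(acc, frac) / max(acc[0], 1e-12)

def should_stop(auc_history, patience=3, tol=1e-3):
    """Hypothetical rule: stop once AUC has failed to reach its previous best
    for `patience` consecutive evaluations."""
    if len(auc_history) <= patience:
        return False
    best = max(auc_history[:-patience])
    return all(a < best - tol for a in auc_history[-patience:])
```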
regularizers on networks reliance upon single directions a cumulative ablation curves for mlps trained on unmodified and fully corrupted mnist with dropout fractions colored dashed lines indicate number of units ablated equivalent to the dropout fraction used in training note that curves for networks trained on corrupted mnist begin to drop soon past the dropout fraction with which they were trained b cumulative ablation curves for networks trained on with and without batch normalization error bars represent standard deviation across model instances and random orderings of feature maps per model published as a conference paper at iclr a b figure batch normalization decreases class selectivity and increases mutual information distributions of class selectivity a and mutual information b for networks trained with blue and without batch normalization purple each distribution comprises model instances trained on uncorrupted batch normalization in contrast to dropout batch normalization does appear to discourage reliance upon single directions to test this we trained convolutional networks on with and without batch normalization and measured their robustness to cumulative ablation of single directions networks trained with batch normalization were consistently and substantially more robust to these ablations than those trained without batch normalization fig this result suggests that in addition to reducing covariate shift as has been proposed previously ioffe szegedy batch normalization also implicitly discourages reliance upon single directions r elationship between class selectivity and importance our results thus far suggest that networks which are less reliant on single directions exhibit better generalization performance this result may appear in light of extensive past work in both neuroscience and deep learning which highlights single units or feature maps which are selective for particular features or classes le et zeiler fergus coates et zhou et radford et here we will test whether the class selectivity of single directions is related to the importance of these directions to the network s output first we asked whether batch normalization which we found to discourage reliance on single directions also influences the distribution of information about class across single directions we used the selectivity index described above see section to quantify the discriminability between classes based on the activations of single feature maps across networks trained with and without batch normalization interestingly we found that while networks trained without batch normalization exhibited a large fraction of feature maps with high class the class selectivity of feature maps in networks trained with batch normalization was substantially lower fig in contrast we found that batch normalization increases the mutual information present in feature maps fig these results suggest that batch normalization actually discourages the presence of feature maps with concentrated class information and rather encourages the presence of feature maps with information about multiple classes raising the question of whether or not such highly selective feature maps are actually beneficial we next asked whether the class selectivity of a given unit was predictive of the impact on the network s loss of ablating said unit since these experiments were performed on networks trained on unmodified labels test loss was used to measure network impact for mlps trained on mnist we found that there was a slight but minor 
correlation spearman s correlation between a unit s class selectivity and the impact of its ablation and that many highly selective units had minimal impact when ablated fig by analyzing convolutional networks trained on and imagenet we again found that across layers the ablation of highly selective feature maps was no more impactful than the ablation of feature maps figs and in fact in the networks there was actually a negative correlation between class selectivity and feature map importance spearman s correlation fig to test whether this relationship was we calculated the correlation between class selectivity and importance separately for each layer and found that the vast majority of the negative correlation was driven by early and dead feature maps feature maps with no activity would have a selectivity index of published as a conference paper at iclr convnet mnist mlp a b c imagenet resnet d e figure selective and directions are similarly important impact of ablation as a function of class selectivity for mnist mlp a convolutional network and imagenet resnet c and e show regression lines for each layer separately layers while later layers exhibited no relationship between class selectivity and importance figs and interestingly in all three networks ablations in early layers were more impactful than ablations in later layers consistent with theoretical observations raghu et additionally we performed all of the above experiments with mutual information in place of class selectivity and found qualitatively similar results appendix as a final test we compared the class selectivity to the of the filter weights a metric which has been found to be a successful predictor of feature map importance in the model pruning literature li et consistent with our previous observations we found that class selectivity was largely unrelated to the of the filter weights and if anything the two were negatively correlated fig see appendix for details taken together these results suggest that class selectivity is not a good predictor of importance and imply that class selectivity may actually be detrimental to network performance further work will be necessary to examine whether class feature selectivity is harmful or helpful to network performance r elated work much of this work was directly inspired by zhang et al and we replicate their results using partially corrupted labels on and imagenet by demonstrating that memorizing networks are more reliant on single directions we also provide an answer to one of the questions they posed is there an empirical difference between networks which memorize and those which generalize our work is also related to work linking generalization and the sharpness of minima hochreiter schmidhuber keskar et neyshabur et these studies argue that flat minima generalize better than sharp minima though dinh et al recently found that sharp minima can also generalize well this is consistent with our work as flat minima should correspond to solutions in which perturbations along single directions have little impact on the network output another approach to generalization has been to contextualize it in information theory for example achille soatto demonstrated that networks trained on randomized labels store more published as a conference paper at iclr information in their weights than those trained on unmodfied labels this notion is also related to tishby which argues that during training networks proceed first through a loss minimization phase followed by a compression phase here 
again our work is consistent as networks with more information stored in their weights less compressed networks should be more reliant upon single directions than compressed networks more recently arpit et al analyzed a variety of properties of networks trained on partially corrupted labels relating performance and to capacity they also demonstrated that dropout when properly tuned can serve as an effective regularizer to prevent memorization however we found that while dropout may discourage memorization it does not discourage reliance on single directions past the dropout probability we found that class selectivity is a poor predictor of unit importance this observation is consistent with a variety of recent studies in neuroscience in one line of work the benefits of neural systems which are robust to noise have been explored barrett et al montijn et al another set of studies have demonstrated the presence of neurons with multiplexed information about many stimuli and have shown that task information can be decoded with high accuracy from populations of these neurons with low individual class selectivity averbeck et al rigotti et al mante et al raposo et al morcos harvey zylberberg perturbation analyses have been performed for a variety of purposes in the model pruning literature many studies have removed units with the goal of generating smaller models with similar performance li et anwar et molchanov et and recent work has explored methods for discovering maximally important directions raghu et al a variety of studies within deep learning have highlighted single units which are selective for features or classes le et zeiler fergus coates et zhou et radford et agrawal et additionally agrawal et al analyzed the minimum number of sufficient feature maps sorted by a measure of selectivity to achieve a given accuracy however none of the above studies has tested the relationship between a unit s class selectivity or information content and its necessity to the network s output bau et al have quantified a related metric concept selectivity across layers and networks finding that units get more with depth which is consistent with our own observations regarding class selectivity see appendix however they also observed a correlation between the number of units and performance on the dataset across networks and architectures it is difficult to compare these results directly as the data used are substantially different as is the method of evaluating selectivity nevertheless we note that bau et al measured the absolute number of units across networks with different total numbers of units and depths the relationship between the number of units and network performance may therefore arise as a result of a larger number of total units if a fixed fraction of units is and increased depth we both observed that selectivity increases with depth d iscussion and future work in this work we have taken an empirical approach to understand what differentiates neural networks which generalize from those which do not our experiments demonstrate that generalization capability is related to a network s reliance on single directions both in networks trained on corrupted and uncorrupted data and over the course of training for a single network they also show that batch normalization a highly successful regularizer seems to implicitly discourage reliance on single directions one clear extension of this work is to use these observations to construct a regularizer which more directly penalizes reliance on single 
directions as it happens the most obvious candidate to regularize single direction reliance is dropout or its variants which as we have shown does not appear to regularize for single direction reliance past the dropout fraction used in training section interestingly these results suggest that one is able to predict a network s generalization performance without inspecting a validation or test set this observation could be used in several interesting ways first in situations where labeled training data is sparse testing networks reliance on single directions may provide a mechanism to assess generalization performance without published as a conference paper at iclr ing training data to be used as a validation set second by using computationally cheap empirical measures of single direction reliance such as evaluating performance at a single ablation point or sparsely sampling the ablation curve this metric could be used as a signal for or hyperparameter selection we have shown that this metric is viable in simple datasets section but further work will be necessary to evaluate viability in more complicated datasets another interesting direction for further research would be to evaluate the relationship between single direction reliance and generalization performance across different generalization regimes in this work we evaluate generalization in which train and test data are drawn from the same distribution but a more stringent form of generalization is one in which the test set is drawn from a unique but overlapping distribution with the train set the extent to which single direction reliance depends on the overlap between the train and test distributions is also worth exploring in future research this work makes a potentially surprising observation about the role of individually selective units in dnns we found not only that the class selectivity of single directions is largely uncorrelated with their ultimate importance to the network s output but also that batch normalization decreases the class selectivity of individual feature maps this result suggests that highly class selective units may actually be harmful to network performance in addition it implies than methods for understanding neural networks based on analyzing highly selective single units or finding optimal inputs for single units such as activation maximization erhan et may be misleading importantly as we have not measured feature selectivity it is unclear whether these results will generalize to featureselective directions further work will be necessary to clarify all of these points acknowledgments we would like to thank chiyuan zhang ben poole sam ritter avraham ruderman and adam santoro for critical feedback and helpful discussions r eferences alessandro achille and stefano soatto on the emergence of invariance and disentangling in deep representations pp url http pulkit agrawal ross b girshick and jitendra malik analyzing the performance of multilayer neural networks for object recognition eccv guillaume alain and yoshua bengio understanding intermediate layers using linear classifier probes url http sajid anwar kyuyeon hwang and wonyong sung structured pruning of deep convolutional neural networks devansh arpit nicolas ballas david krueger emmanuel bengio maxinder kanwal tegan maharaj asja fischer aaron courville yoshua bengio and simon a closer look at memorization in deep networks issn url http bruno b averbeck peter e latham and alexandre pouget neural correlations population coding and computation nature reviews 
neuroscience may issn doi url http david barrett sophie and christian machens optimal compensation for neuron loss elife issn doi david bau bolei zhou aditya khosla aude oliva and antonio torralba network dissection quantifying interpretability of deep visual representations doi url http olivier bousquet and elisseeff stability and generalization journal of machine learning research jmlr mar issn url http published as a conference paper at iclr kenneth h britten michael n shadlen william t newsome and j anthony movshon the analysis of visual motion a comparison of neuronal and psychophysical performance journal of neuroscience adam coates andrej karpathy and andrew y ng emergence of features in unsupervised feature learning nips pp issn doi url http coateskarpathyng russell l de valois e william yund and norva hepler the orientation and direction selectivity of cells in macaque visual cortex vision research laurent dinh razvan pascanu samy bengio and yoshua bengio sharp minima can generalize for deep nets gintare karolina dziugaite and daniel roy computing nonvacuous generalization bounds for deep stochastic neural networks with many more parameters than training data url http dumitru erhan yoshua bengio aaron courville and pascal vincent visualizing features of a deep network technical report pp david j freedman and john a assad representation of visual categories in parietal cortex nature sep issn doi url http kaiming he xiangyu zhang shaoqing ren and jian sun deep residual learning for image recognition issn doi sepp hochreiter and schmidhuber flat minima neural comput sergey ioffe and christian szegedy batch normalization accelerating deep network training by reducing internal covariate shift arxiv url http nitish shirish keskar dheevatsa mudigere jorge nocedal mikahail smelyanskiy and ping tak peter tang on training for deep learning generalization gap and sharp minima in iclr pp quoc v le marc aurelio ranzato rajat monga matthieu devin kai chen greg s corrado jeff dean and andrew y ng building features using large scale unsupervised learning international conference in machine learning issn doi hao li asim kadav igor durdanovic hanan samet and hans peter graf pruning filters for efficient convnets valerio mante david sussillo krishna v shenoy and william t newsome computation by recurrent dynamics in prefrontal cortex nature november issn doi url http pavlo molchanov stephen tyree tero karras timo aila and jan kautz pruning convolutional neural networks for resource efficient inference iclr jorrit montijn guido meijer carien lansink and cyriel m a pennartz neural codes are robust to variability from a multidimensional coding perspective cell reports issn doi url http ari s morcos and christopher d harvey variability in population dynamics during evidence accumulation in cortex nature neuroscience october issn doi url http published as a conference paper at iclr behnam neyshabur srinadh bhojanapalli david mcallester and nathan srebro exploring generalization in deep learning url https alec radford rafal jozefowicz and ilya sutskever learning to generate reviews and discovering sentiment url http maithra raghu ben poole jon kleinberg surya ganguli and jascha on the expressive power of deep neural networks url http maithra raghu justin gilmer jason yosinski and jascha svcca singular vector canonical correlation analysis for deep understanding and improvement pp url http david raposo matthew t kaufman and anne k churchland a neural population supports evolving demands during nature 
neuroscience november issn doi url http mattia rigotti omri barak melissa r warden wang nathaniel d daw earl k miller and stefano fusi the importance of mixed selectivity in complex cognitive tasks nature issn doi url http ravid and naftali tishby opening the black box of deep neural networks via information arxiv pp url http samuel smith and quoc le understanding generalization and stochastic gradient descent pp url http nitish srivastava geoffrey hinton alex krizhevsky ilya sutskever and ruslan salakhutdinov dropout a simple way to prevent neural networks from overfitting journal of machine learning research jmlr issn ashia wilson rebecca roelofs mitchell stern nathan srebro and benjamin recht the marginal value of adaptive gradient methods in machine learning pp url http yonghui wu mike schuster zhifeng chen quoc v le mohammad norouzi wolfgang macherey maxim krikun yuan cao qin gao klaus macherey et al google s neural machine translation system bridging the gap between human and machine translation arxiv preprint matthew zeiler and rob fergus visualizing and understanding convolutional networks computer visioneccv issn doi url http chiyuan zhang samy bengio moritz hardt benjamin recht and oriol vinyals understanding deep learning requires rethinking generalization url http bolei zhou aditya khosla agata lapedriza aude oliva and antonio torralba object detectors emerge in deep scene cnns url http joel zylberberg untuned but not irrelevant the role of untuned neurons in sensory information coding september url https published as a conference paper at iclr a a ppendix c omparison of ablation methods to remove the influence of a given direction its value should be fixed or otherwise modified such that it is no longer dependent on the input however the choice of such a fixed value can have a substantial impact for example if its value were clamped to one which is highly unlikely given its distribution of activations across the training set network performance would likely suffer drastically here we compare two methods for ablating directions ablating to zero and ablating to the empirical mean over the training set using convolutional networks trained on we performed cumulative ablations either ablating to zero or to the feature map s mean means were calculated independently for each element of the feature map and found that ablations to zero were significantly less damaging than ablations to the feature map s mean fig interestingly this corresponds to the ablation strategies generally used in the model pruning literature li et anwar et molchanov et figure ablation to zero ablation to the empirical feature map mean t raining details mnist mlps for class selectivity generalization early stopping and dropout experiments each layer contained and units respectively all networks were trained for epochs with the exception of dropout networks which were trained for epochs convnets convolutional networks were all trained on for epochs layer sizes were with strides of respectively all kernels were for the hyperparameter sweep used in section learning rate and batch size were evaluated using a grid search imagenet resnet residual networks he et were trained on imagenet using distributed training with workers and a batch size of for steps blocks were structured as follows stride filter sizes output channels x x x x for training with partially corrupted labels we did not use any data published as a conference paper at iclr a b figure class selectivity increases with depth class selectivity distributions 
as a function of depth for a and imagenet b a b figure class selectivity is uncorrelated with relationship between class selectivity and the of the filter weights for a and imagenet b tation as it would have dramatically increasing the effective training set size and hence prevented memorization d epth dependence of class selectivity here we evaluate the distribution of class selectivity as a function of depth in both networks trained on fig and imagenet fig selectivity increased as a function of depth this result is consistent with bau et al who show that increases with depth it is also consistent with alain bengio who show depth increases the linear decodability of class information though they evaluate linear decodability based on an entire layer rather than a single unit r elationship between class selectivity and the filter weight norm importantly our results on the lack of relationship between class selectivity and importance do not suggest that there are not directions which are more or less important to the network s output nor do they suggest that these directions are not predictable they merely suggest that class selectivity is not a good predictor of importance as a final test of this we compared class selectivity to the published as a conference paper at iclr of the filter weights a metric which has been found to be a strongly correlated with the impact of removing a filter in the model pruning literature li et since the of the filter weights is predictive of impact of a feature map s removal if class selectivity is also a good predictor the two metrics should be correlated in the imagenet network we found that there was no correlation between the of the filter weights and the class selectivity fig while in the network we found there was actually a negative correlation fig r elationship between mutual information and importance mnist mlp a convnet b c imagenet resnet d e figure mutual information is not a good predictor of unit importance impact of ablation as a function of mutual information for mnist mlp a convolutional network and imagenet resnet c and e show regression lines for each layer separately to examine whether mutual information which in contrast to class selectivity highlights units with information about multiple classes is a good predictor of importance we performed the same experiments as in section with mutual information in place of class selectivity we found that while the results were a little less consistent there appears to be some relationship in very early and very late layers in mutual information was generally a poor predictor of unit importance fig
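For completeness, one simple way to compute the per-unit mutual information used above as an alternative to class selectivity is sketched here. The paper's own estimator and discretization are not specified in the text, so this is only an assumed histogram-based variant: activations are binned and the mutual information with the class label is estimated from the joint counts.

```python
# Sketch of a histogram-based per-unit mutual-information estimate (assumed, not
# necessarily the authors' estimator).
import numpy as np
from sklearn.metrics import mutual_info_score

def unit_mutual_information(activations, labels, n_bins=20):
    """Mutual information (in nats) between each unit's binned activity and the
    class label.  `activations` has shape (n_examples, n_units)."""
    mi = np.empty(activations.shape[1])
    for j in range(activations.shape[1]):
        edges = np.histogram_bin_edges(activations[:, j], bins=n_bins)
        binned = np.digitize(activations[:, j], edges)
        mi[j] = mutual_info_score(labels, binned)
    return mi
```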
dec interactive visualization of persistence modules michael matthew abstract the goal of this work is to extend the standard persistent homology pipeline for exploratory data analysis to the persistence setting in a practical computationally efficient way to this end we introduce rivet a software tool for the visualization of persistence modules and present mathematical foundations for this tool rivet provides an interactive visualization of the barcodes of affine slices of a persistence module m it also computes and visualizes the dimension of each vector space in m and the bigraded betti numbers of m at the heart of our computational approach is a novel data structure based on planar line arrangements on which we can perform fast queries to find the barcode of any slice of m we present an efficient algorithm for constructing this data structure and establish bounds on its complexity contents introduction algebra preliminaries augmented arrangements of persistence modules querying the augmented arrangement computing the arrangement a m computing the barcode templates cost of computing and storing the augmented arrangement speeding up the computation of the augmented arrangement preliminary runtime results conclusion a appendix references notation index columbia university new york ny usa mlesnick olaf college northfield mn usa introduction overview topological data analysis tda is a relatively new branch of statistics whose goal is to apply topology to develop tools for studying the global geometric features of data persistent homology one of the central tools of tda provides invariants of data called barcodes by associating to the data a filtered topological space f and then applying standard topological and algebraic techniques to in the last years persistent homology has been widely applied in the study of scientific data and has been the subject of extensive theoretical work for many data sets of interest such as point cloud data with noise or in density a single filtered space is not a rich enough invariant to encode the structure of interest in our data this motivates the consideration of multidimensional persistent homology which in its most basic form associates to the data a topological space simultaneously equipped with two or more filtrations persistent homology yields algebraic invariants of data far more complex than in the setting new methodology is thus required for working with these invariants in practice in the tda community it is widely appreciated that there is a need for practical data analysis tools for handling multidimensional persistent homology however whereas the community has been quick to develop fast algorithms and good publicly available software for persistent homology it has been comparatively slow to extend these to the setting indeed to date there is to the best of our knowledge no publicly available software which extends the usual persistent homology pipeline for exploratory data analysis to the multidimensional persistence this work seeks to address this gap in the case of persistence building on ideas presented in and we introduce a practical tool for working with persistent homology in exploratory data analysis applications and develop mathematical and algorithmic foundations for this tool our tool can be used in much the same way that persistent homology is used in tda but offers the user significantly more information and flexibility than standard persistent homology does our tool which we call rivet the rank invariant visualization and 
exploration tool is allows the user to dynamically navigate a collection of persistence barcodes derived from a persistence module this is in contrast to previous visualization tools for persistent homology which have presented static displays because the invariants considered in this paper are larger and more complex than the standard persistence invariants it is essential for the user have some nice way of browsing the invariant on the computer screen we expect that as tda moves towards the use of richer invariants in practical applications interactive visualization paradigms will play an increasingly prominent role in the tda workflow it should be possible to extend our approach for persistence to at a tational cost however there are a number of practical challenges in this not the least of which is designing and implementing a suitable graphical user interface since there is already plenty to keep us busy in we restrict attention to the case in this paper in the remainder of this section we review multidimensional persistent homology introduce the rivet visualization paradigm and provide an overview of rivet s mathematical and computational underpinnings given the length of this paper we expect that some readers will be content to limit their first reading to this introduction we invite you to begin this way availability of the rivet software we plan to make our rivet software publicly available at http within the next few months in the meantime a demo of rivet can be accessed through the website multidimensional filtrations and persistence modules we start our introduction to rivet by defining multidimensional filtrations and persistence modules and reviewing the standard persistent homology pipeline for tda here and throughout we freely use basic language from category theory an accessible introduction to such language in the context of persistence theory can be found in notation for categories c and d let dc denote the category whose objects are functors c d and whose objects are natural transformations for c a poset category f c d a functor and c obj c let fc f c and for c d obj c let f c d fc fd denote the image under f of the unique morphism in homc c d let simp denote the category of simplicial complexes and simplicial maps for a fixed field k let vect denote the category of spaces and linear maps define a partial order on rn by taking an bn if and only if ai bi for all i and let rn denote the corresponding poset category for s a set we let s denote the set s finite multidimensional filtrations define an filtration to be a functor f rn simp such that for all a b f a b fa fb is an inclusion we will usually refer to a filtration as a bifiltration filtrations are the basic topological objects of study in multidimensional persistence in the computational setting we of course work with filtrations that are specified by a finite amount of data let us now introduce language and notation for such filtrations we say an filtration stabilizes if there exists rn such that fa whenever a we write fmax we say a simplex s in fmax appears a finite number of times if there is a finite set a rn such that for each a a s fa and for each b rn with s fb there is some a a with a b if s appears a finite number of times then a minimal such a is unique we denote it a s and call it the set of grades of appearance of we say an filtration f is finite if f stabilizes fmax is finite and each simplex s fmax appears a finite number of times for f finite we define the size of f by x s in the computational setting 
we can represent a finite filtration f in memory by storing the simplicial complex fmax along with the set a s for each s fmax multidimensional persistence modules define an persistence module to be a functor m rn vect we say m is pointwise finite dimensional if dim ma n for all a rn as we explain in section the category vectr of persistence modules is isomorphic to a category of modules over suitable rings in view of this we may define presentations of persistence modules see section for details for i let hi simp vect denote the ith simplicial homology functor with coefficients in for f a finite filtration there exists a finite presentation for hi f which implies in particular that hi f is see section for details on presentations barcodes of persistence modules shows that persistent homology modules decompose in an essentially unique way into indecomposable summands and that the isomorphism classes of these summands are parameterized by intervals in thus we may associate to each persistence module m a multiset b m of intervals in r which records the isomorphism classes of indecomposable summands of m we call b m the barcode of m in general we refer to any multiset of intervals as a barcode the barcode of a finitely presented persistence module consists of a finite set of intervals of the form a b with a r b r a barcode of this form can be represented as a persistence diagram a multiset of points in a b r r with a b the right side of fig depicts a barcode together with its corresponding persistence diagram figure a point cloud circle left and its persistence barcode right obtained via the construction a barcode can be visualized either by directly plotting each interval green bars oriented vertically or a via persistence diagram green dots right the single long interval in the barcode encodes the presence of a cycle in the point cloud persistence barcodes of data the standard persistent homology pipeline for data analysis associates a barcode bi p to a data set p for each i we regard each interval of each barcode bi p as a topological feature of the data and we interpret the length of the interval as a measure of robustness of the feature the pipeline for constructing bi p proceeds in three steps we associate to the data set p a finite filtration f p we apply hi to obtain a finitely presented persistence module hi f p we take bi p b hi f p this pipeline for the construction of barcodes of data is quite flexible in principle we can work with a data set p of any kind and may consider any number of choices of the filtration f p different choices of f p topologically encode different aspects of the structure of our data the barcodes bi p are readily computed in practice see or section for details persistence barcodes of finite metric spaces here is one standard choice of p and f p in the tda pipeline let p be a finite metric space and let f p rips p the filtration of p be defined as follows for t rips p t is the maximal simplicial complex with p and the pairs p q with d p q for t rips p t is the empty simplicial complex informally the long intervals in the barcodes bi p correspond to cycles in the data see fig for an illustration stability the following result shows that these barcode invariants of finite metric spaces are robust to certain perturbations of the data theorem stability of barcodes of finite metric spaces for all finite metric spaces p q and i dgh p q db bi p bi q where dgh denotes distance and db denotes the bottleneck distance on barcodes see for definitions see also for a 
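As an illustration of the in-memory representation of a finite bifiltration described above (the final complex together with each simplex's grades of appearance), the following Python sketch stores a small, made-up bifiltration; the simplices, grades, and function names are purely illustrative.

```python
# Each simplex of F_max (a tuple of vertex ids) is stored with its set of
# grades of appearance A(s): the minimal grades at which the simplex is present.
bifiltration = {
    (0,):      [(0.0, 0.0)],
    (1,):      [(0.0, 0.0)],
    (2,):      [(0.1, 0.0)],
    (0, 1):    [(0.0, 0.8)],
    (0, 2):    [(0.2, 0.5)],
    (1, 2):    [(0.2, 1.0), (0.1, 1.4)],  # several incomparable grades are allowed
    (0, 1, 2): [(0.2, 1.4)],
}

def bifiltration_size(f):
    # total number of grades of appearance over all simplices
    return sum(len(grades) for grades in f.values())

def complex_at(f, a, b):
    # simplices present at grade (a, b): those with some grade of appearance <= (a, b)
    return [s for s, grades in f.items()
            if any(x <= a and y <= b for (x, y) in grades)]
```

For a one-critical bifiltration each grade list has a single element; the multi-critical case, as for the edge (1, 2) above, is what makes the more general bookkeeping necessary.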
generalization of this theorem to compact metric spaces multidimensional persistent homology in many cases of interest a filtration is not sufficient to capture the structure of interest in our data in such cases we are naturally led to associate to our data an filtration for some n as in the case applying homology with field coefficients to an filtration yields an persistence module the question then arises of whether we can associate to this filtration an generalization of a barcode in a way that is useful for data analysis in what follows we motivate the study of multidimensional persistence by describing one natural way that bifiltrations arise in the study of finite metric spaces other ways that bifiltrations arise in tda applications are discussed for example in and we then discuss the algebraic difficulties involved in defining a multidimensional generalization of barcode bifiltrations of finite metric spaces in spite of theorem which tells us that the barcodes bi p bi hi rips p of a finite metric space p are well behaved in a certain sense these invariants have a couple of important limitations first they are highly unstable to the addition and removal of outliers see fig for an illustration second and relatedly when p exhibits in density the barcodes bi p can be insensitive to interesting structure in the high density regions of p see fig to address these issues proposed that we associate a bifiltration to p and study the persistent homology of this bifiltration we describe here both the proposal of which depends on a choice of bandwidth parameter and a simple variant which to the best of our knowledge has not appeared elsewhere the construction of the bifiltration proposed in depends on a choice of codensity function p r a function on p whose value is high at dense points and low at outliers for example proposes to take to be a neighbors density function x y x z y z figure barcodes are unstable with respect to addition of outliers and can be insensitive to interesting structure in high density regions of the data thus though the point clouds x and y share a densely sampled circle and differ only by the addition of a few outliers x and y are quite different from one another x has a long interval not appearing in y in contrast the point cloud z contains no densely sampled circle but the longest intervals of y and z are of similar length in general the choice of a density function depends on a choice of a bandwidth parameter given we may define a filtration rips by taking rips a b rips a b if a and b then rips a b rips thus the collection of all such simplicial complexes together with these inclusions yields a functor rips rop r simp rop r is naturally isomorphic to example there is an isomorphism sending each object a b to b upon the identification of these two categories we may regard rips as a bifiltration note that the definition of rips in fact makes sense for any function p as discussed in there are numerous possibilities for interesting choices of in our study of data aside from density estimates to obtain a variant of rips for a b we first define the graph ga b to be the subgraph of the of rips p b consisting of vertices whose degree is at least a we then define p a b to be the maximal simplicial complex with ga b as above upon the identification of rop r with the collection of simplicial complexes p a b a b defines a bifiltration p hi p is a richer algebraic invariant than hi rips p and in particular is sensitive to interesting structure in high density regions of p in a 
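The degree-Rips style construction just described can be made concrete as follows: at bigrade (a, b) one takes the subgraph of the 1-skeleton of Rips(P) at scale b induced on the vertices of degree at least a, and then the maximal simplicial complex having that graph as 1-skeleton. The sketch below is illustrative only (the function names and the restriction to 2-simplices are ours), not RIVET's implementation.

```python
import numpy as np
from itertools import combinations

def degree_rips_graph(dist, a, b):
    """G_{a,b}: the subgraph of the 1-skeleton of Rips(P)_b induced on the
    vertices whose degree in that 1-skeleton is at least a.
    dist: symmetric (n, n) array of pairwise distances."""
    n = dist.shape[0]
    adj = (dist <= b) & ~np.eye(n, dtype=bool)            # 1-skeleton of Rips(P)_b
    keep = [int(v) for v in np.where(adj.sum(axis=1) >= a)[0]]
    edges = [(i, j) for i, j in combinations(keep, 2) if adj[i, j]]
    return keep, edges

def clique_triangles(vertices, edges):
    """2-simplices of the maximal complex on G_{a,b}; higher-dimensional
    simplices correspond to larger cliques and are found the same way."""
    edge_set = set(edges)
    return [(i, j, k) for i, j, k in combinations(vertices, 3)
            if (i, j) in edge_set and (i, k) in edge_set and (j, k) in edge_set]
```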
way that hi rips p is not barcodes of persistence modules we now explain the algebraic difficulties with defining the barcode of a persistence module for n closely following a discussion in as for the case n for n finitely presented persistence modules also decompose in an essentially unique way into indecomposable summands this follows easily from a standard formulation of the theorem however it is a consequence of standard quiver theory results as described for example in that the set of isomorphism classes of indecomposable persistence modules is in contrast to the case extremely in particular for n the dimension of a vector space in a finitely presented indecomposable persistence module can be arbitrarily large thus while in principle we could define the barcode of a persistence module to be its multiset of isomorphism classes of indecomposables as in the case for n this invariant will typically not be useful for data visualization and exploration in the way that the barcode is in general it seems that for the purposes of tda there is no entirely satisfactory way of defining the barcode of an persistence module for n even if we consider invariants which are incomplete invariants which can take the same value on two modules three simple invariants of a multidimensional persistence module nevertheless it is possible to define simple useful computable invariants of a multidimensional persistence module our tool rivet computes and visualizes three such invariants of a persistence module m the dimension function of m the function which maps a to dim ma the fibered barcode of m the collection of barcodes of affine slices of m the multigraded betti numbers of m the dimension function of m is a simple intuitive and easily visualized invariant but is unstable and provides no information about persistent features in m the next two subsections introduce the fibered barcode and the multigraded betti numbers for example there is a fully faithful functor from the category of representations of the wild quiver to vectr which maps indecomposables to indecomposables see also for a study of the possible isomorphism types of a multidimensional persistence module the rank invariant and fibered barcodes the rank invariant for n let hn rn rn denote the set of pairs s t with s t let n denote the integers and let m be an persistence module following we define rank m hn n the rank invariant of m by rank m a b rank m a b using the structure theorem for persistence modules it s easy to check that for m a persistence module m rank m and b m determine each other for m an persistence module n rank m does not encode the isomorphism class of m see example example nevertheless the rank invariant does capture interesting first order information about the structure of a persistence module observed that if m is a persistence module rank m carries the same data as a family of barcodes each obtained by restricting m to an affine line in rn we call this parameterized family of barcodes the fibered barcode of m in particular if m is a persistence module the fibered barcode of m is a family of barcodes in what follows we give the definition of a fibered barcode of a persistence module the space of affine lines in with slope let denote the collection of all unparameterized lines in with possibly infinite slope and let l denote the collection of all unparameterized lines in with finite slope note that is a submanifold with boundary of an affine grassmannian of dimension with the induced topology is homeomorphic to in this sense 
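Returning for a moment to the one-parameter case, the correspondence between the rank invariant and the barcode noted above amounts to the fact that rank M(s, t) counts the intervals of B(M) containing both s and t. A minimal sketch, assuming the common convention that intervals have the form [birth, death) with death possibly infinite:

```python
import math

def rank_from_barcode(barcode, s, t):
    """Rank of the internal map M(s -> t) of a one-parameter persistence module,
    computed from its barcode: the number of intervals containing both s and t.
    barcode: list of (birth, death) pairs, where death may be math.inf."""
    assert s <= t
    return sum(1 for birth, death in barcode if birth <= s and t < death)

# example: one long bar and one short bar
example = [(0.0, math.inf), (0.3, 0.6)]
assert rank_from_barcode(example, 0.4, 0.5) == 2
assert rank_from_barcode(example, 0.4, 1.0) == 1
```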
it is appropriate to think of as a family of lines a standard duality to be described in section provides an identification between l and the r we will make extensive use of this duality later in the paper the duality does not extend to to vertical lines in a natural way definition of the fibered barcode for l let l denote the associated full subcategory of the inclusion l induces a functor il l for m a persistence module we define m l m il define an interval i in l to be a subset of l such that whenever a b c l and a c i we also have that b i as l is isomorphic to r the structure theorem for persistence modules yields a definition of the barcode b m l as a collection of intervals in we define b m the fibered barcode of m to be the map which sends each line l to the barcode b m l proposition b m and rank m determine each other proof for each a b there exists a unique line l such that a b clearly then rank m and the collection of rank invariants rank m l l determine each other as noted above for n a persistence module b n and rank n determine each other the result follows stability of the fibered barcode adapting arguments introduced in and a recent note by claudia landi establishes that b m is stable in two senses the results of in fact hold for persistent homology modules of arbitrary dimension however in this paper we are primarily interested in the case in landi s first stability result says that for l a line which is neither horizontal nor vertical the map m b m l is lipschitz continuous with respect to the interleaving distance on persistence modules and the bottleneck distance on barcodes the lipschitz constant c depends on the slope of l c when the slope of l is and c grows larger as the slope of l deviates from tending towards infinity as the slope of l approaches or we refer to this stability result as the external stability of b m also presents an internal stability result for b m which tells us that when m is finitely presented the map l b m l is continuous in a suitable sense at lines l which are neither horizontal nor vertical in fact the result says something stronger which put loosely is that the closer the slope of the line l is to the more stable b m l is to perturbations of in sum the stability results of tell us that for m a persistence module and l a line which is neither horizontal or vertical the barcode b m l is robust to perturbations both of m and l the more diagonal l is the more robust b m l is multigraded betti numbers we next briefly introduce multigraded betti numbers called bigraded betti numbers in the case of persistence see section for a precise definition and examples for m a finitely presented persistence module and i the ith graded betti number of m is a function m rn it follows from the hilbert basis theorem a classical theorem of commutative algebra that m is identically for i n so m is only of interest for i we will be especially interested in m and m in this paper for a rn m a and m a are the number of generators and relations respectively at index a in a minimal presentation for m see section m has an analogous interpretation in terms of a minimal resolution of m neither of the invariants b m nor m i n determines the other but the invariants are intimately connected this connection in the case that n plays a central role in the present work one of the main mathematical contributions of this project is a fast algorithm for computing the multigraded betti numbers of a persistence module m our algorithm is described in the companion paper the rivet 
visualization paradigm overview we propose to use the fibered barcode in exploratory data analysis in much the same way that barcodes are typically used in particular this requires us to have a good way of visualizing the fibered barcode though discretizations of fibered barcodes have been used in shape matching applications to the best of our knowledge there is no prior work on visualization of fibered barcodes this work introduces a paradigm called rivet for the interactive visualization of the fibered barcode of a persistence module and presents an efficient computational framework for implementing this paradigm our paradigm also provides for the visualization of the dimension function and bigraded betti numbers of the module the visualizations of the three invariants complement each other and work in concert our visualizations of the dimension function and bigraded betti numbers provide a coarse global view of the structure of the persistence module while our visualization of the fibered barcodes which focuses on a single barcode at a time provides a sharper but more local view we now give a brief description of our rivet visualization paradigm additional details are in appendix given a persistence module m rivet allows the user to interactively select a line l via a graphical interface the software then displays the barcode b m l as the user moves the line l by clicking and dragging the displayed barcode is updated in real time the rivet interface consists of two main windows the line selection window left and the persistence diagram window right fig shows screenshots of rivet for a single choice of m and four different lines the line selection window for a given finitely presented persistence module m the line selection window plots a rectangle in containing the union of the supports of functions m i the greyscale shading at a point a in this rectangle represents dim ma a is unshaded when dim ma and larger dim ma corresponds to darker shading scrolling the mouse over a brings up a popup box which gives the precise value of dim ma points in the supports of m m and m are marked with green red and yellow dots respectively the area of each dot is proportional to the corresponding function value the dots are translucent so that for example overlaid red and green dots appear brown on their intersection this allows the user to read the values of the betti numbers at points which are in the support of more than one of the functions figure screenshots of rivet for a single choice of persistence module m and four different lines rivet provides visualizations of the dimension of each vector space in m greyscale shading the and betti numbers of m green red and yellow dots and the barcodes of the slices m l for each l in purple the line selection window contains a blue line of slope with endpoints on the boundary of the displayed region of this line represents a choice of l the intervals in the barcode b m l are displayed in purple offset from the line l in the perpendicular direction the persistence diagram window in the persistence diagram window right a persistence diagram representation of b m l is displayed to represent b m l via a persistence diagram we need to choose an parameterization r l of l which we regard as a functor we then display b m l our choice of the parameterizations is described in appendix as with the betti numbers the multiplicity of a point in the persistence diagram is indicated by the area of the corresponding dot interactivity the user can click and drag the blue line 
in the left window with the mouse thereby changing the choice of l clicking the blue line away from its endpoints and dragging moves the line in the direction perpendicular to its slope while keeping the slope constant the clicking and dragging an endpoint of the line moves that endpoint while keeping the other fixed this allows the user to change the slope of the line as the line moves the both the interval representation of b m l in the left window and its persistence diagram representation in the right window are updated in real time querying the fibered barcode our algorithm for fast computation of betti numbers of persistence modules described in performs an efficient computation of the dimension function of m as a subroutine thus in explaining the computational underpinnings of rivet we will focus on the rivet s interactive visualization of the fibered barcode because our visualization paradigm needs to update the plot of b m l in real time as we move the line l it must be able to very quickly access b m l for any choice of l in this paper we introduce an efficient data structure m the augmented arrangement of m on which we can perform fast queries to determine b m l for l we present a theorem which guarantees that our query procedure correctly recovers b m l and describe an efficient algorithm for computing m structure of the augmented arrangement m consists of a line arrangement a m in that is a cell decomposition of induced by a set of intersecting lines together with a collection t e of pairs a b stored at each e of a m we call t e the barcode template at as we explain in section t e is defined in terms of the barcode of a discrete persistence module derived from m queries of m we now briefly describe how we query m for the barcodes b m l further details are given in section as noted above a standard duality described in section provides an identification of l with for simplicity let us restrict attention for now to the generic case where l lies in a e of a m the general case is similar and is treated in section to obtain b m l for each pair a b t e we push the points of each pair a b t e onto the line l by moving a and b upwards or rightwards in the plane along horizontal and vertical lines this gives a pair of points pushl a pushl b l if b we take pushl b our theorem the central mathematical result underlying the rivet paradigm tells us that b m l pushl a pushl b a b t e see fig for an illustration thus to obtain the barcode of b m l it suffices to identify the cell e and then compute pushl a and pushl b for each a b t e l figure an illustration of how we recover the barcode b m l from the barcode template t e by pushing the points of each pair in the barcode template onto in this example t e and b m l consists of two disjoint intervals complexity results for computing storing and querying the augmented arrangement once m has been computed computing b m l via a query to m is far more efficient than computing b m l from scratch for typical persistence modules arising from data the query of b m l can be performed in real time as desired at the same time the cost of computing and storing m is reasonable the following two theorems provide a theoretical basis for these claims for m a persistence module and i let supp m a rn m a let for and the number of unique x and y coordinates respectively of points in supp m supp m we call the coarseness of m theorem computational cost of querying the augmented arrangement i for l l lying in a of a m we can query m for b m l in time o log m l 
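The query procedure sketched above (push the two points of each template pair onto L by moving upward or rightward, and keep the resulting interval whenever the two pushed points differ) can be written down directly for a line of positive finite slope, written y = m*x + c. The names, the representation of the point at infinity, and the (a, b, multiplicity) format for template entries are illustrative assumptions; this is a sketch of the procedure, not RIVET's implementation.

```python
import math

def push(point, m, c):
    """push_L(p): the minimal point of L: y = m*x + c (with m > 0) lying weakly
    above and to the right of p, i.e. the intersection of L with the boundary
    of the upper-right quadrant based at p."""
    x0, y0 = point
    if math.isinf(x0) or math.isinf(y0):
        return (math.inf, math.inf)
    if m * x0 + c >= y0:
        return (x0, m * x0 + c)      # push straight up onto L
    return ((y0 - c) / m, y0)        # push to the right onto L

def query_barcode(template, m, c):
    """Barcode of the restriction of M to L, read off the barcode template of
    the cell containing the dual point of L: one interval [push(a), push(b)]
    per template pair with push(a) != push(b), repeated with its multiplicity."""
    bars = []
    for a, b, multiplicity in template:
        pa, pb = push(a, m, c), push(b, m, c)
        if pa != pb:
            bars.extend([(pa, pb)] * multiplicity)
    return bars
```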
where m l denotes the number of intervals in b m l ii for all other lines l we can query m for b m l in time o log m l for some arbitrarily small perturbation of theorem for f a bifiltration of size m and m hi f i m is of size o ii our algorithm computes m from f using o m log elementary operations and o storage we prove theorem in section and theorem in section to keep our exposition brief in this introduction we have assumed in the statement of theorem that m is a persistent homology module of a bifiltration however our algorithm for computing augmented arrangements does handle purely algebraic input see section we give more general complexity bounds in the algebraic setting in section coarsening and the interpretation of theorem for a fixed choice of l b m l is of size o m and the time required to compute b m l via an application of the standard persistence algorithm is o thus theorem indicates that as one would expect computation and storage of m is more expensive than computation and storage of b m l for some fixed for m as above is o in the worst case thus in the worst case our bounds on the size and time to compute m grow like which on the surface may appear problematic for practical applications however as we explain in section we can always employ a simple coarsening procedure to approximate m by a module m for which is a small constant say m encodes b m exactly and so in view of landi s external stability result m encodes b m approximately more details on coarsening are given in section computation of the augmented arrangement our algorithm for computing m decouples into three main parts computing m and m constructing the line arrangement a m computing the barcode template t e at each e of a m we next say a few words about each of these computing bigraded betti numbers as noted above one of the main mathematical contributions underlying rivet is a fast algorithm for computing the bigraded betti numbers of a persistence module m not only does rivet provide a visualization of the betti numbers but it also makes essential use of the betti numbers in constructing m for m a persistent homology module of a bifiltration with n simplices our algorithm for computes the bigraded betti numbers of m in o time see and section in section of this paper we present preliminary experimental results on the performance of our algorithm for computing bigraded betti numbers these indicate that the cost of the algorithm is very reasonable in practice computing the line arrangement the second phase of computation constructs the line arrangement a m underlying m line arrangements have been the object of intense study by computational geometers for decades and there is machinery for constructing and working with line arrangements in practice our algorithms for constructing and querying m leverage this machinery see section and section computing the barcode templates the third phase of our computation of m computes the barcode templates t e stored at each e a m as noted above each t e is defined in terms of the barcode b m e of a certain persistence module m e to compute each t e for each e we need to compute each b m e this is the most expensive part of the computation of m both in theory and in practice in section we introduce our core algorithm for this based on the vineyard algorithm for updating persistent homology computations in section we present a modification of this algorithm which is much faster in practice in fact as explained in section our algorithm for computing barcode templates is 
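One simple way to realize the coarsening mentioned above is to snap every grade of appearance to a small grid, which bounds the number of distinct x- and y-coordinates and hence the coarseness. The quantile-based grid below and all names are assumptions made for illustration; this is one scheme of that kind, not necessarily the exact procedure used by RIVET.

```python
import numpy as np

def coarsen(bifiltration, k):
    """Snap each coordinate of every grade of appearance to one of k grid
    values (here: quantiles of the observed coordinates), rounding upward so
    that no simplex appears earlier than it did originally."""
    grades = [g for gs in bifiltration.values() for g in gs]
    x_grid = np.quantile([x for x, _ in grades], np.linspace(0.0, 1.0, k))
    y_grid = np.quantile([y for _, y in grades], np.linspace(0.0, 1.0, k))

    def snap_up(v, grid):
        i = np.searchsorted(grid, v)            # first grid value >= v
        return float(grid[min(i, len(grid) - 1)])

    coarse = {}
    for simplex, gs in bifiltration.items():
        # non-minimal or duplicate grades may remain after snapping; a fuller
        # implementation would prune them
        coarse[simplex] = sorted({(snap_up(x, x_grid), snap_up(y, y_grid)) for x, y in gs})
    return coarse
```

Rounding upward is monotone, so faces still appear no later than their cofaces and the snapped grades again define a bifiltration.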
embarrassingly parallelizable computational experiments section presents preliminary results on the performance of our algorithm for computing augmented arrangements as explained there our present implementation of rivet is not yet fully optimized and our timing results should be regarded as loose upper bounds on what can be achieved using the algorithms of this paper still the results demonstrate that even with our current code computation of an augmented arrangement of a bifiltration containing millions of simplices is feasible on a standard personal computer provided we employ some modest coarsening thus the current implementation already performs well enough to be used in the analysis of modestly sized real world data sets with more implementation work including the introduction of parallelization we expect rivet to scale well enough to be useful in many of the same settings where persistence is currently used for exploratory data analysis outline we conclude this introduction with an outline of the remainder of the paper section reviews basic algebraic facts about persistence modules their minimal presentations and graded betti numbers we also discuss the connection between rn persistence modules and their zn discretizations section defines the augmented arrangement m of a persistence module m sections and give our main result on querying m for the barcodes b m l this is theorem in section we describe how m is stored in memory and apply theorem to give an algorithm for querying m the remaining sections introduce our algorithm for computing m first section specifies how persistence modules are represented as input to our algorithm and explains our algorithm for computing a m section explains our core algorithm for computing the barcode templates t e this completes the specification of our algorithm for computing m in its basic form section analyzes the time and space complexity of our algorithm for computing m section describes several practical strategies to speed up the computation of m section presents our preliminary timing results for the computation of m appendix expands on the introduction to the rivet interface given in section providing additional details algebra preliminaries in this section we present the basic algebraic definitions and facts we will need to define and study augmented arrangements of persistence modules a description of persistence modules in section we defined a persistence module to be an object of the functor category n n vectr here we give a description of vectr the ring pn let the ring pn be the analogue of the usual polynomial ring k xn in n variables where exponents of the indeterminates in pn are allowed to take on arbitrary in rather than only values in for example if k r then is an element of more formally pn can be defined as a monoid ring over the monoid n for a an n we let xa denote the monomial xann pn let i pn be the ideal generated by the set i n rn pn since the field k is a subring of pn any pn comes naturally equipped with the structure of a space we define an rn pn l to be a pn m with a direct sum decomposition as a space m ma such that the action of pn on m satisfies xb ma for all a rn b n the rn pn form a category whose morphisms are the module homomorphisms f m n such that f ma na for all a rn there is an obvious n isomorphism between this vectr and this category so that we may identify the two categories henceforth we will refer to rn pn as persistence modules or for short remark as a rule the familiar definitions and constructions for 
modules make sense n in the category vectr for example as the reader may check we can define submodules quotients direct sums tensor products resolutions and tor functors n in vectr as we next explain we also can define free and presentations of free and presentations sets define an set to be a pair w w grw for some set w and function grw w rn formally we may regard w as the set of pairs w grw w w w and we ll sometimes make use of this representation we ll often abuse notation and write w to mean the set w also when w is clear from context we ll abbreviate grw as gr we say a subset y of an m is homogeneous if ma clearly we may regard y as an set shifts of modules for m an and v rn we define m v to be the such that for a rn m v a and for a b rn m v a b m a v b v for example when n m is obtained from m by shifting all vector spaces of m down by one and to the left by one free the usual notion of a free module l extends to the setting of as follows for w an set let free w pn gr w we identify w with a set of generators in free w in the obvious way a free f is an such that f free w for some set equivalently we can define a free as an which satisfies a certain universal property see for y a homogeneous subset of a free f let hyi denote the submodule of f generated by matrix representations of morphisms of free modules let w w be finite for a graded sets with ordered underlying sets w wl w wm morphism f free w free w we can represent f by a matrix f with coefficients in k if gr gr wj we define f ij to be the unique solution to f ij xgr wi wj f wj where free w free wi is the projection if gr gr wj we define f ij presentations of a presentation of an m is a pair w y where w is an set and y free w is a homogeneous with m free w we denote the presentation w y as if there exists a presentation for m with w and y finite then we say m is finitely presented note that the inclusion y free w induces a morphism free y free w we denote this morphism as example consider the persistence modules m h a b c a bi n h a b the induced linear maps m m m and n n n do not have equal ranks hence m and n are not isomorphic however rank m rank n this shows that the rank invariant does not completely determine the isomorphism type of a persistence module minimal presentations of let m be an we say a presentation of m is minimal if hyi i free w and ker i free y the following proposition is a variant of lemma and is proved in the same way it makes clear that minimal presentations are indeed minimal in a reasonable sense proposition a finite presentation is minimal if and only if w descends to a minimal set of generators for coker and y is a minimal set of generators for hyi remark it follows immediately from proposition that every finitely presented persistence module has a minimal presentation graded betti numbers of persistence modules for m an define dimf m rn n the dimension function of m by dimf m a dim ma for i define m dimf tori m pn the functions m are called the graded betti numbers of m betti numbers of multigraded k xn are defined analogously these are discussed in many places in in our study of augmented arrangements and fibered barcodes we will only need to consider m and m we omit the straightforward proof the following result proposition if is a minimal presentation for m then for all a rn m a y a m a w a example the presentations of the modules m and n given in example are minimal using this it s easy to see that if a if a m a m a otherwise otherwise if a n a n n m otherwise example for m h a a ai if a if a m 
a m a otherwise otherwise if a m a otherwise grades of influence for m a persistence module and a rn let im a y a y or y lemma for m finitely presented and a b rn with im a im b m a b is an isomorphism proof let be a minimal presentation for m let m be the with free w a and m a the map induced by the inclusion free w a clearly m isomorphic to m using proposition it s easy to see that for a b rn with im a im b the map free w a b is an isomorphism sending hyia isomorphically to hyib hence m a b is an isomorphism since m and m are isomorphic m a b is an isomorphism as well continuous extensions of discrete persistence modules in the computational setting the persistence modules we encounter are always finitely presented it turns out that finitely presented persistence modules are in a sense essentially discrete we now explain this a discrete or zn persistence module is a functor zn vect where zn is the poset category of zn let an k xn denote the ordinary polynomial ring in n variables in analogy with the rn case we can regard a discrete persistence module as a an in the obvious way all of the basic definitions and machinery we ve described above for rn persistence modules can be defined for discrete persistence modules in essentially the same way in particular we may define the betti numbers q zn n of a discrete persistence module q grid functions for n we define an grid to be a function g zn rn given by g zn gn zn for some functions gi z r with lim gi z and lim gi z for each i we define flg a to be the maximum element of im g ordered before a in the partial order on rn that is for g a grid we let flg r im g be given by flg t max s im g s t and for g an grid function we define flg rn im g by flg an flgn an continuous extensions of discrete persistence modules for g an grid we n n define a functor eg vectz vectr as follows for q a zn persistence module and a b rn eg q a qy eg q a b q y z where y max w zn g w flg a z max w zn g w flg b the action of eg on morphisms is the obvious one we say that an m is a continuous extension of q along g if m eg q proposition any finitely presented m is a continuous extension of a finitely generated discrete persistence module along some grid proof let g zn rn be any grid such that supp m supp m im we regard g as a functor zn rn in the obvious way using lemma it s easy to check that m is a continuous extension of m g along further m g is finitely generated a finite presentation for m induces one for m g of the same size betti numbers of continuous extensions proposition suppose an m is a continuous extension of q along an injective grid then for all i q z if a g z for some z zn m a otherwise rivet exploits proposition to compute the betti numbers of finitely presented persistence modules by appealing to local formulae for the betti numbers of persistence modules see section proof of proposition let f f f be a free resolution of it s easy to see that eg preserves exactness so eg eg eg eg f eg f eg f is a free resolution for m write gi eg f i and i eg by definition m dimf hi where hi is the ith homology module of the following chain complex pn pn pn n n we have two functors vectr vectr acting respectively on objects by n n pn n with the action on morphisms defined in the obvious way note that these are naturally isomorphic thus the above chain complex is isomorphic to the chain complex where i is the map induced by i since gi is a continuous extension along g it is clear that if a im g then gi a hence if a im g then m a as claimed it remains to consider the case 
that a im let j denote the maximal homogeneous ideal of an q dimf ki for ki the ith homology module of the chain complex f f f where is the map induced by if a g z then by the way we have defined the functor eg it is clear that we have isomorphisms fzi gia sending jfzi isomorphically to igia such that the following diagram commutes z a z a z a taking quotients we obtain a commutative diagram f z f z a a it follows that kzi hai so m a q z as desired f z a barcodes of discrete persistence modules we discussed the barcodes of rindexed persistence modules in section the structure theorem of tells us that the barcode b q of a discrete persistence module q is also well defined provided dim va for all a less than some z when q is finitely generated b q is a finite multiset of intervals a b with a b z barcodes under continuous extension we omit the easy proof of the following proposition for q a finitely generated discrete persistence module and m a continuous extension of q along g b m g a g b a b b q g a g b where we define g remark as already noted in section for l the poset category corresponding to a line l l is isomorphic to hence by adapting the definitions given above in the setting we can define the grid function g z l a function flg l im g and the functor eg vectz vectl as in the case we say m l vect is a continuous extension of a persistence module q if m eg q clearly then proposition also holds for continuous extensions in the setting in section we will use proposition in the setting to prove our main result on queries of augmented arrangements augmented arrangements of persistence modules in this section we define the augmented arrangement m associated to a finitely presented persistence module m first in section we define the line arrangement a m associated to m next in section we present a characterization of the of a m finally using this characterization in section we define the barcode template t e stored at a e of a m the augmented arrangement m is defined to be the arrangement a m together with the additional data t e at each definition of a m let s supp m supp m to keep our exposition simple we will assume that each element of s has using the shift construction described in section we can always translate the indices of m so that this assumption holds so there is no loss of generality in this assumption duality recall the definitions of l and from section as mentioned there a standard duality gives a parameterization of l by the we now explain this define dual transforms d and dp as follows d l r dp r l y ax b a c d y cx d this duality does not extend naturally to vertical lines lines in dp w dp v w u d l v d dp u l figure illustration of duality the following lemma whose proof we omit is illustrated in fig by the point w and line lemma the transforms d and dp are inverses and preserve incidence in the sense that for w r and l l w l if and only if d l dp w line arrangements in a cell is a topological space homeomorphic to rn for some n we define a cell complex on r to be a decomposition of r into a finite number of cells so that the topological boundary of each cell lies in the union of cells of lower dimension by standard topology each cell in a cell complex on r has dimension at most according to the definition a cell complex on r is not a as some cells will necessarily be unbounded by a line arrangement in r we mean the cell complex on r induced by a set w of lines in in this cell complex the consists of the union of all lines in w together with the line x definition of a m 
for w al bl a finite subset of let lub s the least upper bound of w be given by lub s max ai max bi i i for example lub say that a pair of distinct elements u v s is weakly incomparable if one of the following is true u and v are incomparable with respect to the partial order on u and v share either their first or second coordinate call r an anchor if lub u v for u v s weakly incomparable we define a m to be the line arrangement in r induced by the set of lines dp is an anchor in view of lemma then the m of a m is given by m d l l l contains an anchor r it is clear that a m is completely determined by of a m note that for two anchors dp and dp intersect in if and only if there exists some l l containing both and such a line l exists if and only if and are comparable and have distinct size of a m we now bound the number of cells in a m of each dimension as in section let for and the number of unique x and y coordinates respectively of points in clearly the number of anchors for s is bounded above by hence the number of lines in a m is also bounded above by precise bounds on the number of vertices edges and faces in an arbitrary line arrangement are well known and can be computed by simple counting arguments these bounds tell us that the number of vertices edges and faces in a m is each not greater than characterization of the of a m we next give our alternate description of the of a m to do so we first define the set crit m of critical lines in here denotes the topological interior of l the set of affine lines in with positive finite slope the push map note that for each l the partial order on restricts to a total order on this extends to a total order on l by taking v for each v for l define the push map pushl l by taking pushl a min v l a v note that im pushl if and only if l is horizontal or vertical for a and pushl u l either or see fig for r s pushl r pushl s l pushl b a pushl a b figure illustration of the push map for lines of positive finite slope continuity of push maps for any a the maps pushl induce a map pusha defined by pusha l pushl a recall that we consider as a topological space with the topology the restriction of topology on the affine grassmannian of lines in lemma for each a pusha is continuous on proof note that for any a and l pusha l is the unique intersection of l with y y x x from this the result follows readily critical lines for l pushl induces a totally ordered partition s l of s elements of the partition are restrictions of levelsets of pushl to s and the total order on s l is the pullback of the total order on this partition is illustrated in fig we call l regular if there is an open ball b containing l such that s l s l for all b we call l critical if it is not regular let crit m denote the set of critical lines in theorem characterization of the of a m the of a m is exactly d crit m r l s l figure the totally ordered partition s l of the ith element of the partition is labeled as sil u lub u v lub u v u v v figure an illustration of the geometric idea behind theorem for u v s and a line lying just above lub u v u v whereas for a line lying just below lub u v v u thus any line passing through lub u v is critical proof in view of the description of the of a m given by eq proving the proposition amounts to showing that l is critical if and only if l contains some anchor suppose l contains an anchor w and let u v s be weakly incomparable with w lub u v then we must have that pushl u pushl v further it s easy to see that we can find an arbitrarily small 
perturbation of l so that either pushl u pushl v or pushl v pushl u thus l is critical see fig for an illustration in the case that u and v are incomparable to prove the converse assume that l does not contain any anchor and consider distinct u v s with pushl u pushl v note that u v must lie either on the same horizontal line or the same vertical line otherwise u and v would be incomparable and we would have pushl u pushl v lub u v assume without loss of generality that u v and u v lie on the same horizontal line then we must also have that pushl u pushl v lies on since v lub u v v is an anchor however l does not contain any anchor so we must have v pushl v any sufficiently small perturbation of l will also intersect h at a point p to the right of v so that we have u v thus for all in a neighborhood of l push u push v in fact since s is finite we can choose a single neighborhood n of l in such that for any u v s with pushl u pushl v we have u v for all n by lemma choosing n to be smaller if necessary may further assume that if pushl u pushl v then u v for all n thus the partition s l is independent of the choice of n moreover by lemma again the total order on s l is also independent of the choice of n therefore l is regular corollary if the duals of l l are contained in the same in a m then sl sl proof each is connected and open so this follows from theorem remark in fact corollary can be strengthened to show that for any l the duals of l and lie in the same cell of a m if and only if s l s l however we will not need this stronger result the barcode templates t e using corollary we now define the barcode templates t e stored at each e of a m this will complete the definition of the augmented arrangement m by corollary we can associate to each e in a m a totally ordered partition s e of let sie denote the ith element of the partition a persistence module at the we next use s e to define a discrete persistence module m e if m we take m e assume then that m e first we define a map n r by lub sie if z e e z lub s if z e we define p e im e and call this the set of template points at cell note that the restriction of e to e is an injection so that e e let p denote the poset category of positive integers e y e z whenever y z so e induces a functor p to which we also denote e finally the functor m e p vect extends to a functor m e z vect a persistence module by taking mze whenever z the definition of barcode templates clearly m e is so it has a barcode b m e and it s easy to see that b m e consists of intervals a b with a b e let us write we define t e to be a collection of pairs of points in p e p e as follows t e e a e b a b b m e this completes the definition of m remark m is completely determined by the fibered barcode b m and the set indeed a m is completely determined by s and using proposition it s easy to see that t e is completely determined by b m and the main result of the next section theorem shows that conversely m completely determines b m and does so in a simple way querying the augmented arrangement in the previous section we defined the augmented arrangement m of a finitely presented persistence module m we now explain how m encodes the fibered barcode b m that is for a given l we explain how to recover b m l from m the main result of this section theorem is the basis of our algorithm for querying m we describe the computational details of our query algorithm in section recall that the procedure for querying m for a barcode b m l was discussed for the case of generic lines l in 
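For concreteness, the anchors from which the arrangement A(M) is built can be computed directly from the finite set S by taking least upper bounds of weakly incomparable pairs, exactly as in the definition above. A minimal sketch, with illustrative names:

```python
from itertools import combinations

def weakly_incomparable(u, v):
    """u, v are weakly incomparable if they are incomparable in the
    coordinatewise partial order, or share their first or second coordinate."""
    comparable = (u[0] <= v[0] and u[1] <= v[1]) or (v[0] <= u[0] and v[1] <= u[1])
    return (not comparable) or u[0] == v[0] or u[1] == v[1]

def anchors(S):
    """Anchors of S: least upper bounds of weakly incomparable pairs of
    distinct points; each anchor is dual to one line of the arrangement A(M)."""
    return {(max(u[0], v[0]), max(u[1], v[1]))
            for u, v in combinations(S, 2) if weakly_incomparable(u, v)}
```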
section more generally for any l querying m for the barcode b m l involves two steps first we choose a e in a m if l l then e is a coface of the cell in containing d l second we obtain the intervals of b m l from the pairs of t e by pushing the points in each pair a b t e onto the line l via the map pushl of section we now describe in more detail the first step of selecting the coface selecting a coface e of l for l we choose the coface e of l as follows if l then there exists a of a m whose closure contains d l we take e to be any such if l is horizontal then the cofaces of the cell containing d l are ordered vertically in r we take e to be the bottom coface note that d l has only one coface unless l contains an anchor for l vertical say l is the line x a let be the line in the arrangement a m of maximum slope amongst those having slope less than or equal to a if a unique such line exists if there are several such lines take to be the one with the largest if such exists it contains a unique unbounded in a we take e to be the lying directly above this if such a line does not exist then we take e to be the bottom unbounded of a m since we assume all of s to be this cell is uniquely defined the selection of cofaces for several lines is illustrated in fig d e d a b figure three anchors are drawn as black dots in a the corresponding line arrangement a m is shown in b for each line li in a the dual point d li and the corresponding ei chosen as in section are shown in b in the same color the query theorem here is the main mathematical result underlying rivet theorem querying the augmented arrangement for any line l and e a chosen as in section the barcode obtained by restricting m to l is b m l pushl a pushl b a b t e pushl a pushl b note that if l is such that d l lies in a of a m then pushl a pushl b for all a b t e so the theorem statement simplifies for such in general however it is possible to have pushl a pushl b proof of theorem we prove the result for the case l the proof for l horizontal or vertical is similar but easier and is left to the reader the result holds trivially if assume then that m let e be a coface of the cell containing dp l to keep notation simple we will write push pushl and e keeping remark in mind we define a grid g z to do so we first define the restriction of g to e by taking g z push z note that this is we choose an arbitrary extension of this to a grid g z by proposition to finish the proof it suffices to show that m l is a continuous extension of m e along g that there exists an isomorphism eg m e m l given t l let z max w z g w flg t note that we have e eg m t mze z if z if z note also that g z t g z to define the maps eg m e z t mtl we will consider separately the three cases z e and z e for a rn recall the definition of im a from immediately above lemma and note that g push is the minimal element a of l with respect to the partial order on such that im a hence by eq if z then im t so mtl further if z then eg m e t thus for z we necessarily take the isomorphism eg m e t mtl to be the zero map for z e eq and the definition of g give that push z t push z this implies z t z so im z im t thus m z t is an isomorphism by lemma since eg m e t z we may regard the map m z t as an isomorphism eg m e t mt mtl for z e we have that z lub s so im z further the chain of inequalities z lub s push lub s g e g z t gives that z t and further that s im z im t s so in fact im z im t then by lemma again m z t is an isomorphism which as above can be interpreted as an isomorphism eg m 
e t mtl we have now defined isomorphisms eg m e t mtl for all t clearly these isomorphisms commute with internal maps in eg m e and m l so they define an isomorphism eg m e m l as desired computational details of queries we next explain the computational details of storing and querying m we also give the complexity analysis of our query algorithm proving theorem dcel representation of a m as noted in the introduction we represent the line arrangement a m that underlies m using the dcel data structure a standard data structure for representing line arrangements in computational geometry the dcel consists of a collection of vertices edges and together with a collection of pointers specifying how the cells fit together to form a decomposition of representing the barcode templates to represent the augmented arrangement m we store the barcode template t e at each e in the dcel representation of a m recall that t e is a multiset this means a pair a b may appear in t e multiple times we thus store t e as a list of triples a b k where k n gives the multiplicity of a b in t e our query algorithm given a line l the query of m for b m l proceeds in two steps the first step performs a search for the e of a m specified in section once the e is selected we obtain b m l from t e by applying pushl to the endpoints of each pair a b t e let us describe our algorithm to find the e in detail in the case that l l l is not vertical it suffices to find the cell of a m containing d l in general the problem of finding the cell in a line arrangement containing a given query point is known as the point location problem this is a very well studied problem in computational geometry when we need to perform many point location queries on an arrangement or when we need to perform the queries in real time it is standard practice to precompute a data structure on the which point locations can be performed very efficiently this is the approach we take for v the number of vertices in the arrangement there are a number of different strategies which in time o v log v compute a data structure of size o v on which we can perform a point location query in time o log v a m has o vertices so computing such a data structure for a m takes time o log the data structure is of size o and the point location query takes time o log in the case that l is vertical d l is not defined and we need to take a different approach to find the we precompute a separate simpler search data structure to handle this case let y denote the set of lines j in a m such that there is no other line in a m with the same slope lying above j we compute a array which contains for each j y a pointer to the rightmost unbounded of a m contained in j sorted according to slope given a m computing this array takes o nl time where nl o is the number of anchor lines once the array has been computed for any vertical line l we can find the appropriate e via a binary search over the array this takes log nl time we are now ready to prove our result from the introduction on the cost of querying a m proof of theorem from the discussion above it is clear that once we have puted the appropriate data structures finding the cell e takes o log time each evaluation of pushl takes constant time so computing b m l from t e takes total time o e thus the total time to query m for b m l is o e log if d l e then e m l this gives theorem i if on the other hand d l e then we may not have e m l but we do have e m l for an arbitrarily small perturbation of l with d theorem ii follows computing 
the arrangement a m we now turn to the specification of our algorithm for computing a m first in sections and we specify the algebraic objects which serve as the input to our algorithm and explain how these objects arise from bifiltrations free implicit representations of persistence modules the input to our algorithm for k define k to be the set of integers k and let denote the empty set define a free implicit representation of an persistence module m to be a quadruple such that for i gri mi rn is a function for some mi and are matrices with coefficients in k of respective dimensions and for some note that either or may be an empty matrix if mi if then m if let rn denote the constant function mapping to the greatest lower bound of im then defining ordered sets wi mi gri for i we require that in the notation of section and are the matrix representations respectively of maps free free free free such that and m ker im we refer to defined above as the dimensions of and write m note that a presentation of m is an of m in the degenerate case that so that is an empty matrix our algorithm for computing the augmented arrangement of a finitely presented persistence module m takes as input an of m storing a free implicit representation in memory we store the matrices and in a data structure as used in the standard persistent homology algorithm columns of dj are stored in an array of size mj the ith column is stored as a list in position i of the array we also store grj i at position i of the same array motivation free implicit representations from finite filtrations we are interested in studying the ith persistent homology module hi f of a finite bifiltration f arising from data our choice to represent a persistence module via an is motivated by the fact that in practice one typically has ready access to an of hi in contrast we generally do not have direct access to a presentation of hi f at the outset and while there are known algorithms for computing one they are computationally expensive we ll now describe in more detail how of persistent homology modules arise in practice the chain complexes of recall from section that an filtration is a functor f rn simp such that the map fa fb is an inclusion whenever a b rn f gives rise to a chain complex cf of persistence modules given by f cj f f where we define cj f by taking the vector space cj f a to be generated by the of fa and taking the map cj f a b to be induced by the inclusion fa fb the morphism is induced by the j th boundary maps of the simplicial complexes fa note that hj f ker im and filtrations to explain how arise from finite bifiltrations it is helpful to first consider a special case following we define a filtration f to be a finite filtration where for each s fmax s here as in section a s denotes the set of grades of appearance of if f is finite and not we say f is bifiltrations arising in tda applications are often but not always for example for p a finite metric space and p r a function the bifiltration rips of section is but the filtration p of that section generally is not it s easy to see that if f is then each cj f is free we have an obvious isomorphism between cj f and free fj for fj the set given by fj s a s s a in fmax thus choosing an order for each fj the boundary map cj f f can be represented with respect to fj by a matrix with coefficients in the field k as explained in section is exactly the usual matrix representation of the j th boundary map of fmax free implicit representations of hi f in the case the ordered ngraded sets 
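To make the column-sparse storage scheme described above concrete, the following Python sketch holds one matrix of a free implicit representation as an array of columns, with the grade of the i-th column stored alongside the i-th column list. The class name GradedSparseMatrix is ours, coefficients are taken in the two-element field purely to keep the sketch short, and none of this is meant to reproduce the actual RIVET data structures.

```python
class GradedSparseMatrix:
    """Column-sparse matrix with a grade attached to each column.

    Column i is stored as a sorted list of the row indices of its nonzero
    entries (coefficients live in the two-element field in this sketch, so
    only positions need to be stored), together with its grade gr(i).
    """

    def __init__(self, num_rows, columns, grades):
        # columns: list of lists of row indices; grades: list of (x, y) pairs
        assert len(columns) == len(grades)
        self.num_rows = num_rows
        self.columns = [sorted(set(c)) for c in columns]
        self.grades = list(grades)

    def num_cols(self):
        return len(self.columns)

    def low(self, j):
        """Largest row index of a nonzero entry in column j, or None if empty."""
        return self.columns[j][-1] if self.columns[j] else None

    def add_column(self, src, dst):
        """Column operation dst <- dst + src over the two-element field."""
        self.columns[dst] = sorted(set(self.columns[dst]) ^ set(self.columns[src]))


if __name__ == "__main__":
    # A toy pair of matrices playing the roles of the two maps in a free
    # implicit representation; the grade functions are attached column-wise.
    D1 = GradedSparseMatrix(num_rows=2,
                            columns=[[0], [0, 1], [1]],
                            grades=[(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
    D2 = GradedSparseMatrix(num_rows=3,
                            columns=[[0, 1, 2]],
                            grades=[(1.0, 1.0)])
    print(D1.low(1), D2.low(0))  # -> 1 2
```

In this toy example the composite of the two matrices is zero over the two-element field, as a free implicit representation requires, and the grade of the single relation column is an upper bound for the grades of the generators it involves.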
fj and matrices determine the chain complex cf and hence each of the homology modules hj f up to isomorphism in fact for oj fj the total order on fj we have that grfj is an of hj f for any j free implicit representations of hi f in the case for f a multicritical filtration the modules cj f are not free nevertheless as explained in there is an easy construction of an of hj f generalizing the construction given above in the setting letting x lj s s a in fmax the dimensions of satisfy the following bounds when f is a bifiltration lj lj see for details computation of the free implicit representation of a bifiltration as explained in section we can store a finite simplicial bifiltration in memory as a simplicial complex together with a list of grades of appearance for every simplex let f be a finite or bifiltration of size it s not hard to show that given this input for any i we can compute the of hi f described above in time o l log l one homology index at a time or all homology indices at once the standard algorithms for computing persistence barcodes of a filtration f compute b hi f for all i up to a specified integer in a single pass when one is interested in the barcodes at each homology index this is more efficient than doing the computations one index at a time because the computations of b hi f and b f share some computational work in contrast the algorithm described in this paper and implemented in the present version of rivet computes hi f of a finite bifiltration f for a single choice of i this approach allows us to save computational effort when we are only interested in a single homology module and seems to be the more natural approach when working with filtrations that said within the rivet framework for a bifiltration f one can also handle all persistence modules hi f for i up to a specified integer in a single computation of betti numbers our first step in the computation of a m is to compute s supp m supp m since rivet also visualizes the betti numbers of m directly we choose to compute this by fully computing and in any case we do not know of any algorithm for computing s that is significantly more efficient than our algorithm for fully computing and computing betti numbers of persistence modules we can define a of a persistence module q just as we have for an persistence module above in this case the functions and take values in in a companion article we show how to fully compute q q and q given the algorithm runs in time runs in time o for the dimensions of and m one way to compute the bigraded betti numbers of m quite standard in computational commutative algebra is to compute a free resolution for m however this gives us more than we need for our particular application instead of following this route our algorithm computes the betti numbers via carefully scheduled column reductions on matrices taking advantage of a characterization of betti numbers in terms of the homology of kozul complexes the natural way to do this is not to compute hi f for each i l but rather to compute ll the single augmented line arrangement a hi f labeling the intervals of the discrete barcode at l l each of a hi f by homology degree so that a query of hi f provides the homology ll l degree of each interval of b hi f l this labeled variant of hi f can be computed using essentially the same algorithm as presented in this paper for computation of a single augmented arrangement l when f is not to compute hi f we need to first replace c f by a chain complex of free persistence modules as noted in this 
can be done via a mapping telescope construction though this may significantly increase the size of c f computing betti numbers of persistence modules in fact our algorithm for computing betti numbers in the discrete setting can also be used to compute the bigraded betti numbers of a finitely presented persistence module indeed as we will now explain a of an persistence module m induces a discrete and an injective grid g such that m is a continuous extension of m along the matrices are the same in the two free implicit representations given this we can compute the multigraded betti numbers of using the algorithm of and deduce the multigraded betti numbers of m from those of by proposition the construction of and the grid g are simple let ox respectively oy denote the ordered set of unique of elements of im im let nx and ny let nx ox be the bijection sending i to the ith element of ox define ny oy analogously we choose g to be an arbitrary extension of nx ny ox oy for j we define to be the function grj we leave to the reader the easy check that m is a continuous extension of along computation and storage of anchors and template points recall from section that an anchor is the least upper bound of a weakly incomparable pair of points in s and that the set of anchors determines the line arrangement a m in this section we will let a denote the set of anchors to compute the line arrangement a m we need to first compute a list anchors of all elements of a moreover our algorithm for computing the barcode templates described in section requires us to represent the set p pe e a in a m of all template points using a certain sparse matrix data structure it s easy to see that p a we will see that because of this it is convenient to compute the list anchors and the sparse matrix representation of p at the same time in this section we specify our data structure for p and describe our algorithm for simultaneously computing both this data structure and the list anchors sparse matrix data structure for note that s im given this it s easy to see that also p im thus to store p it suffices to store p and the maps and we store and in two arrays of size nx and ny and we store p in a sparse matrix tptsmat of size nx ny the triple tptsmat is our data structure for henceforth to keep notation simple we will assume that and are the identity maps on nx and ny respectively so that p let us now describe tptsmat in detail an example tptsmat is shown in fig each element u p is represented in tptsmat by a quintuple u pl pd u u in this quintuple pl and pd are pointers possibly null pl points to the element of p immediately to the left of u and pd points to the element of p immediately below u the objects u u are lists used in the computation of the barcode templates t e initially these lists are empty we discuss them further in section the data structure tptsmat also contains an array rows of pointers of length ny the ith entry of rows points to the rightmost element of p with i rows figure example of tptsmat for nx and ny each element of p is represented by a square shaded squares represent elements of s and squares with solid borders represent anchors each entry contains pointers to the next entries down and to the left pointers are illustrated by arrows the lists u and u stored at each u p are not shown computation of tptsmat and anchors our algorithm for computing betti numbers given in computes m u and m u at each bigrade u nx ny by iterating through nx ny in lexicographical order as we iterate through nx ny it is 
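The discretization step described above amounts to coordinate compression: each grade is replaced by its indices in the sorted lists of distinct x- and y-values occurring among all grades, and the matrices are left untouched. Here is a minimal sketch, with an illustrative function name, assuming grades are given as pairs of numbers.

```python
def discretize_grades(gr1, gr2):
    """Replace plane grades by their indices in the sorted lists of unique
    x- and y-coordinates occurring among all grades of both matrices.

    Returns (dgr1, dgr2, xs, ys), where dgr_j[i] is the integer grade of the
    i-th column of the j-th matrix and (xs, ys) records the grid along which
    the original module is a continuous extension of the discretized one.
    """
    all_grades = list(gr1) + list(gr2)
    xs = sorted({x for x, _ in all_grades})
    ys = sorted({y for _, y in all_grades})
    x_index = {x: i for i, x in enumerate(xs)}
    y_index = {y: i for i, y in enumerate(ys)}
    dgr1 = [(x_index[x], y_index[y]) for x, y in gr1]
    dgr2 = [(x_index[x], y_index[y]) for x, y in gr2]
    return dgr1, dgr2, xs, ys


if __name__ == "__main__":
    gr1 = [(0.5, 1.0), (0.5, 2.5), (1.5, 1.0)]
    gr2 = [(1.5, 2.5)]
    print(discretize_grades(gr1, gr2))
    # ([(0, 0), (0, 1), (1, 0)], [(1, 1)], [0.5, 1.5], [1.0, 2.5])
```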
easy to also compute both tptsmat and the list anchors let us explain this in detail upon initialization anchors is empty tptsmat contains no entries and all pointers in rows are null we create a temporary array columns of pointers of length nx with each pointer initially null at each u nx ny once the betti numbers at u have been computed if u a then we add u to the list anchors if u s a p then we add the quintuple u rows columns to tptsmat and set rows and columns to both point to u these updates to columns and rows ensure that for each i ny rows i always points to the rightmost entry with i added to tptsmat thus far for each j nx columns j always points to the topmost entry with j added to tptsmat thus far it remains to explain how we determine whether u a note that u a if and only if at least one of the following two conditions holds when we visit u both rows and columns are not null u s and either rows or columns is not null using this fact we can check whether u a in constant time beyond the o time required to compute s this algorithm for computing tptsmat and anchors takes o nx ny o time building the line arrangement recall that the anchors of m correspond under duality to the lines in the arrangement a m thus once the list of anchors has been determined we are ready to build the dcel representation of a m for this our implementation of rivet uses the algorithm which constructs the dcel representation of a line arrangement with n lines and k vertices in time o n k log n since our arrangement contains o lines and o vertices the algorithm requires o log elementary operations as explained in section the number of cells in a m is o the size of the dcel representation of any arrangement is of order the number of cells in the arrangement so the size of the dcel representation of a m is also o remark the log term in the bound of theorem ii arises from our use of the algorithm there are asymptotically faster algorithms for constructing line arrangements that would give a slightly smaller term in the bound of theorem ii in fact we can remove the log factor however the algorithm which is relatively simple and performs well in practice is a standard choice each e in a m lies on the line dual to some anchor in our dcel representation of a m we store a pointer at e to the entry of tptsmat corresponding to numerical considerations line arrangement computations as with many computations in computational geometry are notoriously sensitive to numerical errors that arise from arithmetic much effort has been invested in the development of smart arithmetic models for computational geometry which allow us to avoid the errors inexact arithmetic can produce without giving up too much computational efficiency because exact arithmetic is generally far more computationally expensive than arithmetic these models typically take a hybrid approach relying on arithmetic in cases where the resulting errors are certain to not cause problems and switching over to exact arithmetic for calculations where the errors could be problematic our implementation of rivet relies on a simple such hybrid model specially tailored to the problem at hand computing the barcode templates once we have found all anchors and constructed the line arrangement a m we are ready to complete the computation of m by computing the barcode templates t e for all e of a m this section describes our core algorithm for this in section we describe a refinement of the algorithm which performs significantly faster in practice the input to our algorithm 
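The sweep that detects anchors and collects the template points can be summarized by the following stand-alone sketch. It records P in a Python set rather than in the pointer-based tptsMat structure, assumes the lexicographic order iterates the x-coordinate in the outer loop, and uses a function name of our own choosing.

```python
def compute_anchors_and_template_points(S, nx, ny):
    """Sweep the nx-by-ny grid in lexicographic order, detecting anchors and
    collecting the template points P = S union A, following the update rule
    described above.  S is a set of (x, y) grid points (the support of the
    bigraded Betti numbers).  Returns (anchors, P) as sets."""
    rows = [None] * ny      # rows[y]: rightmost template point seen so far in row y
    columns = [None] * nx   # columns[x]: topmost template point seen so far in column x
    anchors, P = set(), set()
    for x in range(nx):             # lexicographic order on (x, y)
        for y in range(ny):
            u = (x, y)
            in_S = u in S
            is_anchor = ((rows[y] is not None and columns[x] is not None)
                         or (in_S and (rows[y] is not None or columns[x] is not None)))
            if is_anchor:
                anchors.add(u)
            if in_S or is_anchor:   # u is a template point
                P.add(u)
                rows[y] = u
                columns[x] = u
    return anchors, P


if __name__ == "__main__":
    S = {(0, 1), (1, 0), (2, 2)}
    anchors, P = compute_anchors_and_template_points(S, nx=3, ny=3)
    print(sorted(anchors))  # [(1, 1)]: the least upper bound of (0, 1) and (1, 0)
    print(sorted(P))        # [(0, 1), (1, 0), (1, 1), (2, 2)]
```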
consists of three parts a of m represented in the way described in section our sparse matrix representation tptsmat of the set p of all template points a dcel representation of the line arrangement a m recall that is given to us as input to our algorithm for computing m the computation of tptsmat and a m from has already been described above recall from section that t e e y e z y z b m e thus to compute t e at each e it suffices to compute the pair b m e e at each this is essentially what our algorithm does though it turns out to be unnecessary to explicitly store either b m e or e at any point in the computation note that m if and only if s if and only if each t e thus we may assume that m trimming the free implicit representation let box s y y lub s we say a for m is trimmed if im im box s as a preliminary step in preparation for the computation of the barcode templates if is not already trimmed we replace with a smaller trimmed while it is possible to work directly with an untrimmed to compute the barcode templates it is more efficient to work with a trimmed one in addition assuming that our is trimmed allows for some simplifications in the description of our algorithm for j let lj j box s there is a unique map lj j box s we define grj we define to be the submatrix of whose columns correspond to elements of im and we define to be the submatrix of whose rows and columns correspond to elements of im and im respectively let proposition h proof associated to and we have respective chain complexes of free modules with ker im h m and ker im h as in the definition of a further since is a trimming of we have obvious maps j fj for j making the following diagram commute the maps i induce a map h h m it s easy to see that is an isomorphism for a box s to finish the proof it remains to check that is also an isomorphism for b box s for b box s there is a unique element a box s minimizing the distance to b note that a b by commutativity it suffices to see that m a b is an isomorphism and h a b is an isomorphism lemma gives that m a b is an isomorphism and since im im box s it s easy to see directly that h a b is an isomorphism clearly is trimmed given tptsmat and our representation of we can compute from in o m d time where d o is the number of entries of henceforth we will assume that the of m given as input to our algorithm is trimmed ru and computation of persistence barcodes to prepare for a description of our algorithm for computing the barcode templates we begin with some preliminaries on the computation of persistence barcodes there is a large and growing body of work on this topic see for a recent overview with an emphasis on publicly available software we restrict attention here to what is needed to explain our algorithm the standard algorithm for computing persistence barcodes was introduced in building on ideas in see also for a succinct description of the algorithm together with implementation details the algorithm takes as input a of a persistence module with and and returns b of course in applications typically comes from the chain complex of a filtration with the simplices in each dimension ordered according to their grade of appearance the algorithm is a variant of gaussian elimination it performs column additions to construct certain factorizations of and from which the barcode b m can be read off directly let us explain this in more detail drawing on ideas introduced in let r be an m n matrix and for j the index of a column of r let r j denote the maximum row index of a entry in 
column j of we say r is reduced if r j r j whenever j j are the indices of columns in the standard persistence algorithm yields a decomposition d ru of any matrix d with coefficients in the field k where r is reduced and u is an matrix for d an r s matrix the algorithm runs in time o we define an ru of simply to be a pair of ru and we can read b m off of and to explain this suppose is of dimensions and let ri j denote the j th column of ri define pairs j j j ess j j and j k for any column k of while the ru of a matrix is not unique it is shown in that pairs and ess are independent of the choice of ru of theorem b m j k j k pairs j j ess vineyard updates to barcode computations suppose that d is a matrix and that is obtained from d by transposing either two adjacent rows or two adjacent columns of introduces an algorithm known as the vineyard algorithm for updating an ru of d to obtain an ru of in time o this algorithm is an essential subroutine in our algorithm for computing the barcode templates permutations of free implicit representations as mentioned above the standard persistence algorithm takes as input a of a persistence module with and the reason we need and to be is that the formula theorem for reading the barcode off of the ru holds only under this assumption on and now suppose that we are given a with either or not how can we modify to obtain a of h with grade functions so that we can read the barcode of h off of an ru decomposition of we now answer this question it is easy to check that the following lemma holds lemma suppose is a of an persistence module m of dimensions then for and any permutations on and respectively and and the corresponding permutation matrices we have that is also a of m thus in the case that m is finding a of m with grade functions amounts to finding permutations and as above with and and applying the corresponding permutations to the rows and columns of and finding the permutations by sorting for j we may use a sorting algorithm to find a permutation which puts the list grj grj grj mj in order the function gri is then to take advantage of the vineyard algorithm in our main algorithm we will want to work with a sorting algorithm which generates the permutation as a product of transpositions of adjacent elements in mj for this we use the well known algorithm this yields as a minimum length product of adjacent transpositions induced free implicit representations at each using lemma above we next show that for any e in a m yields a of the discrete persistence module m e introduced in section with and nondecreasing thus we can compute b m e by computing an ru of lift maps a function recall the definition of the set of template points p e from section define lifte box s p e by taking lifte a u for u the minimum element of p e such that a u where as elsewhere denotes the partial order order in fig we illustrate lifte and lifte for a pair of adjacent e and u l u v v figure illustration of lifte left and lifte right at two adjacent cells e and s e e containing the duals of lines l and l respectively the black dots represent points of p e p note that u p e and v p e but u p e and v p e the maps lifte lifte are illustrated by red arrows for a few sample points purple dots the shaded region in each figure is the subset of box s on which lifte lifte let orde p e e denote the unique bijection the inverse of the restriction of e to e free implicit representation of m e suppose that our of m is of dimensions for j let mj mj be any permutation such that grej orde lifte 
grj mj z is and write e let and denote the permutation matrices corresponding to and respectively let and proposition is a of m e proof it s not hard to check that orde lifte orde lifte is a of m e given this the result follows from lemma of in general is not uniquely defined because it depends on the pair of permutations e which needn t be unique we will sometimes write e to emphasize the dependence on e we say any e chosen as above is valid reading off t e from an ru of for j let fje lifte grj orde grej and write f e we call f e the template map for note that f e is independent of the choice of a valid e from theorem proposition and the definition of the barcode template t e in section we have the following relationship between f e and t e t e j k j k pairs j j ess our algorithm eq tells us that for a e in a m to compute the barcode template t e it suffices to compute the template map f e along with the ru of e for some valid e this is exactly what our algorithm does we give a description of the algorithm here deferring some details to later sections we need to compute t e at every a approach would be to do the computation from scratch at each however we can do better by leveraging the work done at one cell to expedite the computation at a neighboring cell we proceed as follows let g denote the dual graph of a m this is the undirected graph with a vertex for each e of a m and an edge e for each pair of adjacent e a m the dual graph is illustrated in fig we compute a path ew in g which visits each at least once our algorithm for computing is discussed below in section let us adopt the convention of abbreviating an expression of the form ei by i for example we write f ei as f i b a c d f i e g h figure the line arrangement a m in grey together with its dual graph g in blue the path through g might visit the vertices in the order a b c d e f e g h i once we have computed for each i w we compute the template map f i and an ru of i for some valid choice of i we proceed in order of increasing i for j we store fji in memory by separately storing its factors grj and we discuss the data structures for this in section thus to compute fji we compute both grj and the initial cell is chosen in a way that allows for a simple combinatorial algorithm to compute grj and see section letting pj denote the matrix representation of we have that thus and can be obtained from and by performing row and column permutations given and we compute an ru of via an application of the standard persistence algorithm for i w we compute f i as an update of f and we compute the ru decomposition of as an update of the ru of let us explain this in more detail with yet more detail to come in later sections to update we update both grj and first we update grj to obtain lifti grj details of this computation are given in section second we update to obtain as follows define gr ij ordi lifti grj note the distinction between gr ij and grij the former is defined in terms of while the i latter is defined in terms of applying the algorithm to gr ij we compute a sequence of transpositions of adjacent elements in mj such that for mj mj the composition of these transpositions gr ij is we take clearly then grij ordi lifti grj is so i is valid note that for pj the matrix representation of we have to compute an ru of e from our ru of e we exploit the decomposition of as a sequence of transpositions provided by the algorithm and apply the ru algorithm of repeatedly performing an update of the ru for each transposition in the sequence we note 
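For readers who want the column reduction invoked at the initial cell spelled out, here is a small sketch of the standard reduction and of the pairs/essential read-off described earlier. It works over the two-element field, assumes the columns of each matrix are already ordered by nondecreasing grade, and does not track the U factor, which the real implementation must keep in order to perform vineyard updates later; the helper names are ours.

```python
def reduce_columns(columns):
    """Standard persistence reduction over the two-element field.  `columns`
    is a list of sets of row indices; left-to-right column additions are
    performed until no two nonzero columns share the same largest row index
    ("low"), and the reduced columns are returned."""
    cols = [set(c) for c in columns]
    low_to_col = {}                      # low row index -> index of reduced column
    for j, col in enumerate(cols):
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]   # add the conflicting column (mod 2)
        if col:
            low_to_col[max(col)] = j
    return cols


def barcode_from_reduced(gr1, gr2, R1, R2):
    """Read a one-parameter barcode off reduced matrices R1, R2 (columns as
    sets of row indices) with nondecreasing grade functions gr1, gr2, using
    the pairs/essential description above."""
    pairs = {max(col): k for k, col in enumerate(R2) if col}   # low(k) -> k
    bars = [(gr1[j], gr2[k]) for j, k in pairs.items() if gr1[j] < gr2[k]]
    essential = [gr1[j] for j, col in enumerate(R1) if not col and j not in pairs]
    return bars + [(b, float("inf")) for b in essential]


if __name__ == "__main__":
    # Degree-0 homology of a graph filtration: vertices born at grades 0 and 1,
    # the edge joining them added at grade 2.  The first matrix is the (empty)
    # map to the trivial module, the second is the boundary of the edge.
    gr1, R1 = [0, 1], [set(), set()]          # already reduced (all columns zero)
    gr2, D2 = [2], [{0, 1}]
    R2 = reduce_columns(D2)
    print(barcode_from_reduced(gr1, gr2, R1, R2))   # [(1, 2), (0, inf)]
```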
that neither nor nor the decomposition of into transpositions ever needs to be stored explicitly in memory rather we use each transposition to perform part of the update as soon as it is computed after this there is no need to store the transposition it remains to explain how we compute grj and and how we update grj to obtain lifti grj in what follows we explain all of this and also fill in some details about the data structures used by our algorithm computing the path we first explain how we choose the path ew as above let g be the dual graph of the line arrangement a m to compute we first compute a weight for each edge of the weight of e is chosen to be an estimate of the amount of work our algorithm must do to pass between cells e and in either direction we defer the details of how we define and compute these edge weights until section while the choice of edge weights impacts our choice of and hence the speed of our algorithm for computing the barcode templates our asymptotic complexity bounds for our algorithm are independent of the choice of edge weights we take to be the topmost in a m the points of correspond under pointline duality to lines that pass to the right of all points in call a path in g starting at and visiting every vertex of g a valid path we d like to choose to be a valid path of minimum length but we do not have an efficient algorithm for computing such a path indeed we expect the problem is instead we compute a path whose length is approximately minimum let be a valid path of minimum length it is straightforward to compute a valid path such that length length first we compute a minimum spanning tree m for g via a standard algorithm such as kruskal s algorithm via search of m starting at we can find a valid path in m which traverses each edge of m at most twice since length m length we have that length length in fact an algorithm with a better approximation ratio is known shows that a variant of the christofides algorithm for the traveling salesman problem on a metric graph yields a valid path with length length data structures before completing the specification of our algorithm for computing the barcode templates t e we need to describe the data structures used internally by the algorithm persistent homology and vineyard update data structures first we mention that our algorithm uses each of the data structures specified in for computing and updating ru these consist consists of sparse to store each of and well as several additional arrays which aid in performing the persistence and vineyard algorithms and in reading barcodes off of the matrices since we use these data structures only in the way described in we refer the reader to that paper for details array data structures for j we also maintain arrays gradesj liftsj sigj and siginvj each of length mj gradesj is a static array with gradesj k grj k liftsj k is an array of pointers to entries of tptsmat after our computations at cell ei are complete liftsj k lifti grj k sigj k k siginvj k k remark note that after all computations at cell ei are complete we can use liftsj and sigj together to perform constant time evaluations of the template map fji lifti grj together with the ru of this allows us to efficiently read off t e using eq the lists levsetj u we mentioned in section that for each u p we store lists u and u at the entry of tptsmat corresponding to u we now specify what these lists store after our computations at cell ei are complete levsetj u stores lifti grj u for u p i and levsetj u is empty for u p i as 
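A sketch of the path construction just described: Kruskal's algorithm with union-find produces a minimum spanning tree of the dual graph, and a depth-first walk of that tree yields a valid path that traverses each tree edge at most twice. Vertex labels and edge weights in the example are arbitrary; in the actual computation the walk starts at the topmost cell and the weights are the estimates w(e) defined later.

```python
from collections import defaultdict


def kruskal_mst(num_vertices, weighted_edges):
    """Minimum spanning tree via Kruskal's algorithm with union-find.
    weighted_edges: iterable of (weight, u, v).  Returns adjacency lists."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = defaultdict(list)
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
    return tree


def walk_from(tree, start):
    """Depth-first walk of the spanning tree starting at `start`, recording a
    vertex each time it is entered or re-entered.  The result is a path in
    the dual graph that visits every cell and traverses each tree edge at
    most twice, so its length is at most twice that of an optimal valid path."""
    path, seen = [start], {start}

    def dfs(u):
        for v in tree[u]:
            if v not in seen:
                seen.add(v)
                path.append(v)
                dfs(v)
                path.append(u)      # walk back up the tree edge
    dfs(start)
    return path


if __name__ == "__main__":
    # Five cells; edge weights are hypothetical estimated update costs w(e).
    edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 4), (1, 3, 4)]
    mst = kruskal_mst(5, edges)
    print(walk_from(mst, start=0))   # e.g. [0, 1, 2, 1, 3, 4, 3, 1, 0]
```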
we will see in section our algorithm uses the lists levsetj u to efficiently perform the required updates as we pass from cell to cell ei computations at the initial cell we next describe in more detail the computations performed at the initial cell building on the explanation of section to begin our computations at cell for each u p we initialize levsetj u grj u note that grj u is nonempty only if u is the rightmost element of p on the horizontal line passing through u thus the elements u p such that levsetj u is nonempty have unique to efficiently initialize the lists levsetj u we use an o m log m time sweep algorithm described in appendix when we add k mj to the list levsetj u we also set liftsj k u next we concatenate the lists levsetj u into single list of length mj in increasing order of the of u and set siginvj equal to this list for j given siginvj we construct sigj in the obvious way in time o mj we define to be the permutation whose array representation is sigj letting pj denote the matrix representation of we use the arrays and to compute using the column sparse representation of and described in which allows for implicit representations of row permutations this takes o m time as already explained in section we then apply the standard persistence algorithm to compute the this completes the work done by the algorithm at cell computations at cell ei for i in section we outlined our algorithm for updating the template map f and ru decomposition as we pass from cell to we now give a more detailed account of this algorithm filling in some details omitted earlier as explained in section to update grj our algorithm separately updates the factors grj and while it is possible to first completely update grj and then update via an application of insertion sort it is slightly more efficient to interleave the updates of the two factors so that when we update the value of grj k for some k mj we immediately perform the transpositions necessitated by that update along with the corresponding updates to the this is the approach we take we assume without loss of generality that lies below ei the shared boundary of and ei lies on the line dual to some anchor tptsmat provides constant time access to the element u p immediately to the left of and the element v p immediately below if such u and v exist to keep our exposition simple we will assume that u and v do both exist the cases that either u or v do not exist are similar but simpler note that u p v p i u p i v p see fig recall that for each i lifti grj is represented in memory using the data structures liftsj and levsetj whereas is represented using sigj and siginvj to perform the required updates as we pass from cell to cell ei we first iterate through the list levsetj in decreasing order for each k levsetj if the of grj k is less than or equal to then lifti grj k v we remove k from levsetj add k to the beginning of the list levsetj v and set liftsj k if on the other hand the of grj k is greater than then lifti grj k and we do not perform any updates to liftsj levsetj or levsetj v for this value of if in addition liftsj siginvj sigj k v then we apply insertion sort to update sigj siginvj and the ru specifically we compute sortoneelement sigj k for sortoneelement the algorithm defined below algorithm sortoneelement w input w mj such that liftsj siginvj y liftsj siginvj z whenever w y z output updated sigj and siginvj such that liftsj siginvj y liftsj siginvj z whenever w y z correspondingly updated ru y w while y mj and liftsj siginvj y liftsj 
siginvj y do swap siginvj y and siginvj y sigj siginvj y y sigj siginvj y y perform the corresponding updates to the ru as described in section y once we have finished iterating through the list levsetj we next iterate through the list levsetj u in decreasing order for each k levsetj u we perform updates exactly as we did above for elements of levsetj with one difference if the second coordinate of grj k is greater than then we must remove k from levsetj u add k to the beginning of the list levsetj and set liftsj k when we have finished iterating through the list levsetj u our updates at cell ei are complete choosing edge weights for g we have seen in section that the path depends on our choice of weights w e on the edges e of we now explain how we choose and compute these weights as we will explain in section computing these weights is also the first step in two practical improvements to our algorithm in practice the cost of our algorithm for computing the barcode templates is dominated by the cost of updating the ru on average we expect the cost of updating the ru as we traverse edge e in g to be roughly proportional to the total number of transpositions performed thus if it were the case that the average number t e of transpositions performed as we traverse e were independent of the choice of path then it would be reasonable to take w e t e in fact t e does depend on nevertheless we can give a simple computable estimate of t e which is independent of we choose w e to be this estimate for e our definition of w e in fact depends only on the anchor line l containing the common boundary of so that we may write w e w l to prepare for the definition of w l we introduce some terminology which we will also need in section switches and separations for e and adjacent of a m we say r s box s switch at e if either lifte r lifte s or lifte r lifte s and and lifte s lifte r lifte s lifte r similarly for r s incomparable we say r s separate at e if either lifte r lifte s and lifte r lifte s lifte r lifte s and lifte r lifte s or we omit the straightforward proof of the following lemma suppose e and f f are pairs of adjacent of a m with the shared boundary of each pair lying on the same anchor line then i r s switch at e if and only if r s switch at f f ii r s separate at e if and only if r s separate at f f in view of lemma for l an anchor line we say that a b switch at l if a b switch at e for any adjacent e whose boundary lies on analogously we speak of a b separating at lemma if r s switch at any anchor line l then r s are incomparable proof suppose l dp for some anchor if r s switch at l then there exist u v as in fig and exchanging and r and s if necessary we have that remark every time we cross an anchor line l our algorithm performs one insertionsort transposition for each pair k l mj such that grj k grj l switch at for each pair k l mj such that grj k grj l separate at l the algorithm may only perform a corresponding transposition when crossing l in one from above to may sometimes not perform such a transposition even when crossing l in this direction it is reasonable to estimate then that for each pair k l mj such that grj k grj l separate at l the algorithm performs a corresponding transposition roughly of the time definition of w l for an anchor line l a finite set y and a function f y define swl f respectively sepl f to be the number of unordered pairs a b y such that f a f b switch respectively separate along motivated by remark we define w l swl swl sepl sepl computing the weights w l 
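The following is a runnable re-rendering of the sortOneElement pseudocode above, with a comparison key standing in for the order on lifted template points and a callback standing in for the vineyard update of the RU decomposition; both of these stand-ins, and the function names, are ours.

```python
def sort_one_element(w, siginv, sig, key, on_transposition=None):
    """Insert the element at position w of siginv into the sorted suffix to
    its right by adjacent transpositions, keeping the inverse permutation sig
    consistent.  Assumes siginv[w+1:] is already sorted with respect to `key`
    (which plays the role of lifts composed with the order on template
    points).  `on_transposition(y)` is a hook where the corresponding
    vineyard update of the RU decomposition would be performed."""
    m = len(siginv)
    y = w
    while y + 1 < m and key(siginv[y]) > key(siginv[y + 1]):
        siginv[y], siginv[y + 1] = siginv[y + 1], siginv[y]
        sig[siginv[y]] = y
        sig[siginv[y + 1]] = y + 1
        if on_transposition is not None:
            on_transposition(y)      # update rows/columns y, y + 1 of the RU
        y += 1


if __name__ == "__main__":
    # Five columns; key(k) is a toy position of column k's lifted template
    # point along the current slice.  The column at position 0 of siginv must
    # be pushed rightward past the already-sorted suffix.
    key_values = {0: 1, 1: 2, 2: 5, 3: 3, 4: 4}
    siginv = [2, 0, 1, 3, 4]                 # suffix [0, 1, 3, 4] is sorted by key
    sig = [1, 2, 0, 3, 4]
    sort_one_element(0, siginv, sig, key=lambda k: key_values[k],
                     on_transposition=lambda y: print("transpose positions", y, y + 1))
    print(siginv)   # [0, 1, 3, 4, 2]
    print(sig)      # [0, 1, 4, 2, 3]
```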
the weights w l can be be computed using a simplified version of our main algorithm for computing all barcode templates first we choose a path q through the of a m starting at and crossing every anchor line l once for example we can choose q to be a path through the rightmost cells of a m we then run a variant of the algorithm for computing the barcode templates described above using the path q in place of p and omitting all of the steps involving matrices and updates of ru for e adjacent in q with shared boundary on the anchor line l dp we compute w l as we pass from cell e to cell to explain how this works let u v be as in section for simplicity assume that u and v exist as we did there for any pair of elements r s that switch or separate at l lifte r lifte s u so to compute w l we only need to consider pairs whose elements lie in the lists levsetj u and levsetj further the lines x and y determine a decomposition of the plane into four quadrants and whether r s switch or separate is completely determined by which of these quadrants contain r and s see fig using these observations we can easily extend the update procedure described in section to compute the weight w l as we cross from e into cost of computing and storing the augmented arrangement in this section we prove theorem which bounds the cost of computing and storing m recall that theorem is stated for persistence modules arising as the ith persistent homology of a bifiltration using language of we may state the result in a more general algebraic form proposition let m be persistence module of coarseness and let be a of m of dimensions letting m we have that m is of size o our algorithm computes m using o log elementary operations and requires o storage to see that theorem follows from proposition let f be a or multicritical bifiltration of size l and recall from section section that using the construction of in o l log l time we can compute a of hi f of dimensions with o l size of the augmented arrangement we prove proposition i first as noted in section the dcel representation of a m is of size o at each of a m we store the barcode template t e by considering the ru decomposition we see that if is a of a persistence module n of dimensions then n hence proposition implies that e for all therefore our representation of m in memory is of total size o o as claimed cost of computing the augmented arrangement we now turn to the proof of proposition ii as we have seen our algorithm for computing m involves several for each a row in table lists the data computed by this a bound on the number of elementary operation required and the sections in this paper where the details were discussed the bounds in the first four rows of table were explained earlier so it remains to analyze the cost of our algorithm for computing the barcode templates t e the computation of t e itself involves a number of steps whose individual time complexities we again list in table table cost of augmented arrangement data elem operations details in set s supp m supp m o m section template points p stored in tptsmat and list anchors o section arrangement a m constructed via algorithm o log section data structures for point location o log barcode templates t e section o m m log section the bounds in all but the last four rows in table above the double horizontal line were either explained earlier or are clear from the discussion presented in the remainder of this section we verify the last four bounds cost of updates of the levsetj and liftsj at ei i in the notation 
of section to update the lists levsetj to their proper values at ei our algorithm considers performing an update for each element k in the lists levsetj u and levsetj in the worst case for each cell ei there are o m such elements k to consider in total for each k the updates of levsetj and liftsj take constant time thus the total amount of work we need to do at cell ei is o m the path p contains o so the total work to perform the updates over all is o as claimed a bound on the total number of transpositions to establish the next two bounds we take advantage of the following result which we prove in section proposition our algorithm for computing all barcode templates performs a total of o transpositions on and cost of updates of sigj siginvj at cells ei i the cost of updating sig is proportional to the cost of updating siginvj thus it is immediate from proposition that the total cost of updating these arrays is o cost of updates of ru at cells ei i there are o cells to consider this gives us the o term for each transposition performed on sigj we call the vineyard algorithm described in section at most twice each call to the vineyard algorithm takes time o m by proposition then the total cost of all vineyard updates performed is o this gives the desired bound table cost of barcode template data elem operations details in trimming the o m section path found via the algorithm for the optimal path using kruskal s mst algorithm o log section levsetj u liftsj sigj siginvj at cell o m log m section appendix of o m section reading the barcode template t off of the ru of for all i o section section levsetj u and liftsj at all cells ei i o section e sigj siginvj at all ei i o m section at all ei i o section weights w l for all anchor lines l o section cost of computing weights w l as explained in section we compute the edge weights w l using a variant of our algorithm for computing the barcode templates t e using lemma below it can be checked that computing all edge weights takes time o storage requirements by proposition i m itself is of size o so our algorithm for computing m requires at least this much storage our algorithm for computing the betti numbers requires o storage as do the persistence algorithm and the vineyard updates to ru the algorithm requires o storage as does kruskal s algorithm constructing the search data structures used for queries of m also requires o storage from our descriptions of the data structures used in the remaining parts of our algorithm it is clear that other steps of our algorithm for computing m do not require more than o storage the bound of proposition ii on the storage requirements of our algorithm follows bounding total number of transpositions required to compute all barcode templates to complete our proof of proposition ii it remains to prove proposition to this end for r box s let p u p u r lift r u p u lifte r for some e we leave to the reader the proofs of the following two lemmas lemma for e a l a line with d l e and u p lifte r u if and only if the following two conditions hold pushl u is the minimum element of pushl p for all p u with pushl pushl u we have lemma for u p we have that u lift r if and only if the following two conditions hold there exists no w p with and there exist no pair v w p with and fig illustrates the shape of the set lift r as described by lemma r figure the shape of a set lift r as described by lemma the next lemma shows that the number of anchor lines at which a given pair of points in box s can switch or separate is at most 
two it is the key step in our proof of proposition lemma for r s box s incomparable i there is at most one anchor line l at which r and s switch and if such l exists then there is no anchor line at which r and s separate ii there are at most two anchor lines at which r and s separate proof assume without loss of generality that then since r and s are incomparable let r x y lift r x s x y lift s y q x y lifte r lifte s for some e a m the following observations illustrated in fig follow from lemma if r is nonempty there is an element r such that for all u r and symmetrically if s is nonempty there is an element r such that for all u s and if q is nonempty there is an element q x q such that for all u q and symmetrically if q is nonempty there is an element q y q such that for all u q and clearly q x and q y are unique when they exist using lemma it is straightforward to check that u q if and only if u lift r lift s and one of the following is true u is incomparable to every element of r r is and there is no v lift s with and s is and there is no v lift r with and see fig for an illustration of r s and q to finish the proof of lemma we consider seven cases for each we explicitly describe the lines where r and s either switch or separate the verification of the claimed behavior in each case which uses lemma and the observations above is left to the reader fig illustrates case r and s empty lifte r lifte s for every e a m therefore r and s never switch or separate r nonempty s and q empty lifte r lifte s for every e a m again no switches or separations s nonempty r and q empty symmetric to the above no switches or separations r and s nonempty q empty lifte r lifte s whenever e lies below l dp lub and lifte s lifte r whenever e lies above hence r and s switch at r and q nonempty s empty lifte r lifte s whenever e lies below l dp lub qx and lifte r lifte s whenever e lies above hence r and s separate at s and q nonempty r empty dp lub qy symmetric to the above r and s separate at r s and q all nonempty lifte r lifte s whenever e lies below l dp lub qx lifte r lifte s whenever e lies above l and below dp lub qy and lifte s lifte r whenever e lies above hence r and s separate at l and r qx q qy r s s figure an illustration of the case where r s and q are each case points in lift r lift s are drawn as black dots observe that r and s separate at dp lub qx and dp lub qy and at no other anchor line do r and s either switch or separate proof of proposition fix j and let k k mj first we note that if grj k and grj k are comparable then as we pass from cell to cell ei our algorithm for computing barcode templates never performs an transposition of the values k and k in siginvj this is because our initialization procedure at cell described in section and appendix chooses siginvj such that if grj k grj k then sigj k sigj k since lifte k lifte k for all e there thus is never any need to swap k and k therefore as we pass from cell to cell ei our algorithm performs an transposition of the values k and k in siginvj only if grj k and grj k either switch or separate at ei clearly the number of pairs k k mj such that grj k and grj k either switch or separate at any anchor line is less than the total number of pairs the path constructed via the minimum spanning tree construction in section crosses each anchor line at most times by lemma then for each pair k k mj the component of our algorithm performs a total of at most transpositions of that pair in siginvj hence the total number of transpositions performed by the 
algorithm altogether is at most speeding up the computation of the augmented arrangement in this section we describe several simple practical strategies to speed up the runtime of our computation of m used together these strategies allow us to the compute augmented arrangements of the persistent homology modules of much larger datasets than would otherwise be possible persistence computation from scratch when ru are too slow three options for computing a barcode template while an update to an ru decomposition involving few transpositions is very fast in practice an update to an ru decomposition requiring many transpositions can be quite slow when many transpositions are required it is sometimes much faster to simply recompute the ru from scratch using the standard persistence algorithm in our setting the practical performance of our algorithm can be greatly improved if for consecutive ei with the edge weight w ei greater than some suitably chosen threshold t we simply compute the ru of from scratch directly from and moreover we can obtain significant additional speedups by avoiding the computation of the full ru of altogether at some cells ei to explain this we first note that to obtain t ei via eq we do not need the full ru but only pe pairs ess in particular we do not need and the algorithm for computing barcode templates described in section maintains the full ru of each because the vineyard algorithm requires this but if we are willing to compute pe from scratch then it is not necessary to compute the full ru of it suffices to compute pe further if in this case we have that ei ej for some j i then we do not even need to compute pe at all at cell ei since we have already done so at an earlier step in recent years several algorithms for computing barcodes have been introduced which are much faster than the standard persistence algorithm for example a few such algorithms are implemented in the software library phat given a as input these algorithms compute pe but do not compute the full ru of let us restrict our attention a single such algorithm say the clear and compress algorithm implemented in phat to compute pe then we have three options available to us a use the clear and compress algorithm if ei ej for all j i do nothing if ei ej for some j i b compute the full ru of from scratch using the standard persistence algorithm c use vineyard updates this option is only available if we chose option b or c at cell so that the full ru of was computed clearly there is a tradeoff between options a and b option a is much faster but choosing option a at cell ei precludes the use of option c at cell how then do we choose between these three options at each cell ei we formulate this problem as a discrete optimization problem which can solved efficiently by reduction to a problem estimating runtimes of the different options our formulation of the problem requires us to first estimate the respective runtimes ci a ci b and ci c of options a b and c at each cell ei in the path ew we will describe a simple strategy for this here and then explain below how to modify our approach to correct for a drawback of the strategy we take ci a if ei ej for some j i and otherwise we take ci a to be some constant c a independent of i similarly we take ci b c b to be independent of i to compute c a we compute pe using option a and set c a to be the runtime of this computation similarly to compute c b we compute the the full ru of from scratch and take c b to be the runtime we set c arbitrarily say c to compute ci a 
for each i we perform several thousand random vineyard updates to the ru of using timing data from these computations we compute for j the average runtime cvine of an j however some fast algorithms for persistence computation can be readily adapted to compute and for example as explained to us by ulrich bauer this is true for the twist variant of the standard persistence algorithm update to the ru corresponding to a transposition of adjacent elements in mj letting l denote the anchor line containing the shared boundary of and ei and recalling the notation of section we take vine ci cvine swl sepl swl sepl some motivation for this choice of ci is provided by remark the optimization problem to decide between options a b and c at each cell ei we solve the following optimization problem minimize w x ci xi subject to xi a b c xi a c clearly this problem is equivalent to the integer linear program ilp minimize w x ci a xi ci b yi ci c zi subject to xi yi zi xi yi zi xi using the constraints xi yi zi we can eliminate the variables yi from this ilp to obtain an equivalent ilp with a simpler set of constraints minimize w x ci a ci b xi ci c ci b zi subject to xi zi xi the constraint matrix associated to the latter ilp is of a standard form well known to be totally modular while an ilp with totally unimodular constraint matrix can always be solved directly via linear programming relaxation it is often the case that such an ilp can be cast as a network flow problem in which case we can take advantage of very efficient specialized algorithms in fact as explained to us by john carlsson the simplified ilp above can be cast as the problem of finding a minimum cut in a network first the ilp can be cast as a independent set problem in a bipartite graph with vertex weights in any graph the complement of an independent set is a vertex cover and vice versa so the latter problem is in turn equivalent to a vertex cover problem in a bipartite graph it is well known that such a problem can be solved by computing a minimum cut in a flow network section dynamic updates to our estimates of runtime cost as mentioned above there is a drawback to the approach to barcode template computation we have proposed here c a and c b may not be very good estimates of the respective average costs of option a and option b after all our estimates c a and c b are computed using very little data here is one way to correct for this we first solve the optimization problem above for just the first few cells say in the path using the solution we then compute the barcode template for each of these cells as we do this we record the runtime of the computation at each cell we next update the value of c a to the average run time of all vine in computations performed using option a thus far we also update c b cvine and the analogous way we then use these updated values as input to the optimization problem for the next of cells in we continue in this way until the estimates of c a c b vine have stabilized finally we solve the optimization problem for all of the cvine and remaining cells in and use the solution to compute the remaining barcode templates coarsening of persistence modules we have seen that size of the augmented arrangement m depends quadratically on the coarseness of m and that computing m requires o elementary operations thus to keep our computations small we typically want to limit the size of by coarsening our module as we now explain doing this quite simple a similar coarsening scheme is mentioned in let g gn be any grid 
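Since the cost of each choice depends only on the cell and the constraint couples only consecutive cells, the optimum found by the minimum-cut reduction described above can also be computed, purely for illustration, by a simple dynamic program over the path. The sketch below uses hypothetical per-cell cost estimates and forbids the vineyard option at the first cell, where no RU decomposition is yet available.

```python
def choose_options(costs):
    """Exact solution of the option-selection problem by dynamic programming.

    costs[i] is a dict {"A": .., "B": .., "C": ..} of estimated runtimes at
    the i-th cell of the path.  The only constraint is that option C (a
    vineyard update) is allowed at cell i only if option B or C was chosen at
    cell i-1, so that a full RU decomposition is available to update.
    Returns the minimum total cost and one optimal assignment."""
    INF = float("inf")
    n = len(costs)
    # best[x]: minimal cost of a valid assignment of cells 0..i ending with x
    best = {"A": costs[0]["A"], "B": costs[0]["B"], "C": INF}
    choice = [dict((x, None) for x in "ABC")]
    for i in range(1, n):
        new_best, new_choice = {}, {}
        for x in "ABC":
            allowed_prev = ["B", "C"] if x == "C" else ["A", "B", "C"]
            prev = min(allowed_prev, key=lambda p: best[p])
            new_best[x] = best[prev] + costs[i][x]
            new_choice[x] = prev
        best, choice = new_best, choice + [new_choice]
    x = min("ABC", key=lambda s: best[s])
    total, assignment = best[x], [x]
    for i in range(n - 1, 0, -1):        # trace back one optimal assignment
        x = choice[i][x]
        assignment.append(x)
    return total, assignment[::-1]


if __name__ == "__main__":
    # Hypothetical per-cell cost estimates c_i(A), c_i(B), c_i(C).
    costs = [{"A": 3, "B": 4, "C": 9},
             {"A": 6, "B": 7, "C": 1},
             {"A": 6, "B": 7, "C": 1},
             {"A": 6, "B": 7, "C": 1}]
    print(choose_options(costs))   # (7, ['B', 'C', 'C', 'C'])
```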
function as defined in section g extends to a functor zn rn which we also denote by for m a persistence module let m g be a continuous extension of m in view of proposition the coarseness of the grid g controls the coarseness of the module m g let di denote the multidimensional interleaving distance as defined in as explained there di is a particularly metric on persistence modules the following proposition whose easy proof we omit makes precise the intuitive idea that a small amount of coarsening leads to a small change in our persistence module proposition if z z for i and all z z then di m m g the external stability theorem of landi mentioned earlier in section shows that if persistence modules m and n are close in the interleaving distance then the fibered barcodes b m and b n will be close in a precise sense this justifies the use of coarsening in conjunction with our visualization paradigm coarsening free implicit representations as explained in section in practice we typically have access to m via a for m since our algorithm for computing an augmented arrangement takes a as input to compute m g we want to first construct a of m g from let clg rn g be the function which takes each a rn to the minimal z g with a z define clg clg we leave the proof of the following to the reader proposition h m g thus to obtain a of a coarsening of m it suffices to simply coarsen the grade functions in our of m parallelization the problem of computing the barcode templates t e is embarrassingly parallelizable here is one very simple parallelization scheme given processors where l is less than or equal to the number of anchors we can choose the l anchor lines l with the largest values of w l these lines divide a m into at most polygonal cells ck with disjoint interiors and the remaining anchor lines induce a line arrangement a m k on each ck on processor k we can run our main serial algorithm described above to compute the barcode templates at each of a m k for this we need to make just one modification to the algorithm when choosing our path through the of a m k we generally can not choose our initial cell e to be the cell described in section since may not be be contained in a m k instead we choose the initial cell e arbitrarily this means we can not use the approach of section to initialize the data structures sigj siginvj liftsj and levsetj at cell one way to initialize these data structures is to chose an arbitrary affine line l with d l e and consider the behavior of the map pushl on s im im using exact arithmetic where necessary preliminary runtime results we now present runtimes for the computation of augmented arrangements arising from synthetic data we emphasize that these computational results are preliminary as our implementation of rivet does not yet take advantage of some key optimizations first the code that produced these results employs a highly simplified variant of the scheme detailed in section for computing barcode templates this variant chooses only between options b and c at each edge crossing which is less efficient than what is proposed in section secondly this code stores the columns of our sparse matrices using linked lists it is known that a smarter choice of data structure for storing columns can lead to major speedups in persistence computation finally as mentioned in section our current implementation runs only on a single processing core we expect to see substantial speedups after parallelizing the computation of barcode templates as proposed in section our computations 
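Concretely, coarsening a free implicit representation only touches its grade functions: each grade is snapped upward to the least grid point above it, in the spirit of the map cl_G, while the matrices are reused unchanged. A small sketch, under the assumption that the grid is given by two sorted coordinate lists and that it covers all grades (out-of-range grades are clamped here); the function names are illustrative only.

```python
import bisect


def coarsen_grades(grades, grid_x, grid_y):
    """Coarsen an FI-rep by snapping each grade upward onto a finite grid.

    grid_x and grid_y are sorted lists of grid values; each grade (a, b) is
    replaced by the least grid point that is >= (a, b) coordinate-wise.  The
    matrices of the FI-rep are left unchanged."""
    def snap_up(value, grid):
        i = bisect.bisect_left(grid, value)
        return grid[min(i, len(grid) - 1)]

    return [(snap_up(a, grid_x), snap_up(b, grid_y)) for a, b in grades]


if __name__ == "__main__":
    grades = [(0.12, 3.4), (0.5, 0.5), (2.7, 1.9)]
    print(coarsen_grades(grades,
                         grid_x=[0.0, 1.0, 2.0, 3.0],
                         grid_y=[0.0, 1.0, 2.0, 3.0, 4.0]))
    # [(1.0, 4.0), (1.0, 1.0), (3.0, 2.0)]
```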
were run on a single slow mhz core in a server with gb of ram however for the computations reported here only a fraction of the memory was required for example rivet used approximately gb of ram for our largest computation noisy circle data data sets we consider are point clouds sampled with noise from an annulus such as the center point cloud in fig specifically of the points in each data set x are sampled randomly from a thick annulus in a plane and are sampled randomly from a square containing the annulus we define a codensity function x r by taking p to be equal to the number of points of x within some fixed distance to we then construct the bifiltration f rips described in section taking the metric on x to be the euclidean distance with the scale parameter for the complexes capped at a value slightly larger than the inner diameter of the annulus computing the graded betti numbers table displays the average runtimes for computing the graded betti numbers and of hi f for i each row gives the averages from three point clouds of the specified size for example we generated three point clouds of points and the average number of in the resulting bifiltrations was so computing homology required working with a bifiltration of average size simplices building the augmented arrangement table displays the average runtimes in seconds to build f as before each row gives the averages from three point clouds of the specified size the average runtimes for computing f for each of four different coarsenings are displayed in the table similarly table displays the average runtimes in seconds to build f table average runtimes for computing the bigraded betti numbers of the noisy circle data points simplices runtime sec table runtimes for computing the augmented arrangement for homology of the noisy circle data runtimes seconds points simplices bins table runtimes for computing the augmented arrangement for homology of the noisy circle data runtimes seconds points simplices bins conclusion in this paper we have introduced rivet a practical tool for visualization of persistence modules rivet provides an interactive visualization of the barcodes of affine slices of a persistence module as well as visualizations of the dimension function and bigraded betti numbers of the module we have presented a mathematical theory for our visualization paradigm centered around a novel data structure called an augmented rangement we have also introduced and analyzed an algorithm for computing augmented arrangements and described several strategies for improving the runtime of this algorithm in practice in addition we have presented timing data from preliminary experiments on the computation of augmented arrangements though we have yet to incorporate several key optimizations into our code the results demonstrate that our current implementation already scales well enough to be used to study bifitrations with millions of simplices with more implementation work we expect rivet to scale well enough to be used in many of the same settings where persistence is currently used for exploratory data analysis from here there are several natural directions to pursue beyond continuing to improve our implementation of rivet we would like to apply rivet to the exploratory analysis of scientific data develop statistical foundations for our data analysis methodology adapt the rivet paradigm in the setting of homology to develop a tool for hierarchical clustering and interactive visualization of bidendrograms extend the rivet methodology to 
other generalized persistence settings such cosheaves of vector spaces over or cosheaves of persistence modules over we hope that rivet will prove to be a useful addition to the existing arsenal of tda tools regardless of how it ultimately fares in that regard however we feel that the broader program of developing practical computational tools for multidimensional persistence is a promising direction for tda and we hope that this work can draw attention to the possibilities for this we believe that there is room for a diverse set of approaches acknowledgements this paper has benefited significantly from conversations with john carlsson about pointline duality and discrete optimization we also thank ulrich bauer magnus botnan and dmitriy morozov and francesco vaccarino for helpful discussions the bulk of the work presented in this paper was carried out while the authors were postdoctoral fellows at the institute for mathematics and its applications with funds provided by the national science foundation some of the work was completed while mike was visiting raul rabadan s lab at columbia university thanks to everyone at the ima and columbia for their support and hospitality a appendix details of the rivet interface expanding on section we now provide some more detail about rivet s graphical interface as discussed in section the module m is input to rivet as a free implicit representation rivet uses and to choose the bounds for the line selection window and persistence diagram window to explain let a and b denote the greatest lower bound and least upper bound respectively of im im in order to avoid discussion of uninteresting edge cases we will assume that and choice of bounds for the line selection window we take the lower left corner and upper right corner of the line selection window to be a and b by default the line selection window is drawn to scale a toggle switch rescales normalizes the window so that it is drawn as a square on the screen parameterization of lines for plotting persistence diagrams we next explain how given a line l rivet represents b m l as a persistence diagram let us first assume that the line selection window is unnormalized we treat the case where it is normalized at the end of this section further by translating the indices of m if necessary we may assume without loss of generality that a as noted in section to plot b m l as a persistence diagram we need to first choose a parameterization r l of we choose to be the unique isometry such that if l has finite positive slope then is the unique point in the intersection of l with the union of the portions of the coordinate axes if l is the line x a then a if l is the line y a then a for a more intrinsic choice of bounds for the line selection window one could instead take the lower left and upper right corners of the window to be the greatest lower bound and least upper bound respectively of the set a m a for some i however we feel that for a typical tda application the extrinsic bounds for the line selection window we have proposed provide a more intuitive choice of scale the choice of bounds for the persistence diagram window the bounds for the persistence diagram window are are chosen statically depending on m but not on the choice of rivet chooses the viewable region of the persistence diagram to be representation of points outside of the viewable region of the persistence diagram it may be that for some choices of l b m l contains intervals for or so that falls outside of the viewable region of the persistence 
diagram indeed the coordinates of some points in the persistence diagram can become huge but finite as the slope of the l approaches or thus our persistence diagrams include some information on the top of the diagram not found in typical persistence diagram visualizations to record the points in the persistence diagram which fall outside of the viewable region above the main square region of the persistence diagram are two narrow horizontal strips separated by a dashed horizontal line the upper strip is labeled inf while the lower is labeled inf in the higher strip we plot a point with for each interval with in the lower strip we plot a point with for each interval b m l with and just to the right of each of the two horizontal strips is a number separated from the strip by a vertical dashed line the upper number is the count of intervals b m l with the lower number is the count of intervals b m l with persistence diagrams under rescaling if we choose to normalize the line selection window then rivet also normalizes the persistence diagrams correspondingly to do this it computes respective affine normalizations of so that after normalization a and b rivet then chooses the parameterizations of lines l and computes bounds on the persistence diagram window exactly as described above in the unnormalized case but taking the input to be computing the lists levsetj u at cell as mentioned in section to efficiently compute the lists levsetj u at cell we use a sweep algorithm as we did in section to keep notation simple we assume that and are the identity maps on nx and ny respectively so that p we assume that to start and are both increasing with respect to colexicographical order if this is not the case then by lemma we can apply a sorting algorithm to modify so that the assumption does hold this sorting can be done in o m log m time our sweep algorithm maintains a linked list frontier of pointers to elements of these elements are stored in tptsmat both the and of the entries of frontier are always strictly decreasing the algorithm iterates through the rows of the grid nx ny from top to bottom the list frontier is initially empty and is updated at each row s with the help of tptsmat if row s is empty then no update is necessary at row otherwise let u be the rightmost entry of p in row s and let r be the column containing u if the last element of frontier is also in column r then this element is removed from frontier and replaced by u otherwise we append u to the end of frontier the algorithm then inserts each k mj with grj k r s for some r into the appropriate list levsetj since grj is assumed to be increasing with respect to colexicographical order we have immediate access to all such specifically k is added to the list levsetj u for u the leftmost element of frontier with r the lists levsetj u are maintained in lexicographical order it is easy to check that u grj k as desired the algorithm is stated in pseudocode in algorithm updating frontier at each row of tptsmat takes constant time so the total cost of updating frontier is o ny for each row s we must iterate over frontier once to identify the lists levsetj into which we insert elements of mj there are ny rows and the length of frontier is o so the total cost of these iterations over frontier is o ny inserting each k mj into the appropriate list levsetj takes constant time so the total cost of such insertions is o m thus the total number of operations for the algorithm including the cost of the initial sorting to put and in the right form is o 
m log m ny o m log m algorithm algorithm for building the lists levsetj u input grj represented as a list in colexicographical order tptsmat output tptsmat updated so that for each u p levsetj u grj u with each list sorted in lexicographical order initialize frontier as an empty linked list for s ny to do if tptsmat has an entry in row s then update frontier for row s let r be the column containing rightmost element of p in row s if last element of frontier is in column r then remove the last entry from frontier append the entry of p at row s column r to the end of frontier for each entry u in frontier do add elements with grades in row s to the lists levsetj if u is the last element of frontier then add all k mj such that grj k r s with r to levsetj u else let v be the element after u in frontier add all k mj such that grj k r s with r to levsetj u references atallah algorithms and theory of computation handbook crc press atiyah on the theorem with application to sheaves bulletin de la de france bauer kerber and reininghaus clear and compress computing persistent homology in chunks in topological methods in data analysis and visualization iii pages springer bauer kerber reininghaus and wagner phat persistent homology algorithms toolbox in mathematical software icms volume of lecture notes in computer science pages springer berlin heidelberg bauer and lesnick induced matchings of barcodes and the algebraic stability of persistence in proceedings of the annual symposium on computational geometry page acm biasotti cerri frosini and giorgi a new algorithm for computing the matching distance between size functions pattern recognition letters fabri giezeman hert hoffmann kettner pion and schirra and geometry kernel in cgal user and reference manual cgal editorial board edition http bubenik de silva and scott metrics for generalized persistence modules foundations of computational mathematics pages carlsson topological pattern recognition for point cloud data acta numerica may carlsson de silva and morozov zigzag persistent homology and functions in proceedings of the annual symposium on computational geometry pages acm carlsson and multiparameter hierarchical clustering methods classification as a tool for research pages carlsson singh and zomorodian computing multidimensional persistence algorithms and computation pages carlsson and zomorodian the theory of multidimensional persistence discrete and computational geometry cerri b di fabio ferri frosini and landi betti numbers in multidimensional persistent homology are stable functions mathematical methods in the applied sciences cerri b di fabio jablonski and medri comparing shapes through approximations of the matching distance computer vision chacholski scolamiero and vaccarino combinatorial resolutions of multigraded modules and multipersistent homology arxiv preprint chazal glisse guibas and oudot proximity of persistence modules and their diagrams in proceedings of the annual symposium on computational geometry pages acm chazal guibas and oudot stable signatures for shapes using persistence in proceedings of the symposium on geometry processing pages eurographics association chazal and geometric inference for probability measures foundations of computational mathematics pages chazal de silva glisse and oudot the structure and stability of persistence modules arxiv preprint chazal de silva and oudot persistence stability for geometric complexes geometriae dedicata chen and kerber persistent homology computation with a twist in proceedings 
european workshop on computational geometry volume christofides analysis of a new heuristic for the travelling salesman problem report graduate school of industrial administration cmu edelsbrunner and harer stability of persistence diagrams discrete and computational geometry edelsbrunner and morozov vines and vineyards by updating persistence in linear time in proceedings of the annual symposium on computational geometry pages acm cormen leiserson rivest and stein introduction to algorithms mit press decomposition of pointwise persistence modules journal of algebra and its applications curry sheaves cosheaves and applications dissertation university of pennsylvania de berg cheong van kreveld and overmars computational geometry algorithms and applications springer derksen and weyman quiver representations notices of the ams edelsbrunner and harer computational topology an introduction american mathematical society edelsbrunner letscher and zomorodian topological persistence and simplification discrete and computational geometry eisenbud commutative algebra with a view toward algebraic geometry volume springer science business media eisenbud the geometry of syzygies a second course in algebraic geometry and commutative algebra volume springer science business media hoogeveen analysis of christofides heuristic some paths are more difficult than cycles operations research letters kreuzer and robbiano computational commutative algebra volume springer la scala and stillman strategies for computing minimal free resolutions journal of symbolic computation landi the rank invariant stability via interleavings arxiv preprint lang algebra revised third edition graduate texts in mathematics lesnick the theory of the interleaving distance on multidimensional persistence modules foundations of computational mathematics lesnick and wright computing multigraded betti numbers of persistent homology modules in cubic time in preparation otter porter tillmann grindrod and harrington a roadmap for the computation of persistent homology arxiv preprint schrijver theory of linear and integer programming john wiley sons toth o rourke and goodman handbook of discrete and computational geometry crc press wasserman all of statistics a concise course in statistical inference springer verlag webb decomposition of graded modules proceedings of the american mathematical society zomorodian and carlsson computing persistent homology discrete and computational geometry notation index m augmented arrangement of m page a m line arrangement page b m barcode of a persistence module page fibered barcode of a persistence module page box s quadrant in with upper right corner lub s page d dp duality transforms page e cell in a m page eg f continuous extension functor page e template map page free implicit representation page flg floor function for im g page g grid function page g dual graph of a m page path through all of g page grw grade function of the set w page hi ith homology functor page k integers k page coarseness of a persistence module page space of affine lines in with slope page l space of affine lines in with finite slope page space of affine lines in with finite positive slope page lifte lift map at cell e page lub least upper bound page m multidimensional persistence module page mj dimension of page integers page n e ord order map at cell e page p set of all template points page p e set of template points at cell e page pointwise finite dimensional page a free implicit representation page pushl r n push map page 
poset category of rn page rank m rank invariant of m page s union of the supports of the and bigraded betti numbers of m page se totally ordered partition of s page permutation of mj page e t barcode template at cell e page m ith graded betti number of m page map from positive integers to template points at cell e page n z poset category of zn page
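The coarsening step described in the section on free implicit representations above, namely replacing every grade a by cl_G(a), the minimal grid point z in G with a <= z, and doing so for both grade functions of the FI-rep, is simple enough to sketch concretely. The snippet below is only an illustrative sketch, not code from RIVET: it assumes the grid G is given as one sorted array of values per coordinate, that grades are stored as rows of an array, and the names coarsen_grades and grid_axes are invented for the example.

```python
import numpy as np

def coarsen_grades(grades, grid_axes):
    """Map each grade a in R^n to the minimal grid point z in G with a <= z
    (componentwise), i.e. the map written cl_G above.

    grades    : (m, n) array, one grade per generator or relation of the FI-rep
    grid_axes : list of n sorted 1-D arrays, the grid values in each coordinate
    """
    out = np.empty_like(grades, dtype=float)
    for j, axis in enumerate(grid_axes):
        # index of the first grid value that is >= grades[:, j]
        idx = np.searchsorted(axis, grades[:, j], side="left")
        if np.any(idx == len(axis)):
            raise ValueError("grid does not bound the grades above in coordinate %d" % j)
        out[:, j] = axis[idx]
    return out

if __name__ == "__main__":
    grid = [np.linspace(0.0, 1.0, 5), np.linspace(0.0, 2.0, 9)]   # a 5 x 9 grid G
    gr = np.array([[0.12, 0.30], [0.70, 1.55]])                   # two bigrades
    print(coarsen_grades(gr, grid))                               # [[0.25 0.5 ] [0.75 1.75]]
```

Coarsening an FI-rep of M then amounts to applying this map to the grades of the generators and to the grades of the relations, which by the proposition stated above yields an FI-rep of the coarsened module.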
| 0 |
hasan ali erkan jmti vol issue the journal of macrotrends in technology and innovation macrojournals automatic knot adjustment using dolphin echolocation algorithm for curve approximation hasan ali erkan necmettin erbakan university school of applied sciences department of management information sciences selcuk university faculty of engineering department of computer engineering abstract in this paper a new approach to solve the cubic curve fitting problem is presented based on a algorithm called dolphin echolocation the method minimizes the proximity error value of the selected nodes that measured using the least squares method and the euclidean distance method of the new curve generated by the reverse engineering the results of the proposed method are compared with the genetic algorithm as a result this new method seems to be successful keywords curve approximation cubic data parameterization on dolphin echolocation algorithm knot adjustment introduction curve fitting is a classical problem for computer aided geometric design for example de facto for the cad cam and related graphic design industries and in most geometric modeling areas is that parametric curves should be transformed into rational similarly in vector font modeling problems fonts are often fitted with a in practical applications the distance between the target curve and the fitted curve must be less than a predetermined tolerance and the resulting curve is called an approach euclidean distance method is used to measure the value corresponding to the distance between two curves hasan ali erkan jmti vol issue curve fitting problem the problem of curve fitting is expressing the target curve with minimum tolerance through curves the target curve can be two or three dimensional the scope of this paper the parameterization of the target data points the convergence of the minimum error tolerance with the curves using the automatically placed minimum control point constitute a curve is expressed as equation where is the ith control point and is the main function of curves the main function of curve for given knot vector t with degree p is expressed as equation further information on the curves can be found a methods on data parameterization because of curves are parametric curves the target data points need to be parameterized in the curve fitting however calculation of optimum data parameterization is theoretically quite difficult different ways of data parameterization are used in applications three methods of uniform parameterization parameterization and centripetal parameterization are emerging in researches based on previous studies in this study centripetal parameterization method is used euclidean distance minimization the euclidean distance is used to calculate the error between the target curve and the bspline fitted curve the euclidean distance is calculated by an equation where is the ith data in original dataset is the ith data in the fitted curve the general hasan ali erkan jmti vol issue approach of this paper is to minimize this distance and express the curve with minimum control point at the same time thus the euclidean distance and the number of control points is treated together in the fitness function dolphin echolocation algorithm the dolphin echolocation algorithm presented by kaveh and ferhoudi is an optimization algorithm that is inspired by the hunting principles of bottlenose dolphins through sonar waves the dolphins explore the entire search area for a specific effect to hunt as they approach their prey 
they try to focus on the target by limiting the number of waves they send by limiting their search this algorithm implements search by reducing the distance to the target the search space must be sorted before beginning to search the alternatives of each variable to be optimized must be sorted in ascending or descending order if these alternatives have more than one characteristic they should be sorted according to the most important one in the use of this technique for example for the variable j the vector aj in length laj forms the columns of the alternatives matrix in addition a convergence curve is used to change the convergence factor during the optimization process the variation of this trend throughout the iterations is calculated by equation where pp is the probability of being is randomly selected probability for the first iteration loopi is the number of the current iteration power is the rank of the curve and loopsnumber is the total number of iterations algorithm requires a location matrix lnl nv in the variable number nv at the location count nl the main steps of dolphin echolocation de for discrete optimization are as follows create nl locations randomly for dolphin pp of current iteration is calculated using the equation fitness is calculated for every location calculate cumulative fitness according to the following dolphin rules a for i to the number of locations for j to the number of variables find the position of l i j in jth column of the alternatives matrix and name it as for k to re where af j is the cumulative fitness for the jth variable of the selected alternative the numbering of the alternatives is the same as the ordering of the alternative matrix re is end end end hasan ali erkan jmti vol issue the diameter of the influence of the neighbor affected by the cumulative fitness of alternative a it is recommended that this diameter should not be more than of the search space fitness i is the fitness of the ith location the fitness should be defined as the best answers will get higher value in other words the goal of optimization should be to maximize the fitness af must be calculated using a reflective property by adding alternatives near the edges if a k is not valid a k or a k laj in this case if the distance of the alternative to the edge is small the same alternatives appear in the mirror as if a mirror were placed on the edge b a small value is added to af sequences in order to distribute probabilities uniformly in search space here should be chosen according to the way of describing the fitness the best choice is lower than the lowest fitness value achieved c find the best location for this loop and call it as best location find the alternatives assigned to the best location variables and set their af to zero in another saying for j number of variables for i number of alternatives if i the best location j for the variable j j to nv calculate the probability by choosing alternative i i to alj endequation according to the end end assign pp probability to all alternatives of all selected variables for the best location and distribute the remaining probability to other alternatives according to the form below for j to number of variables for i to number of alternatives if i the best location j calculate the else next step locations according to the assigned probability to each alternate repeat steps maximum iteration number times end end end hasan ali erkan jmti vol issue automatic knot adjustment by dolphin echolocation algorithm in the problem of curve fitting 
the fitted curve is tried to converge to the target curve with minimum tolerance and with minimum control point in that case such nodes must be selected for the given n points so that the error tolerance and the number of control points of the nearest curve are minimum thus an array of n bits is expressed as selected nodes and as thus the alternatives for each variable are each location for a dolphin echolocation is called as solution these solutions can be illustrated as figure figure sample solution illustration for example it is possible to express points in this way with the control points to be calculated for the selected nodes the aim of dolphin echolocation is maximizing the fitness for equation can be used as fitness function the curve fitting process with the dolphin echolocation algorithm is as follows create random solutions for the startup population calculate the pp of current iteration calculate the fitness value for all possible solutions calculate the cumulative fitness of the variables in each possible solution find the best solution according to maximum fitness set the cumulative fitness of all solutions variables to which variables equal to the variables of the best solution calculate the probabilities of alternatives for each variable in all solutions set the probabilities of all alternatives equal to the variables of the best solution to probability of the current iteration hasan ali erkan jmti vol issue find the possible solutions to be used in the next iteration by the probabilities of the alternatives for each variable repeat steps for the number of iteration times experimental results a experimental curve the target is a curve of points the approximation results of the degree curves are as shown in table genetic algorithm iteration rmse euclidean number distance of control point dolphin echolocation algorithm fitness rmse euclidean number distance of control point fitness table experimental results for different number of iteration plotted experimental results is shown in figure figure a original curve b genetic algorithm c dolphin echolocation algorithm hasan ali erkan jmti vol issue epitrochoid curve the target is a curve of points the curve equation is as follows cos cos sin sin for the parameters a b and h the approximation results of the degree curve for the curve calculated at t are as in table genetic algorithm iteration rmse euclidea n distance number of control point dolphin echolocation algorithm fitnes s rmse euclidea n distance number of control point fitness table experimental results for different number of iteration hasan ali erkan jmti vol issue plotted experimental results is shown in figure figure a original curve b genetic algorithm c dolphin echolocation algorithm archimedean spiral the target is a curve of points the curve equation is as follows cos sin for the a the approximation results of the degree curve for the curve calculated at t are as shown in table genetic algorithm iteration rmse euclidean distance dolphin echolocation algorithm number fitness rmse euclidean number of distance of control control point point table experimental results for different number of iteration fitness hasan ali erkan jmti vol issue plotted experimental results is shown in figure figure a original curve b genetic algorithm c dolphin echolocation algorithm vivaldi curve the target curve is a curve of points the curve equation is as follows cos sin sin for a the approximation results of the degree curve for the curve calculated at t are as shown in table genetic 
algorithm iteration rmse euclidea n distance number of control point dolphin echolocation algorithm fitness rmse euclidea n distance table experimental results for different number of iteration plotted experimental results is shown in figure number of control point fitness hasan ali erkan jmti vol issue figure a original curve b genetic algorithm c dolphin echolocation algorithm conclusion and feature work this paper addresses the problem of curve fitting of noisy data points by using curves given a set of noisy data points the goal is to compute all parameters of the approximating polynomial curve that best fits the set of data points in the sense this is a very difficult overdetermined continuous multimodal and multivariate nonlinear optimization problem our proposed method solves it by applying the dolphin echolocation algorithm our experimental results show that the presented method performs very well by fitting the data points with a high degree of accuracy a comparison with the most popular previous approach genetic algorithm to this problem is also carried out it shows that our method outperforms previous approaches for the examples discussed in this paper future work includes the extension of this method to other families of curves such as nurbs and the parametric curves the extension of these results to the case of explicit surfaces is also part of our future work references park lee curve fitting based on adaptive curve refinement using dominant points design de boor de boor de boor de boor a practical guide to splines vol new york piegl tiller the nurbs book springer science business media park choosing nodes and knots in closed curve interpolation to point data computeraided design vassilev i fair interpolation and approximation of by energy minimization and points insertion design wang cheng barsky b a energy and interproximation design kaveh farhoudi a new optimization method dolphin echolocation advances in engineering software
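The inner loop of the fitness evaluation used in the paper above, that is, centripetal parameterization of the data points, a least-squares cubic B-spline fit over the currently selected knots, and an error score based on Euclidean distances, can be sketched in a few lines. This is an illustrative sketch rather than the authors' implementation: it delegates the least-squares step to SciPy's make_lsq_spline, the helper names centripetal_parameters and fit_error_for_knots are invented for the example, and the epitrochoid constants in the demo are placeholders rather than the parameter values used in the experiments. The interior knots must satisfy the usual Schoenberg-Whitney conditions for the fit to be well posed; spreading them over the parameter range, as in the demo, is one simple way to arrange this.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def centripetal_parameters(points):
    """Centripetal parameterization of a sequence of data points, shape (m, 2)."""
    d = np.sqrt(np.linalg.norm(np.diff(points, axis=0), axis=1))  # |Q_i - Q_{i-1}|^(1/2)
    u = np.concatenate(([0.0], np.cumsum(d)))
    return u / u[-1]                                              # normalized to [0, 1]

def fit_error_for_knots(points, interior_knots, degree=3):
    """Least-squares cubic B-spline fit of a 2-D point sequence for a given set of
    interior knots; returns the fitted samples and the root-mean-square Euclidean error.
    The knot vector is clamped by repeating the boundary knots degree+1 times."""
    u = centripetal_parameters(points)
    t = np.r_[[u[0]] * (degree + 1), np.sort(interior_knots), [u[-1]] * (degree + 1)]
    fitted = np.column_stack(
        [make_lsq_spline(u, points[:, j], t, k=degree)(u) for j in range(points.shape[1])]
    )
    err = np.linalg.norm(points - fitted, axis=1)                 # pointwise Euclidean error
    return fitted, float(np.sqrt(np.mean(err ** 2)))

if __name__ == "__main__":
    # a noisy epitrochoid-like test curve; the constants are only for demonstration
    s = np.linspace(0.0, 2 * np.pi, 200)
    pts = np.column_stack([(5 + 3) * np.cos(s) - 2 * np.cos(8 * s / 3),
                           (5 + 3) * np.sin(s) - 2 * np.sin(8 * s / 3)])
    pts += 0.05 * np.random.default_rng(0).normal(size=pts.shape)
    u = centripetal_parameters(pts)
    knots = u[np.linspace(10, len(u) - 11, 12, dtype=int)]        # 12 well-spread interior knots
    _, rmse = fit_error_for_knots(pts, knots)
    print("RMSE:", rmse)
```

In the paper's setting, the dolphin echolocation search would call fit_error_for_knots once per candidate knot bit string and combine the returned error with the number of selected knots into the fitness value to be maximized.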
| 9 |
on integer programming and the of the constraint matrix fedor fahad and saket nov department of informatics university of bergen norway technische wien vienna austria ramanujan the institute of mathematical sciences hbni chennai india saket abstract in the classic integer programming ip problem the objective is to decide whether for a given m n matrix a and an b bm there is a integer x such that ax b solving ip is an important step in numerous algorithms and it is important to obtain an understanding of the precise complexity of this problem as a function of natural parameters of the input two significant results in this line of research are the time algorithms for ip when the number of constraints is a constant papadimitriou acm and when the of the corresponding to the constraint matrix is a constant cunningham and geelen ipco in this paper we prove matching upper and lower bounds for ip when the of the corresponding is a constant these lower bounds provide evidence that the algorithm of cunningham and geelen are probably optimal we also obtain a separate lower bound providing evidence that the algorithm of papadimitriou is close to optimal introduction in the classic integer programming problem the input is an m n integer matrix a and an b bm the objective is to find a integer x if one exists such that ax b solving this problem denoted by ip is an important step in numerous algorithms and it is important to obtain an understanding of the precise complexity of this problem as a function of natural parameters of the input in papadimitriou showed that ip is solvable in time on instances for which the number of constraints m is a constant his proof consists of two steps the first step is combinatorial showing that if the entries of a and b are from and ip has a solution then there is also a solution which is in n md n the second algorithmic step shows that if ip has a solution with the maximum entry at most b then the problem is solvable in time o nb in particular when the matrix a happens to be his algorithm for ip runs in time o nd where d max bm a natural question therefore is whether the algorithm of papadimitriou can be improved significantly in general and in particular for the case when a is our first theorem provides a conditional lower bound indicating that any significant improvements are unlikely to be precise we prove the following theorem theorem unless the exponential time hypothesis eth fails ip with m n matrix a o m can not be solved in time n log m do m where d max bm even when the constraint matrix a is and each entry in any feasible solution is at most eth is the conjecture that can not be solved in time n on formulas due to theorem the simple dynamic programming algorithm for ip when the maximum entry in a solution as well as in the constraint matrix is bounded is already close to optimal in fact when the constraint matrix is our lower bound asymptotically almost matches the o nd running time of papadimitriou s algorithm hence we conclude that obtaining a significant improvement over the algorithm of papadimitriou for matrices is at least as hard as obtaining a time n algorithm for in fact observe that based on the setting of the parameters m d n our lower bound rules out several interesting running times for instance if m n and d o we immediately get a n lower bound continuing the quest for faster algorithms for ip cunningham and geelen suggested a new approach for solving ip which utilizes a branch decomposition of the matrix a they were motivated by the fact that the 
result of papadimitriou can be interpreted as a result for matrices of constant rank and is a parameter which is upper bounded by rank plus one robertson and seymour introduced the notion of branch decompositions and the corresponding notion of for graphs and more generally for matroids branch decompositions have immense algorithmic significance because numerous problems can be solved in polynomial time on graphs or matroids of constant branchwidth for a matrix a the of a denotes the matroid whose elements are the columns of a and whose independent sets are precisely the linearly independent sets of columns of a we postpone the formal definitions of branch decomposition and till the next section for ip with a matrix a cunningham and geelen showed that when the of the of a is constant ip is solvable in time theorem cunningham and geelen ip with m n matrix a given together with a branch decomposition of its column matroid of width k is solvable in time o d mn n where d max bm upper bounds o nd matrix a o d mn n theorem matrix a lower bounds m o log m do m no n algorithm under eth theorem even for matrix a no f pw d pw mn o algorithm under seth theorem even for matrix a no f d d pw mn o algorithm under seth theorem even for matrix a figure a summary of our lower bound results in comparison to the upper bound results for a here n and m are the number of variables and constraints respectively pw denotes the of the column matroid of a and d denotes a bound on the largest entry in b in fact they also show that the assumption of is unavoidable without any further assumptions such as a bounded domain for the variables in this setting because ip is when the constraint matrix a is allowed to have negative values in fact even when restricted to and the branchwidth of the column matroid of a is at most a close inspection of the instances they construct in their reduction shows that the column matroids of the resulting constraint matrices are in fact direct sums of circuits implying that even their is bounded by the parameter is closely related to the notion of of a linear code which is a parameter commonly used in coding theory for a matrix a computing the of the column matroid of a is equivalent to computing the of the linear code generated by a roughly speaking the of the column matroid of a is at most k if there is a permutation of the columns of a such that in the matrix obtained from a by applying this for every i n the dimension of the subspace of rm obtained by taking the intersection of the subspace of rm spanned by the first i columns and the subspace of rm spanned by the remaining columns is at most k the value of the parameter is always at least the value of and at most as a result any upper bounds on the complexity of ip in terms of will translate to upper bounds in terms of the larger parameter rank number of constraints and any lower bounds on the complexity of ip in terms of will translate to lower bounds in terms of the smaller parameter motivated by this fact we study the question of designing an optimal time algorithm for ip when the column matroid of a has constant we first obtain the following upper bound theorem ip with matrix a given together with a path decomposition of its column matroid of width k is solvable in time o d mn n where d max bm as mentioned earlier the of ip on constant instances also holds for constant instances and hence the assumption of is unavoidable here as well furthermore while the proof of this theorem is not hard and is in fact almost identical to the 
proof of theorem this upper bound becomes really interesting when placed in context and compared to the tight lower bounds we provide in our next two theorems which form the main technical part of the paper in these theorems we provide tight conditional subject to strong eth lower bounds for ip matching the running time of the algorithm of theorem see figure strong eth seth is the conjecture that can not be solved in time n mo on formulas for any constant both eth and seth were first introduced in the work of impagliazzo and paturi which built upon earlier work of impagliazzo paturi and zane we obtain the following lower bounds for ip the first result shows that we can not relax the d k factor in theorem even if we allow in the running time an arbitrary function depending on the second result shows a similar lower bound in terms of d instead of put together the results imply that no matter how much one is allowed to compromise on either the or the bound on d it is unlikely that the algorithm of theorem can be improved theorem unless seth fails ip with even a m n constraint matrix a can not be solved in time f k k mn o for any function f and where d max bm and k is the of the column matroid of theorem unless seth fails ip with even a m n constraint matrix a can not be solved in time f d k mn o for any function f and where d max bm and k is the of the column matroid of a although the proofs of both lower bounds have a similar structure we believe that there are sufficiently many differences in the proofs to warrant stating them separately finally since the of a matroid never exceeds its our lower bounds hold when the parameter of interest is chosen to be the of the column matroid of a as well that is under seth there is no f bw d bw mn o or f d d bw mn o algorithm for ip with constraint matrices where bw denotes the branchwidth of the column matroid of a almost matching the upper bound of o d mn n from theorem related work currently eth is a commonly accepted conjecture and it serves as the basic tool used for establishing asymptotically optimal lower bounds for various parameterized and exact exponential algorithms while there is no such consensus on seth the hypothesis has already played a crucial role in the recent spectacular and rapid development in the analyses of polynomial parameterized and exact exponential algorithms in particular seth was used to establish conditional tight lower bounds for a number of fundamental computational problems including the diameter of sparse graphs dynamic connectivity problems the frechet distance computation string editing distance dynamic programming on graphs of bounded and steiner tree and subset sum finding the longest common subsequence and the dynamic time warping distance and matching regular expressions for further overview of applications of eth and seth we refer to surveys as well as chapter our work extends this line of research by adding the fundamental ip problem to the classes of and problems organization of the paper the remaining part of the paper is organized as follows the main technical part of the paper is devoted to proving theorem and theorem therefore once we have set up the requisite preliminary definitions we begin with section where we prove theorem the first part of this section contains an overview of both reductions and could be helpful for the reader in navigating the paper we then prove theorem in section and theorem in section completing the results for constant and that of theorem in section preliminaries we assume 
that the reader is familiar with basic definitions from linear algebra matroid theory and graph theory notations we use and r to denote the set of non negative integers and real numbers respectively for any positive integer n we use n and zn to denotes the sets n and n respectively for convenience we say that for any two vectors b rm and i m we use b i to denote the ith coordinate of b and we write b if i b i for all i m we often use to denote the whose length will be clear from the context for a matrix a i m and j n a i j denote the submatrix of a obtained by the restriction of a to the rows indexed by i and columns indexed by j for an m n matrix a and p v we can write av ai v i where ai is the ith column of a here we say that v i is a multiplier of column ai of matroids the notion of the of graphs and implicitly of matroids was introduced by robertson and seymour in let m u f be a matroid with universe set u and family f of independent sets over u we use rm to denote the rank function of m that is for any s u rm s maxs s for x u the connectivity function of m is defined as x rm x rm u x rm u for matrix a we use m a to denote the of a in this case the connectivity function a has the following interpretation for e n and x e we define s a x span span x where is the set of columns of a restricted to x and span is the subspace of rm spanned by the columns it is easy to see that the dimension of s a x is equal to a x a tree is cubic if its internal vertices all have degree a branch decomposition of matroid m with universe set u is a cubic tree t and mapping which maps elements of u to leaves of t let e be an edge of t then the forest t e consists of two connected components and thus every edge e of t corresponds to the partitioning of u into two sets xe and u xe such that xe are the leaves of and u xe are the leaves of the width of edge e is xe and the width of branch decomposition t is the maximum edge width where maximum is taken over all edges of t finally the of m is the minimum width taken over all possible branch decompositions of m the of a matroid is defined as follows let us remind that a caterpillar is a tree which is obtained from a path by attaching to some vertices of the paths some leaves then the of a matroid is the minimum width of a branch decomposition t where t is a cubic caterpillar let us note that every mapping of elements of a matroid to the leaves of a cubic caterpillar correspond to their ordering jeong kim and oum gave a constructive tractable algorithm to construct a path decomposition of width at most k for a column matroid of a given matrix eth and seth for q let be the infimum of the set of constants c for which there exists an algorithm solving with n variables and m clauses in time mo the hypothesis eth and strong hypothesis seth are then formally defined as follows eth conjectures that and seth that proof of theorem in this section we prove that unless seth fails ip with matrix a can not be solved in time f k d k mn o for any function f and where d max b b m and k is the of the column matroid of a in subsection we give an overview of our reductions and in subsection we give a detailed proof of theorem overview of our reductions we prove theorems and by giving reductions from where the parameters in the reduced instances are required to obey certain strict conditions for example the reduction we give to prove theorem must output an instance of ip where the of the column matroid m a of the constraint matrix a is a constant similarly in the reduction used to prove 
theorem we need to construct an instance of ip where the largest entry in the target vector is upper bounded by a constant these stringent requirements on the parameters make the reductions quite challenging however reductions under seth can take super polynomial can even take n time for some where n is the number of variables in the instance of this freedom to avail exponential time in reductions is used crucially in the proofs of theorems and now we give an overview of the reduction used to prove theorem let be an instance of with n variables and m clauses given and a fixed constant c we construct an instance a c x b c x of ip satisfying certain properties since for every c we have a different a c and b c this can be viewed as a family of instances of ip in particular our main technical lemma is the following lemma let be an instance of with n variables and m clauses let c be a n fixed integer then in time o c we can construct an instance a c x b c x of ip with the following properties a is satisfiable if and only if a c x b c x is feasible n b the matrix a c is and has dimension o m o c the of the column matroid of a c is at most c n the largest entry in b c is at most c e once we have lemma the proof of theorem follows from the following observation if we have an algorithm a solving ip in time f k d k mn a for some a then we can use this algorithm to refute seth in particular given an instance of we choose an appropriate c depending only on and a construct an instance a c x b c x of ip and run a on it our careful choice of c will imply a faster algorithm for refuting seth more formally we choose c to be an integer such that ac then the total c running time to test whether is satisfiable is the time require to construct a c x b c x plus the time required by a to solve the constructed instance of ip that is the time required to test whether is satisfiable is n n o c f c c c mo ac c n mo n mo where is a constant depending on the choice of it is important to note that the utility of the reduction described in lemma is extremely sensitive to the value of the numerical parameters involved in particular even when the blows up slightly say up to or when the n largest entry in b c blows up slightly say up to c for some then the calculation above will not give us the desired refutation of seth thus the challenging part of the reduction described in lemma is making it work under these strict restrictions on the relevant parameters as stated in lemma in our reduction we need to obtain a constraint matrix with small an important first step towards this is understanding what a matrix of small looks like we first give an intuitive description of the structure of such matrices let a be a m n matrix of small and let m a be the column matroid of a for any i n bm a a matrix b for which of its column matroid is b a pictorial representation of the matrix a c figure comparison of a c with a low matrix let i denote the set of columns or vectors in a whose index is at most i that is the first i columns and let i n denote the set of columns with index strictly greater than i the of m a is at most max dimhspan i span i n i i hence in order to obtain a bound on the pathwidth it is sufficient to bound dimhspan i span i n i for every i n consider for example the matrix b given in figure the of m b is clearly at most in our reduced instance the constructed constraint matrix a c will be an appropriate extension of b that is a c will have the same form as b but with each replaced by a submatrix of order o c for some 
see fig for a pictorial representation of a c the construction used in lemma takes as input an instance of with n variables and a fixed integer c and outputs an instance a c x b c x of ip that satisfies all four properties of the lemma let x denote the set of variables in the input cnfformula cm for the purposes of the present discussion we assume that c divides we partition the variable set x into c blocks each of size nc let xi i c denote the set of assignments of variables corresponding to xi set nc and n l clearly the size of xi is upper bounded by c we denote the assignments in xi by xi xi xi to construct the matrix a c we view each of these assignments as a different assignment for each clause in other words we have separate sets of column vectors in the constraint matrix a c corresponding to different pairs cr xi where cr is a clause and xi is a block in the partition of x all the values set in these columns are based on the assignments of xi and the clause cr that is based on the clause cr and assignments in xi in total we have columns corresponding to cr xi the set of columns corresponding to cr that is the set of columns corresponding to cr xi for all i together forms a bigger block of columns denoted by cr r ac c in a c the columns of a c appears consecutively in a c in other words the clauses r of partition the set of columns of a c into ac c r m where columns in each of cr r the parts a c occur consecutively thus we can identify each column in the matrix ac c with a pair cr xi i c and j l for a pair cr xi we refer r to xi as the assignment part of the pair the values in ac c are covered by specific consecutive rows these rows are divided into parts according to their roles in the reduction the first rows comprise the predecessor matching part the middle row is called the evaluation part and the rows after the evaluation part comprise the successor matching part the entries in the row corresponding to the evaluation part get values or depending on whether the assignment part of the pair cr xi satisfies cr or not the matrix a c and the target vector b c are constructed in such a way that all the feasible solutions to a c x b c are from the set n where is the set of columns in a c hence setting a coordinate of x to corresponds to choosing a particular column from a c in our reduction we use a selector gadget to enforce that any feasible solution will choose exactly one column from the set of columns corresponding to a pair cr xi that is it corresponds to choosing a column identified by cr xi thus it results in choosing an assignment xi to the variables in the set xi note that this implies that we will choose an assignment in xi for each clause cr that way we might choose m assignments from xi corresponding to m different clauses however for the backward direction of the proof it is important that we choose the same assignment from xi for each clause this will ensure that we have selected an assignment to the variables in xi towards this we assign values in each of the columns in a way that all the assignments chosen by a feasible solution for a particular block across different clauses are the same then choosing two columns one from the set of columns corresponding to cr xi and the other from the columns of xi in a feasible solution would imply that both of these columns correspond to one particular assignment of xi in this case we say that these two columns are consistent we enforce these consistencies in a sequential manner that is for any block xi we make sure that the two 
columns chosen among the columns corresponding to cr xi and xi are consistent for any r m as opposed to checking the consistency for every pair cr xi and xi for r thus in some sense these consistencies propagate such a propagation of consistencies is realized through rows corresponding to the predecessor matching part and the successor matching part for that r the rows corresponding to predecessor matching part of ac c will be the same as the successor c r matching part of a c and the rows corresponding to the successor matching part of ac c will c be the same as the predecessor matching part of a c both the predecessor matching part as well as the successor matching part contain designated rows for each block xi of variables to handle consistencies between cr xi and xi recall that xi denotes the set of assignments of xi and furthermore assignments in xi are denoted by xi xi thus we can identify the assignment xi by a integer j l these values are assigned in a manner at designated places in the predecessor matching part as well as in the successor matching part enabling us to argue consistency the largest entry in b c is upper bounded by l furthermore the idea of making consistency in a sequential manner also allows us to bound the of column matroid of a c by c the proof technique for theorem is similar to that for theorem this is achieved by modifying the matrix a c constructed in the reduction described for lemma the largest n entry in a c is c so each of these values can be represented by a binary string of length at most nc we remove each row say row indexed by with entries greater than and replace it with nc rows where for any j if the value a c j w then a c j where is the k th bit in the binary representation of w this modification reduces the largest entry in a c to and increases the from constant to approximately finally we set all the entries in b c to be this concludes the overview of our reductions and we now proceed to a detailed exposition detailed proof of theorem towards the proof of theorem we first present the proof of our main technical lemma lemma which we restate here for the sake of completeness lemma let be an instance of with n variables and m clauses let c be a n fixed integer then in time o c we can construct an instance a c x b c x of ip with the following properties a is satisfiable if and only if a c x b c x is feasible n b the matrix a c is and has dimension o m o c the of the column matroid of a c is at most c n the largest entry in b c is at most c e let cm be an instance of with variable set x xn and let c be a fixed constant given in the statement of lemma we construct the instance a c x b c x of ip as follows construction let c cm without loss of generality we assume that n is divisible by c otherwise we add at most c dummy variables to x such that is divisible by we divide x into c blocks that is xi x x x for each i zc let c c c nc and l for each block xi there are exactly assignments we denote these assignments by xi xi xi now we create m matrices one for each clause c these matrices will be submatrices of the constraint matrix a c for each clause cr c we create a c matrix br for each block xi and all possible assignments to the variables of xi we allocate columns in br for each assignment xi there are two columns in br corresponding to it then the first columns of br correspond to assignments of the second columns correspond to assignments of etc matrices br for r we first define br for indices r matrices and bm have a slightly different structure 
compared to the other matrices and so we define them separately the values of br are defined as follows each assignment xi is identified by the number j each xi defines entries in br four in the column numbered i and four in the column numbered i the rows of br are partitioned into parts the part composed of the first rows is called the predecessor matching part the part composed of the row indexed by is called the evaluation part and the part composed of the last rows is called the successor matching part see fig the predecessor matching part is defined by br i br i j i zc for i zc the evaluation part is defined by br i and br i if xi satisfies cr otherwise predecessor matching part evaluation part successor matching part evaluation part successor matching part a parts of br for r b parts of figure parts of bm the successor matching part for br is defined for i zc as br i br i l j br i br i all other entries in br which are not defined above are set to zero that is for all i zc and a such that i br a br a br i a and br i a br i a before describing the construction of and bm we provide a brief description of certain desirable properties possessed by br we have designated set of columns per pair cr xi which are indexed by i i and ensures that at most one of the columns from this set is chosen by a feasible solution this will be forced by putting in the corresponding coordinate of vector b c in the construction of a c we will only add zeros to the entries in the row of a c corresponding to the th row of br but outside the submatrix br of a c this guarantees that exactly one of them is chosen by a feasible solution the purpose of is to ensure consistency with the column selected from xi and purpose of is to ensures consistency with the column selected from xi we construct the matrix a c in such a way that the row of br indexed by and the row of indexed by are equal in a c suppose that this row is indexed by h in a c then and ensure that if we choose consistent columns from the columns of xi and cr xi then the sum of the values in coordinate h of the selected columns will be equal to l so we will set b c h l in the target vector b c for each assignment xi we have two designated columns in br they are indexed by i and i the reason for creating two columns instead of just one per xi is the following the coordinate j of the target vector b c corresponding to the row which contains the row of br indexed by will be set to for any satisfying assignment of of more than one partial assignments of assignments of restricted to different blocks of x may satisfy the clause cr so among the pairs of columns corresponding to these satisfying partial assignments a feasible solution will choose the first column from the pair for all but one for a partial assignment assignment of a block of x which satisfies cr the feasible solution will choose the second column corresponding to it equations and make sure that the entries corresponding to the coordinate j from the set of chosen columns by a feasible solution will add up to hence at least one selected column would correspond to an assignment of a block of x satisfying clause cr figure let n c and cr the assignments are the entries defined according to and are colored red and blue respectively if r m then the matrix on the left represents br and if r then br can be obtained by deleting the yellow colored portion from the left matrix the matrix on the right represents bm sometimes it is helpful to focus on the positions containing elements this can be found in the 
two matrices at the bottom of the figure matrices and bm the matrix is created as above but with the exception that we remove the predecessor matching part see fig the matrix bm is created as above with the exception that we remove the rows numbered an illustration of bm is given in fig formally the entries of and bm defined by xi are given below for we define its entries as i i l j i i i and i if xi satisfies otherwise for bm bm i bm i j bm i i bm i i bm i bm i and if xi satisfies cm otherwise all other entries in and bm which are not defined above are set to zero that is for all i zc and a such that i a a bm i a bm i bm i a a bm i i a matrix a c and vector b c now we explain how to construct the constraint matrix a c and vector b c which would serve as instance of ip in what follows we simplify the notation by using a instead of a c and b instead of b c the matrices bm are disjoint submatrices of a and they cover all non zero entries of a informally the submatrices bm form a chain such that the rows corresponding to the successor matching part of br will be the same as the rows in the predecessor matching part of a pictorial representation of a can be found in fig formally a is a m c m c matrix let and im m c m for every r m let ir r and for r m let jr now for each r m we put matrix a ir jr br all other entries of a not belonging to any of the submatrices a ir jr are set to zero this completes the construction of a now we define the m c vector b let p r r m j zc in other words p contains the indices of some rows for each r m alternating rows in the successor matching part and thus the alternating rows in the predecessor matching part of a ir jr belong to p refer to fig again then the entries of b are defined as b q l if q p otherwise this completes the construction of the matrix a and vector b which together make up the required instance of ip correctness now we prove that is satisfiable if and only if there is a integer vector x such that ax b we start with some notations we partition the set of columns of a into m parts jm we have already defined these sets with one part per clause for each r m jr is the set of columns associated with cr we further divide jr into c equal parts one per variable set xi these parts are pr i r c i r c i i zc in other words pr i is the set of columns associated with the tuple cr xi and i the set pr i is divided into parts of size two each one per tuple cr xi where cr c j zl and i zc the two columns associated with tuple cr xi are indexed by and r i in a we also put m c to be the number of columns in a lemma formula is satisfiable if and only if there exists such that b proof suppose is satisfiable and let be its satisfying assignment there exists zl such that is the union of each clause c c c is satisfied by at least one of the assignments for each c we fix an arbitrary i zc such that the assignment xi satisfies clause let be a function which fixes these assignments for each clause that is c zc such that the assignment c c satisfies the clause c for every c now we define and prove that b let r i r m i zc cr i r i r m i zc cr i and q then the vector is defined by setting if q q otherwise x q the entry in is the multiplier of column q and so we say that entry q corresponds to column q for each tuple c xi one entry of among the two entries corresponding to the columns associated with c xi is set to if c i then the second column corresponding to c xi is set to otherwise the first column corresponding to c xi is set to all other entries are set to zero also note 
that for every r m and i zc we have that i and let qr i pr i q here notice that among the columns of pr i exactly one column which is indexed by qr i belongs to q the column qr i corresponds to one of the two columns corresponding to cr xi we need the following auxiliary claims claim for every r m and i zc such that i we have a p a r z x z l ji b p a r z x z c p a r h z x z for any h i i i proof first consider the case when r let pr i q i g where g then x a z x z z x i i i g l ji by and a follows to prove b we have x i a z x z x z i i g by to show c observe that for h x a h z x z h z x i i h i g by now consider the case when r let pr i q r c i g where g for this case we have a r z x z x br z r c x i i br i g l ji by a r z x z br i g x i by finally for h x a r h z x z br h z r c x i i br h i g by claim for every r m and i zc such that i we have a p a r z x z ji b p a r z x z c p a r h z x z where h i i i proof the proof of the claim is similar to the proof of claim let i q rc i g where g then a r z x z i g x i ji x a r z x z by x z rc i i i g by or for any h x a r h z x z h z rc x i i h i g by or now we show that b recall that p r r m j zc let m c be the number of rows in a to prove b we need to show that x l if q p a q j x j otherwise we consider the following exhaustive cases case q p let q r for some r m and i zc notice that q ir q and q for every m r r this implies that for every j jr a q j then n x a q j x j x x a q j x j m x a q j x j x x by a q j x j a q j x j x x x c a q j x j i x a q j x j x a q j x j by claims c and c i l ji ji by claims a and a l case q p we partition p into and consider based on these parts let r r m j zc r r m m c m case a q let q r for some r m and i zc notice that q ir q and q for any m r r this implies that for any j jr a q j hence n x a q j x j x x a q j x j m a q j x j x x x a q j x j x by a q j x j x c a q j x j x x i a q j x j x a q j x j by claims c and c i by claims b and b case b q let q r for some r m by construction of a we have that for all j jr a q j this implies that n x a q j x j x a q j x j we consider two cases based on r or r when r a r jr br jr recall the function that is if cr g then xg satisfies cr by equation n x a q j x j x br j x j x x br j x j x br cr j x cr j x x br j x j cr x br cr j x cr j by and definition of q br cr cr by using the fact that cr satisfies cr when r we have that q and a hence by equation n x a q j x j x j x j x j x j x x j x j x x j x j j x j x by and definition of q by and definition of q by using the fact that satisfies case c q let q m i where i c by the definition of we have that for every j m c i m c i a q j that is for j pm a q j let pm q m c i g where g hence n x a q j x j x a q j x j x bm i j m bm i i g by lemma the of the column matroid of a is at most c proof recall that m c be the number of columns in a and be the number of rows in a to prove that the of a is at most c it is sufficient to show that for all j dimhspan j span j i c the idea for proving equation is based on the following observation for v j and v j let i q there exists v v and v v such that v q v q then the dimension of span v span v is at most thus to prove for each j we construct the corresponding set i and show that its cardinality is at most c we proceed with the details let be the column vectors of a let j let vj and we need to show that dimhspan span i c let i q there exists v and v such that v q v q we know that is partitioned into parts m zc fix r m and i zc such that j pr i let j r c i g where j zl and g let max r r r c and r c then 
ir and jr recall the definition of sets ir and jr from the construction of matrix a the way we constructed matrix a for every q and for every vector v we have v q also for every q and for any v we have that v q this implies that i ir now we partition ir into parts r and these parts are defined as follows if r r zc otherwise if r r zc otherwise r r if r m r zc otherwise r c if r m r zc otherwise and claim for each r m q ir and j jr vj q proof the entries in a are covered by the disjoint a ir jr br r m hence the claim follows claim c i proof when r and the claim trivially follows let r and let q be such that q then q for some i notice that q for every s s by claim for every v r v q now consider the vector vj r notice that j j and j jr let j j a r c i g a for some a j from the construction of a vj q br i g a by thus for every q r q r and v v q this implies that q r c i claim proof when r and the claim holds so now assume that r consider any q let zc be such that q r notice that q for any r and hence s by claim for any v r v q now consider any j jr let j r c a for some a c from the construction of a vj q br a by or this completes the proof of the claim claim i proof when r m and the claim trivially holds so now let r m and consider any q q r let i such that q r notice that s for any hence by claim for any v r v q now consider any vector s vj r notice that j j and j jr let j r c a for some a and i from the construction of a vj q br a by hence we have shown that for any q r q r and v v q this implies that q r i claim proof consider the case when r consider q we claim that if q i then q m i suppose q i and q m i let q m where i then by the construction of a for any j j vj q bm j m bm a where c i and a thus by vj q bm a this contradicts the assumption that q i suppose that q i and q m i let q m where i c then by the construction of a for any j j vj q bm j m bm a where i a thus by vj q bm a this contradicts the assumption that i i hence in this case we have proved that so now assume that r m consider any q let zc such that q r s notice that q for any r and hence by claim for any v r s v q also notice that q for any r and hence by claim for any v v q so the only potential j for which vj q are from jr let j then by the definition of a vj q j by or or or hence we conclude that the only possible j for which vj q are from jr now the proof is similar to case when r we claim that if q i then q r suppose q i and q r let q r where i then by the construction of a for any j j vj q br j r br a where c i and a thus by vj q br a this contradicts the assumption that q i suppose q i and q let q where i then by the construction of a for any j j vj q br j r br a where i a thus by vj q br a this contradicts the assumption that i i hence in this case as well this completes the proof of the claim therefore we have ir because i ir by and claims and c i i this completes the proof of the lemma proof of theorem we prove the theorem by assuming a fast algorithm for ip and use it to give a fast algorithm for refuting seth let be an instance of with variables and clauses we choose a sufficiently large constant c such that ac holds c we use the reduction mentioned in lemma and construct an instance a c x b c x of ip which has a solution if and only if is satisfiable the reduction takes time o c let d the constraint matrix a c has dimension c c and the largest entry in vector b c does not exceed the of m a c is at most c assuming that any instance of ip with constraint matrix of k is solvable in time f k d k mn a where d is the 
maximum value in an entry of b and a are constants we have that a c x b c x is solvable in time o f c c c o here the constant f c is subsumed by the term whether is satisfiable or not is o c where c a c ac c o o ac c o hence the total running time for testing ac c o o this completes the proof of theorem proof sketch of theorem in this section we prove that ip with matrix a can not be solved in time f d d k mn o for any function f and unless seth fails where d max b b m and k is the of the column matroid of a in section we gave a reduction from to ip however in this reduction the values n in the constraint matrix a c and target vector b c can be as large as c e where n is the number of variables in the and c is a constant let m be the number of clauses in in this section we briefly explain how to get rid of these large values at the cost of making large but still bounded from a we construct a matrix a a c as described in section the only rows in a which contain values strictly greater than values other than or are the rows indexed from the set p r r m i zc that is the values greater than are in the alternate rows in colored portion except the last c rows in a in figure recall that d nc e and the largest value in a is any number less than or equal to can be represented by a binary string of length nc now we construct a new matrix from a by replacing each row of a whose index is in the set p with rows and for any value a i j i p we write its binary representation in the column corresponding to j and the newly added rows of that is for any p we replace the row with rows where for any j if the value a j w then j where is the k th bit in the binary representation of w let be the number of rows in now the target vector is defined as i for all i this completes the construction of the reduced ip instance x the correctness proof of this reduction is using arguments similar to those used for the correctness of lemma lemma the of the column matroid of is at most c nc proof we sketch the proof which is similar to the proof of lemma we define and for any r m like ir and jr in section in fact the rows in are the rows obtained from ir in the process explained above to construct from a we need to show that dimhspan j span j i c nc for all j where is the number of columns in the proof proceeds by bounding the number of indices i such that for any q i there exist vectors v j and u j with v q u q by arguments similar to the ones used in the proof of lemma we can show that for any j the corresponding set i of indices is a subset of for some r m recall the partition of ir into r and in lemma we partition into parts s and here s r and notice that p where p is the set of rows which covers all values strictly greater than the set and are obtained from and respectively by the process mentioned above to construct from a that is each row in ri i is replaced by rows in si this allows us to bound the following terms for some i zc c i c i i and by using the fact that i and the above system of inequalities we can show that n dimhspan j span j i c d e c this completes the proof of the lemma now the proof of the theorem follows from lemma and the correctness of the reduction it is similar to the arguments in the proof of theorem proof of theorem in this section we sketch how the proof of cunningham and geelen of theorem can be adapted to prove theorem recall that a path decomposition of width k can be obtained in f k no time for some function f by making use of the algorithm by jeong et al however we do not know if such a 
path decomposition can be constructed in time o d no so the assumption that a path decomposition is given is essential roughly speaking the only difference in the proof is that when parameterized by the branchwidth the most operation is the merge operation when we have to construct a new set of partial solutions with at most d k vectors from two already computed sets of sizes d k each thus to construct a new set of vectors one has to go through all possible pairs of vectors from both sets which takes time roughly d for parameterization the new partial solution set is constructed from two sets but this time one set contains at most d k vectors while the second contains at most d vectors this allows us to construct the new set in time roughly d recall that for x n we define s a x span span x where e n the key lemma in the proof of theorem is the following lemma let a d and x n such that a x then the number of vectors in s a x d m is at most d to prove theorem without loss of generality we assume that the columns of a are ordered in such a way that for every j n dimhspan i span i n i k let a b that is is obtained by appending the b to the end of a then for each i n dimhspan i span i n i now we use dynamic programming to check whether the following conditions are satisfied for x n let b x be the set of all vectors zm such that b there exists z such that z and s x then ip has a solution if and only if b b n initially the algorithm computes for all i n b i and by lemma we have that i d in fact b i a v v is the ith column vector of and a d then for each j n the algorithm computes b j in increasing order of j and outputs yes if and only if b b n that is b j is computed from the already computed sets b j and b j notice that b j if and only if a there exist b j and b j such that b b and c s j so the algorithm enumerates vectors satisfying condition a and each such vector is included in b j if satisfy conditions b and c since by and lemma j d k and j d the number of vectors satisfying condition a is d k and hence the exponential factor of the required running time follows this provides the bound on the claimed exponential dependence in the running time of the algorithm the bound on the polynomial component of the running time follows from exactly the same arguments as in proof of theorem o m in this section we prove that unless eth fails ip can not be solved in time n log m do m where d max b b m even when the constraint matrix is and all entries in any feasible solution is at most our proof is by a reduction from a sat to ip from a formula on n variables and m clauses we create an equivalent ip instance x x where is a integer matrix of order n m n and the largest entry in is our reduction work in polynomial time let be the input of sat let x xn be the set of variables in and c cm be the set of clauses in now we create number of vectors of length n two per variable and two per clause for each xi x we make two vectors vxi and vxi they are defined as follows for j m we set if xi cj otherwise if xi cj otherwise vxi j and vxi j for j m i we put vxi j vxi j and for all j m n m i we define vxi j vxi j for every clause cj c we define two vectors vcj and as follows for i m we define vcj i and if i j otherwise i for i m n j we set vcj i i for every m n j i m n we put vcj i i matrix is constructed using these vectors as columns the columns of are ordered as vxn vxn vcm vcm vector is defined as follows i if i m if i m n m if i n m n lemma formula is satisfiable if and only if x x is feasible proof suppose that 
the formula is satisfiable and let be a satisfying assignment of we define a and prove that for any i zn x if otherwise and x if otherwise this completes the definition of first entries of the other entries the last entries of is defined as follows for every i zm we define if the number of literals set to in by is if the number of literals set to in by is x otherwise and if the number of literals set to in by is if the number of literals set to in by is otherwise we now proceed to prove that is indeed a feasible solution claim proof towards this we need to show that for every i n we consider the following exhaustive cases i j j i case i m the fact that each clause has literals along with the definition of implies that the number of entries set to in i j j is also the indices j for which i j is set to one correspond to a literal in ci by the definition of and the fact that is a satisfying p assignment we have that i j x j let r i j x j hence x i j j x i j j x i j j x vci j j vci i i r r by and construction of by and i case i m n m by the definition of and vectors vxj i vxj i j we have that i j j if and only if j i i by the definition of exactly p one from i i is set to this implies that i j x j by the construction of we have that i j for every j therefore i j j i case i n m n let i m n from the construction we have that i j is set to zero for all j and for any j i j is set to if and only if j this implies that x i j j i by and this completes the proof of the claim for the converse direction of the statement of the lemma suppose that there exists such that now we need to show that is satisfiable we first argue that for any i n exactly one of i i is set to with the other set to this follows from the fact that i m i i m i i ii for all j i i m i j and iii m i now we define an assignment and prove that is a satisfying assignment for for i n we define xi if i if i we claim that satisfies all the clauses consider a clause cj where j m since m n j j m n j and is a feasible solution we have that m n j j j m n j this implies that j let cj x y z where x y z xi xi i n notice that from the construction there are distinct columns ix iy iz such that ith w column of is same as the vector vw where w x y z from the construction of the only entries in row numbered j are j ix j iy j iz and j j we have proved that j and notice that j this implies that at least one among ix iy iz is and the corresponding entry in row j is this implies that satisfies cj this completes the proof of the lemma now we show that for every feasible solution x the largest entry in x is at most notice that i for all i m and i for all i n m this implies that for any feasible solution x x i for all i n m from the construction of we have that for i j i m there exists an n m such that j this along with the fact that implies that in every feasible solution x x i for all i n m hence the largest entry in any feasible solution is at most the following lemma completes the proof of the theorem lemma if there is an algorithm for ip runs in time n then eth fails m o log m do m where d max b b m proof by the sparsification lemma we know that sat on variables and clauses where c is a constant can not be solved in time n time suppose there is an algorithm alg for m o ip running in time n log m do m then for a formula with variables and cn clauses we create an instance x x of ip as discussed in this section in polynomial time where is a matrix of dimension and the largest entry in is the rank of is at most n then by lemma we can run alg to test whether 
is satisfiable or not this takes time o log cn n n hence refuting eth conclusion we conclude with several open questions first of all while our lower bounds for ip with constraint matrix are tight for parameterization there is a d k to d gap between lower and upper bounds for parameterization closing this gap is the first natural question the proof of of theorem consists of two parts the first part bounds the number of potential partial solutions corresponding to any edge of the branch decomposition tree by d k the second part is the dynamic programming over the branch decomposition using the fact that the number of potential partial solutions is bounded the bottleneck in s algorithm is the following subproblem we are given two vector sets a and b of partial solutions each set of size at most d k we need to construct a new vector set c of partial solutions where the set c will have size at most d k and each vector from c is the sum of a vector from a and a vector from b thus to construct the new set of vectors one has to go through all possible pairs of vectors from both sets a and b which takes time roughly d a tempting approach towards improving the running time of this particular step could be the use of fast subset convolution or matrix multiplication tricks which work very well for join operations in dynamic programming algorithms over tree and branch decompositions of graphs see also chapter unfortunately we have reason to suspect that these tricks may not help for matrices solving the above subproblem in time d no for any would imply that is solvable in time which is believed to be unlikely the problem asks whether a given set of n integers contains three elements that sum to zero indeed consider an equivalent version of named which is defined as follows given sets of integers a b and c each of cardinality n and the objective is to check whether there exist a a b b and c c such that a b then is solvable in time if and only if is as well see theorem in however the problem is equivalent to the most time consuming step in the algorithm of theorem where the integers in the input of can be thought of as vectors while this observation does not rule out the existence of an algorithm solving ip with constraint matrices of k in time d no it indicates that any interesting improvement in the running time would require a completely different approach our final open question is to obtain a refined lower bound for ip with bounded rank recall that the constraint matrix of the algorithm of papadimitriou can contain negative values and improving the running time of his algorithm or showing that its running time is tight up to seth is still a very interesting question references abboud backurs and williams tight hardness results for lcs and other sequence similarity measures in proceedings of the annual symposium on foundations of computer science focs ieee computer society pp abboud and williams popular conjectures imply strong lower bounds for dynamic problems in proceedings of the annual symposium on foundations of computer science focs ieee computer society pp backurs and indyk edit distance can not be computed in strongly subquadratic time unless seth is false in proceedings of the annual acm symposium on theory of computing stoc acm pp which regular expression patterns are hard to match in proceedings of the annual symposium on foundations of computer science focs ieee computer society to appear bringmann why walking the dog takes time frechet distance has no strongly subquadratic algorithms 
unless seth fails in proceedings of the annual symposium on foundations of computer science focs ieee computer society pp bringmann and quadratic conditional lower bounds for string problems and dynamic time warping in proceedings of the annual symposium on foundations of computer science focs ieee computer society pp cook and seymour tour merging via informs journal on computing pp cunningham and geelen on integer programming and the of the constraint matrix in proceedings of the international conference on integer programming and combinatorial optimization ipco vol of lecture notes in comput springer pp curticapean and marx tight conditional lower bounds for counting perfect matchings on graphs of bounded treewidth cliquewidth and genus in proceedings of the annual symposium on discrete algorithms soda siam pp cygan dell lokshtanov marx nederlof okamoto paturi saurabh and on problems as hard as in proceedings of the ieee conference on computational complexity ccc ieee pp cygan fomin kowalik lokshtanov marx pilipczuk pilipczuk and saurabh parameterized algorithms springer cygan kratsch and nederlof fast hamiltonicity checking via bases of perfect matchings in proceedings of the annual acm symposium on theory of computing stoc acm pp cygan nederlof pilipczuk pilipczuk van rooij and wojtaszczyk solving connectivity problems parameterized by treewidth in single exponential time in proceedings of the annual symposium on foundations of computer science focs ieee pp dorn dynamic programming and fast matrix multiplication in proceedings of the annual european symposium on algorithms esa vol of lecture notes in comput springer berlin pp fomin golovach lokshtanov and saurabh almost optimal lower bounds for problems parameterized by siam computing pp fomin and thilikos dominating sets in planar graphs and exponential siam computing pp gajentaan and overmars on a class of o problems in computational geometry comput pp parse trees and monadic logic for matroids combinatorial theory ser b pp the tutte polynomial for matroids of bounded combinatorics probability computing pp horn and kschischang on the intractability of permuting a block code to minimize trellis complexity ieee trans information theory pp impagliazzo and paturi on the complexity of j computer and system sciences pp impagliazzo paturi and zane which problems have strongly exponential complexity j computer and system sciences pp jeong kim and oum constructive algorithm for of matroids in proceedings of the annual symposium on discrete algorithms soda siam pp lokshtanov marx and saurabh known algorithms on graphs on bounded treewidth are probably optimal in proceedings of the annual symposium on discrete algorithms soda siam pp lokshtanov marx and saurabh lower bounds based on the exponential time hypothesis bulletin of the eatcs pp papadimitriou on the complexity of integer programming acm pp and williams on the possibility of faster sat algorithms in proceedings of the annual symposium on discrete algorithms soda siam pp robertson and seymour graph minors obstructions to combinatorial theory ser b pp roditty and williams fast approximation algorithms for the diameter and radius of sparse graphs in proceedings of the annual acm symposium on theory of computing stoc acm pp van rooij bodlaender and rossmanith dynamic programming on tree decompositions using generalised fast subset convolution in proceedings of the annual european symposium on algorithms esa vol of lecture notes in comput springer pp williams hardness of easy problems 
basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis (invited talk), in Proceedings of the International Symposium on Parameterized and Exact Computation (IPEC), vol. of Leibniz International Proceedings in Informatics (LIPIcs), Dagstuhl, Germany, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, pp.
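To make the 3-SUM comparison in the conclusion above concrete, the following minimal Python sketch spells out the 3-partite variant of 3-SUM referred to there: given sets A, B and C of integers, decide whether there exist a in A, b in B and c in C with a + b + c = 0. The function name and the hash-set strategy are illustrative choices of mine and are not taken from the paper; the point is only that the natural algorithm enumerates all pairs from two of the three sets, which is exactly the shape of the pairwise merge of partial-solution vectors discussed in the conclusion.

def three_partite_3sum(A, B, C):
    """Return (a, b, c) with a in A, b in B, c in C and a + b + c == 0, or None."""
    # Quadratic-time baseline: enumerate pairs from A x B and look up the
    # needed third element in a hash set built from C.
    c_set = set(C)
    for a in A:
        for b in B:
            if -(a + b) in c_set:
                return a, b, -(a + b)
    return None

# Example: the triple (3, -5, 2) sums to zero.
print(three_partite_3sum([1, 3, 7], [-5, 4], [2, 10]))

Doing substantially better than this pairwise enumeration, in the vector setting of the merge step, would according to the discussion above yield a subquadratically faster 3-SUM algorithm, which is considered unlikely.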
Dimension rigidity of lattices in semisimple Lie groups

Cyril Lacoste

Abstract. We prove that if $\Gamma$ is a lattice in the group of isometries of a symmetric space of non-compact type without Euclidean factors, then the virtual cohomological dimension of $\Gamma$ equals its proper geometric dimension.

Introduction. Let $\Gamma$ be a virtually torsion-free discrete group. There exist several notions of dimension for $\Gamma$. One of them is the virtual cohomological dimension $\mathrm{vcd}(\Gamma)$, which is the cohomological dimension of any torsion-free finite-index subgroup of $\Gamma$; due to a result by Serre it does not depend on the choice of such a subgroup. Another one is the proper geometric dimension. A $\Gamma$-CW-complex $X$ is said to be a model for $\underline{E}\Gamma$ if the stabilizers of the action of $\Gamma$ on $X$ are finite and, for every finite subgroup $H$ of $\Gamma$, the fixed point space $X^H$ is contractible. Note that two models for $\underline{E}\Gamma$ are $\Gamma$-homotopy equivalent to each other. The proper geometric dimension $\underline{\mathrm{gd}}(\Gamma)$ of $\Gamma$ is the smallest possible dimension of a model for $\underline{E}\Gamma$.

These two notions are related: in fact we always have the inequality $\mathrm{vcd}(\Gamma) \le \underline{\mathrm{gd}}(\Gamma)$, but this inequality may be strict; see for instance the construction of Leary and Nucinkis, or other examples in the literature. However, there are also many examples of virtually torsion-free groups with $\mathrm{vcd}(\Gamma) = \underline{\mathrm{gd}}(\Gamma)$: for instance, Degrijse and Martinez-Perez prove that this is the case for a large class of groups containing all finitely generated Coxeter groups, and further examples of equality are known. In this paper we will prove that equality holds for groups acting by isometries, discretely and with finite covolume, on symmetric spaces of non-compact type without Euclidean factors.

Theorem. Let $S$ be a symmetric space of non-compact type without Euclidean factors. Then $\underline{\mathrm{gd}}(\Gamma) = \mathrm{vcd}(\Gamma)$ for every lattice $\Gamma \subset \mathrm{Isom}(S)$.

Recall that a symmetric space of non-compact type without Euclidean factors is of the form $S = G/K$, where $G$ is a semisimple Lie group, which can be assumed to be connected and center-free, and $K \subset G$ is a maximal compact subgroup. Then $\mathrm{Isom}(S) \cong \mathrm{Aut}(G) \cong \mathrm{Aut}(\mathfrak{g})$, where $\mathfrak{g}$ is the Lie algebra of $G$; note that this group is semisimple, linear and algebraic, but may be not connected. The theorem above was proved for lattices in the classical simple Lie groups by earlier authors, and we will heavily rely on their results and techniques.

We discuss now some applications of the theorem. First note that the symmetric space $S$ is a model for $\underline{E}\Gamma$. The theorem then yields:

Corollary. If $S$ is a symmetric space of non-compact type and without Euclidean factors, and if $\Gamma \subset \mathrm{Isom}(S)$ is a lattice, then $S$ is $\Gamma$-homotopy equivalent to a proper cocompact $\Gamma$-CW-complex of dimension $\mathrm{vcd}(\Gamma)$.

We stress again that in the setting of the theorem we are considering the full group of isometries of $S$. This has the consequence that we are able to deduce that there is equality between the virtual cohomological dimension and the proper geometric dimension not only for lattices in $\mathrm{Isom}(S)$, but also for groups abstractly commensurable to them. Here two groups $\Gamma_1$ and $\Gamma_2$ are said to be abstractly commensurable if for $i = 1, 2$ there exists a subgroup $F_i$ of finite index in $\Gamma_i$ such that $F_1$ is isomorphic to $F_2$. We then obtain from the theorem:

Corollary. If a group $\Gamma$ is abstractly commensurable to a lattice in the group of isometries of a symmetric space of non-compact type without Euclidean factors, then $\underline{\mathrm{gd}}(\Gamma) = \mathrm{vcd}(\Gamma)$.

Remark. Note that in general the equality between the proper geometric dimension and the virtual cohomological dimension behaves badly under commensuration. For instance, the fact that there exist virtually torsion-free groups $\Gamma$ with $\mathrm{vcd}(\Gamma) < \underline{\mathrm{gd}}(\Gamma)$ already shows this: if $\Gamma'$ is a torsion-free subgroup of $\Gamma$ of finite index, then $\mathrm{vcd}(\Gamma') = \mathrm{cd}(\Gamma') = \mathrm{gd}(\Gamma') = \underline{\mathrm{gd}}(\Gamma')$, whereas $\Gamma'$ is commensurable to $\Gamma$ and $\mathrm{vcd}(\Gamma) < \underline{\mathrm{gd}}(\Gamma)$. In fact we have concrete examples of groups for which the corollary above fails among familiar
classes of groups for instance in the authors prove that if is a finitely generated coxeter group then vcd gd and in the authors construct finite extensions of certain coxeter groups such that vcd gd returning to the applications of theorem we obtain from corollary that lattices in isom s are dimension rigid in the sense of we say that a virtually group is dimension rigid if e vcd e for every group e which contains as a finite one has gd index normal subgroup dimension rigidity of lattices in semisimple lie groups dimension rigidity has a strong impact on the behaviour of the proper geometric dimension under group extensions and we obtain from corollary and cor that corollary if is a lattice in the group of isometries of a symmetric space of type without euclidean factors and is a short exact sequence then gd g gd gd q we sketch now the strategy of the proof of theorem to begin with note that while symmetric spaces both riemannian and nonriemannian will play a key role in our considerations most of the time we will be working in the ambient lie group in fact it will be convenient to reformulate theorem as follows main theorem let g be a semisimple lie algebra then gd vcd for every lattice aut g the key ingredient in the proof of the main theorem and hence of theorem is a result of and meintrup which basically asserts that the proper geometric dimension gd equals the bredon cohomological dimension cd see theorem for a precise statement in the light of this theorem it suffices to prove that the two cohomological notions of dimension vcd and cd coincide in the authors noted that to prove the equality vcd cd it suffices to ensure that the fixed point sets s of finite order elements are of small dimension see section for details still in the authors checked that this was the case for lattices contained in the classical simple lie groups we will use a similar strategy to prove the main theorem for lattices in groups of automorphisms of all simple lie algebras recall that any finite dimensional simple lie algebra over r is either isomorphic to one of the classical types or to one of the exceptional ones the classical lie algebras are the complex ones sl n c so n c sp c and their real forms sl n r sl n h so p q su p q sp p q sp r similarly the exceptional lie algebras are the five complex ones and their twelve real forms cyril lacoste here the number in brackets is the difference between the dimension of the adjoint group and twice the dimension of a maximal compact subgroup which equals for a complex lie group we illustrate now the basic steps of the proof of the main theorem in the example of g sl n c suppose that aut g is a lattice and consider the symmetric space s psl n c to prove that gd vcd it will suffice to establish that dim s dim s rkr psl n c n for every of finite order and non central see lemma first note that is the composition of an inner automorphism and an outer automorphism since every element in out g has order it follows that is an inner automorphism if it is non trivial then we use the results of section in in general if ad a is a inner automorphism of sl n c we get from that holds we are reduced to the case where is trivial meaning that is of order then the automorphism aut g is induced by an automorphism of the adjoint group gad psl n c which is still denoted and is also an involution the fixed point set s is the riemannian symmetric space associated to where is the set of fixed points of now notice that the quotient gad is a nonriemannian symmetric space the symmetric 
spaces associated to simple groups have been classified by berger in in the case of gad psl n c we obtain from this classification that the lie algebra of is either compact or isomorphic to so n c s gl k c gl n k c sp n c sl n r su p n p or sl h where sp n c and sl h only appear if n is even armed with this information we check for every involution which leads to the main theorem for g sl n c the argument we just sketched will be applied in section to all complex simple lie algebras and in section to the real ones since the arguments are similar and since the complex case is somewhat easier we advise the reader to skip section in a first reading having dealt with the simple lie algebras we treat in section the semisimple case the method for the simple algebras will not work at first sight but the proof will eventually by simpler the idea is to restrict to irreducible lattices those who can not be decomposed into a product then we will show that the rational rank of an irreducible lattice is lower than the real rank of any factor of the adjoint group meaning that we get a much improved bound than in this fact will lead rapidly to the main theorem finally note that in the proof of the main theorem we do not construct a concrete model for of dimension vcd we just prove its dimension rigidity of lattices in semisimple lie groups existence it is however worth mentioning that in a few cases such models are known for instance if sl n z the symmetric space s sl n r admits a deformation retract of dimension vcd called the retract see and it will be interesting to do the same for groups such as sp z acknowledgements the author thanks dave witte morris for his help dieter degrijse for interesting discussions and juan souto for his useful advice and instructive discussions preliminaries in this section we recall some basic facts and definitions about algebraic groups lie groups and lie algebras symmetric spaces lattices and arithmetic groups virtual cohomological dimension and bredon cohomology algebraic groups and lie groups an algebraic group is a subgroup g of sl n c determined by a collection of polynomials it is defined over a subfield k of c if those polynomials can be chosen to have coefficients in the galois criterion see prop says that g is defined over k if and only if g is stable under the galois group gal if g is an algebraic group and r c is a ring we note gr the set of elements of g with entries in if g is an algebraic group defined over r it is that the groups gc and gr are lie groups with finitely many connected components in fact g is zariski connected if and only if gc is a connected lie group whereas gr may not be connected in this case a algebraic group or lie group is said simple if every connected normal subgroup is trivial and semisimple if every connected normal abelian subgroup is trivial note that if g is a semisimple algebraic group defined over k r or c then gk is a semisimple lie group any connected semisimple complex linear lie group is algebraic and any connected semisimple real linear lie group is the identity component of the group of real points of an algebraic group defined over recall that two lie groups and are isogenous if they are locally isomorphic meaning that there exist finite normal subgroups and of the identity components of and such that is isomorphic to a semisimple linear lie group is isogenous to a product of simple lie groups the center z g of a semisimple algebraic group g is finite it is also the case for semisimple linear lie groups but not for 
semisimple lie groups in general and the quotient g is again a semisimple algebraic group see thm moreover if g is defined over k then so is g cyril lacoste a connected algebraic group t sl n c is a torus if it is diagonalizable meaning there exists a sl n c such that for every b t is diagonal a torus is in particular abelian and isomorphic as an algebraic group to a product if t is defined over k it is said to be if the conjugating element a can be chosen in sl n k a torus in an algebraic group g is a subgroup that is a torus it is said to be maximal if it is not strictly contained in any other torus an important fact is that any two maximal tori in g are conjugate in g and that if g is defined over k then any two maximal tori are conjugate by an element in gk the of g or of gk denoted by rkk g or rkk gk is the dimension of any maximal torus in g and the rank of g is just the we refer to and for basic facts about algebraic groups and lie groups lie algebras and their automorphisms recall that the lie algebra g of a lie group g is the set of vector fields a subalgebra of g is a subspace closed under lie bracket an ideal is a subalgebra i such that g i i the lie algebra g is simple if it is not abelian and has no ideals and semisimple if it has no abelian ideals a lie group is simple resp semisimple if and only if its lie algebra is simple resp semisimple a semisimple lie algebra is isomorphic to a finite direct sum of simple ones by lie s third theorem if g is a finite dimensional real lie algebra which will be always the case here there exists a connected lie group unique up to covering whose lie algebra is this means that their exists a unique simply connected lie group g associated to g and every other connected lie group whose lie algebra is g is a quotient of g by a subgroup contained in the center in particular gad g is the unique connected centerfree lie group associated to the group gad is called the adjoint group of the adjoint group is a linear algebraic group whereas its universal cover may be not linear see for instance the universal cover of psl r it follows that the classification of simple lie algebras is in correspondance with that of simple lie groups a lie algebra is said compact if the adjoint group is an automorphism of a lie algebra g is a bijective linear endomorphism which preserves the lie bracket the group of automorphisms of g is denoted aut g it is linear and algebraic but not connected in general if g is a lie group associated to g then the differential of a lie group automorphism of g is an automorphism of conversely if g is either simply connected or connected and centerfree any automorphism of g comes from an automorphism of in this case we will often identify these two automorphisms and denote them by the same letter an inner automorphism is the derivative of the conjugation in g by an element a g we denote it ad a the group dimension rigidity of lattices in semisimple lie groups inn g of inner automorphisms is a normal subgroup of aut g it is also the identity component of aut g and is isomorphic to the adjoint group gad if g is semisimple the subgroup inn g is of finite index in aut g and the quotient aut g g is the finite group of outer automorphisms out g moreover if g is simple out g can be seen as a subgroup of aut g and aut g is the semidirect product of out g and inn g that is aut g inn g o out g see note that even if g is complex we let aut g be the group of real automorphisms if g is complex and simple then aut g contains the complex automorphism 
group autc g as a subgroup of index the quotient being generated by complex conjugation see prop recall that if g is a complex lie algebra a real form of g is a real lie algebra whose complexification is any real form is the group of fixed points by a conjugation of g meaning an involutive real automorphism which is antilinear over we refer to and for other facts about lie algebras and their automorphisms simple lie groups simple lie algebras and their outer automorphisms as mentioned in the previous section the classification of simple lie groups up to isogeny and of simple lie algebras are in correspondance both are due to by cartan we will now see that of simple lie groups every linear simple lie group is isogenous to either a classical group or to one of the finitely many exceptional groups we denote the transpose of a matrix a by at and its conjugate transpose by and we consider the particular matrices jn idn qp q idq the classical simple lie groups are the groups in the following list sl n c a gl n c det a n so n c a sl n c a id n n sp c a sl c jn a jn n sl n r a gl n r det a n sl n h a gl n h det a n so p q a sl p q r qp q a qp q p q p q su p q a sl p q c qp q a qp q p q p q sp p q a gl p q h qp q a qp q p q p q sp r a sl r jn a jn n t so a su n n qn n jn a qn n jn n similarly we give the list of the compact ones son a sl n r a id sun a sl n c a id on a gl n r a id un a gl n c a id cyril lacoste spn a gl n h a id the compact exceptional lie groups are and the ones are the complex ones which are the complexifications of the previous compact groups and their real forms we refer to for definitions and complete descriptions of the simply connected versions of the exceptional lie groups note that in this paper we will always consider the centerless versions with the same notations as usual the simple lie algebra associated to a simple lie group will be denoted by gothic caracters for instance sl n r is the lie algebra of sl n r note that the adjoint group of sl n r is psl n r the classification of simple lie algebras runs in parallel to that of simple lie groups the following table summarizes the structure of the outer automorphisms groups of simple lie algebras see section we denote by sn the symmetric group and the dihedral group dimension rigidity of lattices in semisimple lie groups g sl n c n so c so c n all others complex lie algebras sl r sl n r n odd sl n r n even su p q p q su p p p sl n h so p q p q odd so p q p and q odd p q so p q p and q even p q so p p p odd so p p p even so sp r sp p p j j j j all others real lie algebras table outer automorphisms groups of out g simple lie algebras note that we have the isomorphisms so c sl c sp c so c sp c so c sl c and the corresponding ones between their real forms and that so c is not simple but isomorphic to sl c sl c symmetric spaces let g be a lie group a symmetric space is a space of the form where is an involutive automorphism of g and its fixed points set it is said irreducible if it can not be decomposed as a product from an algebraic point of view the irreducibility of implies that the lie algebra h of h is a maximal subalgebra of the lie algebra g of equivalently the irreducibility of implies that the identity component of h is a maximal connected lie subgroup of the identity component of another point of view on symmetric spaces is based on lie algebras if is a symmetric space and g is the lie algebra of g the involutive automorphism induces an involutive automorphism of g whose fixed point set is the lie algebra h of h we 
can thus always associate to a symmetric space a linear space called a local symmetric space the lie subalgebra h is called the isotropy algebra of cyril lacoste more generally we say that h is an isotropy algebra if it is the fixed point set of an involutive automorphism conversely if g is a lie algebra g a simply connected or connected and centerless lie group whose lie algebra is g and h g an isotropy algebra the local symmetric space lifts to a symmetric space because aut g aut g so the classification of symmetric spaces are in correspondance with those of local symmetric spaces and has been done by berger in note that if g is simple and complex and if aut g is an involution then is either and in this case h is also complex or is that means it is a conjugation and h is a real form of note also that is g is real and is an involution then can be extended to a involution of the complexification gc of g c and the isotropy algebra gc is the complexification of h that c is gc hc we give now the list of the isotropy algebras of the local symmetric spaces associated to sl n c and its real forms gc sl n c g sl n r n h gl c table isotropy algebras of sl n c and its real forms table is organized as follows in the first line we give the complex isotropy algebras hc of sl n c fixed by a complex involution each column consists of real forms of the complex algebra in the first entry the local symmetric spaces associated to sl n c are then those of the form sl n c for instance sl n c n c or of the form sl n c for instance sl n c n r the ones associated to a real form g are of the form for instance sl n r k l with k l the following tables summarize the classification for other simple lie algebras they are organized in a similar way gc so n c hc so k c so l c g so p q h so kp kq so lp lq p q h so p h so g h so p q h s u kp kq u lp lq h sp p q h so h gl p c h sp r n k l h so h s gl h gl h h sp g su p q g sl hc s gl k c gl l c hc sp n c h s gl k r h sp n r gl l r n h gl c hc so n c h so k l n h so n c n c u hc gl h gl p r k l h u h gl n h dimension rigidity of lattices in semisimple lie groups table isotropy algebras of so n c and its real forms gc sp c hc sp c sp c g sp p q h sp kp kq sp lp lq p q h sp p c g sp r h sp r sp r h sp n c table isotropy algebras and its real forms hc gl n c h u p q h gl p h h u k l h gl n r of sp c gc hc sl c sl c g h sl r sl r table isotropy algebras of and its real form hc sp c sp c hc so c h sp r sp r h so h sp sp g h sp sp h so table isotropy algebras of and its real forms gc g hc sp c hc sl c sl c hc so c so c hc h sp h sl r sl r h so so h h sp r h sl h su g h sp h su su h so so h h sp r h su sl r h so so g h sp h su su h so so h h su sl r h so so g h sp h sl h sp h so so h c table isotropy algebras of and its real forms gc g hc sl c hc so c sl c hc so c h su h so sl r h so h sl r h sp h so h sl h g h su h so su h so h su h so sl r h so g h sl h h so sl r h so h su h so sp h so c table isotropy algebras of and its real forms gc g cyril lacoste hc sl c hc so c h sl r h so h su h g h sl r h so h su h table isotropy algebras of and its real forms gc g note that not all the symmetric spaces given in these tables are irreducible for instance sl n c gl k c gl l c is not the results of are more precise and we refer to them for the list of the irreducible symmetric spaces and the ones we refer to and for facts about symmetric spaces and local symmetric spaces riemannian symmetric spaces we stress that the symmetric spaces associated to the isotropy subalgebras h of g in tables 
to are we discuss now a few features about riemannian symmetric spaces which are of the form with compact the symmetric spaces which are riemannian spaces of nonpositive curvature are called symmetric spaces of type they are all of the form s where g isom s and k is a maximal compact subgroup if it has no euclidean factors then g is semisimple linear and centerless recall that if g is a lie group all maximal compact subgroups are conjugated if g is semisimple or more generally reductive and if k is a maximal compact subgroup then the symmetric space is called the riemannian symmetric space associated to it follows that we can identify the smooth manifold s with the set of all maximal compact subgroups of remark that isogenous lie groups have isometric associated riemannian symmetric spaces in particular if g is a semisimple linear lie group the associated riemannian symmetric space is the same as that associated to its identity component or to g we can thus assume that g is connected and centerless in this case as the image of a maximal compact subgroup by an automorphism of g is again a maximal compact subgroup we have an action of aut g aut g by isometries on s finally we have that the group of isometries of a symmetric space s of type without euclidean factors is aut g where g is the lie algebra of an important part of our work will be to compute dimensions of fixed point sets s x s x x dimension rigidity of lattices in semisimple lie groups where isom s aut g assuming that g is connected and centerless the fixed point set s is the riemannian symmetric space associated to recall that we denote by the same letter the automorphism of g and that of g if a g we will denote by s a the fixed point set of the inner automorphism ad a in the case where a is of finite order it can be conjugated in the maximal compact subgroup then the fixed point set of g by ad a is the centralizer of a in g that is gad a cg a b g ab ba a maximal compact subgroup of cg a is ck a the centralizer of a in so we can identify s a with cg a a and we can write dim s a dim cg a dim ck a we refer to for other facts about riemannian symmetric spaces lattices and arithmetic groups a discrete subgroup of a lie group g is said to be a lattice if the quotient has finite haar measure it is said uniform or cocompact if this quotient is compact and otherwise the borel density theorem see cor says that if g is the group of real points of a connected semisimple algebraic group defined over r and if a lattice g projects densely into the maximal compact factor of g then is in for instance if g is a connected semisimple algebraic group defined over q then the group gz is a lattice in gr and thus zariskidense the group gz is the paradigm of an arithmetic group which will be defined now let g be a semisimple lie group with identity component and g a lattice the lattice is said to be arithmetic if there are a connected algebraic group g defined over q compact normal subgroups k k and a lie group isomorphism such that is commensurable to gz where and gz are the images of and gz in and recall that two subgroups h and h of g are commensurable if their intersection is of finite index in both subgroups we say that the lattice g is irreducible if is dense in g for every closed normal subgroup n of the margulis arithmeticity theorem see ch ix and thm tells us that in a way most irreducible lattices are arithmetic theorem margulis arithmeticity theorem let g be the group of real points of a semisimple algebraic group defined over r and g an 
irreducible lattice if g is not isogenous to so n or su n for any compact group k then is arithmetic cyril lacoste observe that so n k and su n k have real rank so the arithmeticity theorem applies to every irreducible lattice in a group of real rank at least the definition of arithmeticity can be simplified in some cases if g is connected centerfree and has no compact factors the compact subgroup k in the definition must be trivial moreover if is nonuniform and irreducible then the compact subgroup k is not needed either see cor under the same assumptions we can also assume that the algebraic group g is centerfree and in this case the commensurator of gz in g is gq and gq under the same hypotheses on g if is non irreducible it is almost a product of irreducible lattices in fact see prop there is a direct decomposition g gr such that is commensurable to where gi is an irreducible lattice in gi the rational rank of the arithmetic group denoted by rkq is by definition the of the algebraic group g in the definition of arithmeticity and we have rkq rkr note that rkq if and only if is cocompact see thm we refer to and for other facts about lattices and arithmetic groups virtual cohomological dimension and proper geometric dimension recall that the virtual cohomological dimension of a virtually discrete subgroup is the cohomological dimension of any subgroup of finite index of that is vcd cd max n h n a for a certain a if x is a cocompact model for we can compute the virtual cohomological dimension of as vcd max n n hnc x where hnc x denotes the compactly supported cohomology of x see cor the proper geometric dimension gd is the smallest possible dimension of a model for if g is the group of real points of a semisimple algebraic group k g a maximal compact subgroup s the associated riemannian symmetric space and g a uniform lattice of g s is a model for and has dimension vcd so we have vcd gd that is why we will be mostly interested in lattices we will also rule out the case when the adjoint group gad of g has real rank in fact we have the following see cor proposition let g be an algebraic group defined over r and gr a lattice if rkr g then vcd gd dimension rigidity of lattices in semisimple lie groups for the case of higher real rank recall that by margulis arithmeticity theorem is arithmetic as long as it is irreducible if is s is not compact however borel and serre constructed in a bordification of s called the bordification x which is a cocompact model for see th using their bordification borel and serre proved in the following theorem which links the virtual cohomological dimension and the rational rank of such an arithmetic lattice theorem let g be a semisimple lie group k g a maximal compact subgroup and g an arithmetic lattice then vcd dim rkq in particular vcd dim rkr before moving on note that we will often in this article consider groups up to isogeny and the philosophy behind it is that normal finite subgroups do not change the dimensions indeed we have lemma let be an infinite discrete group and n a finite normal subgroup then gd gd vcd vcd proof for the first equality if x is a model for it follows easily that x n is a model for e of dimension lower than those of x so gd gd reciprocally a model for e is also a model for and we have the other inequality for the second equality it suffices to recall that vcd cd where is a subgroup of finite index and in this case is a subgroup of finite index of isomorphic to we refer to and for facts about the virtual cohomological dimension and 
geometric dimension bredon cohomology the bredon cohomological dimension cd is an algebraic counterpart to the proper geometric dimension gd we recall how cd is defined and a few of its properties let be a discrete group and f be the family of subgroups of the orbit category of is the category whose objects are left coset spaces with h f and where the morphisms are all maps between them an of is a contravariant functor m of cyril lacoste to the category of the category of of denoted by has as objects all the of and all the natural transformations between them as morphisms one can show that is an abelian category and that we can construct projective resolutions on it the bredon cohomology of with coefficients in m denoted by m is by definition the cohomology associated to the cochain of complexes homof m where z is a projective resolution of the functor z which maps all objects to z and all morphisms to the identity map if x is a model for the augmented cellular chain complexes x h z of the fixed points sets x h for h f form such a projective even free resolution x z thus we have hnf m hn homof x m the bredon cohomological dimension of for proper actions denoted by cd is defined as cd sup n n hnf m as we said above this invariant can be viewed as an algebraic counterpart to gd indeed and meintrup proved in the following theorem theorem if is a discrete group with cd then gd cd we explain now the strategy to prove that vcd cd beginning with some material and definitions recall that if g is the group of real points of a semisimple algebraic group and g a lattice then the x is a model for note also that if h is a finite subgroup of dim x h dim s h if we denote the family of finite subgroups of containing properly the kernel of xsing the subspace of the x consisting of points whose stabilizer is stricly larger than the kernel of and s x h h and h with x h x h then we have xsing xh x h also every fixed point set x h s is of the form x where is of finite order and in general computing cd is not an easy task however if admits a cocompact model x for then there is a version of the formula for the bredon cohomological dimension in fact from th we get that k cd max n n hnc x k xsing dimension rigidity of lattices in semisimple lie groups k where x k is the fixed point set of x under k and xsing is the subcomk plex of x consisting of those cells that are fixed by a finite subgroup of that strictly contains using the above caracterisations of vcd and cd one can show see prop proposition let g be the group of real points of a semisimple algebraic group g of real rank at least two g a lattice of g k g a maximal compact subgroup and s the associated riemannian symmetric space if dim x vcd for every x s and xsing is surjective x hvcd the homomorphism hvcd c c then vcd cd note that in the authors assume that g is connected but this hypothesis is not needed as the bordification is still a model for if g is not connected see th as dim x dim s we have immediately the following lemma as a corollary of the previous proposition see cor lemma with the same notations as above if dim s vcd for all of finite order and non central then cd vcd this lemma will be the key argument to prove the main theorem however as it is the case in in some cases we will need the following result see cor lemma with the same notations as above suppose that dim s vcd for every finite order element dim s s vcd for any distinct s s s and for any finite set of finite order elements with s s for i j dim s vcd and such that is a cocompact 
lattice in cg there exists a rational flat f in s that intersects s in exactly one point and is disjoint from s for i n then vcd cd we refer to and for other facts about bredon cohomology complex simple lie algebras in this section we prove the main theorem for all complex simple lie algebras proposition let g be a complex simple lie algebra g aut g its group of automorphisms and s the associated riemannian symmetric space we assume that rkr g then dim s dim s rkr g cyril lacoste for every g of finite order and non central in particular gd vcd for every lattice recall that the adjoint group gad is the identity component of g aut g it agrees with the group of inner automorphisms that is gad inn g note that gad is centerfree has the same dimension and real rank as g and their associated riemannian symmetric spaces agree the quotient aut g g is the group of outer automorphism out g and can be realized as a subgroup of aut g the group aut g is then the product of inn g and out g see recall also that if a gad s a is the fixed point set of the inner automorphism ad a for further use note that if aut g is an involution then it is induced by an involution on gad that will be still denoted the group of its fixed points has lie algebra and the fixed point set s is the associated riemannian symmetric space in particular dim s if is compact the proof of proposition relies on the following lemmas lemma let g be a semisimple lie algebra such that every element of out g has order at most let g be the group of automorphisms of g and let s be the associated riemannian symmetric space if dim s a dim s rkr g and dim s dim s rkr g for all a gad non trivial of finite order and for all involutions g then we also have dim s dim s rkr g for every g of finite order and non central proof every element aut g is of the form ad a where a gad and out g aut g we know that is of order at most by hypothesis then ad a is an inner automorphism and we have the inclusion s s s a so if a is not central in gad then we have dim s dim s a dim s rkr note now that if a is central then it is actually the identity because gad is centerfree this means that id in other words is an involution and we again have dim s dim s rkr g by assumption we have proved the claim to check the first part of we will use the following dimension rigidity of lattices in semisimple lie groups lemma let g be the group of complex points of a semisimple connected algebraic group and k g a maximal compact subgroup suppose that there exists a group h isogenous to a subgroup h of k such that is an irreducible symmetric space rkk rkh dim h dim k rkr g and satisfying dim ch a dim h rkr g for all a h of finite order and non central then we have dim s a dim s rkr g for every a g of finite order and non central proof as all maximal compact subgroups are conjugated we can conjugate such an a g into since k is connected a is then contained in a maximal torus since all maximal tori are conjugated we can conjugate a into any one of them since the subgroup h has the same rank as k a maximal torus in h is also maximal in we can then assume up to replacing a by a conjugate element that a h taking now into account that g is the group of complex points of a reductive algebraic group cg a is the complexification of its maximal compact subgroup ck a and then its dimension is twice that of ck a as a result we get dim s a dim cg a dim ck a dim ck a because s a cg a a as seen in section similarly we have dim s dim dim in particular the claim follows once we show that dim ck a dim 
k rkr now as h and h are isogenous assume for simplicity that h h with f a finite normal subgroup of h we denote by a the class of a in h as f is finite we have dim ch a dim ch a and in particular a is central in h if and only if a is central in suppose for a moment that a is non central in h then we write dim ck a dim ch a h dim ch a h and by assumption we have dim ch a dim h rkr g and finally dim ck a dim k rkr it remains to treat the case that a is central in h but not in k that is h ck a since the symmetric space is irreducible cyril lacoste it follows that the identity component of h is a maximal connected lie group of k so dim ck a dim h dim h and we have dim h dim k rkr g by assumption in the course of the proof of proposition the subgroups h will all be classical groups of the forms so n or su n and we will need the following bounds for the dimension of centralizers in those groups see section let a so n n of finite order and non central then n n let a su n n of finite order and non central then dim cso n a csu n a n for more simplicity we will sometimes consider h as a subgroup of k and denote the symmetric space for the convenience of the reader we summarize in the following table the informations we need to prove proposition for exceptional lie algebras gad k h dim k dim h rkh rkk rkr gad so c so u so c su c so table exceptional complex simple centerless lie groups gad maximal compact subgroups k classical subgroups h dimensions and ranks we are now ready to launch the proof of proposition proof of proposition the second claim follows from lemma because we have vcd dim s rkr g for every lattice g by theorem so it suffices to prove the first claim recall that every complex simple lie algebra is isomorphic to either one of the classical algebras sl n c so n c and sp c with conditions on n to ensure simplicity or to one of the exceptional ones and to prove proposition we will consider all these cases individually classical complex simple lie algebras let g be a classical complex simple lie algebra and g aut g a lattice from a brief inspection of the table in section we obtain dimension rigidity of lattices in semisimple lie groups that unless g so c every outer automorphism of g has order we will assume that g so c for a while treating this case later to begin with note that we get from parts and of that dim s a dim s rkr g for all a gad non trivial and of finite order in other words the first part of condition in lemma holds to check the second part we make use of the classification of local symmetric spaces in for instance if g sl n c with n because of our assumption on the rank and if g is an involution then the lie algebra is isomorphic to either a lie algebra whose adjoint group is compact or to one of the follows so n c s gl k c gl n k c sp n c sl n r su p n p and sl h where sp n c and sl h only appear if n is even the associated riemannian symmetric space s is obtained by taking the quotient of the adjoint group by a maximal compact subgroup for example in the case of so n c it is pso n c the lie algebra for which dim s is maximal is s gl c gl n c for n where dim s and sp c for n where dim s in all these cases we have dim s n dim s rkr we get then by lemma that the first claim of proposition holds the cases of sp c and so n c for n are similar we leave the details to the reader now we treat the case of the lie algebra so c its group of complex outer automorphisms is isomorphic to the symmetric group and contains an order element called triality see section in for an 
interpretation of triality in terms of octonions the group out so c of real outer automorphisms is then isomorphic to where the second factor corresponds to complex conjugation consequently the only order outer automorphisms are and and those of order are their compositions with complex conjugation if aut so c is of order and ad a then is the composition of an inner automorphism and as is of order and we have the inclusions s s we can consider instead of and we just have to treat the cases when is of order or if is of order we apply the same method than for other classical simple lie algebras using the classification of local symmetric spaces it remains to treat the case when the triality automorphism or its inverse in this case ad a is a complex automorphism proceeding like in the proof of lemma is an inner automorphism and the result follows if it is non trivial if then belongs to the set so c of complex automorphisms of order a result of gray and wolf see thm says that if is the equivalence relation of conjugation by an inner automorphism in cyril lacoste so c then so c contains besides the classes of inner automorphisms four other classes those of and and two others represented by order automorphisms and the lie algebra of the fixed point set of triality is the exceptional lie algebra and those of the fixed point set of is isomorphic to the lie algebra sl c in both cases we have dim s dim s rkr g and proposition holds for g so c lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad a connected algebraic group of real rank and complex dimension the compact group is a maximal compact subgroup of the group contains a subgroup h isomorphic to so fixed by an involution of which extends to a conjugation of giving the split real form see section of for an explicit description then is an irreducible symmetric space rk rk so and dim h rkr gad dim moreover if a h is of finite order and non central in h so we have by inequality dim ch a rkr gad dim so by lemma the first part of in lemma holds to check the second part we have to list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to sl c sl c or the associated riemannian symmetric spaces are s psl c psl c and and we have in both cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra we proceed like previously for the simple lie algebra g with gad of maximal compact subgroup k rkr gad and dim k we know that there exists a subgroup h k isogenous to so with rkh rkk and such that is an irreducible symmetric space in addition to that dim h rkr gad dim k dimension rigidity of lattices in semisimple lie groups and if a h is of finite order and non central in h we have dim ch a rkr gad dim h by inequality so by lemma the first part of in lemma holds then by the classification of local symmetric spaces the ones we have to study is those when is isomorphic to sp c sp c so c or in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra for the algebra g the outer automorphism group is a product of two groups of order gad of maximal compact subgroup k rkr gad and dim k we know there exists h k isogenous to u so with rkh rkk is an irreducible symmetric space and we have the following dim h rkr gad dim k and if a h is of finite order and non central in h by inequality we have dim ch a rkr gad dim so by lemma the first part of in lemma holds then 
by the classification of local symmetric spaces the ones we have to study is those when is isomorphic to sp c sl c sl c so c so c or in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra consider now the simple lie algebra g of order outer automorphism group and of adjoint group gad whose compact maximal subgroup is k rkr gad and dim k we know there exists h k isogenous to su with rkh rkk and is an irreducible symmetric space we have the inequality dim h rkr gad dim k and if a h is of finite order and non central in h by inequality we have dim ch a rkr gad dim so by lemma the first part of in lemma holds cyril lacoste then by the classification of local symmetric spaces the ones we have to study is those when is isomorphic to sl c so c sp c so c or in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra the last exceptional lie algebra is g again its outer automorphism group is of order its adjoint group is gad of maximal compact subgroup k rkr gad and dim k we know there exists h k isogenous to so with rkh rkk and is an irreducible symmetric space we have also the inequality dim h rkr gad dim k and if a h is of finite order and non central in h by inequality we have dim ch a rkr gad dim so by lemma the first part of in lemma holds then by the classification of local symmetric spaces the ones we have to study is those when is isomorphic to so c sp c or in all cases dim s dim s rkr so by lemma proposition holds for g aut and it concluded its proof real simple lie algebras we will in this section extend the previous proposition to the real simple lie algebras they are the real forms of the complex ones studied in the previous section the ideas of the proof are similar to those of the complex case although we face some additional difficulties maybe the reader can skip this section in a first reading proposition let g be a real simple lie algebra g aut g its group of automorphisms and s the associated riemannian symmetric space then gd vcd for every lattice moreover dim s dim s rkr g for every g of finite order and non central dimension rigidity of lattices in semisimple lie groups we will again use lemma but in the case of exceptional real simple lie algebras we can not use lemma to establish inequalities of the form dim s a dim s rkr g for a in the adjoint group gad of the difficulty is that the dimension of gad is not anymore twice that of a maximal compact subgroup to some extent we will bypass this problem using the following lemma lemma let g be a connected lie group which is the group of real points of a semisimple algebraic group defined over r and k g a maximal compact subgroup suppose there exists a subgroup g g such that is an irreducible symmetric space and whose compact maximal subgroup k k has the same rank as let s and s be the associated riemannian symmetric spaces if we have dim s dim s rkr g and a dim s dim s rkr g for every a k of finite order non central then we also have dim s a dim s rkr g for every a g of finite order and non central proof as in the proof of lemma we can conjugate such an a into a maximal torus in if a is central in g as is irreducible we have that the identity component of g is a maximal connected lie subgroup of it follows thus from g cg a g that the riemannian symmetric spaces of cg a and g are the same that is s a then the result follows from the assumption dim s dim s rkr suppose now that a is not central in g then we have dim s dim s a dim s dim s a a as s s s a then the result follows because a 
dim s dim s rkr g by assumption in all cases if interest the group g will be a classical group we will a use the following inequalities to majorate dim s see sections and let a su p q p q p q of finite order non central and s su p q u p u q the associated symmetric space then dim s a q cyril lacoste let a sp p q p q p q of finite order non central and s sp p q sp p sp q the associated symmetric space then dim s a q let a n of finite order non central et s n the associated symmetric space then dim s a n n the tables below list exceptional real simple lie groups the subgroups g we will use and the informations we need to know for the proof of proposition note that for more simplicity the compact maximal subgroups k are given up to isogeny gad k g k sp sp sp sp su su so u so so so so so u so sp sp sp su so su su so so su su s u u so su s u u so so u su u so sl r sl r so so sp sp so s o o table real exceptional simple centerless lie groups gad certain classical subgroups g gad and the respective maximal compact subgroups gad dim s dim s rkk rkk rkr gad table with the same notations as in table dimensions of the riemannian symmetric spaces associated to gad and g together with the ranks of k k and gad dimension rigidity of lattices in semisimple lie groups we are now ready to prove proposition proof of proposition recall that the first claim holds when the adjoint group has real rank by proposition that is when g is isomorphic to sl r sp r sl h so n su n sp n and the second claim is also true because if aut g is of finite order and non central then s is a strict submanifold of s so we have dim s dim s we suppose from now on that rkr g by inspection of the table of outer automorphisms in section we see that every outer automorphism of g has order except if g so p p with p even as in the proof of proposition we will again do a analysis classical real simple lie algebras other that sl n r and so p q we start dealing with the classical lie algebras su p q sl n h sp r and note that we rule out su so and sp r so we use again lemma we want then to establish dim s a dim s rkr g and dim s dim s rkr g for every a in the adjoint group gad of finite order and non central and for every involution g aut g the first condition holds by the computations in sections to of using the classification of local symmetric spaces we can check the second condition as we did in the complex case for instance if g sp r with n then is either compact or isomorphic to one of the following sp r sp n k r u k n k gl n r or sp n c the last case only appearing if n is even the lie algebra for which dim s is maximal is sp n r sp r for which we have dim s n dim s rkr hence by lemma and lemma proposition holds for g aut sp r the cases of su p q sl n h and are similar lie algebras sl n r and so p q the remaining classical cases are sl n r and so p q if p is even then out so p p is isomorphic to so every outer automorphism has order or the only case where we have order outer automorphisms is so as out so is isomorphic to as already noted in where the argument for lattices in sl n r and so p q was more involved than in the other cases we face the cyril lacoste problem that there exists aut g such that dim s dim s rkr our next goal is to characterize the automorphisms for which this happens lemma if g sl n r with n or so p q with p q n and p q and aut g then dim s dim s rkr g with equality if and only if n is odd and s s a with a conjugated to in psl n r or pso p q or n is even and is conjugated to the outer automorphism corresponding to 
the conjugation by by abusing notations we will still denote s a the fixed point set by the automorphism corresponding to the conjugation by a with a conjugated to even if it is not an inner automorphism proof we begin with the case g so and we use the same strategy that for the proof of lemma an automorphism of g is the composition of an inner automorphism and an outer automorphism of order or if has order then is the composition of an inner automorphism and which is of order and we have the inclusion s s so we can replace by and it will suffice to consider the outer automorphisms of order then if has order is an inner automorphism that is ad a with a gad if a is not trivial by the computations in sections and in we have dim s dim s a dim s rkr g with equality in the last inequality if and only if n is odd and a conjugated by the first inequality is an equality if and only if s s a we have proved the claim if a is not trivial if a is trivial then is an involution so we use the classification of local symmetric spaces for instance if g sl n r with n the associated isotropy algebra h are so k s gl k r r gl c or sp n r the last two cases only appearing if n is even in all theses cases we have dim s dim s rkr g with equality if and only if h s gl r gl n r which corresponds to an automorphism conjugated to the inner automorphism ad if n is odd or to the outer automorphism of conjugation by if n if even it remains to consider the case g so in this case the group of outer automorphism is isomorphic to the symmetric group so we dimension rigidity of lattices in semisimple lie groups can have elements of order or if is an outer automorphism of order or and ad a we apply the same method using the classification of local symmetric spaces and we see that dim s dim s rkr g with equality if and only if is an outer automorphism corresponding to the conjugation by a matrix conjugated to if is of order and ad a then is inner and we have just to treat the case where it is trivial that is is of order then its complexification is an order complex automorphism of gc so c a case already treated c in the previous section we know that the fixed point set gc is c isomorphic to sl c or is compact as is a real form of gc it is isomorphic to sl r su s u dp dq o sp sq with sp and sq or is compact in all these cases we have dim s dim s rkr g so we have proved the claim let us assume for a while that g sl r and g sl r so we will conclude that vcd gd using lemma the first condition in the said lemma holds by lemma to check the second condition take s and s maximal and distinct we want to establish dim s s vcd first remark that by maximality s and s are not contained in each other if one of them is not of the form s a with a conjugated to let us say s then dim s vcd and s s is a strict submanifold of s so the result holds if we have s s a and s s b with a and b conjugated to then we refer to the computations in the proofs of lemma and lemma in the proof of the third point is the same as for lemma and lemma in note that in the authors consider only inner automorphisms so the case n odd but their argument also works without modifications of any kind for n even it must be enlighted why the argument we just gave fails for sl r and sl r so for sl r the second condition of lemma does not hold anymore in the case that sl r so the conclusion of lemma does not apply because we have that dim s dim s rkr g when is either the conjugation by in pso or the conjugation by in psl r these two are not conjugated as the conjugation by in 
pso corresponds in psl r to an outer automorphism whose fixed point set is isomorphic to psp r however the proof of lemma in concerning lattices in sl r can be adapted to aut sl r and aut sl r in fact it can be cyril lacoste adapted to aut sl n r for all n because a lattice in psl n r of can be conjugated to a lattice commensurable to psl n z see the classification of arithmetic groups of classical groups in section in as a result proposition holds for all real classical simple lie algebras lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to sp we will use lemma to check the first condition of lemma the group gad contains a subgroup g isogenous to sp whose maximal compact subgroup is k sp sp we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to sp sp r sl r sl r sl h su so so or we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to su su we will use lemma to check the first condition of lemma dimension rigidity of lattices in semisimple lie groups the group gad contains a subgroup g isogenous to so whose maximal compact subgroup is k u so we see in that su is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to sp sp r su su su sl r so so so or we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to so so we will use lemma to check the first condition of lemma the group gad contains a subgroup g isogenous to so whose maximal compact subgroup is k u so we see in that su is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution 
aut by the classification of cyril lacoste berger in the only non compact cases are when is isomorphic to sp su su su sl r so so so or we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to we will use lemma to check the first condition of lemma the group gad contains a subgroup g isogenous to sp whose maximal compact subgroup is k sp sp we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to sp sl h sp so so or we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to su we will use lemma to check the first condition of lemma the group gad contains a subgroup g isogenous to so whose maximal compact subgroup is k su dimension rigidity of lattices in semisimple lie groups we see in that so is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from the results about a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to su sl r sl h so sl r sp so or so we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is trivial so g aut g is equal to the adjoint group gad which is the group of real points of an algebraic group of real rank thus we only have to check the first condition of lemma and we will again use lemma the group gad contains a maximal compact subgroup k isogenous to so su it also contains a subgroup g isogenous to su whose maximal compact subgroup is k s u u we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad so by lemma and lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is of order and its adjoint group is cyril lacoste gad which is the group of real points of an algebraic group of real rank this group contains a maximal compact subgroup k isogenous to so we will use lemma to check the first condition of lemma the group gad contains a 
subgroup g isogenous to su whose maximal compact subgroup is k s u u we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad lemma applies and shows that the first condition of lemma holds to check the second condition we list the local symmetric spaces associated to an involution aut by the classification of berger in the only non compact cases are when is isomorphic to su sl h so sl r sp so or so we have in all cases dim s dim s rkr so by lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is trivial so g aut g is equal to the adjoint group gad which is the group of real points of an algebraic group of real rank thus we only have to check the first condition of lemma and we will again use lemma the group gad contains a maximal compact subgroup k isogenous to so it also contains a subgroup g isogenous to whose maximal compact subgroup is k u we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad dimension rigidity of lattices in semisimple lie groups so by lemma and lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is trivial so g aut g is equal to the adjoint group gad which is the group of real points of an algebraic group of real rank thus we only have to check the first condition of lemma and we will again use lemma the group gad contains a maximal compact subgroup k isogenous to su it also contains a subgroup g isogenous to whose maximal compact subgroup is k u we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr gad where s and s are the riemannian symmetric spaces associated to respectively gad and moreover if a k is of finite order and non central we get from inequality a dim s dim s rkr gad so by lemma and lemma proposition holds for g aut lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is trivial so the group g aut g equals the adjoint group gad which is the group of real points of an algebraic group of real rank thus we only have to check the conditions of lemma the group contains a maximal compact subgroup k isomorphic to so it also contains a subgroup g isogenous to sl r r whose maximal compact subgroup is k so so we see in that sl r sl r is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr g where s and s are the riemannian symmetric spaces associated to respectively g gad and moreover if a k is of finite order and non central we have a dim s dim s rkr cyril lacoste the equality case in the last inequality happens when a is conjugated to a matrix of the form with cos sin sin cos assume that a is of this form and that the first block is we will prove directly that dim s a dim s rkr first of all ck a cso a so so is of dimension to study cg a we have to know to which element of g this matrix corresponds to recall that is the group of automorphisms of the non associative algebra of split octonions which is of dimension over r and equiped with a quadratic 
form of signature see section of in we can decompose into the direct sum h where h vect is the quaternion algebra so g is a subgroup of the special orthogonal group so which preserves the standard form of signature over and fixes the maximal compact subgroup k corresponds to the stabilizer of h meaning the elements g such that h automatically we have as this is the orthogonal of k is isomorphic to so via the isomorphism who sends to its restriction to consequently the matrix a we consider corresponds to the matrix of the restriction of an element k to this element is entirely determined by the matrix a indeed for example we have so we deduce similarly we find cos sin sin cos knowing we have completely described the matrix of so which corresponds to is r ae e so then we remark that cg a cso a e dim s o u dim cg a dim cso a finally dim s a dim s rkr dimension rigidity of lattices in semisimple lie groups thus we have that dim s a dim s rkr g for every a g of finite order and non central then by lemma proposition holds for g lie algebra here we consider the simple exceptional lie algebra g its outer automorphism group is trivial so the group g aut g equals the adjoint group gad which is the group of real points of an algebraic group of real rank thus we only have to check the conditions of lemma the group contains a maximal compact subgroup k isogenous to sp sp it also contains a subgroup g isogenous to so whose maximal compact subgroup is k s o o we see in that is an irreducible symmetric space furthermore we have rk k rk k and dim s dim s rkr g where s and s are the riemannian symmetric spaces associated to respectively g gad and moreover if a k is of finite order and non central we have by the computations in section of a dim s dim s rkr the equality case in the last inequality happens when a is conjugated to the matrix so so assuming that a is of this form the conjugation by a is an involutive automorphism of g gad so the quotient is a symmetric space and we know by the classification in that ga is isogenous to either sp r sp r sp sp or so in all these cases the inequality dim s a dim s rkr g holds in fact ga is isogenous to so thus we have that dim s a dim s rkr g for every a g of finite order and non central then by lemma proposition holds for g and it concludes the proof cyril lacoste semisimple lie algebras we prove in this section the main theorem main theorem let g be a semisimple lie algebra and g aut g then gd vcd for every lattice recall that if g is semisimple it is isomorphic to a sum of simple lie algebras gr the adjoint group gad of g is then isomorphic to a product of simple lie groups that is gad gn where the gi are the adjoint groups of the gi we can also assume that gad has no compact factors indeed the symmetric spaces s and s do not change if we replace gad by its quotient by the compact factors an automorphism of g is the composition of a permutation of the isomorphic factors of g and a diagonal automorphism of the form with aut gi we explain now why does the strategy used in the previous sections not work the point is that the inequality dim s dim s rkr g for aut g needed to apply lemma does not hold even simplest cases in fact if gad sl r sl r and a a we have dim s dim s rgr we bypass this this problem by improving the lower bound in the vcd dim s rkr g used above remember that by theorem vcd dim s rkq as long as is arithmetic so we want to majorate rkq to do that we will restrict our study to irreducible lattices recall that in this context a lattice in g is 
irreducible if is dense in g for every closed normal subgroup h of gad we prove the following result which is probably known to experts proposition let g gn be a semisimple lie algebra and gi the adjoint group of gi for i then rkq min rkr gi n for every irreducible arithmetic lattice g aut g proposition will follow from the following theorem proved in dimension rigidity of lattices in semisimple lie groups theorem let g gn n be a product of noncompact connected simple lie groups the following statements are equivalent g contains an irreducible lattice g is isomorphic as a lie group to gr where g is an qsimple algebraic group g is isotypic that is the complexifications of the lie algebras of the gi are isomorphic in addition to that in this case g contains both cocompact and non cocompact irreducible lattices recall that an algebraic group g defined over q is said to be qsimple if it does not contain connected normal subgroups defined over q then we can prove proposition proof of proposition remark that rkq rkq moreover if is an irreducible lattice of g then gad is an irreducible lattice of gad so we can assume that gad remember that gad gn and that we assumed that none of the gi is compact if n the result is trivial so assume that n then rkr gad so is arithmetic by theorem and there exists a lie group isomorphism gad gr where g is by theorem then we have rkq rkq the algebraic group g is isomorphic to a product where the gi are with gi r isomorphic to g for i qi n we can define gi as the centralizer in g of the product gk we note the canonical projection of g on gi let t g be maximal torus our goal is to prove that the restriction is of finite kernel on the one hand ker gq is a normal subgroup of gq and g is defined over q so the zariski closure of ker is defined over q by the galois rationality criterion however it is a non trivial normal subgroup of g which is so ker is finite it may be not connected so ker is finite too but ker is a subgroup of the torus t so its identity component is a torus and we have just seen its group of rational points is finite so ker is finite too then the image of t by is a torus of gi of the same dimension than t see cor it may not be because the projection is not defined over q but it is as the projection is defined over r so rgq g dim t rgr gi rgr gi we can now conclude the proof of our main theorem cyril lacoste proof of the main theorem if g is simple then the result follows from propositions and then we assume that g gn with n we can also assume that the adjoint group gad of g is of the form gad gn where the gi are simple and gi is the adjoint group of gi we begin with the case where is irreducible then we have rkq min rkr gi r as rkr g is arithmetic by theorem remember that gad is also an irreducible arithmetic lattice of gad we can then assume that gad gr where g gn is a semisimple which is by theorem as gad has trivial center we can assume that g is centerfree in this case we have gad gq we want to use lemma let of finite order non central then is of the form where is a permutation of the isomorphic factors of g and with aut gi assume for a while that is trivial we identify aut g resp aut gi with the corresponding automorphism of gad resp gi the key point is to remark that for all i between and n the automorphism is not trivial in fact if a gad gq we can identify it with the inner automorphism ad a and we have ad a ad a gad gq so a lies also in gq recall that we have seen in the proof of proposition that the projections gq gi are injective so if is 
trivial we have a a for each a gad gq which leads to a a then is trivial on gad which is zariskidense in gad so is trivial finally where each is a non trivial automorphism of gi by proposition we also have rkq min rkr gi n then if we note s the symmetric space associated to g and si those associated to gi and we have by propositions and dim s n x dim n x dim si rkr gi dim s n x rkr gi dim s rkq as we assumed n by theorem dim s a vcd and lemma gives us the result if is not trivial the fixed point set will be even smaller indeed assume for simplicity that g where and are isomorphic dimension rigidity of lattices in semisimple lie groups and aut g is of the form with aut aut then the fixed point set s is where is the symmetric space associated to in fact the elements fixed by are of the form where is a fixed point of so we have dim s dim dim s rkr dim s rkq vcd as dim s dim and dim rkr the same argument works for a higher number of summands by decomposing the permutation into disjoint cycles finally if is reducible there exists a decomposition of g such that the projections and are lattices in and and then see the proof of prop it follows by induction that is contained in a product of irreducible lattices of factors of we will treat the case where g and and are irreducible lattices of and as is of finite index in g it is also of finite index in so vcd vcd if we note s the symmetric spaces associated to g by theorem we have vcd vcd dim dim rkq rkq vcd vcd finally we have gd gd gd gd because if and are models for and is a model for e as and are irreducible we have gd vcd and gd vcd so gd vcd the other inequality is always true so it concludes the proof of the main theorem we will end with the proof of corollaries and proof of corollary the case of real rank is treated in proposition in for higher real rank we know by the main theorem that there exists a model for of dimension vcd we also know that the bordification is a cocompact model for then using the same construction as in the proof of corollary in one has a cocompact model for of dimension vcd as all models of are homotopy equivalent and the symmetric space s is also a model for we conclude that s is homotopy equivalent to a cocompact model for of dimension vcd cyril lacoste proof of corollary we have to prove that if aut g and e of finite index then gd vcd to have a common subgroup that end we will prove that is essentially also a lattice in aut g e is a lattice in aut g so we can assume that e first note that we can also assume that is a normal finite index subgroup of then acts by conjugation on by mostow rigidity theorem see for example thm automorphisms of can be extended to automorphisms of gad so we have a morphism aut gad the kernel n of this morphism does not intersect since is centerfree as it is a lattice and thus it is in gad and is of finite index in so n is finite then is isomorphic to a lattice in aut gad the result follows now from the main theorem and lemma note that mostow rigidity theorem does not apply to the group psl r whose associated symmetric space is the hyperbolic plane in this case the lattice is either a virtually free group or a virtually surface group in the first case the group is also virtually free so there exists a model for which is a tree see and gd vcd in the second case acts as a convergence group on so it is also a fuchsian group see that is is isomorphic to a cocompact lattice of psl r finally we have gd vcd references aramayona degrijse and souto geometric dimension of lattices in classical simple 
lie groups aramayona and the proper geometric dimension of the mapping class group algebraic and geometric toplogy ash classifying spaces for arithmetic subgroups of general linear groups duke math j no berger les espaces non compacts annales scientifiques de l ens borel introduction aux groupes hermann borel linear algebraic groups springer borel and serre corners and arithmetic groups brady leary and nucinkis on algebraic and geometric dimensions for group with torsion london math soc brown cohomology of groups graduate texts in mathematics springer degrijse and dimension invariants for groups admitting a cocompact model for proper actions journal reine und angewandte mathematik crelle s journal degrijse and petrosyan geometric dimension of groups for the family of virtually cyclic subgroups topol degrijse and souto dimension invariants of outer automorphism groups dimension rigidity of lattices in semisimple lie groups djokovic on real form of complex semisimple lie algebras aequationes math gabai convergence groups are fuchsian groups annals of mathematics a gray and wolf homogeneous spaces defined by lie groups automorphisms diff geom the component group of the automorphism group of a simple lie algebra and the splitting of the corresponding short exact sequence journal of lie theory classification and structure theory of lie algebras of smooth section logos verlag berlin gmbh helgason differential geometry lie groups and symmetric spaces american mathematical society ji integral novikov conjectures and arithmetic groups containing torsion elements communications in analysis and geometry volume number johnson on the existence of irreducible discrete subgroups in isotypic lie groups of classical type karass pietrowski and solitar finite and infinite cyclic extensions of free group austral math soc knapp lie groups beyond an introduction progress in mathematics leary and nucinkis some groups of type vf invent math leary and petrosyan on dimensions of groups with cocompact classifying spaces for proper actions survey on classifying spaces for families of subgroups infinite groups geometric combinatorial and dynamical aspects springer and meintrup on the universal space for group actions with compact isotropy proc of the conference geometry and topology in aarhus margulis discrete subgroups of semisimple lie groups euler classes and bredon cohomology for groups with restricted families of finite torsion math z witte morris introduction to arithmetic groups arxiv onishchik and vinberg lie groups and algebraic groups springerverlag pettet and the spine which was no spine l enseignement mathematique pettet and souto minimality of the retract geometry and topology vogtmann automorphisms of free groups and outer space geometriae dedicata yokota exceptional lie groups irmar de rennes address
oct fractal sequences and hilbert functions giuseppe favacchio abstract we introduce the fractal expansions sequences of integers associated to a number these can be used to characterize the we generalize them by introducing numerical functions called fractal functions we classify the hilbert functions of bigraded algebras by using fractal functions introduction in commutative algebra and other fields of pure mathematics it often happens that easy numerical conditions describe some deeper algebraic results a significant example are the let s k xn be the standard graded polynomial ring and let i s be a homogeneous ideal the quotient ring is called a standard graded the hilbert function of is defined as n n such that t dimk t dimk st dimk it a famous theorem due to macaulay cf and pointed out by stanley cf characterizes the numerical functions that are hilbert functions of a standard graded the functions h such that h for some homogeneous ideal i to introduce this fundamental result we need some preparatory material let h i be integers we can uniquely write h as mj mi j i where mi mj j this expression is called the expansion of the integer if h has expansion as in then we set mj mi mi hhii i we use the convention that for example since the expansion of is and definition a sequence of integers is called an if i hii ii hi for all i hii an is said to have maximal growth from degree i to degree i if hi mathematics subject classification key words and phrases hilbert function multigraded albegra numerical function version october giuseppe favacchio we are now ready to enunciate the macaulay s theorem it characterizes the hilbert function of standard graded bounding the growth from any degree to the next the proof of this theorem and more details about are also discussed in chapter we represent as a sequence of integers where ht t theorem macaulay let h be a sequence of integers then the following are equivalent h is the hilbert function of a standard graded h is an it is therefore interesting to find an extension of the above theorem to the case multigraded hilbert functions arise in many contexts and properties related to the hilbert function of multigraded algebras are currently studied see for instance and for several examples the generalization of macaulay s theorem to rings is an open problem a first answer was given by the author in where the hilbert functions of a bigraded algebra in k are classified the goal of this work is to generalize the macaulay s theorem to any bigraded algebras in order to reach our purpose we first in section introduce n a list of finite sequences called fractal expansion of then we define a coherent truncation of these vectors and we show that these objects are strictly related to the indeed in section we show that they also characterize the hilbert function of standard graded furthermore we show that these sequences can be used to compute the betti numbers of a lex ideal in section we extend some of these results and we classify the hilbert function of bigraded algebras theorem the computer program cocoa was indispensable for all the computations expansion of a fractal sequence in this section we describe a new approach to classify the hilbert functions of standard graded algebras we introduce a sequence of tuplas called coherent fractal growth and we study its properties the main result of this section is theorem we prove that these sequences have the same behavior of the roughly speaking a numerical sequence is called fractal if once we delete the first 
occurrence of each number it remains identical to the original such property thus implies we can repeat this process indefinitely and contains infinitely many copy of itself it has something like a fractal behavior see for a formal definition and further properties for instance one can show that the sequence is fractal indeed after removing the first occurrence of each number we get a sequence that it is the same as the starting one we introduce some notation given a positive integer a n we denote by a a na the tupla of length a consisting af all the positive integers less then or equal to a written in increasing order given a finite or infinite sequence of positive integers we construct a new sequence named the expansion of denoted by if we set where the symbol denotes the associative operation of concatenation of two vectors this construction can be recursively applied we denote by d where we set for a positive integer a we also denote by a d a where a a for instance we have fractal sequences and hilbert functions lemma let be a sequence of positive integers then d d d d proof if d the statement is true assume d by definition d then by the inductive hypothesis we have d d d d corollary let a n be a positive integer then a d a proof the statement follows by lemma since a d a remark the sequence n n is a fractal sequence given a sequence the and i both denote the entry of if if finite then denotes the number of entries and their sum we use the convention that p these values are for infinite sequences of positive integers for instance and throughout this paper we use the convention that for a finite sequence the notation or a implies a remark p note that for a finite sequence of positive integers the definition of easily implies the equality given a positive integer n we define the fractal expansion of n as the set n n n n d each element in n is a finite sequence of positive integers in the following lemma we compute the number of their entries p lemma let n be a positive integer then n d and n d d n proof by definition we have n and n by lemma we have n d n therefore n n x x d n j moreover by remark we have p n d n next lemma introduces a way to decompose a number as sum of binomial coefficients that is slight different to the macaulay decomposition we use the convention that ab whenever a lemma let d be a positive integer any a n can be written uniquely in the form kd d where kd proof in order to prove the existence we choose kd maximal such that kdd a by the inductive hypothesis on d kdd kii where since kdd moreover since a it follows that kd kd kd kd d d d giuseppe favacchio hence kd uniqueness follows the induction on if d it is trivial now assume that d and let a kdd be a decomposition of a then kd is the maximal integer such that kdd a otherwise if a we get a kdd remark the decomposition in lemma is different from the macaulay decomposition since it is always required that moreover for any j we only have that kj j thus some binomial coefficient could be zero for instance we have d where the first d binomial coefficients in the sum are equal to zero definition we refer to equation as the decomposition of a we denote by a d kd d d nd we call these numbers the coefficients of a d a d is a not increasing sequence of positive integers indeed a d and by construction moreover for j since kj we have kj j j next result explains the name decomposition we show that is the entry in n d we need the convention that d is the empty sequence and d d theorem n da a d proof if d then n a and a thus a a 
n we now assume d let a kdd be the decomposition of a by lemma we have kd d d a kd d d and n da kd d since is d the d fractal decomposition of a kdd by the inductive hypothesis we have n kd d given nd then lex iff for some i d we have for j i and the following lemma is crucial for our intent we prove that the coefficients have a good behavior with respect the lex order lemma a d lex b d iff a b proof if d the assertion is trivial let d and let a b be two integers with fractal decomposition ad ad d and bd bd d d d d d if a d lex b d then there is an index j such that a i b i for any i j and a j b j hence ai bi aj bj i i for any i j and j j if j d then easily b a otherwise ai ad bj bd a b j d j d vice versa let b a we claim that bd ad indeed if bd ad we get a add b contradicting b a so if bd ad we are done otherwise the statement follows by induction fractal sequences and hilbert functions given two sequences and we say that is a truncation of if and j j for any j next definition introduces the main tool of the paper the coherent fractal growths are suitable truncations the elements in the fractal expansion of definition we say that t is a coherent fractal growth if n and is a truncation of for each j for instance is a coherent fractal growth indeed one can check that each elements is truncation of the expansion of the previous one on the other hand for instance is not a coherent fractal growth indeed is not a truncation of remark note that in a coherent fractal growth consists of the first elements in n d moreover the length of the elements in t a coherent fractal growth is bounded for any indeed by remark we have x for each d in the next part of this section we prove that the bound in remark is equivalent to the binomial expansion for a in order to the coherent fractal growth with we need the following lemma it uses the equality ab b lemma let a d cd be the coefficients of a then the d coefficients of ahdi are ahdi cd proof the decomposition of a is by definition cd d d d if we get the macaulay decomposition of a by removing the binomials ji equal to since j i implies we have cd d cd d ahdi d since we are done now we consider the case then we can write the following decomposition of a cd d d d thus if this representation is the macaulay decomposition of a once we remove the binomials ji equal to cd d d ahdi d cd d d d cd d d d the proof follows in a finite number of steps by iterating this argument giuseppe favacchio the following theorem is the main result of this section we show that the length of the elements in a coherent fractal growth is an theorem let t be a list of truncations of n n n n then the following are equivalent i t is a coherent fractal growth ii is an proof in order to prove i ii we need to show for each d set a and take the decomposition of a kd d if a n d the statement follows by lemma assume now a d d truncation of n n and by lemma kd kd d kd d d kd d d d d since is a we get denoted by kd d d where is a truncation of kd d therefore reiterating this argument we have kd d d d by equation in remark we have d x x ki kd d the last sum by lemma is equal to ahdi vice versa to prove ii i we have to prove that for each d the sequence p is a truncation of it follows by using the same argument as above indeed by hypothesis we have the bound and by remark we know that let s check for instance that h is an we write a sequence of truncations of of length respectively we get t it is a coherent fractal growth indeed by definition each sequence is a truncation of the previous one and 
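To make the equivalence above concrete, here is a minimal Python sketch. It is illustrative only: it assumes the standard statement of Macaulay's bound together with the definitions of the expansion, truncation and coherent fractal growth recalled above, and the function names and test sequences are ours, not the paper's. It checks a candidate Hilbert function in two independent ways: via the classical bound h_{i+1} <= h_i^<i>, and by trying to build a coherent fractal growth whose i-th vector has length h_i.

```python
from math import comb

def macaulay_expansion(h, i):
    # Greedy i-th binomial expansion of h:
    # h = C(m_i, i) + ... + C(m_j, j) with m_i > m_{i-1} > ... > m_j >= j >= 1.
    terms = []
    while h > 0 and i >= 1:
        m = i
        while comb(m + 1, i) <= h:
            m += 1
        terms.append((m, i))
        h -= comb(m, i)
        i -= 1
    return terms

def macaulay_growth(h, i):
    # h^<i>, the maximal value allowed in the next degree (with 0^<i> = 0).
    return sum(comb(m + 1, k + 1) for (m, k) in macaulay_expansion(h, i))

def is_o_sequence(h):
    # Macaulay's criterion: h[0] = 1 and h[i+1] <= h[i]^<i> for every i >= 1.
    return bool(h) and h[0] == 1 and all(
        h[i + 1] <= macaulay_growth(h[i], i) for i in range(1, len(h) - 1))

def delta(seq):
    # Expansion of a sequence: replace each entry a by the block (1, 2, ..., a).
    return [k for a in seq for k in range(1, a + 1)]

def coherent_fractal_growth(h):
    # Try to build vectors sigma_1, sigma_2, ... with len(sigma_i) = h[i],
    # sigma_1 = (1, ..., h[1]) and each sigma_{i+1} a truncation (prefix) of delta(sigma_i).
    if not h or h[0] != 1:
        return None
    sigma = list(range(1, h[1] + 1)) if len(h) > 1 else []
    growth = [sigma]
    for target in h[2:]:
        expanded = delta(sigma)
        if target > len(expanded):   # requested growth exceeds the maximal one allowed
            return None
        sigma = expanded[:target]    # the only truncation of that length
        growth.append(sigma)
    return growth

if __name__ == "__main__":
    for h in [(1, 3, 6, 10), (1, 3, 6, 11), (1, 4, 3, 5), (1, 2, 5)]:
        agree = is_o_sequence(list(h)) == (coherent_fractal_growth(list(h)) is not None)
        print(h, is_o_sequence(list(h)), agree)
```

On these sample inputs the two tests agree, as the theorem predicts: for instance (1, 3, 6, 10) passes both, while (1, 3, 6, 11) fails both, since 6^<2> = 10 is the maximal growth allowed from the second to the third degree.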
on the other hand we can also check that h is not an indeed in a coherent fractal growth such need to be a truncation of that has length therefore that is the maximal growth allowed fractal sequences and hilbert functions fractal expansion and homological invariants in section we introduced a novel approach to describe the in this section we show the algebraic meaning of a coherent fractal growth we directly relate this sequences with lex segment ideals and its homological invariants in particular the formula is naturally applied to our case therefore the fractal expansion of n is used in proposition to compute the betti numbers of a lex algebra let a and d be positive integers let a d cd be the coefficient of a see d lemma and definition we associate to a and d a monomial xa of degree d in the variables x xn in such a way xa d xcd vice versa a monomial t xcd of degree d in the variables in x identifies a cd such that ci for any i d d remark an immediate consequence of lemma is that xa lex xb iff a b with respect d the lex order induced by xn therefore with respect the same order xa is the greatest monomial of degree d in the set of variables x let s k x k xn be the standard graded polynomial ring we set d g x xa d a t and d g x t xa d a t d d sd is spanned by the monomials in g x g x t d by remark g x t is a lex set of monomials of degree d with respect the degree lexicographic order xn d given t a coherent fractal growth we set i t d hg x ik the space d spanned by the monomial in g x then by theorem and theorem the following result holds proposition i t id t r is a lex segment ideal and d given a minimal free resolution of an lex segment ideal m m s s s j j the betti numbers can be computed by the formula see see also equation of section theorem formula let i be a lex segment ideal for u g i a monomial minimal generator of i let m u denotes the largest index j such that xj divides u let mkj be the number of monomials u g i with m u then n x u x mkj i i j this result can be written in terms of coherent fractal growth giuseppe favacchio proposition given t a coherent fractal growth then x n j x a j t wkj i where wkj is the number of occurrence of k in with proof it is an immediate consequence of theorem and theorem the hilbert function of a bigraded algebra and the fractal functions let k be an infinite field and let r k xn ym be the polynomial ring in n m indeterminates with the grading defined by deg xi and deg yj then r i j r i j where r i j denotes the set of all homogeneous elements in r of degree i j moreover r i j is generated jm as a space by the monomials xinn ym such that in i and jm j an ideal i r is called a bigraded ideal if it is generated by homogeneous elements with respect to this grading a bigraded algebra is the quotient of r with a bigraded ideal i the hilbert function of a bigraded algebra is defined such that n and i j dimk i j dimk r i j dimk i i j where i i j i r i j is the set of the bihomogeneous elements of degree i j in i from now on we will work with the degree lexicographical order on r induced by xn ym with this ordering we recall the definition of bilex ideal introduced and studied in we refer to for all preliminaries and for further results on bilex ideals definition definition a set of monomials l r i j is called bilex if for every monomial uv l where u r and v r j the following conditions are satisfied if r and u then v l if v r j and v v then uv a monomial ideal i r is called a bilex ideal if i i j is generated as space by a bilex set of monomials for 
every i j bilex ideals play a crucial role in the study of the hilbert function of bigraded algebras theorem theorem let j r be a bigraded ideal then there exists a bilex ideal i such that in was solved the question of characterize the hilbert functions of bigraded algebras of k by introducing the ferrers functions in this section we generalize these functions by introducing the fractal functions see definition we prove theorem that these classify the hilbert functions of bigraded algebras we need some preparatory material we denote by u the set of all the matrices with size a b a rows and b columns and entries in a set u given a matrix m mij u we denote by x xx m mi j named the weight of next definition introduces the objects we need in this section definition a ferrers matrix of size a b is a matrix m mij m such that fractal sequences and hilbert functions if mij then j for any j i j we set by f the family of all the ferrers matrices of size a b in the next definition we introduce expansions of a matrix definition let m m u be a matrix of size a b and let v va na and p w b hv wb n be vectors of non negative integers we denote by m an element in m u constructed by repeating vi times the row of m for i a we denote by m wi an element in m u p constructed by repeating wj times the column of m for j b remark the expansions of a ferrers matrix are m set v and w then f m m i also ferres matrices take for instance f f given m n f we define a new matrix m n mij nij min mij nij f we say that m n iff mij nij for any i j we are ready to introduce the fractal functions definition let h n be a numerical function we say that h is a fractal function if p h and for any i j there exists a matrix of mij f with mij h i j and such that all the matrices satisfy the condition h n if i mij m i mij if j remark let h n be the numerical function h i j for any i j n satisfying the condition in definition that is the there is only one element in m f ij matrix with all entries therefore h is a fractal function remark if n m the definition of fractal functions agrees with definition in indeed it is enough to write each partition as a matrix mij mhk f where mhk iff k ak otherwise mhk in this case the expansions are given by the elements in in the following we denote by x xn and y ym the set of the variables of degree and respectively next lemma is useful for our purpose it is an immediate consequence of lemma giuseppe favacchio d lemma xa xahdi to shorten the notation we set and in order to relate fractal functions and hilbert functions of bigraded algebras we need to introduce a correspondence between ferrers matrices and monomials let m mab f we denote by m ij the set of the monomials j m xa i yb proposition let m mab f mab then m is a bilex set of monomials of bidegree i j i j i i proof we use lemma and remark let xa yb be an element in m and xu xa i j since u b a b and mab we get mub xu yb m in a similar way it follows that i j xa yv m for v b let l r i j be a bilex set of monomials of bidegree i j we denote by l m the j i matrix mab such that mab iff xa yb l otherwise mab proposition let l r i j be a bilex set of monomials of bidegree i j then l f proof if follows by using lemma and remark indeed say mab for an entry of l this i j i j implies xa yb thus for u a we have xu yb l mub analogously we see that mav for v b proposition and proposition together imply the following result corollary there is a one to one correspondence between the bilex sets of monomials of degree i j and the elements in f we are ready to 
prove the main result of this paper theorem let h n n n be a numerical function then the following are equivalent h is a fractal function h for some bilex ideal i r k xn ym proof let h be a fractal function for each i j let i i j be the space spanned by the elements in mij then we claim that i i j i i j is an ideal of to prove the i j i j claim it is enough to show that if xa yb i i j then xu xa yb i j yv xa yb i i for any yv y we have see lemma j xu xa i yb j xa i yb i j for any xu x and j xahii yb then by definition and theorem the entry ahii b of the matrix j is and then xahii j i j yb i j and furthermore xu xa yb i i i j i j in a similar way it follows that yv xa yb let i r be a bilex ideal such that set mij iij we claim that the mij s satisfy the condition in definition by theorem it is enough to show that if mij a b the entry a b in mij is then also j ahii b the entry ahii b in is set h j xu mhj u b then the claim is an immediate consequence of the fact that j is a lex ideal of k x fractal sequences and hilbert functions the following question is motivated by the argument of section question can the bigraded betti numbers of a bilex ideal i i j be computed from the matrices i i j references aramova a crona k de negri bigeneric initial ideals diagonal subalgebras and bigraded hilbert functions journal of pure and applied algebra jul bruns w herzog hj rings cambridge university press jun cocoateam cocoa a system for doing computations in commutative algebra available at http eliahou kervaire minimal resolutions of some monomial ideals algebra favacchio the hilbert function of bigraded algebras in k journal of commutative algebra in press guardo e van tuyl arithmetically sets of points in springerbriefs in mathematics springer herzog j hibi monomial ideals springer kimberling fractal sequences and interspersions ars combinatoria macaulay fs some properties of enumeration in the theory of modular systems proceedings of the london mathematical society jan peeva i stillman open problems on syzygies and hilbert functions journal of commutative algebra stanley rp hilbert functions of graded algebras advances in mathematics apr dipartimento di matematica e informatica viale doria catania italy address favacchio url
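As a computational illustration of the binomial (Macaulay) decomposition and of the bound a^{<d>} used throughout the preceding discussion of O-sequences and coherent fractal growths, here is a minimal Python sketch. The function names macaulay_decomposition, macaulay_growth and is_O_sequence are illustrative choices rather than notation from the text, and the greedy construction is the standard one, assuming a and d are positive integers.

from math import comb

def macaulay_decomposition(a, d):
    # Greedy d-th binomial (Macaulay) decomposition of a positive integer a:
    # a = C(c_d, d) + C(c_{d-1}, d-1) + ... + C(c_j, j) with c_d > c_{d-1} > ... > c_j >= j >= 1.
    coeffs = []
    i = d
    while a > 0 and i >= 1:
        c = i
        while comb(c + 1, i) <= a:
            c += 1
        coeffs.append(c)          # coeffs[k] is the coefficient c_{d-k}
        a -= comb(c, i)
        i -= 1
    return coeffs

def macaulay_growth(a, d):
    # a^{<d>} = C(c_d + 1, d + 1) + ... + C(c_j + 1, j + 1): the maximal admissible
    # value in degree d + 1 for an O-sequence whose value in degree d equals a.
    return sum(comb(c + 1, d - k + 1) for k, c in enumerate(macaulay_decomposition(a, d)))

def is_O_sequence(h):
    # Macaulay's criterion: h(0) = 1 and h(d + 1) <= h(d)^{<d>} for every d >= 1.
    return h[0] == 1 and all(h[d + 1] <= macaulay_growth(h[d], d) for d in range(1, len(h) - 1))

# Example: 4 = C(3, 2) + C(1, 1), hence 4^{<2>} = C(4, 3) + C(2, 2) = 5, so an
# O-sequence with value 4 in degree 2 can grow to at most 5 in degree 3.
assert macaulay_decomposition(4, 2) == [3, 1]
assert macaulay_growth(4, 2) == 5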
noname manuscript no will be inserted by the editor a duncan pavliotis apr using perturbed underdamped langevin dynamics to efficiently sample from probability distributions received date accepted date abstract in this paper we introduce and analyse langevin samplers that consist of perturbations of the standard underdamped langevin dynamics the perturbed dynamics is such that its invariant measure is the same as that of the unperturbed dynamics we show that appropriate choices of the perturbations can lead to samplers that have improved properties at least in terms of reducing the asymptotic variance we present a detailed analysis of the new langevin sampler for gaussian target distributions our theoretical results are supported by numerical experiments with target measures introduction and motivation sampling from probability measures in spaces is a problem that appears frequently in applications in computational statistical mechanics and in bayesian statistics in particular we are faced with the problem of computing expectations with respect to a probability measure on rd we wish to evaluate integrals of the form f f x dx rd as is typical in many applications particularly in molecular dynamics and bayesian inference the density for convenience denoted by the same symbol is known only up to a normalization constant furthermore the dimension of the underlying space is quite often large enough to render deterministic quadrature schemes computationally infeasible a standard approach to approximating such integrals is markov chain monte carlo mcmc techniques where a markov process xt is constructed which is ergodic with respect to the probability measure then defining the average f t t f xs ds a duncan school of mathematical and physical sciences university of sussex falmer brighton united kingdom imperial college london department of mathematics south kensington campus london england pavliotis imperial college london department of mathematics south kensington campus london england for f the ergodic theorem guarantees almost sure convergence of the average f to f there are infinitely many markov and for the purposes of this paper diffusion processes that can be constructed in such a way that they are ergodic with respect to the target distribution a natural question is then how to choose the ergodic diffusion process xt naturally the choice should be dictated by the requirement that the computational cost of approximately calculating is minimized a standard example is given by the overdamped langevin dynamics defined to be the unique strong solution xt of the following stochastic differential equation sde dxt xt dt where v log is the potential associated with the smooth positive density under appropriate assumptions on v on the measure dx the process xt is ergodic and in fact reversible with respect to the target distribution another example is the underdamped langevin dynamics given by xt qt pt defined on the extended space phase space rd rd by the following pair of coupled sdes dqt m pt dt dpt qt dt m pt dt dwt where mass and friction tensors m and respectively are assumed to be symmetric positive definite matrices it is that qt pt is ergodic with respect to the measure b n m having density with respect to the lebesgue measure on given by b q p exp q p m p b z b is a normalization constant note that where z b has marginal with respect to p and thus for functions f we have that f qt dt f almost surely notice also that the dynamics restricted to the is no longer markovian the can thus be 
interpreted as giving some instantaneous memory to the system facilitating efficient exploration of the state space higher order markovian models based on a finite dimensional markovian approximation of the generalized langevin equation can also be used as there is a lot of freedom in choosing the dynamics in see the discussion in section it is desirable to choose the diffusion process xt in such a way that f can provide a good estimation of f the performance of the estimator can be quantified in various manners the ultimate goal of course is to choose the dynamics as well as the numerical discretization in such a way that the computational cost of the estimator is minimized for a given tolerance the minimization of the computational cost consists of three steps bias correction variance reduction and choice of an appropriate discretization scheme for the latter step see section and sec under appropriate conditions on the potential v it can be shown that both and converge to equilibrium exponentially fast in relative entropy one performance objective would then be to choose the process xt so that this rate of convergence is maximised conditions on the potential v which guarantee exponential convergence to equilibrium both in and in relative entropy can be found in a powerful technique for proving exponentially fast convergence to equilibrium that will be used in this paper is villani s theory of hypocoercivity in the case when the target measure is gaussian both the overdamped and the underdamped dynamics become generalized processes for such processes the entire spectrum of the generator or equivalently the operator can be computed analytically and in particular an explicit formula for the gap can be obtained a detailed analysis of the convergence to equilibrium in relative entropy for stochastic differential equations with linear drift generalized processes has been carried out in in addition to speeding up convergence to equilibrium reducing the bias of the estimator one is naturally also interested in reducing the asymptotic variance under appropriate conditions on the target measure and the observable f the estimator f satisfies a central limit theorem clt that is d f f n t t where is the asymptotic variance of the estimator f the asymptotic variance characterises how quickly fluctuations of f around f contract to consequently another natural objective is to choose the process xt such that is as small as possible it is well known that the asymptotic variance can be expressed in terms of the solution to an appropriate poisson equation for the generator of the dynamics f f dx rd techniques from the theory of partial differential equations can then be used in order to study the problem of minimizing the asymptotic variance this is the approach that was taken in see also and it will also be used in this paper other measures of performance have also been considered for example in performance of the estimator is quantified in terms of the rate functional of the ensemble measure t dx see also for a study of the nonasymptotic behaviour of mcmc techniques t x t including the case of overdamped langevin dynamics similar analyses have been carried out for various modifications of of particular interest to us are the riemannian manifold mcmc see the discussion in section and the nonreversible langevin samplers as a particular example of the general framework that was introduced in we mention the preconditioned overdamped langevin dynamics that was presented in dxt xt dt dwt in this paper the 
behaviour of as well as the asymptotic variance of the corresponding estimator f are studied and applied to equilibrium sampling in molecular dynamics a variant of the standard underdamped langevin dynamics that can be thought of as a form of preconditioning and that has been used by practitioners is the molecular dynamics the nonreversible overdamped langevin dynamics dxt xt xt dt dwt where the vector field satisfies is ergodic but not reversible with respect to the target measure for all choices of the vector field the asymptotic behaviour of this process was considered for gaussian diffusions in where the rate of convergence of the covariance to equilibrium was quantified in terms of the choice of this work was extended to the case of target densities and consequently for nonlinear sdes of the form in the problem of constructing the optimal nonreversible perturbation in terms of the spectral gap for gaussian target densities was studied in see also optimal nonreversible perturbations with respect to miniziming the asymptotic variance were studied in in all these works it was shown that in theory without taking into account the computational cost of the discretization of the dynamics the nonreversible langevin sampler always outperforms the reversible one both in terms of converging faster to the target distribution as well as in terms of having a lower asymptotic variance it should be emphasized that the two optimality criteria maximizing the spectral gap and minimizing the asymptotic variance lead to different choices for the nonreversible drift x the goal of this paper is to extend the analysis presented in by introducing the following modification of the standard underdamped langevin dynamics dqt m pt dt qt dt dpt qt dt m pt dt m pt dt dwt where m are constant strictly positive definite matrices and are scalar constants and are constant matrices as demonstrated in section the process defined by will be ergodic with respect to the gibbs measure b defined in our objective is to investigate the use of these dynamics for computing ergodic averages of the form to this end we study the long time behaviour of and using hypocoercivity techniques prove that the process converges exponentially fast to equilibrium this perturbed underdamped langevin process introduces a number of parameters in addition to the mass and friction tensors which must be tuned to ensure that the process is an efficient sampler for gaussian target densities we derive estimates for the spectral gap and the asymptotic variance valid in certain parameter regimes moreover for certain classes of observables we are able to identify the choices of parameters which lead to the optimal performance in terms of asymptotic variance while these results are valid for gaussian target densities we advocate these particular parameter choices also for more complex target densities to demonstrate their efficacy we perform a number of numerical experiments on more complex multimodal distributions in particular we use the langevin sampler in order to study the problem of diffusion bridge sampling the rest of the paper is organized as follows in section we present some background material on langevin dynamics we construct general classes of langevin samplers and we introduce criteria for assessing the performance of the samplers in section we study qualitative properties of the perturbed underdamped langevin dynamics including exponentially fast convergence to equilibrium and the overdamped limit in section we study in detail the 
performance of the langevin sampler for the case of gaussian target distributions in section we introduce a numerical scheme for simulating the perturbed dynamics and we present numerical experiments on the implementation of the proposed samplers for the problem of diffusion bridge sampling section is reserved for conclusions and suggestions for further work finally the appendices contain the proofs of the main results presented in this paper and of several technical results construction of general langevin samplers background and preliminaries in this section we consider estimators of the form where xt is a diffusion process given by the solution of the following sde dxt a xt dt xt dwt with drift coefficient a rd rd and diffusion coefficient b rd both having smooth components and where wt is a standard rm brownian motion associated with is the infinitesimal generator l formally given by l f a f f rd where bb f denotes the hessian of the function f and denotes the frobenius inner product in general is nonnegative definite and could possibly be degenerate in particular the infinitesimal generator need not be uniformly elliptic to ensure that the corresponding semigroup exhibits sufficient smoothing behaviour we shall require that the process is hypoelliptic in the sense of if this condition holds then irreducibility of the process xt will be an immediate consequence of the existence of a strictly positive invariant distribution x dx see suppose that xt is nonexplosive it follows from the hypoellipticity assumption that the process xt possesses a smooth transition density p t x y which is defined for all t and x y rd theorem the associated strongly continuous markov semigroup pt is defined by pt f x p t x y f y dy t rd suppose that pt is invariant with respect to the target distribution x dx pt f x x dx f x x dx t rd rd for all bounded continuous functions f then pt can be extended to a positivity preserving contraction semigroup on which is strongly continuous moreover the infinitesimal generator corresponding to pt is given by an extension of l rd also denoted by due to hypoellipticity the probability measure on rd has a smooth and positive density with respect to the lebesgue measure and slightly abusing the notation we will denote this density also by let be the hilbert space of integrable functions equipped with inner product and norm we will also make use of the sobolev space h f of with weak derivatives in equipped with norm kf kf a general characterisation of ergodic diffusions a natural question is what conditions on the coefficients a and b of are required to ensure that xt is invariant with respect to the distribution x dx the following result provides a necessary and sufficient condition for a diffusion process to be invariant with respect to a given target distribution theorem consider a diffusion process xt on rd defined by the unique solution to the sde with drift a c rd rd and diffusion coefficient b c rd then xt is invariant with respect to if and only if a log where bb and rd rd is a continuously differentiable vector field satisfying if additionally then there exists a matrix function c rd such that in this case the infinitesimal generator can be written as an of lf c f rd the proof of this result can be found in ch similar versions of this characterisation can be found in and see also remark if holds and l is hypoelliptic it follows immediately that xt is ergodic with unique invariant distribution x dx more generally we can consider diffusions in an extended phase space 
dzt b zt dt zt dwt where wt is a standard brownian motion in rn n this is a markov process with generator l b z z where z t z we will consider dynamics zt that is ergodic with respect to z dz such that x y dy x rm where z x y x rd y rm d m n there are various choices of dynamics which are invariant and indeed ergodic with respect to the target distribution x dx choosing b i and we immediately recover the overdamped langevin dynamics choosing b i and such that holds gives rise to the nonreversible overdamped equation defined by as it satisfies the conditions of theorem it is ergodic with respect to in particular choosing x x for a constant matrix j we obtain dxt i j xt dt dwt which has been studied in previous works given a target density on rd if we consider the augmented target density b on given in then choosing m p q p q and where m and are positive definite symmetric matrices the conditions of theorem are satisfied for the target density b the resulting dynamics qt pt is determined by the underdamped langevin equation it is straightforward to verify that the generator is hypoelliptic sec and thus qt pt is ergodic more generally consider the augmented target density b on as above and choose m p q q p q m p and where and are scalar constants and are constant matrices with this choice we recover the perturbed langevin dynamics it is straightforward to check that satisfies the invariance condition and thus theorem guarantees that is invariant with respect to b in a similar fashion one can introduce an augmented target density on r d with b b q p um where p q ui rd for i clearly now define r d r d by rd q b b q p um dp dum q we pp m v q uj p q p um p and b r d r d by b q p um where r and for i the resulting process is given by dqt pt dt dpt v qt dt d x uj t dt pt dt dt m dum t pt dt ut dt dwtm where wtm are independent rd brownian motions this process is ergodic b with unique invariant distribution b and under appropriate conditions on v converges exponentially fast to equilibrium in relative entropy equation is a markovian representation of a generalised langevin equation of the form dqt pt dt t dpt v qt dt f t s ps ds n t where n t is a stationary gaussian process with autocorrelation function f t e n t n s f t s and f t m x let e z exp z be a positive density on rn where n d such that x rn e x z dz where x y rd rn then choosing b and we obtain the dynamics dxt xt yt dt dyt xt yt dt then xt yt is immediately ergodic with respect to comparison criteria for a fixed observable f a natural measure of accuracy of the estimator f the mean square error mse defined by f xs ds is m se f t ex f f where ex denotes the expectation conditioned on the process xt starting at x it is instructive to introduce the decomposition m se f t f t f t where f t f f f t ex f f var f and here f t measures the bias of the estimator f and f t measures the variance of fluctuations of f around the mean the speed of convergence to equilibrium of the process xt will control both the bias term f t and the variance f t to make this claim more precise suppose that the semigroup pt associated with xt decays exponentially fast in there exist constants and c such that kpt g g kg g g remark if holds with c this estimate is equivalent to l having a spectral gap in allowing for a constant c is essential for our purposes though in order to treat nonreversible and degenerate diffusion processes by the theory of hypocoercivity as outlined in the following lemma characterises the decay of the bias f t as t in terms of and the proof 
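To make the estimator pi_T(f) and the effect of a nonreversible perturbation concrete, the following is a minimal Euler-Maruyama sketch of the nonreversible overdamped dynamics dX_t = -(I + J) grad V(X_t) dt + sqrt(2) dW_t together with its running time average. It is only an illustration, not the splitting scheme analysed later in the paper, and the step size, horizon, Gaussian example and perturbation strength are placeholder choices.

import numpy as np

def ergodic_average(grad_V, f, J, x0, dt=5e-3, n_steps=200_000, seed=0):
    # Euler-Maruyama discretisation of dX_t = -(I + J) grad V(X_t) dt + sqrt(2) dW_t
    # (beta = 1, J antisymmetric so the invariant density is unchanged), returning
    # the time average (1 / n_steps) * sum_k f(X_k) as an estimate of pi(f).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    drift_matrix = np.eye(d) + J
    total = 0.0
    for _ in range(n_steps):
        x = x - dt * drift_matrix @ grad_V(x) + np.sqrt(2.0 * dt) * rng.standard_normal(d)
        total += f(x)
    return total / n_steps

# Example: standard Gaussian target V(q) = |q|^2 / 2 in two dimensions, observable
# f(q) = q_1^2 (true value 1), with J = delta * [[0, 1], [-1, 0]].
delta = 1.0
J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
estimate = ergodic_average(lambda q: q, lambda q: q[0] ** 2, J, x0=[0.0, 0.0])

Note that the Euler-Maruyama average carries a discretisation bias; the comparison criteria above concern the continuous-time dynamics.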
can be found in appendix lemma let xt be the unique solution of such that and l where denotes the derivative of with respect to suppose that the process is ergodic with respect to such that the markov semigroup pt satisfies then for f c f t kf the study of the behaviour of the variance f t involves deriving a central limit theorem for the additive functional f xt f dt as discussed in we reduce this problem to proving of the poisson equation l f f the only complications in this approach arise from the fact that the generator l need not be symmetric in nor uniformly elliptic the following result summarises conditions for the of the poisson equation and it also provides with us with a formula for the asymptotic variance the proof can be found in appendix lemma let xt be the unique solution of with smooth drift and diffusion coefficients such that the corresponding infinitesimal generator is hypoelliptic syppose that xt is ergodic with respect to and moreover pt decays exponentially fast in as in then for all f there exists a unique mean zero solution to the poisson equation if then for all f c rd d t f f n t where is the asymptotic variance defined by l moreover if where and then holds for all f c rd clearly observables that only differ by a constant have the same asymptotic variance in the sequel we will hence restrict our attention to observables f satisfying f simplifying expressions and the corresponding subspace of will be denoted by f f if the exponential decay estimate is satisfied then lemma shows that l is invertible on so we can express the asymptoptic variance as hf l f f let us also remark that from the proof of lemma it follows that the inverse of l is given by pt dt we note that the constants c and appearing in the exponential decay estimate also control the speed of convergence of f t to zero indeed it is straightforward to show that if is satisfied then the solution of satisfies f f c kf lemmas and would suggest that choosing the coefficients and to optimize the constants c and in would be an effective means of improving the performance of the estimator f especially since the improvement in performance would be uniform over an entire class of observables when this is possible this is indeed the case however as has been observed in maximising the speed of convergence to equilibrium is a delicate task as the leading order term in m se f t it is typically sufficient to focus specifically on the asymptotic variance and study how the parameters of the sde can be chosen to minimise this study was undertaken in for processes of the form perturbation of underdamped langevin dynamics the primary objective of this work is to compare the performances of the perturbed underdamped langevin dynamics and the unperturbed dynamics according to the criteria outlined in section and to find suitable choices for the matrices m and that improve the performance of the sampler we begin our investigations of by establishing ergodicity and exponentially fast return to equilibrium and by studying the overdamped limit of as the latter turns out to be nonreversible and therefore in principle superior to the usual overdamped limit this calculation provides us with further motivation to study the proposed dynamics for the bulk of this work we focus on the particular case when the target measure is gaussian when the potential is given by v q q t sq with a symmetric and positive definite precision matrix s the covariance matrix is given by s in this case we advocate the following conditions for the choice of 
parameters m s s under the above choices we show that the large perturbation limit exists and is finite and we provide an explicit expression for it see theorem from this expression we derive an algorithm for finding optimal choices for in the case of quadratic observables see algorithm if the friction coefficient is not too small and under certain mild nondegeneracy conditions we prove that adding a small perturbation will always decrease the asymptotic variance for observables of the form f q q kq l q c d f and f see theorem in fact we conjecture that this statement is true for arbitrary observables f but we have not been able to prove this the dynamics used in conjunction with the conditions proves to be especially effective when the observable is antisymmetric when it is invariant under the substitution q or when it has a significant antisymmetric part in particular in proposition we show that under certain conditions on the spectrum of for any antisymmetric observable f it holds that numerical experiments and analysis show that departing significantly from in fact possibly decreases the performance of the sampler this is in stark contrast to where it is not possible to increase the asymptotic variance by any perturbation for that reason until now it seems practical to use as a sampler only when a reasonable estimate of the global covariance of the target distribution is available in the case of bayesian inverse problems and diffusion bridge sampling the target measure is given with respect to a gaussian prior we demonstrate the effectiveness of our approach in these applications taking the prior gaussian covariance as s in remark in rem another modification of was suggested albeit with the simplifications i and m i dqt j m pt dt dpt j qt dt m pt dt dwt j again denoting an antisymmetric matrix however under the change of variables p j the above equations transform into dqt pt dt qt dt dt p where j m j and j j since any observable f depends only on q the are merely auxiliary the estimator f as well as its associated convergence characteristics asymptotic variance and speed of convergence to equilibrium are invariant under this transformation therefore reduces to the underdamped langevin dynamics and does not represent an independent approach to sampling suitable choices of m and will be discussed in section properties of perturbed underdamped langevin dynamics in this section we study some of the properties of the perturbed underdamped dynamics first note that its generator is given by l m p v m p p z z z lham ltherm z lpert decomposed into the perturbation lpert and the unperturbed operator which can be further split into the hamiltonian part lham and the thermostat part ltherm see lemma the infinitesimal generator l is hypoelliptic t u proof see appendix b an immediate corollary of this result and of theorem is that the perturbed underdamped langevin process is ergodic with unique invariant distribution b given by as explained in section the exponential decay estimate is crucial for our approach as in particular it guarantees the of the poisson equation from now on we will therefore make the following assumption on the potential v required to prove exponential decay in assumption assume that the hessian of v is bounded and that the target measure dq dq satisfies a poincare inequality there exists a constant such that ze rd rd holds for all h sufficient conditions on the potential so that s inequality holds the criterion are presented in theorem under assumption there exist constants c 
and such that the semigroup pt generated by l satisfies exponential decay in as in proof see appendix b remark the proof uses the machinery of hypocoercivity developed in however it seems likely that using the framework of the assumption on the boundedness of the hessian of v can be substantially weakened the overdamped limit in this section we develop a connection between the perturbed underdamped langevin dynamics and the nonreversible overdamped langevin dynamics the analysis is very similar to the one presented in section and we will be brief for convenience in this section we will perform the analysis on the torus td d we will assume q td consider the following scaling of m pt dt v qt dt v dt m dt m dt dwt valid for the small momentum regime m m pt equivalently those modifications can be obtained from subsituting and t t and so in the limit as the dynamics describes the limit of large friction with rescaled time it turns out that as the dynamics converges to the limiting sde dqt v qt dt v qt dt dwt the following proposition makes this statement precise proposition denote by the solution to with deterministic initial conditions qinit pinit and by the solution to with initial condition qinit for any t converges to in c t td as lim e sup remark by a refined analysis it is possible to get information on the rate of convergence see the limiting sde is nonreversible due to the term v qt dt and also because the matrix is in general neither symmetric nor antisymmetric this result together with the fact that nonreversible perturbations of overdamped langevin dynamics of the form are by now to have improved performance properties motivates further investigation of the dynamics remark the limit we described in this section respects the invariant distribution in the sense that the limiting dynamics is ergodic with respect to the measure dq dq to see this we have to check that we are using the notation instead of where refers to the rd of the generator of to the associated operator indeed the term vanishes because of the antisymmetry of therefore it remains to show that that the matrix is antisymmetric clearly the first term is symmetric and furthermore it turns out to be equal to the symmetric part of the second term so is indeed invariant under the limiting dynamics sampling from a gaussian distribution in this section we study in detail the performance of the langevin sampler for gaussian target densities first considering the case of unit covariance in particular we study the optimal choice for the parameters in the sampler the exponential decay rate and the asymptotic variance we then extend our results to gaussian target densities with arbitrary covariance matrices unit covariance small perturbations in our study of the dynamics given by we first consider the simple case when v q the task of sampling from a gaussian measure with unit covariance we will assume m i and j so that the and are perturbed in the same way albeit posssibly with different strengths and using these simplifications reduces to the linear system dqt pt dt dt dpt dt dt dt p the above dynamics are of type we can write p dxt dt with x q p t i and denoting a standard wiener process on the generator of is then given by l we will consider quadratic observables of the form f q q kq l q c d with k sym l r and c r however it is worth recalling that the asymptotic variance does not depend on we also stress that f is assumed to be independent of p as those extra degrees of freedom are merely auxiliary our aim will be to study the 
associated asymptotic variance see equation in particular its dependence on the parameters and this dependence is encoded in the function r assuming a fixed observable f and perturbation matrix j in this section we will focus on small perturbations on the behaviour of the function in the neighbourhood of the origin our main theoretical tool will be the poisson equation see the proofs in appendix anticipating the forthcoming analysis let us already state our main result showing that in the neighbourhood of the origin the function has favourable properties along the diagonal note that the perturbation strengths in the first and second line of coincide theorem consider the dynamics dqt pt dt dt dpt dt dt dt p with and an observable of the form f q q kq l q if at least one of the conditions j k and l ker j is satisfied then the asymptotic the unperturbed sampler is at a local maximum independently of k and j and as long as and purely quadratic observables let us start with the case l f q q kq the following holds proposition the function satisfies and hess proof see appendix tr jkjk tr j k tr j k tr jkjk tr jkjk tr j k tr j k tr jkjk tr jkjk t u the above proposition shows that the unperturbed dynamics represents a critical point of independently of the choice of k j and in general though hess can have both positive and negative eigenvalues in particular this implies that an unfortunate choice of the perturbations will actually increase the asymptotic variance of the dynamics in contrast to the situation of perturbed overdamped langevin dynamics where any nonreversible perturbation leads to an improvement in asymptotic variance as detailed in and furthermore the nondiagonality of hess hints at the fact that the interplay of the perturbations and or rather their relative strengths and is crucial for the performance of the sampler and consequently the effect of these perturbations can not be satisfactorily studied independently example assuming j and j k it follows that tr k for all nonzero therefore in this case a small perturbation of only or only will increase the asymptotic variance uniformly over all choices of k and however it turns out that it is possible to construct an improved sampler by combining both perturbations in a suitable way indeed the function can be seen to have good properties along we set s s s s and compute hess tr jkjk tr j k tr jkjk tr j k tr jkjk tr j k tr jkjk tr jkjk tr j k the last inequality follows from and tr jkjk tr j k both inequalities are proven in the appendix lemma where the last inequality is strict if j k consequently choosing both perturbations to be of the same magnitude and assuring that j and k do not commute always leads to a smaller asymptotic variance independently of the choice of k j and we state this result in the following corrolary corollary consider the dynamics dqt pt dt dt dpt dt dt dt p and a quadratic observable f q q kq if j k then the asymptotic variance of the unperturbed sampler is at a local maximum independently of k j and and remark as we will see in section more precisely example if j k the asymptotic variance is constant as a function of the perturbation has no effect example let us set s s and s this corresponds to a small perturbation with qt dt in q and dt in p in this case we get tr j k tr jkjk tr j k z z which changes its sign depending on j and k as the first term is negative and the second is positive whether the perturbation improves the performance of the sampler in terms of asymptotic variance therefore depends on the 
specifics of the observable and the perturbation in this case linear observables here we consider the case k f q l q c where again l rd and c we have the following result proposition the function satisfies and hess t u proof see appendix let us assume that l ker j then and hence theorem shows that a small perturbation by qt dt alone always results in an improvement of the asymptotic variance however if we combine both perturbations qt dt and dt then the effect depends on the sign of this will be negative if and have different signs and also if they have the same sign and is big enough following section we require we then end up with the requirement which is satisfied if summarizing the results of this section for observables of the form f q q kq l q c choosing equal perturbations with a sufficiently strong damping always leads to an improvement in asymptotic variance under the conditions j k and l ker j this is finally the content of theorem let us illustrate the results of this section by plotting the asymptotic variance as a function of the perturbation strength see figure making the choices d l t and j the asymptotic variance has been computed according to using and from appendix the graphs confirm the results summarized in corollary concerning the asymptotic variance in the neighbourhood of the unperturbed dynamics additionally they give an impression of the global behaviour for larger values of figures and show the asymptotic variance associated with the quadratic observable f q q kq in accordance with corollary the asymptotic variance is at a local maximum at zero perturbation in the case see figure for increasing perturbation strength the graph shows that it decays monotonically and reaches a limit for this limiting behaviour will be explored quadratic observable asymptotic variance asymptotic variance quadratic observable a equal perturbations perturbation strength perturbation strength b approximately equal perturbations linear observable quadratic observable asymptotic variance asymptotic variance perturbation strength perturbation strength d equal perturbations sufficiently large friction c opposing perturbations linear observable asymptotic variance perturbation strength e equal perturbations small friction fig asymptotic variance for linear and quadratic observables depending on relative perturbation and friction strengths analytically in section if the condition is only approximately satisfied figure our numerical examples still exhibits decaying asymptotic variance in the neighbourhood of the critical point in this case however the asymptotic variance diverges for growing values of the perturbation if the perturbations are opposed as in example it is possible for certain observables that the unperturbed dynamics represents a global minimum such a case is observed in figure in figures and the observable f q l q is considered if the damping is sufficiently strong the unperturbed dynamics is at a local maximum of the asymptotic variance figure furthermore the asymptotic variance approaches zero as for a theoretical explanation see again section the graph in figure shows that the assumption of not being too small can not be dropped from corollary even in this case though the example shows decay of the asymptotic variance for large values of exponential decay rate let us denote by the optimal exponential decay rate in sup there exists c such that holds note that is and positive by theorem we also define the spectral bound of the generator l by s l inf re l in it is proven that 
the semigroup pt considered in this section is differentiable see proposition in this case see corollary of it is known that the exponential decay rate and the spectral bound coincide s l whereas in general only s l holds in this section we will therefore analyse the spectral properties of the generator in particular this leads to some intuition of why choosing equal perturbations is crucial for the performance of the sampler in see also it was proven that the spectrum of l as in in b is given by r x nj nj n b l note that l only depends on the drift matrix b in the case where the spectrum of b can be computed explicitly lemma assume then the spectrum of b is given by r r j j b proof we will compute b i and then use the identity n b b i we have det b i det i i i det i det det det a l in the case the arrows indicate the movement of the spectrum as the perturbation strength increases b b in the case the dynamics is only perturbed by pdt the arrows indicate the movement of the eigenvalues as increases fig effects of the perturbation on the spectra of l and where i is understood to denote the identity matrix of appropriate dimension the above quantity is zero if and only if or t u together with the claim follows using formula in figure we show a sketch of the spectrum for the case of equal perturbations with the convenient choices n and of course the eigenvalue at is associated to the invariant measure since and b where denotes the operator the of the arrows indicate the movement of the eigenvalues as the perturbation increases in accordance with lemma clearly the spectral bound of l is not affected by the perturbation note that the eigenvalues on the real axis stay invariant under the perturbation the subspace of b associated to those will turn out to be crucial for the characterisation of the limiting asymptotic variance as to illustrate the suboptimal properties of the perturbed dynamics when the perturbations are not equal we plot the spectrum of the drift matrix b in the case when the dynamics is only perturbed by the term pdt for n and see figure note that the full spectrum can be inferred from for we have that the spectrum b only consists of the degenerate eigenvalue for increasing the figure shows that the degenerate eigenvalue splits up into four eigenvalues two of which get closer to the imaginary axis as increases leading to a smaller spectral bound and therefore to a decrease in the speed of convergence to equilibrium figures and give an intuitive explanation of why the of the perturbation strengths is crucial unit covariance large perturbations in the previous subsection we observed that for the particular perturbation and dqt pt dt dt dpt dt dt dt p dwt the perturbed langevin dynamics demonstrated an improvement in performance for in a neighbourhood of when the observable is linear or quadratic recall that this dynamics is ergodic with respect to a standard gaussian measure b on with marginal with respect to the in the following we shall consider only observables that do not depend on moreover we assume without loss of generality that f for such an observable we will write f and assume the canonical embedding b the infinitesimal generator of is given by l p q jp z z a where we have introduced the notation lpert in the sequel the adjoint of an operator b in b will be denoted by b in the rest of this section we will make repeated use of the hermite polynomials x e e invoking the notation x q p r for m define the spaces hm span m with induced scalar product hf gim hf f g hm the space 
hm is then a real hilbert space with finite dimension m dim hm m the following result theorem holds for operators of the form l where the quadratic drift and diffusion matrices b and q are such that l is the generator of an ergodic stochastic process see definition for precise conditions on b and q that ensure ergodicity the generator of the sde is given by with b and q as in equations and respectively the following result provides an orthogonal decomposition of b into invariant subspaces of the operator theorem section the following holds a the space b has a decomposition into mutually orthogonal subspaces m b hm b for all m hm is invariant under l as well as under the semigroup c the spectrum of l has the following decomposition l where x m b remark note that by the ergodicity of the dynamics ker l consists of constant functions and so ker l therefore b has the decomposition b b ker l m hm our first main result of this section is an expression for the asymptotic variance in terms of the unperturbed operator and the perturbation a proposition let f so in particular f f q then the associated asymptotic variance is given by hf a f remark the proof of the preceding proposition will show that a is invertible on b and that a f d for all f b to prove proposition we will make use of the generator with reversed perturbation and the momentum flip operator p b b q p q clearly p i and p p further properties of a and the auxiliary operators and p are gathered in the following lemma lemma for all c b the following holds a the generator is symmetric in b with respect to p p p b the perturbation a is skewadjoint in b c the operators and a commute a d the perturbation a satisfies p ap e l and commute l and the following relation holds p lp f the operators l a and p leave the hermite spaces hm invariant remark the claim c in the above lemma is crucial for our approach which itself rests heavily on the fact that the and match proof of lemma to prove a consider the following decomposition of as in p q z z lham ltherm by partial integration it is straightforward to see that lham and ltherm hltherm for all c b lham and ltherm are antisymmetric and symmetric in b respectively furthermore we immediately see that p lham p and p ltherm p ltherm so that p p ltherm we note that this result holds in the more general setting of section for the infinitesimal generator the claim b follows by noting that the flow vector field b q p associated to a is with respect to b bb therefore a is the generator of a strongly continuous unitary semigroup on b and hence skewadjoint by stone s theorem to prove c we use the decomposition lham ltherm to obtain a lham a ltherm a c b the first term of gives p q jp p p jp jp jq jq the second term of gives a since jq commutes with p both terms in are clearly zero due the antisymmetry of j and the symmetry of the hessian the claim d follows from a short calculation similar to the proof of a to prove e note that the fact that l and commute follows from c as l a c b while the property p p follows from properties a b and d indeed p lp p p p p h as required to prove f first notice that l and are of the form and therefore leave the spaces hm invariant by theorem it follows immediately that also a leaves those spaces invariant the fact that p leaves the spaces hm invariant follows directly by inspection of t u now we proceed with the proof of proposition proof of proposition since the potential v is quadratic assumption clearly holds and thus lemma ensures that l and are invertible on b with dt 
and analogously for in particular the asymptotic variance can be written as hf f due to the respresentation and theorem the inverses of l and leave the hermite spaces hm invariant we will prove the claim from proposition under the assumption that p f f which includes the case f f q for the following calculations we will assume f hm for fixed m combining statement f with a and e of lemma and noting that hm c b we see that p lp and p p when restricted to hm therefore the following calculations are justified hf f hf f hf f hp f p f hf f hf f hf f hf f where in the third line we have used the assumption p f f and in the fourth line the properties p i p p and equation since l and commute on hm according to lemma e f we can write l for the restrictions on hm using l we also have a since and a commute we thus arrive at the formula hf a f f hm now since a f f d for all f b it follows that the operator a is bounded we l can therefore extend formula to the whole of b by continuity using the fact that b hm t u applying proposition we can analyse the behaviour of in the limit of large perturbation strength to this end we introduce the orthogonal decomposition ker jq ker jq where jq is understood as an unbounded operator acting on obtained as the smallest closed extension of jq acting on rd in particular ker jq is a closed linear subspace of let denote the projection onto ker jq we will write to stress the dependence of the asymptotic variance on the perturbation strength the following result shows that for large perturbations the limiting asymptotic variance is always smaller than the asymptotic variance in the unperturbed case furthermore the limit is given as the asymptotic variance of the projected observable for the unperturbed dynamics theorem let f then lim remark note that the fact that the limit exists and is finite is nontrivial in particular as figures and demonstrate it is often the case that if the condition is not satisfied remark the projection onto ker jq can be understood in terms of figure indeed the eigenvalues on the real axis highlighted by diamonds are not affected by the perturbations let us denote by the projection onto the span of the eigenspaces of those eigenvalues as the limiting asymptotic variance is given as the asymptotic variance associated to the unperturbed dynamics of the projection if we denote by the projection of b onto then we have that proof of theorem note that and a leave the hermite spaces hm invariant and their restrictions to those spaces commute see lemma b c and f furthermore as the hermite spaces hm are those operators have discrete lspectrum as a a is nonnegative adjoint there exists an orthogonal decomposition iw l li into eigenspaces of the operator a the decomposition wi being finer then hm in the sense that every wi is a subspace of some hm moreover a where is the eigenvalue of a associated to the subspace wi consequently formula can be written as x hfi fi i where f i fi and fi wi let us assume now without loss of generality that ker a so in particular then clearly p lim now note that ker a ker a due to ker im a it remains to show that to see this we write f f f f r where r f f note that since we only consider observables that do not depend on p ker l lq and f w since l commutes with a it follows that leaves both w and i wi invariant therefore as the latter spaces are orthogonal to each other it follows that r from which the result follows t u from theorem it follows that in the limit as the asymptotic variance is not decreased by the perturbation 
if f ker jq in fact this result also holds true observables in ker jq are not affected at all by the perturbation lemma let f ker jq then for all proof from f ker jq it follows immediately that f ker a then the claim follows from the expression t u d example recall the case of observables of the form f q q kq l q c with k sym l r and c r from section if j k and l ker j then f ker jq as jq q kq l q c kq jq l q kj jk q q jl from the preceding lemma it follows that for all r showing that the assumption in theorem does not exclude nontrivial cases the following result shows that the dynamics is particularly effective for antisymmetric observables at least in the limit of large perturbations proposition let f satisfy f q and assume that ker j furthermore assume that the eigenvalues of j are rationally independent j with r and p i ki for all kd zd then proof of proposition the claim would immediately follow from f ker jq according to theorem but that does not seem to be so easy to prove directly instead we again make use of the hermite polynomials recall from the proof of proposition that l is invertible on b and its inverse leaves the hermite spaces hm invariant consequently the asymptotic variance of an observable f b can be written as hf f x f f where b hm denotes the orthogonal projection onto hm from it is clear that ga is symmetric for even and antisymmetric for odd therefore from f being antisymmetric it follows that m hm m odd in view of and the spectrum of can be written as x m j d x m with appropriate real constants r that depend on and but not on for odd we have that d x m indeed assume to the contrary that the above expression is zero then it follows that for all j d by rational independence of from and it is clear that sup r b r where b r denotes the ball of radius r centered at the origin in consequently the spectral radius of and hence itself converge to zero as the result then follows from t u remark the idea of the preceding proof can be explained using figure and remark since the real eigenvalues correspond to hermite polynomials of even order antisymmetric observables are orthogonal to the associated subspaces the rational independence condition on the eigenvalues of j prevents cancellations that would lead to further eigenvalues on the real axis the following corollary gives a version of the converse of proposition and provides further intuition into the mechanics of the variance reduction achieved by the perturbation corollary let f and assume that then f dq b r for all r where b r denotes the ball centered at with radius proof according to theorem implies we can write and recall from the proof of proposition that and leave the hermite spaces hm invariant therefore ker in b and in particular implies which in turn shows that f ker jq using ker jq im jq it follows that there exists a sequence n rd such that jq f in taking a subsequence if necessary we can assume that the convergence is pointwise everywhere and that the sequence is pointwise bounded by a function in since j is antisymmetric we have that jq jq now gauss s theorem yields f dq dq dn b r b r r where n denotes the outward normal to the sphere r this quantity is zero due to the orthogonality of jq and n and so the result follows from lebesgue s dominated convergence theorem t u optimal choices of j for quadratic observables assume f is given by f q q kq l q tr k with k rsym and l rd note that the constant term is chosen such that f our objective is to choose j in such a way that becomes as small as possible to stress 
the dependence on the choice of j we introduce the notation j also we denote the orthogonal projection onto ker j by j lemma zero variance limit for linear observables assume k and j l then lim j proof according to proposition we have to show that where is the projection onto ker jq let us thus prove that f ker jq im jq im jq where the second identity uses the fact that indeed since j by fredholm s alternative there exists u rd such that ju now define by q q leading to f jq so the result follows t u lemma zero variance limit for purely quadratic observables let l and consider the decomposition k into the traceless part k trdk i and the trdk i for the corresponding decomposition of the observable f q q q q q q q tr k the following holds a there exists an antisymmetric matrix j such that j and there is an algorithmic way see algorithm to compute an appropriate j in terms of b the is not effected by the perturbation j for all proof to prove the first claim according to theorem it is sufficient to show that ker jq im jq let us consider the function q q aq with a sym it holds that jq q j t aq q a j q the task of finding an antisymmetric matrix j such that lim j can therefore be accomplished by constructing an antisymmetric matrix j such that there exists a symmetric matrix a with the property a j given any traceless matrix there exists an orthogonal matrix u o rd such that u u t has zero entries on the diagonal and that u can be obtained in an algorithmic manner see for example or chapter section problem for the reader s convenience we have summarised the algorithm in appendix assume thus that such a matrix u o rd has been found and choose real numbers ad r such that ai aj if i j we now set diag an and u u t ij ai if i j if i j observe that since u u t is symmetric is antisymmetric a short calculation shows that j to obtain a j therefore the j u u t we can thus define a u t and j u t ju constructed in this way indeed satisfies for the second claim note that ker jq since tr k tr k jq q q q jq d d because of the antisymmetry of j the result then follows from lemma t u we would like to stress that the perturbation j constructed in the previous lemma is far from unique due to the freedom of choice of u and ad r in its proof however it is asymptotically optimal corollary in the setting of lemma the following holds min lim j j t proof the claim follows immediately since ker jq for arbitrary antisymmetric j as shown in and therefore the contribution of the trace part to the asymptotic variance can not be reduced by any choice of j according to lemma as the proof of lemma is constructive we obtain the following algorithm for determining optimal perturbations for quadratic observables algorithm given k sym determine an optimal antisymmetric perturbation j as follows set k trdk i find u o rd such that u u t has zero entries on the diagonal see appendix d choose ai r i d such that ai aj for i j and set u u t ij ai aj for i j and otherwise set j u t ju remark in the authors consider the task of finding optimal perturbations j for the nonreversible overdamped langevin dynamics given in in the gaussian case this optimization problem turns out be equivalent to the one considered in this section indeed equation of can be rephrased as f ker jq therefore algorithm and its generalization algorithm described in section can be used without modifications to find optimal perturbations of overdamped langevin dynamics gaussians with arbitrary covariance and preconditioning in this section we extend the results of the 
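The algorithmic construction stated above for quadratic observables (choose U orthogonal so that U K-tilde U^T has zero diagonal, pick pairwise distinct a_i, divide the off-diagonal entries by a_i - a_j, and conjugate back) can be sketched in a few lines of Python. The zero-diagonal step is implemented here with plane rotations, one diagonal entry at a time, since the paper defers that construction to its appendix; the function names and the particular choice a_i = i are illustrative, and this is a sketch rather than a reference implementation.

import numpy as np

def zero_diagonal_conjugation(K_tilde, tol=1e-12):
    # Build V orthogonal with V @ K_tilde @ V.T having (numerically) zero diagonal.
    # Requires K_tilde symmetric and traceless; each plane rotation zeroes one
    # diagonal entry, which the later rotations then leave untouched.
    M = np.array(K_tilde, dtype=float)
    d = M.shape[0]
    V = np.eye(d)
    active = list(range(d))
    while len(active) > 1:
        diag = M[active, active]
        if np.max(np.abs(diag)) <= tol:
            break
        i = active[int(np.argmax(diag))]      # most positive remaining diagonal entry
        j = active[int(np.argmin(diag))]      # most negative remaining diagonal entry
        p, q, b = M[i, i], M[j, j], M[i, j]
        # choose theta so that p cos^2(theta) + q sin^2(theta) + 2 b sin(theta) cos(theta) = 0
        A, B, C = 0.5 * (p - q), b, 0.5 * (p + q)
        phi = np.arctan2(B, A) + np.arccos(np.clip(-C / np.hypot(A, B), -1.0, 1.0))
        theta = 0.5 * phi
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(d)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        M = G.T @ M @ G                        # rotated matrix, new (i, i) entry is zero
        V = V @ G
        active.remove(i)
    return V.T

def optimal_antisymmetric_perturbation(K):
    # Sketch of the construction for a quadratic observable f(q) = q . K q: remove the
    # trace part, conjugate the traceless part to zero diagonal, divide the off-diagonal
    # entries by a_i - a_j with distinct a_i, and conjugate back to obtain an antisymmetric J.
    K = np.asarray(K, dtype=float)
    d = K.shape[0]
    K_tilde = K - (np.trace(K) / d) * np.eye(d)
    U = zero_diagonal_conjugation(K_tilde)
    a = np.arange(1.0, d + 1.0)                # any pairwise distinct values work
    Z = U @ K_tilde @ U.T
    J_tilde = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if i != j:
                J_tilde[i, j] = Z[i, j] / (a[i] - a[j])
    return U.T @ J_tilde @ U                   # antisymmetric, since Z is symmetric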
preceding sections to the case when the target measure symmetric and is given by a gaussian with arbitrary covariance v q q sq with s rsym positive definite the dynamics then takes the form dqt m pt dt sqt dt dpt dt m pt dt m pt dt dwt the key observation is now that the choices m s and together with the transformation qe s q and pe s p lead to the dynamics dqet pet dt s qet dt dpet dt s pet dt pet dt p which is of the form if and obey the condition s note that both s s and s s are of course antisymmetric clearly the dynamics is ergodic with respect to a gaussian measure with unit covariance in the following denoted by the connection between the asymptotic variances associated to and is as follows for an observable f we can write t t t f qs ds f t t t e e f qes ds e f where fe q f s q therefore the asymptotic variances satisfy where denotes the asymptotic variance of the process qet because of this the results from the previous sections generalise to subject to the condition that the choices m s and s are made we formulate our results in this general setting as corollaries corollary consider the dynamics dqt m pt dt qt dt dpt qt dt m pt dt m pt dt dwt with v q q sq assume that m s with and s let f be an observable of the form f q q kq l q c d with k ker j is sym l r and c if at least one of the conditions s k and l satisfied then the asymptotic variance is at a local maximum for the unperturbed sampler and proof note that e fe q f s q q s ks q s l q c q kq e s ks and is again of the form where in the last equality k been defined from and theorem the claim follows if at least one e s s and e k l ker s s is satisfied the first of those can equivalent to s kjs sjk s e l s l have of the conditions easily seen to be which is equivalent to s k since s is nondegenerate the second condition is equivalent to s l which is equivalent to l again by nondegeneracy of t u corollary assume the setting from the previous corollary and denote by the orthogonal projection onto ker sq for f it holds that lim proof theorem implies lim e fe e denotes for the transformed system here fe q f s q is the transformed observable and l projection onto ker s s q according to it is sufficient to show e fe this however follows directly from the fact that the linear transformation that s s maps ker s s q bijectively onto ker sq t u let us also reformulate algorithm for the case of a gaussian with arbitrary covariance algorithm given k s sym with f q q kq and v q q sq assuming s is nondegenerate determine optimal perturbations and as follows e s ks and k k e tr ke i set k d e u t has zero entries on the diagonal see appendix d find u o rd such that u k choose ai r i d such that ai aj for i j and set e u t ij u k ai aj set je u t ju e put s js and s js finally we obtain the following optimality result from lemma and corollary corollary let f q q kq l q tr k and assume that j l then lim min s where q q q tr s k d optimal choices for and can be obtained using algorithm remark since in section we analysed the case where and are proportional we are not able to drop the restriction s from the above optimality result analysis of completely arbitrary perturbations will be the subject of future work remark the choices m s and have been introduced to make the perturbations considered in this article lead to samplers that perform well in terms of reducing the asymptotic variance however adjusting the mass and friction matrices according to the target covariance in this way m s and is a popular way of preconditioning the dynamics see 
for instance and in particular molecular dynamics here we will present an argument why such a preconditioning is indeed beneficial in terms of the convergence rate of the dynamics let us first assume that s is diagonal s diag s s d and that m diag m d m d and diag d d are chosen diagonally as well then decouples into sdes of the following form i dqt i p dt m i t i i dpt i qt dt p i i p dt i dwt t m i i let us write those processes as i dxt i i xt dt with b i i s i p i i dwt i m i i and q i as in section the rate of the exponential decay of is equal to min re b i a short calculation shows that the eigenvalues of b i are given by i i s i i i i i m therefore the rate of exponential decay is maximal when i s i i i m in which case it is given by r s i m i naturally it is reasonable to choose m i in such a way that the exponential rate i is the same for all i leading to the restriction m cs with c choosing c small will result in fast convergence to equilibrium but also make the dynamics quite stiff requiring a very small timestep in a discretisation scheme the choice of c will therefore need to strike a balance between those two competing effects the constraint then implies by a coordinate transformation the preceding argument also applies if s m and are diagonal in the same basis and of course m and can always be chosen that way numerical experiments show that it is possible to increase the rate of convergence to equilibrium even further by choosing m and nondiagonally with respect to s although only by a small margin a clearer understanding of this is a topic of further investigation i numerical experiments diffusion bridge sampling numerical scheme in this section we introduce a splitting scheme for simulating the perturbed underdamped langevin dynamics given by equation in the unpertubed case when the side can be decomposed into parts a b and c according to qt m pt d dt dt pt qt m dwt z z z a b o o refers to the part of the dynamics whereas a and b stand for the momentum and position updates respectively one particular splitting scheme which has proven to be efficient is the baoab scheme see and references therein the string of letters refers to the order in which the different parts are integrated namely pn qn qn m p exp m i n i q m we note that many different discretisation schemes such as aboba oabao etc are viable but that analytical and numerical evidence has shown that the has particularly good properties to compute ergodic averages with respect to observables motivated by this we introduce the following perturbed scheme introducing additional integration steps between the a b and o parts pn qn qn m p exp m m i n m where refers to fourth order integration of the ode q q up until time we remark that the is linear and can therefore be included in the opart without much computational overhead clearly other discretisation schemes are possible as well for instance one could use a symplectic integrator for the ode noting that it is of hamiltonian type however since v as the hamiltonian for is not separable in general such a symplectic integrator would have to be implcit moreover and could be merged since commutes with in this paper we content ourselves with the above scheme for our numerical experiments remark the aformentioned schemes lead to an error in the approximation for f since the invariant measure is not preserved exactly by the numerical scheme in practice the baoabscheme can therefore be accompanied by an metropolis step as in leading to an unbiased estimate of f albeit with an 
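Returning briefly to the one-dimensional preconditioning argument given at the start of this section, the claim that the exponential decay rate is maximised at the critical friction can be checked numerically. The following short Python sketch is ours and not part of the original text; it computes the decay rate min_i(-Re lambda_i(B)) of the drift matrix B = [[0, 1/m], [-s, -gamma/m]] over a grid of friction values and confirms that the maximiser is close to gamma = 2*sqrt(s*m), where the rate equals sqrt(s/m). The values of s and m are arbitrary.

```python
import numpy as np

def decay_rate(s, m, gamma):
    """Exponential decay rate min(-Re(lambda)) of the 2x2 drift matrix
    B = [[0, 1/m], [-s, -gamma/m]] of the one-dimensional dynamics."""
    B = np.array([[0.0, 1.0 / m], [-s, -gamma / m]])
    return min(-np.linalg.eigvals(B).real)

if __name__ == "__main__":
    s, m = 4.0, 1.0
    gammas = np.linspace(0.1, 10.0, 2_000)
    rates = [decay_rate(s, m, g) for g in gammas]
    g_best = gammas[int(np.argmax(rates))]
    print(g_best, 2.0 * np.sqrt(s * m))   # both close to 2*sqrt(s*m)
    print(max(rates), np.sqrt(s / m))     # both close to sqrt(s/m)
```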
inflated variance in this case after every rejection the momentum variable has to be flipped p in order to keep the correct invariant measure we note here that our perturbed scheme can be metropolized in a similar way by flipping the matrices and after every rejection and and using an appropriate and integrator for the dynamics given by implementations of this idea are the subject of ongoing work diffusion bridge sampling to numerically test our analytical results we will apply the dynamics to sample a measure on path space associated to a diffusion bridge specifically consider the sde p dxs xs ds dws with xs rn and the potential u rn r obeying adequate growth and smoothness conditions see section for precise statements the law of the solution to this sde conditioned on the events x and x is a probability measure on rn which poses a challenging and important sampling problem especially if u is multimodal this setting has been used as a test case for sampling probability measures in high dimensions see for example and for a more detailed introduction including applications see and for a rigorous theoretical treatment the papers in the case u it can be shown that the law of the conditioned process is given by a gaussian measure with mean zero and precision operator s on the sobolev space h rd equipped with appropriate boundary conditions the general case can then be understood as a perturbation thereof the measure is absolutely continuous with respect to with derivative exp where g x s ds x and x x we will make the choice which is possible without loss of generality as explained in remark leading to dirichlet boundary conditions on for the precision operator furthermore we choose and discretise the ensuing according to g x sn sn in an equidistant way with stespize sj functions on this grid are determined by the values x x sn xn recalling that x x by the dirichlet boundary conditions we discretise the functional as d xn x g xi d x u xi u xi such that its gradient is given by i xi u xi u xi i the discretised version a of the on is given by a following the discretised target measure b has the form e dx z with x ax x rd in the following we will consider the case n with potential u r r given by u x and set to test our algorithm we adjust the parameters m and according to the recommended choice in the gaussian case v x x m s s where we take s a as the precision operator of the gaussian target we will consider the linear observable x l x with l and the quadratic observable x in a first experiment we adjust the perturbation and via also to the observable according to algorithm the dynamics is integrated using the splitting scheme introduced in section with a stepsize of over the time interval t with t furthermore we choose initial conditions and introduce a time we take the estimator to be t f qt dt f t we compute the variance of the above estimator from n realisations and compare the results for different choices of the friction coefficient and of the perturbation strength the numerical experiments show that the perturbed dynamics generally outperform the unperturbed dynamics independently of the choice of and both for linear and quadratic observables one notable exception is the behaviour of the linear observable for small friction see figure where the asymptotic variance initially increases for small perturbation strengths however this does not contradict our analytical results since the small perturbation results from section generally require be sufficiently big for example in theorem we remark 
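To make the two preceding ingredients concrete, the BAOAB-type splitting and the discretised diffusion-bridge target, the following Python sketch assembles a toy version of the experiment. It is our own minimal illustration, not the authors' code: the double-well potential, the grid size and all parameter values are hypothetical stand-ins, the path functional is taken to be the standard Girsanov form Phi = (1/2)(U')^2 - (1/2)U'', unit mass and inverse temperature one are assumed, and only the unperturbed BAOAB step is shown (the perturbed scheme additionally interleaves the exact flows of the antisymmetric drift terms between the sub-steps).

```python
import numpy as np

# --- discretised diffusion-bridge target (toy version of the setup above) ---
n_grid = 64                      # grid cells on [0, 1]; x(0) = x(1) = 0 (Dirichlet)
ds = 1.0 / n_grid
d = n_grid - 1                   # number of interior points

def U(x):   return 0.25 * (x ** 2 - 1.0) ** 2        # hypothetical double well
def dU(x):  return x ** 3 - x
def d2U(x): return 3.0 * x ** 2 - 1.0

def Phi(x):  return 0.5 * dU(x) ** 2 - 0.5 * d2U(x)  # Girsanov-type path functional
def dPhi(x): return dU(x) * d2U(x) - 3.0 * x         # Phi' = U'U'' - 0.5*U''', U''' = 6x

# precision of the Brownian-bridge reference measure: second differences with
# Dirichlet boundary conditions, scaled so that x.(A x) approximates int (dx/ds)^2
A = (2.0 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)) / ds

def grad_potential(x):
    """Gradient of the negative log-density 0.5 * x.(A x) + ds * sum_i Phi(x_i)."""
    return A @ x + ds * dPhi(x)

# --- one unperturbed BAOAB step (unit mass, scalar friction, beta = 1) -------
def baoab_step(q, p, h, gamma, rng):
    p = p - 0.5 * h * grad_potential(q)                      # B: half momentum kick
    q = q + 0.5 * h * p                                      # A: half position drift
    c1 = np.exp(-gamma * h)                                  # O: exact OU update
    p = c1 * p + np.sqrt(1.0 - c1 ** 2) * rng.standard_normal(q.shape)
    q = q + 0.5 * h * p                                      # A: half position drift
    p = p - 0.5 * h * grad_potential(q)                      # B: half momentum kick
    return q, p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, p = np.zeros(d), np.zeros(d)
    h, gamma, n_steps, burn_in = 0.01, 1.0, 100_000, 10_000
    obs = []
    for k in range(n_steps):
        q, p = baoab_step(q, p, h, gamma, rng)
        if k >= burn_in:
            obs.append(q[d // 2])           # e.g. the linear observable x(1/2)
    print(np.mean(obs), np.std(obs))
```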
here that this condition, while necessary for the theoretical results from the earlier section, is not a very advisable choice in practice, at least in this experiment, since the figures clearly indicate that the optimal friction lies at a moderate finite value. Interestingly, the problem of choosing a suitable value for the friction coefficient becomes mitigated by the introduction of the perturbation: while the performance of the unperturbed sampler depends quite sensitively on the friction, the asymptotic variance of the perturbed dynamics is a lot more stable with respect to variations of it. In the regime of growing values of the perturbation strength, the experiments confirm the results from the earlier sections: the asymptotic variance approaches a limit that is smaller than the asymptotic variance of the unperturbed dynamics. As a final remark, we report our finding that the performance of the sampler for the linear observable is qualitatively independent of the particular choice of the perturbation matrices, as long as they are adjusted according to the compatibility condition above. This result is in alignment with the proposition which predicts good properties of the sampler for antisymmetric observables. In contrast to this, a judicious choice of the perturbation is critical for quadratic observables; in particular, applying the algorithm significantly improves the performance of the perturbed sampler in comparison to choosing the perturbation arbitrarily.
Fig.: standard deviation of the estimator of f for a linear observable, as a function of (a) the friction and (b) the perturbation strength.
Fig.: standard deviation of the estimator of f for a quadratic observable, as a function of (a) the perturbation strength and (b) the friction.
Outlook and future work. A new family of Langevin samplers was introduced in this paper. These new SDE samplers consist of perturbations of the underdamped Langevin dynamics, which is known to be ergodic with respect to the canonical measure, where auxiliary drift terms in the equations for both the position and the momentum are added in such a way that the perturbed family of dynamics is ergodic with respect to the same canonical distribution. These new Langevin samplers were studied in detail for Gaussian target distributions, where it was shown, using tools from the spectral theory of differential operators, that an appropriate choice of the perturbations in the equations for the position and momentum can improve the performance of the Langevin sampler, at least in terms of reducing the asymptotic variance. The performance of the perturbed Langevin sampler applied to non-Gaussian target densities was tested numerically on the problem of diffusion bridge sampling. The work presented in this paper can be improved and extended in several directions. First, a rigorous analysis of the new family of Langevin samplers for non-Gaussian target densities is needed; the analytical tools developed in the works cited above can be used as a starting point. Furthermore, the study of the actual computational cost and its minimization by an appropriate choice of the numerical scheme and of the perturbations in position and momentum would be of interest to practitioners. In addition, the analysis of our proposed samplers can be facilitated by using tools from symplectic and differential geometry. Finally, combining the new Langevin samplers with existing variance reduction techniques, such as zero-variance MCMC and manifold MCMC, can lead to sampling schemes that can be of interest to practitioners, in particular in molecular dynamics simulations. All these topics are currently under investigation. Acknowledgments. AD was
supported by the epsrc under grant no nn is supported by epsrc through a roth departmental scholarship gp is partially supported by the epsrc under grants no and part of the work reported in this paper was done while nn and gp were visiting the institut henri during the trimester program stochastic dynamics out of equilibrium the hospitality of the institute and of the organizers of the program is greatly acknowledged a estimates for the bias and variance proof of lemma suppose that pt satisfies let be an initial distribution of xt such that slightly abusing notation we denote by pt the law of xt given then and h pt v h kh kh where denotes the of pt since f is assumed to be bounded we immediately obtain f xt f c kf and so for f f c kf u t as required proof of lemma given f for fixed t t f pt f x dt x then we have that d l and f pt f moreover pt f f dt t c kf so that t is a cauchy sequence in converging to f f dt t f pt f dt since l is closed and t in it follows that d l and f f moreover kpt f f dt kf f where c dt since we assume that f is smooth the coefficients are smooth and l is hypoelliptic then f f implies that c rd and thus we can apply s formula to xt to obtain t t f xt f dt xt xt xt dwt t t t one can check that the conditions of theorem hold in particular the following central limit theorem follows t d xt xt dwt n as t t by theorem the generator l has the form l where it follows that hl f first suppose that then xt is a stationary process and so xt t as t from which follows more generally suppose that where x h x x for h if f then by x f pt f x dt kf pt kt v dt c kf so that therefore t p xt as t and so holds in this case similarly b proofs of section proof of lemma we first note that l in can be written in the sum of squares form l where d ak m p v v m p m p and ak ek k here ek d denotes the standard euclidean basis and is the unique positive definite square root of the matrix the relevant commutators turn out to be ak ek m k because has full rank on rd it follows that span ak k d span k d since and span ek m span aj j d m k d k d span k d it follows that span ak k d ak k d r so the assumptions of s theorem hold u t the overdamped limit the following is a technical lemma required for the proof of proposition lemma assume the conditions from proposition then for every t there exists c such that sup e proof using variation of constants we can write the second line of as e m t e m v ds t e m dws we then compute m e sup sup e t e sup t e m v ds dws e sup e t t m m e sup e e v ds t m t m dws e sup e e t t m m e sup e v ds dws e m clearly the first term on the right hand side of is bounded for the second term observe that e sup t e m v ds sup t e m ds since v c td and therefore v is bounded by the basic matrix exponential estimate m for suitable c and we see that can further be bounded by c sup t e ds c so this term is bounded as well the third term is bounded by the inequality and a similar argument to the one used for the second term applies the cross terms can be bounded by the previous ones using the inequality and the elementary fact that sup ab sup a sup b for a b so the result follows u t proof of proposition equations can be written in integral form as and t ds t t t m ds m ds v ds w t where the first line has been multiplied by the matrix combining both equations yields t v ds t v qs ds wt now applying lemma gives the desired result since the above equation differs from the integral version of only by the term which vanishes in the limit as u t hypocoercivity the objective of 
this section is to prove that the perturbed dynamics converges to equilibrium exponentially fast that the associated semigroup pt satisfies the estimate we we will be using the theory of hypocoercivity outlined in see also the exposition in section we provide a brief review of the theory of hypocoercivity let h be a real separable hilbert space and consider two unbounded operators a and b with domains d a and d b respectively b antisymmetric let s h be a dense vectorspace such that s d a d b the operations of a and b are authorised on the theory of hypocoercivity is concerned with equations of the form h lh and the associated semigroup pt generated by l let us also introduce the notation k ker b a and b m p v v m p it with the choices h turns out that l is the flat of the generator l given in and therefore equation is the equation associated to the dynamics in many situations of practical interest the operator a is coercive only in certain directions of the state space and therefore exponential return to equilibrium does not follow in general in our case for instance the noise acts only in the and therefore relaxation in the can not be concluded a priori however intuitively speaking the noise gets transported through the equations by the hamiltonian part of the dynamics this is what the theory of hypocoercivity makes precise under some conditions on the interactions between a and b encoded in their iterated commutators exponential return to equilibrium can be proved to state the main abstract theorem we need the following definitions definition coercivity let t be an unbounded operator on h with domain d t and kernel assume that there exists another hilbert space continuously and densely embedded in k the operator is said to be if ht h for all h k d t definition an operator t on h is said to be relatively bounded with respect to the operators tn if the intersection of the domains tj is contained in d t and there exists a constant such that kt hk hk ktn hk holds for all h d t we can now proceed to the main result of the theory theorem theorem assume there exists n n and possibly unbounded operators cn rn zn such that a cj b j n cn and for all k n a a ck is relatively bounded with respect to cj and cj a b ck is relatively bounded with respect to i and cj c rk is relatively bounded with respect to cj and cj a and d there are positive constants such that i zj p furthermore assume that n cj cj is for some then there exists c and such that kpt where h is the subspace associated to the norm v u n u x kck and k ker a b remark property is called hypocoercivity of l on k k if the conditions of the above theorem hold we also get a regularization result for the semigroup see theorem theorem assume the setting and notation of theorem then there exists a constant c such that for all k n and t the following holds kck pt hk c khk h proof of theorem we pove the claim by verifying the conditions of theorem recall that a and b m p v v m p a quick calculation shows that p so that indeed a m p ltherm and a b we make the choice n and calculate the commutator a b let us now set and such that holds for j note that a a a and furthermore we have that a we now compute b v v and choose b and recall that by assumption of theorem with those choices assumptions a d of theorem are fulfilled indeed assumption a holds trivially since all relevant commutators are zero assumption b follows from the fact that a is clearly bounded relative to i to verify assumption c let us start with the case k it is necessary to show that is 
bounded relatively to a and this is obvious since the appearing in can be controlled by the pderivatives appearing in a for k a similar argument shows that v v is bounded relatively to a and because of the assumption that v is bounded note that it is crucial for the preceding arguments to assume that the matrices and m have full rank assumption d is trivially satisfied since and are equal to the identity it remains to show that t n x cj is for some it is straightforward to see that the kernel of t consists of constant functions and therefore b b ker t hence of t amounts to the functional inequality b b h b the above is equivalent since the transformation q p m q p is bijective on h to b b h b since b n m coercivity of t boils down to a inequality a inequality for for as in assumption this concludes the proof of the hypocoercive decay estimate clearly the b and therefore it follows that there exist abstract from is equivalent to the sobolev norm h constants c and such that kpt f kh kf kh this is not true automatically since a a stands for the array aj ak jk b k where k ker t consists of constant functions let us now lift this estimate to b for all f h there exist a constant such that x khkh kck b f h therefore theorem implies f kh b f b it holds that for t and a possibly different constant let us now assume that t and f l kpt f kpt f kh f kh f kh where the last inequality follows from now applying and gathering constants results in kpt f kf b f note that although we assumed t the above estimate also holds for t although possibly with a different constant c since kpt is bounded on u t c asymptotic variance of linear and quadratic observables in the gaussian case we begin by deriving a formula for the asymptotic variance of observables of the form f q q kq l q tr k d b f the following calculations with k sym and l r note that the constant term is chosen such that are very much along the lines of section since the hessian of v is bounded and the target measure is gaussian assumption is satisfied and exponential decay of the semigroup pt as in follows by theorem according to lemma the asymptotic variance is then given by f where is the solution to the poisson equation f recall that b l is the generator as in where for later convenience we have defined a b t i r in the sequel we will solve analytically first we introduce the notation k and l such that by slight abuse of notation f is given by f x x l x tr by uniqueness up to a constant of the solution to the poisson equation and linearity of l g has to be a quadratic polynomial so we can write g x x cx d x tr c where c rsym and d notice that c can be chosen to be symmetrical since x cx does not depend on the antisymmetric part of c plugging this ansatz into yields x x a d trp c x l x tr where x trp c cii denotes the trace of the momentum component of comparing different powers of x this leads to the conditions ac cat ad l trp c tr note that will be satisfied eventually by existence and uniqueness of the solution to then by the calculations in the asymptotic variance is given by tr c d proof of proposition according to and the asymptotic variance satisfies tr c where the matrix c solves ac cat and a is given as in we will use the notation c t and the abbreviations c c c and c let us first determine c the solution to the equation i c c i k this leads to the following system of equations t k t t note that equations and are equivalent by taking the transpose plugging into yields adding and together with and leads to k solving we obtain k so that c k 
k k k k taking the of and setting yields c a c c a t c t notice that c c t j c c k j jk kj with computations similar to those in the derivation of or by simple substitution equation can be solved by jk k j k j c kj k j k j we employ a similar strategy to determine c taking the in equation setting and inserting c and a as in and leads to the equation kj i i c c jk k j which can be solved by k j kj k j c kj k j k j note that tr c tr k and so tr k tr k j k since clearly tr k j k tr kjk tr jk in the same way it follows that proving taking the second of and setting yields c a c c a t t employing the notation c and noticing that a using we calculate j k j j k j k j t a c c a kj k j j as before we make the ansatz c t leading to the equations j k j j k j k j t kj k j j t t again and are equivalent by taking the transpose plugging into and combing with or gives j k kj jkj now tr jkjk tr j k tr jkjk gives the first part of we proceed in the same way to determine analogously we get kj k j j t a c c a jkj j k j k j j j k j tr k solving the resulting linear matrix system similar to results in kj j k jkj leading to tr j k tr jkjk to compute the cross term we take the mixed derivative of and set to arrive at tr k c c a c c a t c t c t using and we see that c c c t c t j k j jkj kj j k j jkj k j j j k j k j j j k j the ensuing linear matrix system yields the solution j k j jkj leading to tr k tr j k tr jkjk u t this completes the proof proof proof of proposition by and the function satisfies l recall the following formula for blockwise inversion of matrices using the schur complement u v u v x w w x provided that x and u v x w are invertible using this we obtain l taking derivatives setting and using the fact that j t leads to the desired result u t lemma the following holds a for b let j t and k k t then tr jkjk tr j k furthermore equality holds if and only if j k proof to show a we note that the function f has a unique global maximum on at with f so the result follows for b we note that j k t j k and that j k is symmetric and nonnegative definite we can write x tr j k i with denoting the real eigenvalues of j k from this it follows that tr j k with equality if and only if j k now expand tr j k tr jkjk tr j k which implies the advertised claim u t d orthogonal transformation of tracefree symmetric matrices into a matrix with zeros on the diagonal d given a symmetric matrix k sym with tr k we seek to find an orthogonal matrix u o r such that u ku t has zeros on the diagonal this is a crucial step in algorithms and and has been addressed in various places in the literature see for instance or chapter section for the convenience of the reader in the following we summarize an algorithm very similar to the one in since k is symmetric there exists an orthogonal matrix o rd such that diag now the algorithm proceeds iteratively orthogonally transforming this matrix into one with the first diagonal entry vanishing then the first two diagonal entries vanishing etc until after d steps we are left with a matrix with zeros on the diagonal starting with assume that otherwise proceed with since p tr k tr there exists j d such that and have opposing signs we now apply a rotation in the to transform the first diagonal entry into zero more specifically let j sin j o rd sin cos with arctan q we then have now the same procedure can be applied to the second diagonal entry leading to the matrix with iterating this process we obtain that ud udt has zeros on the diagonal so ud o rd is the required orthogonal transformation 
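For concreteness, the rotation procedure just described and the earlier algorithm for quadratic observables can be combined into a short numerical sketch. The following Python code is our own minimal illustration under the stated assumptions (K symmetric, a_i pairwise distinct); the function names are ours, the choice a_i = i is arbitrary, and the construction is only given up to an overall sign, which does not affect its use.

```python
import numpy as np

def zero_diagonal_transform(K, tol=1e-12):
    """Return an orthogonal U such that U @ K @ U.T has zero diagonal.

    K must be symmetric and trace-free; the construction follows the
    iterative plane-rotation argument sketched above."""
    d = K.shape[0]
    lam, Q = np.linalg.eigh(K)        # K = Q diag(lam) Q.T
    U = Q.T                           # so U K U.T = diag(lam)
    M = np.diag(lam).astype(float)
    for i in range(d - 1):
        if abs(M[i, i]) < tol:
            continue
        # the trailing diagonal entries sum to -M[i, i] (the trace is zero),
        # so one of them has the opposite sign
        j = next(k for k in range(i + 1, d) if M[i, i] * M[k, k] < 0)
        theta = np.arctan(np.sqrt(-M[i, i] / M[j, j]))
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(d)
        G[[i, j], [i, j]] = c
        G[i, j], G[j, i] = s, -s
        M = G @ M @ G.T               # zeros out the (i, i) entry
        U = G @ U
    return U

def optimal_antisymmetric_perturbation(K):
    """Sketch of the algorithm for quadratic observables f(q) = q . K q.

    Only the traceless part of K is treated; the trace part is not affected
    by any antisymmetric perturbation."""
    d = K.shape[0]
    K0 = K - np.trace(K) / d * np.eye(d)       # traceless part of K
    U = zero_diagonal_transform(K0)
    B = U @ K0 @ U.T                           # symmetric, zero diagonal
    a = np.arange(1.0, d + 1.0)                # any pairwise distinct numbers
    Jt = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if i != j:
                Jt[i, j] = B[i, j] / (a[i] - a[j])
    return U.T @ Jt @ U                        # antisymmetric by construction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    K = (A + A.T) / 2
    K0 = K - np.trace(K) / 5 * np.eye(5)
    U = zero_diagonal_transform(K0)
    J = optimal_antisymmetric_perturbation(K)
    print(np.max(np.abs(np.diag(U @ K0 @ U.T))))   # ~ 0: zero-diagonal check
    print(np.max(np.abs(J + J.T)))                 # ~ 0: antisymmetry check
```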
references arnold and erb sharp entropy decay for hypocoercive and equations with linear drift alrachid mones and ortner some remarks on preconditioning molecular dynamics arxiv preprint bass diffusions and elliptic operators springer science business media bennett mass tensor molecular dynamics journal of computational physics bakry gentil and ledoux analysis and geometry of markov diffusion operators volume springer science business media bhatia matrix analysis volume of graduate texts in mathematics new york beskos pinski and stuart hybrid monte carlo on hilbert spaces stochastic process beskos roberts stuart and voss mcmc methods for diffusion bridges stoch beskos and stuart mcmc methods for sampling function space in iciam international congress on industrial and applied mathematics pages eur math ceriotti bussi and parrinello langevin equation with colored noise for constanttemperature molecular dynamics simulations physical review letters cattiaux and guillin central limit theorems for additive functionals of ergodic markov diffusions processes alea a duncan lelievre and pavliotis variance reduction using nonreversible langevin samplers journal of statistical physics dolbeault mouhot and schmeiser hypocoercivity for linear kinetic equations conserving mass trans amer math ethier and kurtz markov processes wiley series in probability and mathematical statistics probability and mathematical statistics john wiley sons new york characterization and convergence engel and nagel semigroups for linear evolution equations volume of graduate texts in mathematics new york with contributions by brendle campiti hahn metafune nickel pallara perazzoli rhandi romanelli and schnaubelt girolami and calderhead riemann manifold langevin and hamiltonian monte carlo methods stat soc ser b stat with discussion and a reply by the authors gelman j carlin stern dunson vehtari and rubin bayesian data analysis texts in statistical science series crc press boca raton fl third edition hwang and sheu accelerating gaussian diffusions ann appl hwang and sheu accelerating diffusions ann appl roger horn and charles johnson matrix analysis cambridge university press cambridge second edition hwang normand and wu variance reduction for diffusions stochastic process hairer stuart and voss analysis of spdes arising in path sampling ii the nonlinear case ann appl hairer stuart and voss sampling conditioned diffusions in trends in stochastic analysis volume of london math soc lecture note pages cambridge univ press cambridge hairer stuart voss and wiberg analysis of spdes arising in path sampling i the gaussian case commun math joulin and ollivier curvature concentration and error estimates for markov chain monte carlo ann kazakia orthogonal transformation of a trace free symmetric matrix into one with zero diagonal elements internat engrg kliemann recurrence and invariant measures for degenerate diffusions the annals of probability pages komorowski landim and olla fluctuations in markov processes volume of grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences springer heidelberg time symmetry and martingale approximation liu monte carlo strategies in scientific computing springer science business media leimkuhler and matthews molecular dynamics volume of interdisciplinary applied mathematics springer cham with deterministic and stochastic numerical methods nier and pavliotis optimal linear drift for the convergence to equilibrium of a diffusion stat rousset and stoltz free energy 
computations imperial college press london a mathematical perspective and stoltz partial differential equations and stochastic methods in molecular dynamics acta ma chen and fox a complete recipe for stochastic gradient mcmc in advances in neural information processing systems pages metafune pallara and priola spectrum of operators in lp spaces with respect to invariant measures funct markowich and villani on the trend to equilibrium for the equation an interplay between physics and functional analysis mat contemp matthews weare and leimkuhler ensemble preconditioning for markov chain monte carlo simulation ottobre and pavliotis asymptotic analysis for the generalized langevin equation nonlinearity ottobre pavliotis and exponential return to equilibrium for hypoelliptic quadratic systems funct ottobre pavliotis and some remarks on degenerate hypoelliptic operators j math anal ottobre pillai pinski and andrew stuart a function space hmc algorithm with second order langevin diffusion limit bernoulli pavliotis stochastic processes and applications diffusion processes the and langevin equations volume springer pavliotis and stuart white noise limits for inertial particles in a random field multiscale model electronic pavliotis and stuart analysis of white noise limits for stochastic systems with two fast relaxation times multiscale model electronic and spiliopoulos irreversible langevin samplers and variance reduction a large deviations approach nonlinearity and spiliopoulos variance reduction for irreversible langevin samplers and diffusion on graphs electron commun no robert and casella monte carlo statistical methods springer science business media villani hypocoercivity number american mathematical wu hwang and chu attaining the optimal gaussian diffusion acceleration stat
divergences measures amadou diadie ba gane samb lo aug lerstad gaston berger lsa pierre et marie curie france introduction in this paper we deal with divergence measures estimation using both wavelet and classical probability density functions let p be a class of two probability measures on rd a divergence measure on p is an application d r q l d q l such that d q q for any q a divergence measure then is not necessarily symmetrical and it does neither have to be a metric to better explain our concern let us intoduce some of the most celebrated divergence measures most of them are based on probability density functions so let us suppose that all q p have fq with respect to a measure on rd b rd that is usually the lebesgues measure we have the measure z fq x fl x x q l rd the family of renyi divergence measures indexed by more known under the name of dr q l log x x x rd the family of tsallis divergence measures indexed by also known under the name of dt q l fq x fl x x rd and finally the divergence measure z fq x log fq x x x dkl q l rd the latter the measure may be interpreted as a limit case of both the renyi s family and the tsallis one by letting as well for near the tsallis family may be seen as derived from a fisrt order expansion dr q l based on the first order expansion of the logarithm function in the neigborhood of the unity although we are focusing on the aforementioned divergence measures we have to attract the attention of the reader that there exist quite a few number of them let us cite for example the ones denamed as alisilvey distance or f jeffrey s divergence see chernoff divergence etc according to there is more than a dozen of different divergence measures that one can find in the literature divergences measures consistency bands before coming back to our divergence measures of interest we want to highlight some important applications of them indeed divergence has proven to be useful in applications let us cite a few of them a it may be as a similarity measure in image registration or multimedia classification see it is also applicable as a loss function in evaluating and optimizing the performance of density estimation methods see b the estimation of divergence between the samples drawn from unknown distributions gauges the distance between those distributions divergence estimates can then be used in clustering and in particular for deciding whether the samples come from the same distribution by comparing the estimate to a threshold c divergence estimates can also be used to determine sample sizes required to achieve given performance levels in hypothesis testing d divergence gauges how differently two random variables are distributed and it provides a useful measure of discrepancy between distributions in the frame of information theory the key role of divergence is well known e there has been a growing interest in applying divergence to various fields of science and engineering for the purpose of estimation classification etc f divergence also plays a central role in the frame of large deviations results including the asymptotic rate of decrease of error probability in binary hypothesis testing problems the reader may find more applications descriptions in the following papers we may see two kinds of problems we encounter when dealing with these objects first the divergence measures may not be finite on the whole support of the distributions these two remarks apply to too many divergence measures both these problems are avoided with some boundedness assumption as in 
singh et al and in krishnamurthy et al in the case where all q p have fq with respect to a measure on rd b rd these authors suppose that there exist two finite numbers such that f q f l so that the quantities px py for example are finite in the expressions of and sures and that the is also finite we will follow these authors by adopting the assumption throughout this paper divergence measures as tests the divergence measures may be applied to two statistical problems among others first it may be used as a problem like that let a sample from x with an unkown probability distribution px and we want to test the hypothesis that px is equal to a known and fixed probability for example jager et al in proposed to be the uniform probability distribution on divergences measures consistency bands theoritically if we want to test the null hypothesis f versus f we have to use any of general test statistic fn x x then our test statistic is of the form z d fn fn x x dx then we can answer this question by estimating a divergence measure d px by the estimator n d px based on the sequences of empirical probabilities n n px n i n from there establishing an asymptotic theory of d px d px is necessary to conclude divergence measures as a comparison tool problem as a comparison tool for two distributions we may have two samples and wonder whether they come from the same probability measure here we also may two different cases in the first we have two independent samples and respectively n n from a random variable x and y here the empirical divergence d px py is the natural estimator of d px py on which depends the statistical test of px py but the data may aslo be paired x y that is xi and yi are measurements of the same case i in that case testing the equality of the margins px py should be based on the empirical probabilities from the couple x y that is n n p x y xi yi n related work krisnamurthy et al singh and poczos studied mainly the independent case of the two distributions comparaison they both used divergence measures based on probability density functions and concentrated of and reyni singh and poczos proposed divergence estimators that achieve the parametric convergence of rate s where n s depends on the smoothness s of the densities f and g both in a holder class of smothness s they showed that and i h n n e dt px py dt px py o h i n n e dr px py dr px py o where and min singh and poczos and krishnamurthy et al each proposed divergence estimators that achieve the parametric convergence rate o under weaker conditions than those given in divergences measures consistency bands krishnamurthy et al proposed three estimators for the divergence measures px py dr px py and for dt px py the plugging pl linear lin and the quadratic qd one they showed that s n n pl e dt px py dt px py o and n n lin e dt px py dt px py n n qd e dt px py dt px py with the quadratic estimator n n qd e dr px py dr px py n n qd e dt px py dt px py c c c c poczos and jeff considered two samples not necessarily with the same size and used the neighbour knn based density estimators they showed that if k then reyni estimator est asymptotically unbiaised that is m n lim e dr px py dr px py n and it is consistent for norm that is n m lim e dr px py dr px py n all this is under conditions on the densities fq and fl in liu et al and worked with densities in holder classes whereas our work applies for densities in the bessov class in any case the asymptotic distributions of the estimators in are currently unknown but in our view this case 
should rely on the available data so that using the same sample size may lead to a reduction to apply their method one should take the minimum of the two sizes and then loose n m information we suggest to come back to a general case and then study the asymptotics of d px py based on samples xn and ym as for the fitting approach we may cite hamza et al who used modern techniques of mason and on consistency bounds for s kernel estimators but these authors hamza and in the current version of their work did not address the existence problem of the divergence measures we will seize the opportunity of these papers to correct this also for the fitting case and when using and measures we do not have n symmetry so we have to deal with the estimation of both d px by d px and that of d px n by d px and decided which of these cases is better divergences measures consistency bands as to the paired case we are not aware of works on this yet this approach is very important and should be addressed this paper will be devoted to a general study the estimation the and measures in the three level fitting independent comparison and paired comparison we will use empirical estimations of the density functions both by the parzen estimator and the wavelet ones the main novelty here resides in the wavelet approach when using the parzen statistics our main tool will be modern techniques of mason and on consistency bounds for s kernel for the wavelet approch we will mainly back on the and nickl paper since the tools we are using do not have the level of developpement our results for the parzen scheme will use distributions while those pertaining to the wavelet frame are set for univariate distributions but we will have to give a precise account of wavelet theory and its applications to statistical estimation using hardle et al the paper will be organized as follows in section we will describe how to use the density estimations both for parzen and wavelets as well as the statements of the main hypothesis as for wavelets a broader account will be given in appendix in section we deal with the fitting questions section is devoted to independent distribution comparison finally in section we deal with margins distribution comparison in all sections and we will establish strong efficiency and central limit theorems under standards assumptions on the densities fq x fl x on the scale function and on the wavelet kernel k formalized in the sequel we establish the following properties a we define the linear wavelet density estimators and establish the consistency of these density estimators b we establish the asymptotic consistency showing that theorem b when we prove that the estimator is asymptotically normal theorem c we derive d we also prove e lastly we prove organization of the paper plan results we are going to establish general results both for consistency and asymptotic normality next results for particular divergences measures will follow as corollaries divergences measures consistency bands general conditions let j f g be a functional of two densities functions f and g satisfying assumption below of the form j f g z f x g x dx d where s t is a function of s t of class c we adopt the following notations with respect to the partial derivatives s t s t s t s t and s t s t s t s t s t s t we require the following general conditions s t the following integrals are finite z n o f x g x f x g x dx for any measurable sequences of functions x x x and x of x d uniformly converging to zero that is max then z z and z sup n i x j 
n x o z f x x g x dx f x g x dx z f x g x x dx f x g x dx f x n x g x n x dx z f x g x dx remark these results may result from the dominated convergence theorem or the monotone convergence theorem or from other limit theorems we may either express conditions under which these results hold true on the general function but we choose here to state the final results and next to check them for particular cases on which reside our real interests our general results concern the estimations of j f g in a one sample see theorem and two samples problems see theorem in both case we use the linear wavelet estimators of f and g denoted fn and gn and defined in from there we mainly use results for and nickl under their conditions we define an kfn f bn kgn cn an bn where stands for x divergences measures consistency bands wavelet setting the wavelet setting involves two functions and in r such that n o k k j k be a orthonormal basis of r the associated kernel function of the wavelets and is defined by kj x y k x y j n where k x y p x k y k x y for a mesurable function we define kj h x assuming the following r kj x y h y dy assumption s and are bounded and have compact support and either i the father wavelet r has weak derivatives up to order s in lp r or ii has s vanishing moments xm x dx for all m s assumption is of bounded for some p and vanishes on c for some assumption the resolution level j jn is such that with this assumption one has jn and r jn n jn log log n s log n as n log as n sup jn and these conditions allow the use of results of definition given two independent samples with size n xn f and yn g respectively from a random variable x and y and absolute continuous law px and py on r straighforward wavelets estimators of f and g are defined by n fn x pn x kjn x and kj x xi n n n kj x yi gn x pn y kjn x n n t in the sequel we suppose the densities f and g belong to the besov space r see h r khks sup h sup sup h where h the function r r h x x k dx and h r r h x x k dx are the wavelet coefficients of t the spaces r are the spaces which contain the classical spaces given these definitions we now describe how we will use the wavelet approach divergences measures consistency bands t it is remarquable from theorem in that if the densities f and g belong to r satisfies and satisfy t then an bn and cn are all of them s log n o log almost surely and converge all to zero at this rate with t t in order to establish the asymptotic normality of the divergences estimators we need to recall some facts about kernels wavelets t for h r the theorem below provides the asymptotic normality of n z fn x f x h x dx necessary for setting the asymptotic normality of divergence measure provided the finitness of px kjn h x t theorem under assumption and and if h r then we have n z fn x f x h x dx n as n where px kjn h x px kjn h x the symbol denotes the convergence in law r px h h x f x dx denotes the expection of the measurable function the proof of this theorem is postpooned to subsection main results in the sequel j f g is a functional of two densities functions f and g satisfying assumption and defined by j f g z f x g x dx d where s t is a function of s t of class c define the functions and by x f x g x and x f x g x and the constants and by z x dx and suppose that and are both finites z x dx divergences measures consistency bands one side estimation suppose that either we have a sample xn with unknown f and a known g and we want to study the limit behavior of j fn g or we have a sample yn with unknown g and a known 
f and we want to study the limit behavior of j f gn fn or gn are as in or in theorem under assumption and we have consistency lim sup fn g j f g an and lim sup f gn j f g bn where an and bn are as in asymptotic normality n j fn g j f g n as n n j f gn j f g n as n and where px kjn x px kjn x and py kjn y py kjn y two sides estimation suppose that we have two samples xn and yn with respectively unknown f and g and we want to study the limit behavior of j fn gn theorem under assumption and we have lim sup j fn gn j f g cn and n j fn gn j f g where n as n and are as in and the proofs are given in section right now we are going to apply these results to particular divergence measures estimations we will have to check the conditions and divergences measures consistency bands particular cases results for and divergences measures will follow as corollaries since they are particular cases of j f g to ensure the general conditions and we begin by giving the main assumption on the densities f and assumption there exists a compact k r containing the supports of the densities f and g and such that such that f x g x throughout this subsection we will use the assumption the integrales are on k and the constantes are integrables we use the dominate convergence theorem based on this remark meaning that with assumption then the conditions and are satisfied in the following the divergence measures the functions and should be updated in each cases in the t same way that and since they depend on the bessov functions f and g in r and on the randoms variables x and y case hellinger integral of order we start by the hellinger integral of order defined by z i f g f x g x dx k here s t s t and one has s t s t and s t s t s t s t now let x x g x and x f x g x r r x dx and x dx corollary one sample estimation we have consistency lim sup lim sup asymptotic normality fn g i f g an f gn i f g bn n i fn g i f g n as n n as n n i f gn i f g where px kjn x px kjn x and py kjn y py kjn y divergences measures consistency bands whith x x g x and x f x g x corollary two side estimation we have consistency lim sup asymptotic normality fn gn i f g cn n i fn gn i f g n as n where in the following handling i the hellinger integral of order conditions and are satisfied from assumption case tsallis divergence measure dt f g i f g corollary one side estimation we have consistency lim sup lim sup fn g dt f g an f gn dt f g bn asymptotic normality n dt fn g dt f g n dt f gn dt f g where and n n as n as n px kjn x px kjn x h y h y p k p k y j y j n n whith x x g x and f x g x corollary two sides estimation under conditions of theorem we have consistency lim sup asymptotic normality where fn gn dt f g cn n dt fn gn dt f g n divergences measures consistency bands case reyni divergence measure log i f g dr f g corollary one side estimation we have consistency dr fn g dr f g an dr f gn dr f g bn where an and bn are as in asymptotic normality n dr fn g dr f g n n dr f gn dr f g where i f g and n i f g as n as n corollary two sides estimation we have consistency dr fn gn dr f g cn where cn is as in asymptotic normality n dr fn gn dr f g n where as n the proofs of corollaries and are postponed to case divergence measure dkl f g in this case s t s log st and one has z k f x log f x dx g x s s s t log s t t t and s s t s t s t s t t r r x x x thus x log fg x dx and k x fg x k log fg x s t f x g x dx with the assumption the conditions and are satisfied for any measurables sequences of functions x x x and x of x d uniformly converging to zero 
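Before stating the remaining corollaries, it may help to see the whole plug-in pipeline in code: estimate the two densities with the linear wavelet estimator and plug the estimates into the Tsallis, Renyi and Kullback-Leibler divergences. The following Python sketch is our own illustration under simplifying assumptions: the Haar father wavelet is used (which turns the linear estimator into a dyadic histogram and does not satisfy the smoothness assumptions required by the theory, where a more regular wavelet such as a Daubechies wavelet would be needed), the densities are one-dimensional with support in [0, 1], the integrals are approximated by Riemann sums, and the lower clipping constant plays the role of the bound kappa_1 <= f, g from the assumption above.

```python
import numpy as np

def haar_projection_density(x, sample, j):
    """Linear wavelet density estimator f_n(x) = (1/n) sum_i K_j(x, X_i).

    For the Haar father wavelet phi = 1_[0,1), the projection kernel is
    K_j(x, y) = 2^j when x and y lie in the same dyadic cell of width 2^-j,
    and 0 otherwise, so f_n is a dyadic histogram."""
    cells_x = np.floor((2.0 ** j) * np.atleast_1d(x))
    cells_s = np.floor((2.0 ** j) * np.asarray(sample))
    counts = (cells_x[:, None] == cells_s[None, :]).sum(axis=1)
    return (2.0 ** j) * counts / len(sample)

def plug_in_divergences(fn_vals, gn_vals, dx, alpha=2.0, clip=1e-3):
    """Plug-in Tsallis, Renyi and Kullback-Leibler divergence estimates from
    two estimated densities evaluated on a common grid with spacing dx."""
    f = np.clip(fn_vals, clip, None)
    g = np.clip(gn_vals, clip, None)
    I_alpha = np.sum(f ** alpha * g ** (1.0 - alpha)) * dx   # Hellinger integral
    return {
        "tsallis": (I_alpha - 1.0) / (alpha - 1.0),
        "renyi": np.log(I_alpha) / (alpha - 1.0),
        "kl": np.sum(f * np.log(f / g)) * dx,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.beta(2.0, 2.0, size=20_000)           # both samples supported on [0, 1]
    Y = rng.beta(3.0, 2.0, size=20_000)
    n_cells = 512
    grid = (np.arange(n_cells) + 0.5) / n_cells   # cell centres of the grid
    dx = 1.0 / n_cells
    jn = 6                                        # resolution level: 2^jn Haar cells
    fn = haar_projection_density(grid, X, jn)
    gn = haar_projection_density(grid, Y, jn)
    print(plug_in_divergences(fn, gn, dx, alpha=2.0))
    print(plug_in_divergences(fn, fn, dx, alpha=2.0))   # identical densities: ~ 0
```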
divergences measures consistency bands corollary one side estimation we have consistency lim sup lim sup fn g dkl f g an f gn dkl f g bn asymptotic normality n dkl fn g dkl f g n dkl f gn dkl f g n n where and as n as n px kjn x px kjn x py kjn y py kjn y x and x with x log fg x f x g x corollary two sides estimation we have consistency lim sup fn gn dkl f g cn asymptotic normality n dkl fn gn dkl f g n where as n case divergence measure f g z k f x g x dx here s t x g x but we proceed by a different route one has fn g f g z fn x g x f x g x dx zk fn x f x fn x f x x dx k z z fn x f x dx fn x f x f x g x dx k k and also f gn f g z k gn x g x f x g x dx z k gn x g x dx divergences measures consistency bands let x f x g x and then we deduce z n fn g f g n fn x f x x dx op z n f gn f g n gn x g x x dx op let r k x g x dx then we give theorem one side estimation consistency lim sup lim sup fn g f g an f gn f g bn asymptotic normality n fn g f g n f gn f g n n where as n as n px kjn x px kjn x and py kjn y py kjn y with x f x g x and x g x f x theorem two sides estimation consistency lim sup normality where fn gn f g cn n fn gn f g n n applications statistics tests the divergence measures may be applied to two statistical problems among others first it may be used as a problem like that let a sample from x with an unkown probability density function f and we want to test the hypothesis that f is equal to a known and fixed probability density function g we want to test f g versus f g t both unctions f and g in besov space r for a fixed x d we can test the pointwise null hypothesis f x g x versus f x g x divergences measures consistency bands using particular divergences measure like kb or divergences then our proposed test statistics are of the form z fn g fn x g x dx as particular cases we consider s t s t s t s s log t limit distribution under null hypothesi in testing the null hypothesis we propose tests statistics using tsallis renyi kulback and divergence measures suppose that the null hypothesis holds so that g is a known then it follows from the previous work that n dt fn g dt f g n where as n px kjn ht x px kjn ht x r with ht x x g x and kjn ht x kjn x t t f t dt renya divergence measure where i f g n dr fn g dr f g n as n px kjn x px kjn x whith hr x x g x n dkl fn g dkl f g n where x where hk x log fg x as n px kjn hk x px kjn hk x confidence bands we want to obtain proofs the rest of this section proceeds as follows in we establish the proof of the theorem is devoted to the proof of the theorem in subsection we present the proof of the theorem the subsection is devoted to proofs of the corollaries and divergences measures consistency bands proof of the theorem t proof suppose assumptions and are satisfied and h r r we start by showing first that n fn x f x h x dx is a sum of an empirical process based on the sample xi and applied on the function kjn h and a random variable r we have by definition kjn h kjn x h x dx write z z n kj x xi h x f x h x dx fn x f x h x dx n n z n z kjn x xi h x dx f x h x dx n z n kjn h xi f x h x dx n therefore where n z pn x kjn h px h x pn x px kjn h px kjn h x h x fn x f x h x dx n n px kjn h x h x n pn x px kjn h n one has px kjn h x z kjn h x f x dx z kjn x t h t dt z jn jn f x dx k x t h t dt f x dx p boundedness and support compactness of and give k x t k x k t k r now k x t h t dt since vanishes on c and h is bounded finally px kjn h x with now the usual gives n pn x px kjn h where px kjn h x px kjn h x then the theorem will be proved if we show 
that n as n n op and it is in this step that we use the s fact that h r from theorem in one has kjn h h z h x h x f x dx kkjn h kf t divergences measures consistency bands therefore n h t n op for any t t note the moment condition in theorem quoted above is equivallent to assumption s see page this justify its use in our context finally we conclude by z n where is defined above n as n fn x f x h x dx proof of the theorem proof in the following development we are going to use systematically the mean value theorem in a bivariate dimensional and with real functions i depending on x k but always satisfying x for ease of notation we introduce the two following notations used in the sequel f x fn x f x and g x gn x g x such that an f and bn let cn max an bn recall an bn and cn are all op we start by the one side asymptotic estimation one has fn x g x f x f x g x by an application of the to the function x x g x one has that there exists x such that fn x g x f x g x f x f x x f x g x where f x f x x f x g x f x f x g x x f x f x x f x g x by an application of the to the function x x g x and with x we can write as fn x g x f x g x f x f x g x x f x f x x f x g x now we has j fn g j f g hence z f x f x g x dx fn g j f g an z z x f x f x x f x g x dx f x g x dx z f x x f x g x dx divergences measures consistency bands therefore fn g j f g an lim sup an where r z f x x f x g x dx f x g x dx this with yield and prove now let prove by swapping the roles of f and g one obtains j f gn j f g z g x f x g x dx then f gn j f g bn one obtains z r z x g x f x g x x g x dx f x g x dx f gn j f g bn bn where z z f x g x x g x dx f x g x x g x dx f x g x dx this and give and prove we focus now on the asymptotic normality for one sample estimation going back to we have z z n j fn g j f g f x g x x dx x n f x f x x f x g x dx z n fn x f x x dx n where x f x g x r now by theorem n fn x f x x dx n as n where px kjn x px kjn x and provided that r thus will be proved if we show that n one has z n nan f x x f x g x dx let show that op by chebyshev s inequality one has for any p nan p an e n divergences measures consistency bands from theorem in gine one has o r jn n o s log n log where we use the fact that thus finally p nan op since s s log n n log log n n as n log for any t finally from and using one has this yields and ends the proof of n as n going back to one has z z n j f gn j f g f x g x x dx n x g x f x g x x g x dx z n gn x g x x dx n where x f x g x then by theorem one has r n gn x g x x dx n where py kjn y py kjn y since and provided that r similarly while n nbn z f x g x x g x dx nbn op as previously so this and give n op finally this shows that holds and completes the proof of the theorem divergences measures consistency bands proof of theorem proof we proceed by the same techniques that led to the prove of we begin by breaking fn x gn x f x g x into two terms we have already handled fn x gn x f x gn x f x gn x f x g x z z fn x gn x f x g x by an application of the to the function fn x fn x gn x one has that there exists x such that f x x gn x f x gn x x f x x f x g x x f x g x x f x f x x f x g x by a second application of the to the function f x x f x f x x f x g x with x from we get g x f x g x x g x f x g x x g x therefore j fn gn j f g and z z f x f x g x dx g x f x g x dx z x f x f x x f x g x dx z x g x f x g x x g x dx fn gn j f g thus fn gn j f g cn z f x x f x g x dx cn cn z f x g x x g x dx z cn f x x f x g x dx z cn f x g x x g x dx and give lim sup this proves the desired result it remains to 
prove j fn gn j f g cn divergences measures consistency bands going back to one has z z n j fn gn f g n fn x x f x g x n gn x x f x g x n where n n then by theorem one has n z z x f x f x x f x g x dx z x g x f x g x x g x dx n fn x f x x dx n z gn x g x x dx n t since and provided that r now one has n as previously one has ncn z f x x f x g x dx z f x g x x g x dx ncn ncn op and from conditions and one has n op finally this shows that holds and completes the proof of the theorem proofs of corollaries and proof of corollary one has log i fn g log i f g but i fn g i f g an op then by using a taylor expansion of log y it follows that dr fn g dt f g almost surely i fn g i f g log i f g log i fn g log i f g i fn g i f g an i f g that is dr fn g dr f g an this proves the desired result the proof of is similar to the previous proof to prove recall n i fn g i f g n then i fn g i f g r z fn x f x x dx op op fn x f x x dx op i f g divergences measures consistency bands and by taylor expansion of log y it follows that almost surely r fn x f x x dx log i fn g log i f g log i f g r fn x f x x dx op i f g n therefore n dr fn g dr f g where i f g r n fn x f x x dx op i f g n as n is proved similarly finally this ends the proof of the corollary proof of the corollary proof we start by the consistency from the previous work one gets i fn gn i f g i f g cn log i fn gn log i f g hence dr fn gn dr f g cn that proves let find the asymptotic normality one gets z z n i fn gn i f g n fn x f x x dx n gn x g x x dx op nn op where x x g x and x f x g x hence we obtain log i fn gn log i f g nn op ni f g therefore n dr fn gn dr f g where n nn op i f g as n n references topsoe some inequalities for information divergence and related measures of discrimination ieee trans inf theory vol pp evren a some applications of and jeffreys divergences in multinomial populations cichocki amari s families of flexible and robust measures of similarities entropy moreno ho and vasconcelos a divergence based kernel for svm classification in multimedia applications hp laboratories cambridge ma tech jager and jon wellner goodness of fit tests via the annals of statistics divergences measures consistency bands hall on loss and density estimation ann vol no pp bhattacharya efficient estimation of a shift parameter from grouped data the annals of mathematical statistics vol no pp berlinet devroye and gyorfi asymptotic normality of error in density estimation statistics vol pp liu and shum boosting proc of the ieee computer society conference on computer vision and pattern recognition vol pp june kullback leibler on information and sufficiency ann math stat fukunaga hayes the reduced parzen classifier ieee trans pattern anal mach intell cardoso infomax and maximum likelihood for blind source separation ieee signal process lett cardoso j blind signal separation statistical principles proc ieee ojala pietik ainen harwood a comparative study of texture measures with classification based on featured distributions pattern recogn hastie tibshirani classification by pairwise coupling ann stat buccigrossi simoncelli image compression via joint statistical characterization in the wavelet domain ieee trans image process moreno ho vasconcelos a divergence based kernel for svm classification in multimedia applications adv neural inform process syst mackay information theory inference and learning algorithms cambridge university press cambridge uk cover and thomas elements of information theory wiley darbellay and vajda estimation of the information by 
an adaptive partitioning of the observation space ieee trans inf theory vol no pp may common independent component analysis a new concept signal vol pp tsitsiklis decentralized detection in advances in statistical signal processing new york jai pp akshay krishnamurthy kirthevasan kandasamy barnaba poczos larry wasserman nonparametric estimation of divergence and of selcuk university natural and applied science liu lafferty and wasserman exponential concentration inequality for shashank singh and barnabas poczos generalized exponential concentration inequality for divergence estimation carnegie mellon university forbes pittsburgh pa usa poczos jeff on the estimation of liu lafferty and wasserman exponential concentration inequality for mutual information estimation in neural information processing systems nips hamza ngom deme and mendy estimators of divergence measures and its strong uniform consistency xuanlong martin and michael estimating divergence functionals and the likelihood ratio by convex risk minimization ieee transastions on information theory nguyen wainwright and jordan estimating divergence functionals and the likelihood ratio by convex risk minimization ieee trans inform theory vol no pp einmahl and mason an empirical process approach to the uniform consistency of function estimators theoret einmahl and mason uniform in bandwidth consistency of function estimators ann and mason uniform in bandwidth estimation of integral functionals of the density function scand j cai kulkarni and verdu universal estimation of entropye and divergence via block sorting in proc ieee int symp information theory lausanne switzerland divergences measures consistency bands cai kulkarni and verdu universal divergence estimation for sources ieee trans inf theory submitted for publication ziv and merhav a measure of relative entropy between individual sequences with application to universal classification ieee trans vol no pp jul wolfgang hardie gerard kerkyacharian dominique picard and alexander tsybakov wavelets approximation and statistical applications a hero ma and michel estimation of c nyi information divergence via pruned minimal spanning trees in ieee workshop on higher order statistics caesaria israel jun moon and o hero iii ensemble estimation of multivariate f in ieee international symposium on information theory pp berlinet gyorfi and i d nes asymptotic normality of relative entropy in multivariate density estimation publications de l institut de statistique de l c de paris vol pp bickel and rosenblatt on some global measures of the deviations of density function estimates the annals of statistics pp kevin multivariate f estimation with confidence nickl uniform limit theorem for wavelet density estimators the annals of probability vol no doi
| 10 |
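The row above concerns plug-in estimation of divergence measures such as the Kullback–Leibler divergence from density estimates fn and gn on a compact support. As a hedged illustration only, here is a minimal numerical sketch of the two-sample plug-in idea; it substitutes a Gaussian kernel density estimator for the wavelet-type estimator analysed in the paper, and the function name `plug_in_kl`, the grid construction, the flooring constant `eps`, and the Gaussian toy check are assumptions introduced for this example, not part of the original text.

```python
import numpy as np
from scipy.stats import gaussian_kde

def plug_in_kl(sample_f, sample_g, grid_size=2000, eps=1e-12):
    """Plug-in estimate of D_KL(f||g) = integral of f*log(f/g) from two samples.

    Both densities are replaced by Gaussian kernel density estimates and the
    integral is approximated by a Riemann sum on a grid covering the pooled
    samples (the grid plays the role of the compact support K in the paper).
    """
    fn, gn = gaussian_kde(sample_f), gaussian_kde(sample_g)
    lo = min(sample_f.min(), sample_g.min())
    hi = max(sample_f.max(), sample_g.max())
    x = np.linspace(lo, hi, grid_size)
    f_hat = np.maximum(fn(x), eps)   # floor away from zero so the log stays finite
    g_hat = np.maximum(gn(x), eps)
    return np.sum(f_hat * np.log(f_hat / g_hat)) * (x[1] - x[0])

# toy check: D_KL(N(0,1) || N(1,1)) = 0.5 exactly
rng = np.random.default_rng(0)
est = plug_in_kl(rng.normal(0.0, 1.0, 5000), rng.normal(1.0, 1.0, 5000))
print(est)  # roughly 0.5 for large samples (plug-in bias shrinks as n grows)
```

The same skeleton covers the one-sample testing setup sketched in the row above (g known and fixed): replace `gn` with the known density evaluated on the grid.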
jan deep learning reconstruction for dual energy ct baggage scanner yoseob han jingu kang jong chul ye kaist daejeon korea email hanyoseob gemss medical seongnam korea email kaist daejeon korea email homeland and transportation security applications explosive detection system eds have been widely used but they have limitations in recognizing shape of the hidden objects among various types of computed tomography ct systems to address this issue this paper is interested in a stationary ct using fixed sources and detectors however due to the limited number of projection views analytic reconstruction algorithms produce severe streaking artifacts inspired by recent success of deep learning approach for sparse view ct reconstruction here we propose a novel image and sinogram domain deep learning architecture for reconstruction from very sparse view measurement the algorithm has been tested with the real data from a prototype dual energy stationary ct eds baggage scanner developed by gemss medical systems korea which confirms the superior reconstruction performance over the existing approaches index explosive detection system eds sparseview ct convolutional neural network cnn i ntroduction in homeland and aviation security applications there has been increasing demand for ct eds system for carryon baggage screening a can produce an accurate object structure for segmentation and threat detection which is often not possible when a system captures projection views in only one or two angular directions there are currently two types of ct eds systems ct and stationary ct while ct eds is largely the same as medical ct baggage screening should be carried out continuously so it is often difficult to continuously screen bags because of the possible mechanical overloading of the gantry system on the other hand a stationary ct eds system uses fixed sources and detectors making the system suitable for routine baggage inspection for example fig shows source and detector geometry of the prototype stationary system developed by gemss medical systems korea as shown in fig a nine pairs of source and dual energy detector in the opposite direction are distributed at the same angular interval for seamless screening without stopping convey belt each pair of source and detectors are arranged along the as shown in fig b so that different projection view data can be collected while the baggages moves continuously on the conveyor belt then fan beam projection data is obtained for each by rebinning the measurement data this type of stationary ct system is suitable for eds fig source positions in our prototype view dual energy ct eds a x y direction and b z direction respectively applications because it does not require a rotating gantry but with only projection views it is difficult to use a conventional filtered backprojection fbp algorithm due to severe streaking artifacts therefore advanced reconstruction algorithms with fast reconstruction time are required for ct eds iterative reconstruction mbir with the total variation tv penalty have been extensively investigated inspired by the recent success of deep learning approach for sparse view and limited angle ct that outperform the classical mbir approach this paper aims at developing a deep learning approach for sparse view ct eds however neural network training using the retrospective angular subsampling as in the existing works is not possible for our prototype system since there are no data for the real world sparse view ct eds we therefore propose a novel deep 
learning approach composed of image domain and sinogram domain learning that compensate for the imperfect label data ii t heory a problem formulation recall that the forward model for sparse view ct eds system can be represented by rf where r denotes the projection operator from an x y z volume image to a s z domain sinogram data with s and z denoting the detector projection angle and the direction fig sinogram interpolation flow for the proposed method the final reconstruction is obtained by applying the fbp for the interpolated sinogram data of the conveyor belt travel respectively see fig for the coordinate systems in denotes the view sampling operator for the measured angle set and refers to the measured sinogram data for each projection view data we use the notation and where denotes the specific view the main technical issue of the sparse view ct reconstruction is the of the solution for more specifically there exists a null spacce such that rh which leads to infinite number of feasible solutions to avoid the of the solution constrained form of the penalized mbir can be formulated as klf f subject to r where l refers to a linear operator and k denotes the norm for the case of the tv penalty l corresponds to the derivative then the uniqueness of is guaranteed that if the nl where nl denotes the null space of the operator instead of designing a linear operator l such that the common null space of and nl to be zero we can design a frame w its dual and shrinkage operator such that w i and w f g f more specifically our sparse view reconstruction algorithm finds the unknown f that satisfy both data fidelity and the frame constraints rf in other word directly removes the null space component eq is the constraint we use for training our neural network qi f f where qi is the image domain cnn that satisfies and f denotes the images that are available for training data now by defining m as a of r r we have f h for some h since the right inverse is not unique due to the existence of the null space thus we can show that is the feasible solution for since we have qi qi f h f for the training data and r f h therefore the neural network training problem to satisfy can be equivalently represented by min qi for the image f this regularization is also an active field of research for image denoising inpainting etc one of the most important contributions of the deep convolutional framelet theory is that w and correspond to the encoder and decoder structure of a convolutional neural network cnn respectively and the shrinkage operator emerges by controlling the number of filter channels and nonlinearities more specifically a convolutional neural network can be designed such that q w and q f h f derivation of image and projection domain cnns n x i kf i qi i where f i n denotes the training data set composed of image an its sparse view projection since a representative right inverse for the sparse view projection is the inverse radon transform after zero padding to i the missing view in can be implemented using the standard fbp algorithm in fact this is the main theoretical ground for the success of image domain cnn when the data is available moreover the rebinning makes the problem separable for each z slices so we use the fbp for each slice as shown in fig however the main technical difficulties in our ct eds system is that we do not have image f i n one could use physical phantoms and atomic number to form a set of images but those data set may be different from the real bags so we need a new method to 
account for the lack of for neural network training thus to overcome the lack of the groundtruth data the approximate label images are generated using an mbir with tv penalty then using mbir reconstruction i as label data f i n an image domain network q is trained to learn the mapping between the image and mbir reconstruction in x y domain one downside of this approach is that the network training by is no more optimal since the label data is not the image thus the generated sinogram data from the denoised volume may be biased thus we impose additional frame constraint to the sinogram data in addition to qs for the measured angle where qs is the s z sinogram domain cnn and denotes the sinogram data measured at then eq leads to the following network training min qs n xx i i qs rqi more specifically as shown in fig sinogram data is generated in the s z domain by applying the forward projection operator along views after stacking the image domain network output over multiple slices to form reconstruction volume in the x y z domain next a sinogram domain network qs is trained so that it can learn the mapping between the synthetic sinogram data and the real projection data in the s z domain since the real projection data is available only in views this sinogram network training is performed using synthetic and real projection data in the measured projection views the optimization problems and can be solved sequentially or simultaneously and in this paper we adopt the sequential optimization approach for simplicity after the neural networks qi and qs are trained the inference can be done simply by obtaining x y z volume images from the view projection data by fbp algorithm which are then fed into qi to obtain the denoised volume data then by applying projection operator we generate projection view data in s z domain which are fed into the qs to obtain denoised sinogram data for each angle then the final reconstruction is obtained by applying fbp algorithms one could use using additional denosing this algorithmic flow is illustrated in fig iii m ethods a real ct eds data acquisition we collected ct eds data using the prototype stationary view dual energy system developed by gemss medical systems korea as shown in fig the distance from source to detector dsd and the distance from source to fig cnn architecture for our image and singoram domain networks object dso are and respectively the number of detector is with a pitch of the region of interest roi is and the pixel size is the detectors collect low and high energy at and respectively we collect sets of projection data from the prototype ct eds baggage scanner among the sets dataset are and the other set are realistic bags the set of and was used during the training phase and the validation was performed by two and one the other set was used for test b network architecture and training fig illustrates modified the structure for the image domain and the sinogram domain networks to account for the image and sinogram data the input for the network is two channel image and sinogram data the proposed network consists of convolution layer batch normalization rectified linear unit relu and contracting path connection with concatenation a detail parameters are illustrated as shown in fig the proposed networks were trained by stochastic gradient descent sgd the regularization parameter was the learning rate has been set from to which has been reduced step by step in each epoch the number of epoch was the batch size was and the patch size for image and 
projection data are and respectively the network was implemented using matconvnet toolbox in the matlab environment mathworks natick central processing unit cpu and graphic processing unit gpu specification are cpu ghz and gtx ti gpu respectively iv e xperimental r esults to evaluate the performance of the proposed method we perform image reconstruction from real ct eds prototype system fig illustrates image reconstruction results of bag using various methods such as fbp mbir with tv penalty image domain cnn and the proposed method the fbp reconstruction results suffered from severe streaking artifacts so it was difficult to see the threats in the tomographic reconstruction and rendering the mbir and image domain cnn were slight better in their reconstruction quality but the detailed structures were not fully recovered and several objects were not detected as indicated by the red arrow in fig moreover the rendering results in fig fig a s z domain sinogram data from a measurement b fbp c mbir d image cnn and e the proposed method the number written in the images is the nmse value yellow and red arrows indicate grenade and knife respectively high quality images using real data from our prototype ct eds system we demonstrated that the proposed method outperforms the existing algorithms delivering high quality three reconstruction for threat detection acknowledgment fig reconstruction results by various methods from correctly identify the shape of grenade and knife as well as the frame of the bag which was not possible using other methods because we do not have the in the image domain we perform quantitative evaluation using normalized mean squares error nmse in the sinogram domain more specifically after obtaining the final reconstruction we perform the forward projection to generate the sinogram data in the measured projection view and calculated the normalized mean square errors table i showed that the proposed method provides the most accurate sinogram data compared to the other methods moreover the s z projection data in fig showed that the projection data from the proposed method is much closer to the measurement data table i nmse value comparison of various methods energy level fbp image cnn ours kvp kvp c onclusion in this paper we proposed a novel deep learning reconstruction algorithm for a prototype dual energy ct eds for baggage scanner even though the number of projection view was not sufficient for high equality reconstruction our method learns the relationships between the tomographic slices in x y domain as well as the projections in s z domain such that the image and sinogram data can be successively refined to obtain this work is supported by korea agency for infrastructure technology advancement grant number r eferences sagar mandava david coccarelli joel a greenberg michael e gehm amit ashok and ali bilgin image reconstruction for ct in baggage scanning in anomaly detection and imaging with adix ii international society for optics and photonics vol sherman j kisner eri haneda charles a bouman sondre skatter mikhail kourinny and simon bedford limited view angle iterative ct reconstruction in computational imaging x vol yoseop han jaejoon yoo and jong chul ye deep residual learning for compressed sensing ct reconstruction via persistent homology analysis arxiv preprint yoseob han and jong chul ye framing via deep convolutional framelets application to ct arxiv preprint kyong hwan jin michael t mccann emmanuel froustey and michael unser deep convolutional neural network for 
inverse problems in imaging ieee transactions on image processing vol no pp jawook gu and jong chul ye wavelet domain residual learning for ct reconstruction arxiv preprint cai raymond h chan and zuowei shen a image inpainting algorithm applied and computational harmonic analysis vol no pp jong chul ye yo seob han and eunjoo cha deep convolutional framelets a general deep learning framework for inverse problems arxiv preprint olaf ronneberger philipp fischer and thomas brox convolutional networks for biomedical image segmentation in international conference on medical image computing and intervention springer pp alex krizhevsky ilya sutskever and geoffrey e hinton imagenet classification with deep convolutional neural networks in advances in neural information processing systems pp andrea vedaldi and karel lenc matconvnet convolutional neural networks for matlab in proceedings of the acm international conference on multimedia acm pp
| 2 |
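The row above describes a two-stage reconstruction for a nine-view dual-energy CT baggage scanner: an FBP image is cleaned by an image-domain network, forward-projected over many views, refined by a sinogram-domain network, and reconstructed once more by FBP. The sketch below mirrors that data flow under strong simplifying assumptions: a parallel-beam geometry via scikit-image's `radon`/`iradon` in place of the prototype's rebinned fan-beam geometry, and `image_net`/`sino_net` as hypothetical stand-ins (identity functions in the toy run) for the trained networks, so it illustrates the pipeline rather than the authors' implementation.

```python
import numpy as np
from skimage.transform import radon, iradon

def sparse_view_pipeline(measured_sino, view_angles, image_net, sino_net, out_size=256):
    """Inference flow of the two-domain scheme, with stand-in networks.

    1) FBP of the few-view data gives a streaky slice
    2) an image-domain network cleans the slice
    3) the cleaned slice is forward-projected on a dense angle set
    4) a sinogram-domain network refines the synthetic projections
    5) a final FBP is applied to the refined, densely sampled sinogram
    """
    streaky = iradon(measured_sino, theta=view_angles, output_size=out_size)
    cleaned = image_net(streaky)                        # stand-in for the image-domain CNN
    dense_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    synthetic = radon(cleaned, theta=dense_angles)
    refined = sino_net(synthetic)                       # stand-in for the sinogram-domain CNN
    return iradon(refined, theta=dense_angles, output_size=out_size)

# toy run with identity "networks" and a square phantom well inside the scan circle
identity = lambda x: x
phantom = np.zeros((256, 256)); phantom[96:160, 96:160] = 1.0
nine_views = np.linspace(0.0, 180.0, 9, endpoint=False)
recon = sparse_view_pipeline(radon(phantom, theta=nine_views), nine_views, identity, identity)
```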
jun adaptive nonparametric drift estimation for diffusion processes using expansions frank van der meulen moritz jan van june abstract we consider the problem of nonparametric estimation of the drift of a continuously observed diffusion with periodic drift motivated by computational considerations van der meulen et al defined a prior on the drift as a randomly truncated and randomly scaled series expansion with gaussian coefficients we study the behaviour of the posterior obtained from this prior from a frequentist asymptotic point of view if the true data generating drift is smooth it is proved that the posterior is adaptive with posterior contraction rates for the l that are optimal up to a log factor contraction rates in l p with p are derived as well introduction assume continuous time observations x t x t t t from a diffusion process x defined as weak solution to the stochastic differential equation sde dx t b x t dt dwt x here w is a brownian motion and the drift b is assumed to be a measurable function on the real line that is and square integrable on the assumed periodicity implies that we can alternatively view the process x as a diffusion on the circle this model has been used for dynamic modelling of angles see for instance pokern and hindriks we are interested in nonparametric adaptive estimation of the drift this problem has recently been studied by multiple authors spokoiny proposed a locally linear smoother with a bandwidth choice that is rate adaptive with respect to x for all x and optimal tu delft mekelweg cd delft the netherlands address leiden university niels bohrweg ca leiden the netherlands address math vries institute for mathematics science park xg amsterdam the netherlands address up to a log factors interestingly the result is and does not require ergodicity dalalyan and kutoyants and dalalyan consider ergodic diffusions and construct estimators that are asymptotically minimax and adaptive under sobolev smoothness of the drift their results were extended to the multidimensional case by strauch in this paper we focus on bayesian nonparametric estimation a paradigm that has become increasingly popular over the past two decades an overview of some advances of bayesian nonparametric estimation for diffusion processes is given in van zanten the bayesian approach requires the specification of a prior ideally the prior on the drift is chosen such that drawing from the posterior is computationally efficient while at the same time ensuring that the resulting inference has good theoretical properties which is quantified by a contraction rate this is a rate for which we can shrink balls around the true parameter value while maintaining most of the posterior mass more formally if d is a semimetric on the space of drift functions a contraction rate is a sequence of positive numbers for which the posterior mass of the balls b d b b converges in probability to as t under the law of x with drift b for a general discussion on contraction rates see for instance ghosal et al and ghosal and van der vaart for diffusions the problem of deriving optimal posterior convergence rates has been studied recently under the additional assumption that the drift integrates to zero b x d x in papaspiliopoulos et al a mean zero gaussian process prior is proposed together with an algorithm to sample from the posterior the precision operator inverse covariance operator of the proposed gaussian process is given by where is the laplacian i is the identity operator and a first consistency result was 
shown in pokern et al in van waaij and van zanten it was shown that this rate result can be improved upon for a slightly more general class of priors on the drift more specifically in this paper the authors consider a prior which is defined as x k zk p p where x cos x sin are the standard fourier series basis functions zk is a sequence of independent standard normally distributed random variables and is positive it is shown that when l and are fixed and b is assumed to be smooth then the optimal posterior rate of contraction t is obtained note that this result is nonadaptive as the regularity of the prior must match the regularity of b for obtaining optimal posterior contraction rates for the full range of possible regularities of the drift two options are investigated endowing either l or with a hyperprior only the second option results in the desired adaptivity over all possible regularities while the prior in with additional prior on has good asymptotic properties from a computational point of view the infinite series expansion is inconvenient clearly in any implementation this expansion needs to be truncated random truncation of a series expansion is a well known method for defining priors in bayesian nonparametrics see for instance shen and ghosal exactly this idea was exploited in van der meulen et al where the prior is defined as the law of the random function figure elements and j k j of the basis b r s s z s r x x j where the functions j k constitute the basis see fig these functions feature prominently in the construction of brownian motion see for instance bhattacharya and waymire paragraph the prior coefficients z j k are equipped with a gaussian distribution and the truncation level r and the scaling factor s are equipped with independent priors truncation in absence of scaling increases the apparent smoothness of the prior as illustrated for deterministic truncation by example in van der vaart and van zanten whereas scaling by a number decreases the apparent smoothness scaling with a number only increases the apparent smoothness to a limited extent see for example knapik et al the simplest type of prior is obtained by taking the coefficients z j k independent we do however also consider the prior that is obtained by first expanding a periodic process into the basis followed by random scaling and truncation we will explain that specific stationarity properties of this prior make it a natural choice draws from the posterior can be computed using a reversible jump markov chain monte carlo mcmc algorithm cf van der meulen et al for both types of priors fast computation is facilitated by leveraging inherent sparsity properties stemming from the compact support of the functions j k in the discussion of van der meulen et al it was argued that inclusion of both the scaling and random truncation in the prior is beneficial however this claim was only supported by simulations results in this paper we support this claim theoretically by proving adaptive contraction rates of the posterior distribution in case the prior is used we start from a general result in van der meulen et al on brownian semimartingale models which we adapt to our setting here we take into account that as the drift is assumed to be information accumulates in a different way compared to general ergodic diffusions subsequently we verify that the resulting prior mass remaining mass and entropy conditions appearing in this adapted result are fied for the prior defined in equation an application of our results shows that 
if the true drift function is b smooth then by appropriate choice of the variances of z j k as well as the priors on r and s the posterior for the drift b contracts at the rate t log t around the true drift in the l up to the log factor this rate is see for instance kutoyants theorem moreover it is adaptive the prior does not depend on in case the true drift has greater than or equal to our method guarantees contraction rates equal to essentially t corresponding to a further application of our results shows that for l p we obtain contraction rate t up to the paper is organised as follows in the next section we give a precise definition of the prior in section a general contraction result for the class of diffusion processes considered here is derived our main result on posterior contraction for l p with p is presented in section many results of this paper concern general properties of the prior and their application is not confined to drift estimation of diffusion processes to illustrate this we show in section how these results can easily be adapted to nonparametric regression and nonparametric density estimation proofs are gathered in section the appendix contains a couple of technical results prior construction model and posterior let b x dx and b is l t b r r be the space of square integrable functions lemma if b l t then the sde eq has a unique weak solution the proof is in section for b l t let p b p b t denote the law of the process x t generated by eq when b is replaced by b if p denotes the law of x t when the drift is zero then p b is absolutely continuous with respect to p with density t z t p b x exp b x t dx t b x t dt given a prior on l t and path x t from the posterior is given by r p b x t db t b a x ra p b x t db where a is borel set of l t these assertions are verified as part of the proof of theorem motivating the choice of prior we are interested in randomly truncated scaled series priors that simultaneously enable a fast algorithm for obtaining draws from the posterior and enjoy good contraction rates to explain what we mean by the first item consider first a prior that is a finite series prior let denote basis functions and z zr a mean zero gaussian random vector with p precision matrix assume that the prior for b is given by b zi by conjugacy it follows that z x t n where w g t z x t dx t t z and g i i x t x t dt for i i r cf van der meulen et lemma the matrix g is referred to as the grammian from these expressions it follows that it is computationally advantageous to exploit compactly supported basis functions whenever and have nonoverlapping supports we have g i i depending on the choice of such basis functions the grammian g will have a specific sparsity structure a set of index pairs i i such that g i i independently of x t this sparsity structure is inherited by w as long as the sparsity structure of the prior precision matrix matches that of in the next section we make a specific choice for the basis functions and the prior precision matrix definition of the prior define the hat function by x x x x the basis functions are given by j k x j x k j k j let x x i x in figure we have plotted together with j k where j we define our prior as in with gaussian coefficients and z j k where the truncation level r and the scaling factor s are equipped with hyper priors we extend b periodically if we want to consider b as function on the real line if we identify the double index j k in with the single p index i j k then we can write b r s s zi let i j if i if i j j and j we 
say that belongs to level j if i j thus both and belong to level which is convenient for notational purposes for levels j the basis functions are per level orthogonal with essentially disjoint support define for r ir i i r let a cov zi zi i i and define its restriction by a r a i i i i if we denote z r zi i ir and assume that z r is multivariate normally distributed with mean zero and covariance matrix a r then the prior has the following hierarchy b r s z r s x z i i z r r n ar r s here we use to denote the joint distribution of r s we will consider two choices of priors for the sequence our first choice consists of taking independent gaussian random variables if the coefficients zi are independent with standard deviation i the random draws from this prior are scaled piecewise linear interpolations on a dyadic grid of a brownian bridge on plus the random function the choice of is motivated by the fact that in this case var b t s r s is independent of t we construct this second type of prior as follows for define v vt t to be the cyclically stationary and centred process this is a periodic gaussian process with covariance kernel cov v s v t e e h t s e this process is cyclically stationary that is the covariance only depends on and it is the unique gaussian and markovian prior with continuous periodic paths with this property this makes the cyclically stationary prior an appealing choice which respects the symmetries of the problem each realisation of v is continuous and can be extended to a periodic function on then v can be represented as an infinite series expansion in the basis vt x zi t t x x z j k j k t j i finally by scaling by s and truncating at r we obtain from v the second choice of prior on the drift function visualisations of the covariance kernels cov b s b t for first prior brownian bridge type and for the second prior periodic process prior with parameter are shown in fig for s and r sparsity structure induced by choice of zi conditional on r and s the posterior of z r is gaussian with precision matrix g r here g r is the grammian corresponding to using all basis functions up to and including level r if the coefficients are independent it is trivial to see that the precision matrix does not destroy the sparsity structure of g as defined in this is convenient for numerical computations the next lemma details the situation for periodic processes lemma let v be defined as in equation figure heat maps of s t cov b s b t in case s and r left brownian bridge plus the random function right periodic process with parameter and chosen such that var b s the sparsity structure of the precision matrix of the infinite stochastic vector z appearing in the series representation equals the sparsity structure of g as defined in the entries of the covariance matrix of the random gaussian coefficients zi and zi a i i e zi zi satisfy the following bounds a a coth and for and i i a i i i and a a and for i i i i i i i i i i i i otherwise the proof is given in section by the first part of the lemma also this prior does not destroy the sparsity structure of the the second part asserts that while the entries of a r are not zero they are of smaller order than the diagonal entries quantifying that the covariance matrix of the coefficients in the schauder expansion is close to a diagonal matrix posterior contraction for diffusion processes the main result in van der meulen et al gives sufficient conditions for deriving posterior contraction rates in brownian semimartingale models the following theorem 
is an adaptation and refinement of theorem and lemma of van der meulen et al for diffusions defined on the circle we assume observations x t where t let be a prior on l t which henceforth may depend on t and choose measurable subsets sieves bt l t define the balls b t b b bt kb the number of a set a for a semimetric denoted by n a is defined as the minimal number of of radius needed to cover the set a the logarithm of the covering number is referred to as the entropy the following theorem characterises the rate of posterior contraction for diffusions on the circle in terms of properties of the prior theorem suppose is a sequence of positive numbers such that t is bounded away from zero assume that there is a constant such that for every k there is a measurable set bt l t and for every a there is a constant c such that for t big enough log n b t b k c t b t b e and l t bt e t then for every m t p b l t kb b m t x t and for k big enough l t bt x t equations and are referred to as the entropy condition small ball condition and remaining mass condition of theorem respectively the proof of this theorem is in section theorems on posterior contraction rates the main result of this section theorem characterises the frequentist rate of contraction of the posterior probability around a fixed parameter b of unknown smoothness using the truncated series prior from section we make the following assumption on the true drift function assumption the true drift b can be expanded in the basis b z j p j z j k j k i z i and there exists a such that sup i i i note that we use a slightly different symbol for the norm as we denote the l by k remark if then assumption on b is equivalent to assuming b to be b smooth it follows from the definition of the basis functions that z j k b j b j b j therefore it follows from equations with r and with p in combination with equation with q in and nickl section that kb jb is equivalent to the b of b for if then smoothness and b coincide cf proposition in and nickl for the prior defined in eqs to we make the following assumptions assumption the covariance matrix a satisfies one of the following conditions a for fixed a i i i and a i i for i i b there exists c c and c with c independent from r such that for all i i ir c i a i i c i i i c i i if i i in particular the second assumption if fulfilled by the prior defined by eq if and any assumption the prior on the truncation level satisfies for some positive constants c c p r r exp r p r r exp r for the prior on the scaling we assume existence of constants p p q and c with p such that p s x p x p exp x q for all x c the prior on r can be defined as r logy c where y is poisson distributed equation is satisfied for a whole range of distributions including the popular family of inverse gamma distributions since the inverse gamma prior on s decays polynomially lemma condition of shen and ghosal is not satisfied and hence their posterior contraction results can not be applied to our prior we obtain the following result for our prior theorem assume b satisfies assumption suppose the prior satisfies assumptions and let be a sequence of positive numbers that converges to zero there is a constant c such that for any c there is a measurable set bn l t such that for every a there is a positive constant c such that for n sufficiently large log p kb r s b log log p b r s bn log log n b bn kb b k c log the following theorem is obtained by applying these bounds to theorem after taking t log t theorem assume b satisfies assumption suppose the 
prior satisfies assumptions and then for all m t p b kb b m t t log t t as t this means that when the true parameter is from b a rate is obtained that is for timal possibly up to a log factor when then b is in particular in the space b every small positive and therefore converges with rate essentially t when a different function is used defined on a compact interval of r and the basis elements p are defined by j k j forcing them to be then theorem and derived results for applications still holds provided j k and j k j l when l d for a fixed d n and the smoothness assumptions on b are changed accordingly a finite number of basis elements can be added or redefined as long as they are it is easy to see that our results imply posterior convergences rates in weaker l p p with the same rate when p the l p is stronger than the l we apply ideas of knapik and salomond to obtain rates for stronger l p theorem assume the true drift b satisfies assumption suppose the prior satisfies assumptions and let p then for all m t p b kb b kp m t t log t x t as t these rates are similar to the rates obtained for the density estimation in and nickl however our proof is less involved note that we have only consistency for applications to nonparametric regression and density estimation our general results also apply to other models the following results are obtained for b satisfying assumption and the prior satisfying assumptions and nonparametric regression model as a direct application of the properties of the prior shown in the previous section we obtain the following result for a nonparametric regression problem assume x in b i i i n with independent gaussian observation errors i n when we apply ghosal and van der vaart example to theorem we obtain for every m n b kb b m n n log n n x as n and in a similar way as in theorem for every p b n p log n b kb b m n n x as n density estimation let us consider n independent observations x n x x n with x i p where p is an unknown density on relative to the lebesgue measure let p denote the space of densities on relative to the lebesgue measure the natural distance for densities is the hellinger distance h defined by z p p x q x dx h p q b define the prior on p by p keeb k where b is endowed with the prior of theorem or its periodic version assume that log p is in the sense of assumption applying ghosal et al theorem and van der vaart and van zanten lemma to theorem we obtain for a big enough constant m n p p h p p m log n n x as n proofs proof of lemma since conditions nd and li of karatzas and shreve theorem hold the sde eq has a unique weak solution up to an explosion time assume without loss of generality that x define and for i the random times inf t t x by periodicity of drift and the markov property the random variables ui are independent and identically distributed note that inf t x t n x ui i p and hence follows from ui almost surely the latter holds true since with positive probability which is clear from the continuity of diffusion paths proof of lemma proof of the first part for the proof we introduce some notation for any j k j k we write j k j k if supp j k supp j k the set of indices become a lattice with partial order and by j k j k we denote the supremum identify i with j k and similarly i with j k for i denote by t i the time points in corresponding to the maxima of without loss of generality assume t i t i we have g i i if and only if the interiors of the supports of and are disjoint in that case max supp j k t j k j k min supp j k the values of zi can 
be found by the midpoint displacement technique the coefficients are given by v and for j z j k j j j k as v is a gaussian process the vector z is gaussian say with infinite precision matrix now i if there exists a set l n such that l i i for which conditional on zi i l zi are zi are independent define j k j k j k and l i n i j k with j j the set zi i l determine the process v at all times j k now zi and zi are conditionally independent given vt t j k j the markov property of the nonperiodic process the result follows since zi i l vt t j lemma let k s t evs vt k e e by and if x s t t k s x k x k t x t k x proof without loss of generality assume that t x with m t s and t s e e e e e e e e e e e e e e e e e e e e e the result follows from e e and scaling both sides with proof of the second part denote by a b c d the support of and respectively and let m b a and n d c but for i let m v and var var coth and cov sinh note that the covariance matrix of and has eigenvalues tanh and coth and is strictly positive definite by midpoint displacement va vb i and k s t evs vt e e assume without loss of generality d define to be the halfwidth of the smaller interval so that d c j then b a j with h j consider three cases the entries on diagonal i i the interiors of the supports of and are the support of is contained in the support of case by elementary computations for i e a i i e e e e e e e e e as and under the assumption the last display can be bounded by e e a i i e hence j a i i j case necessarily i i by twofold application of lemma a i j k c b n b k d b k c m n m k d m k c a n a k d a sinh d k n b n m k n a sinh d k n m using the convexity of sinh we obtain the bound for x note that f x e e is convex on from which we derive f x e using this bound and the fact that for k n m coth which can be easily seen from a plot that i i j j n m j j case for i i with m or i with m using eq we obtain i i m n k m c k m d d k m n j k m n j when i i then using the calculation eq and lemma noting that a b and m are not in c d we obtain a i i d k n b n m k n a write x and a simple computation then shows e e x e x the derivative of f e x e x is nonnegative for x hence f is increasing and so f f f note that f for x and f e g x maximising g x over x gives g x and g and therefore f g x it follows that e e for the other terms we derive the following bounds write e e x e x h now h is decreasing for x log and convex and positive for x log in both case we can bound h by its value at the endpoints and using that we obtain h e x and h e e x so h using the bound eq and exp x we obtain i i j j proof of theorem a general result for deriving contraction rates for brownian models was proved in van der meulen et al theorem follows upon verifying the assumptions of this result for the diffusion on the circle these assumptions are easily seen to boil down to for every t and b b l t the measures p t and p t are equivalent the posterior as defined in equation eq is well defined define the random hellinger semimetric h t on l t by z h t b b b b x t dt b b l t there are constants c c for which p p lim p t c t kb b h t b b c t kb b b b l t t we start by verifying the third condition recall that the local time of the process x t is defined as the random process l t x which satisfies z t z f x t dt f x l t x dx r for every measurable function f for which the above integrals are defined since we are working with functions we define the periodic local time by x t x l t x k note that t x t is continuous with probability one hence the support of t 
x t is compact with probability one since x l t x is only positive on the support of t x t it follows that the sum in the definition of t x has only finitely many nonzero terms and is therefore well defined for a function f we have z t z f x t dt f x t x dx provided the involved integrals exists it follows from schauer and van zanten theorem that t x converges to a positive deterministic function only depending only on b and which is bounded away from zero and infinity since the hellinger distance can be written as s z p t x h t b b t b x b x dt t p it follows that the third assumption is satisfied with d t b b t kb b conditions and now follow by arguing precisely as in lemmas and of van waaij and van zanten respectively the key observation being that the convergence result of t x also holds when b x dx is nonzero which is assumed in that paper p the stated result follows from theorem in van der meulen et al taking t in their paper proof of theorem with assumption a the proof proceeds by verifying the conditions of theorem by assumption the true drift can p j be represented as b z j z j k j k for r define its truncated version by b z r x x z j small ball probability for choose an integer r with where for notational convenience we will write r instead of r in the remainder of the proof by lemma we have kb b therefore kb r s b kb r s b kb b kb r s b which implies p kb r s b p kb r s b let f s denotes the probability density of for any x we have z x p kb r s b p kb r s b f s s ds p r r r p r r where inf l p and p kb r s b z f s s ds p and p p and q are taken from assumption for sufficiently small we have by the second part of assumption z f s s ds exp by choice of r and the first part of assumption there exists a positive constant c such that p r r exp c r exp log for sufficiently small for lower bounding the middle term in equation we write b r s b s z r x x j s z j k z j k j k which implies kb r s b z r x max z j k z j k r max zi z i j j i this gives the bound y p kb r s b p zi z i i r by choice of the zi we have for all i i zi is standard normally distributed and hence i i i log p log p zi z i zi z i r r s i zi i i log log r s r s s where the inequality follows from lemma the third term can be further bounded as we have i z i i z i jb hence i i i log log log p zi z i r r s r s s for s l and i ir we will now derive bounds on the first three terms on the right of eq for sufficiently small we have r r and then inequality implies logc r log log bounding the first term on the rhs of for sufficiently small we have p r s r log i log log r log log log log p q log where p q is a positive constant bounding the second term on the rhs of for sufficiently small we have i r s logc l logc the final inequality is immediate in case else if suffices to verify that the exponent is nonnegative under the assumption p bounding the third term on the rhs of for sufficiently small in case we have i jb jb l in case we have i jb r jb l p as the exponent of is positive under the assumption p hence for small enough we have log p zi z i p q log r as we get log inf p x x p p kb r s b p q log log we conclude that the right hand side of eq is bounded below by exp log for some positive constant c and sufficiently small entropy and remaining mass conditions for r denote by c r the linear space spanned by and j k j r k j and define c r t b c r jb t proposition for any log n c r t k log t where a proof we follow van der meulen et choose such that define j j t j t if j r uj t if j pr j j j for each j r let e j be a minimal j 
with respect to the on and let e be a minimal with respect to the on hence if x u j then there exists a e j e j such that maxk k e k j take b c r t arbitrary b z pr j j z j k j k let e pr j j e j k j k where j e e e and e j e j j e j for j we have kb e e this can be bounded by pr j j r x r x max j k e j k j k j j max j j z j k j e j k j j by an appropriate choice of the coefficients in in that case we obtain that kb this implies j t log log j log n c r t k j j r x r x j the asserted bound now follows upon choosing j j proposition there exists a constant a positive constant k such that log n b c r kb b k log k proof there exists a positive k such that b c r kb b b c r k by lemma this set is included in the set n o p b c r r k by lemma for any b z pr j j z j k j k in this set we have o n p max j k j r k j r k hence the set eq is included in the set b c r jb a r c r a r where a r p r k hence n b c r kb b k n c r a r k using lemma again the latter can be bounded by p n r c r a r k the result follows upon applying proposition we can now finish the proof for the entropy and remaining mass conditions choose r n to be the smallest integer so that n where l is a constant and set bn c r n the entropy bound then follows directly from proposition for the remaining mass condition using assumption we obtain p b r s bn p r r n exp c n r n exp log and note that the constant c can be made arbitrarily big by choosing l big enough proof of theorem under assumption b we start with a lemma lemma assume there exists c c and c with c c independent from r such that for all i i i i r c i a i i c i i i c i i if i i let ae a i i i i so the submatrix of a r then for all x e x ax e x e c c x e e i i i i is the diagonal matrix with e i i i where proof in the following the summation are over i i i i r p p i x i a i i i j x i a i j x j by the first inequality c x r x c x i x i x i x a i i c x i trivially x a r x x i c x r x on the other hand x x x a x i i i i c i i i i i i i i at the first inequality we used the second part of of the second inequality follows upon including the diagonal by this can be further bounded by x i x i xi c x i i p p p i j j where the final inequality follows from i i the result i j follows by combining the derived inequalities we continue with the proof of theorem write a as block matrix b b with a a and b a defined accordingly by lemma coth coth define the c tanh i c where i is the matrix it is easy to see that a is positive definite when a b a b is positive definite then it follows from the cholesky decomposition that a is positive definite where diag positive definite note where b a b i i x k k b i k a kk b i k x k k a kk x k k a kk b i b i b i b i sinh coth therefore i i b a b i i now consider a b a b by lemma and the bound on b a b i i and choosing c in the definition of small enough under the assumption that i i i i and for i i i i i i therefore by lemma is positive definite with diagonal matrix with diagonal entries i it follows that x x ax this implies that the small ball probabilities and the mass outside a sieve behave similar under assumption b as when the zi are independent normally distributed with zero mean and variance i as this case corresponds to assumption a with for which posterior contraction has already been established the stated contraction rate under assumption b follows from anderson s lemma lemma proof of theorem convergence in stronger norms the linear embedding operator t l p t l t x x is a injective continuous operator for all p its inverse is easily seen to be a 
densely defined closed unbounded linear operator following knapik and salomond we define the modulus of continuity m as m bn sup k f f kp f bn k f f theorem of knapik and salomond adapted to our case is theorem knapik and salomond let tn and be a prior on l p t such that bnc x tn for measurable sets bn l p t assume that for any positive sequence m n b bn kb b m n x tn then b l p t kb b kp m bn m n x tn c note that the sieves c r t which we define in section have by eq the property c r t t r x by lemmas and the modulus of continuity satisfies m c r u for all p assume and the result follows a lemmas used in the proofs lemma suppose z has expansion x x z z z j if jz with the norm defined in then for r x z i i jz proof this follows from x z i i x z j x j j max j j k jz j x j j lemma if x ig a b then for any m p x m ba m a proof this follows from p x m ba a z m x dx b a ba x m a a lemma let x n r and q p e log p p p e proof note that z and e e thus e e e z p e x now the elementary bound e e x dx z e dx p e hence dx e ry e p p x z z e p e dx p z p p e u du y gives e x e dx p p p z p e u p e p e p p p p du r log e lemma anderson s lemma define a partial order on the space of n n by setting a b when b a is positive definite if x n and y n independently with x then for all symmetric convex sets c p y c p x c proof see anderson lemma let f z r x x z j k j k j then sup i f i i proof note that f f and f f and inductively for j z j k f j f j k f j k hence j k f lemma let c r as in section then sup f r k f p r k f proof let f c r be nonzero note that for any constant c kc f k f kc f k f hence we may and do assume that k f furthermore since the l and l norm of f and f are the same we also assume that f is nonnegative let x be a global maximum of f clearly f x since f is a linear interpolation between the points j k we may also assume that x is of the form x j we consider two cases i k ii k in case i we have that f x x i x for all x k in case ii f x x i x for all x hence in both cases z k f x dx thus k f k f r p r uniformly over all nonzero f c r s lemma let a a x x be positive numbers then w proof suppose that the lemma is not true so there are positive a a x x such that w w x x x a a w x x x a a hence both terms on the are negative in particular this means for the first term that x a for the second term this gives x a these two inequalities can not hold simultaneously and we have reached a contradiction lemma let c r and c r s as in section then for p sup f r k f kp k f p r proof let f c r just as in proof of lemma we may assume that f is nonnegative and k f hence p k f kp k f kp p sup sup k f kp sup f r k f k f f r k f f r k f note that p k f kp x z hence by repeatedly applying lemma r f x p dx r f x dx f x p dx r k k f x r f x p dx f x dx note that f is a linear interpolation between the points k now study affine functions g r which are positive a maximum of g is attained in either or without lose of generality it is attained in using scaling in a later stadium of the proof we assume for the moment that g hence a g note that g x a x when a kg kp kg now consider a z z p g x dx a x dx let y then x and dx dy hence z z p g x dx a p y p dy a p note that for a constant c and a function h p kchkp p c p khkp c let c p c khkp hence c g has l one and p p kcg kp c p kg kp a r a p r a a a p the maximum is attained for a then p kc g kp hence kcg kp r p p r and the result follows using that k f i k f and that for c c p p kc g kp p c p kc g kp kc g kp kc g c kc g kc g b acknowledgement this work was partly supported 
by the netherlands organisation for scientific research nwo under the research programme foundations of nonparametric bayes procedures and by the erc advanced grant bayesian statistics in infinite dimensions references anderson the integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities proc amer math bhattacharya and waymire a basic course in probability theory universitext springer new york dalalyan a sharp adaptive estimation of the drift function for ergodic diffusions ann dalalyan and kutoyants y a asymptotically efficient trend coefficient estimation for ergodic diffusion math methods ghosal ghosh and van der vaart convergence rates of posterior distributions ann ghosal and van der vaart convergence rates of posterior distributions for noniid observations ann and nickl rates of contraction for posterior distributions in l r r ann and nickl mathematical foundations of statistical models cambridge series in statistical and probabilistic mathematics cambridge university press hindriks empirical dynamics of neuronal rhythms phd thesis vrije universiteit amsterdam karatzas and shreve brownian motion and stochastic calculus volume of graduate texts in mathematics new york second edition knapik and salomond a general approach to posterior contraction in nonparametric inverse problems bernoulli knapik van der vaart and van zanten bayesian inverse problems with gaussian priors ann kutoyants y a statistical inference for ergodic diffusion processes springer new york papaspiliopoulos pokern roberts and stuart nonparametric estimation of diffusions a differential equations approach biometrika pokern y fitting stochastic differential equations to molecular dynamics data phd thesis university of warwick pokern stuart and van zanten posterior consistency via precision operators for bayesian nonparametric drift estimation in sdes stochastic processes and their applications schauer and van zanten uniform central limit theorems for additive functionals of diffusions on the circle in preparation shen and ghosal adaptive bayesian procedures using random series priors scandinavian journal of statistics spokoiny adaptive drift estimation for nonparametric diffusion model ann strauch sharp adaptive drift estimation for ergodic diffusions the multivariate case stochastic process van der meulen schauer and van zanten reversible jump mcmc for nonparametric drift estimation for diffusion processes comput statist data van der meulen van der vaart and van zanten convergence rates of posterior distributions for brownian semimartingale models bernoulli van der vaart and van zanten rates of contraction of posterior distributions based on gaussian process priors ann van waaij and van zanten gaussian process methods for diffusions optimal rates and adaptation electron j van zanten nonparametric bayesian methods for diffusion models mathematical biosciences
work in progress by shie mannor vianney perchet and gilles stoltz approachability in unknown games online learning meets optimization shie mannor shie israel institute of technology technion faculty of electrical engineering haifa israel vianney perchet jun ensae paristech avenue pierre larousse malakoff france gilles stoltz stoltz greghec hec paris cnrs rue de la france abstract in the standard setting of approachability there are two players and a target set the players play repeatedly a known game where the first player wants to have the average payoff converge to the target set which the other player tries to exclude it from this set we revisit this setting in the spirit of online learning and do not assume that the first player knows the game structure she receives an arbitrary vectorvalued reward vector at every round she wishes to approach the smallest best possible set given the observed average payoffs in hindsight this extension of the standard setting has implications even when the original target set is not approachable and when it is not obvious which expansion of it should be approached instead we show that it is impossible in general to approach the best target set in hindsight and propose achievable though ambitious alternative goals we further propose a concrete strategy to approach these goals our method does not require projection onto a target set and amounts to switching between scalar regret minimization algorithms that are performed in episodes applications to global cost minimization and to approachability under sample path constraints are considered keywords approachability online learning optimization introduction the approachability theory of blackwell is arguably the most general approach available so far for online optimization and it has received significant attention recently in the learning community see abernethy et and the references therein in the standard setting of approachability there are two players a payoff function and a target set the players play a repeated game where the first player wants the average payoff representing the states in which the different objectives are to converge to the target set representing the admissible values for the said states which the opponent tries to exclude the target set is prescribed a priori before the game starts and the aim of the is that the average reward be asymptotically inside the target set a theory of approachability in unknown games for arbitrary bandit problems the analysis in approachability has been limited to date to cases where some underlying structure of the problem is known namely the vector payoff function r c by mannor perchet stoltz mannor perchet stoltz and some signalling structure if the obtained payoffs are not observed we consider the case of unknown games where only rewards are observed and there is no a priori assumption on what can and can not be obtained in particular we do not assume that there is some underlying game structure we can exploit in our model at each round for every action of the decision maker there is a reward that is only assumed to be arbitrary the minimization of regret could be extended to this setting see and lugosi sections and and we know that the minimization of regret is a special case of approachability hence our motivation question can a theory of approachability be developed for unknown games one might wonder if it is possible to treat an unknown game as a known game with a very large class of actions and then use approachability while such lifting is 
possible in principle it would lead to unreasonable time and memory complexity as the dimensionality of the problem will explode in such unknown games the decision maker does not try to approach a target set but rather tries to approach the best smallest target set given the observed rewards defining a goal in terms of the actual rewards is standard in online learning but has not been pursued with a few exceptions listed below in the multiobjective optimization community a theory of smallest approachable set in insight even in known games it may happen that no target set is given when the natural target set is not approachable typical relaxations are then to consider uniform expansions of this natural target set or its convex hull can we do better to answer this question another property of regret minimization is our source of inspiration the definition of a strategy see and lugosi is that its performance is asymptotically as good as the best constant strategy the strategy that selects at each stage the same mixed action another way to formulate this claim is that a strategy performs almost as well as the best mixed action in hindsight in the approachability scenario this question can be translated into the existence of a strategy that approaches the smallest approachable set for a mixed action in hindsight if the answer is negative and unfortunately it is the next question is to define a weaker aim that would still be more ambitious than the typical relaxations considered short literature review our approach generalizes several existing works our proposed strategy can be used for standard approachability in all the cases where the desired target set is not approachable and where one wonders what the aim should be we illustrate this on the problems of global costs introduced by et al and of approachability with sample path constraints as described in the special case of regret minimization by mannor et al the algorithm we present does not require projection which is the achilles heel of many schemes it does so similarly to bernstein and shimkin our approach is also strictly more general and more ambitious than one recently considered by azar et al an extensive comparison to the results by bernstein and shimkin and azar et al is offered in section approachability in unknown games outline this article consists of four parts of about equal lengths we first define the problem of approachability in unknown games and link it to the standard setting of approachability section we then discuss what are the reasonable target sets to consider sections and section shows by means of two examples that the expansion can not be achieved while its convexification can be attained but is not ambitious enough section introduces a general class of achievable and ambitious enough targets a sort of convexification of some target set the third part of the paper section exhibits concrete and computationally efficient algorithms to achieve the goals discussed in the first part of the paper the general strategy of section amounts to playing a standard regret minimization in blocks and modifying the direction as needed its performance and merits are then studied in detail with respect to the literature mentioned above it bears some resemblance with the approach developed by abernethy et al last but not least the fourth part of the paper revisits two important problems for which dedicated methods were created and dedicated articles were written regret minimization with global cost functions and online learning with 
sample path constraints section we show that our general strategy has stronger performance guarantees in these problems than the ad hoc strategies that had been constructed by the literature setup unknown games notation and aim the setting is the one of classical approachability that is vector payoffs are considered the difference lies in the aim in classical approachability theory the average rt of the obtained vector payoffs should converge asymptotically to some target set c which can be known to be approachable based on the existence and knowledge of the payoff function in our setting we do not know whether c is approachable because there is no underlying payoff function we then ask for convergence to some of c where should be as small as possible setting unknown game with vectors of vector payoffs the following game is repeatedly played between two players who will be called respectively the or first player and the opponent or second player vector payoffs in rd where d will be considered the first player has finitely many actions whose set we denote by a a we assume a throughout the paper to avoid trivialities the opponent chooses at each round t a vector mt mt a of vector payoffs mt a rd we impose the restriction that these vectors mt lie in a convex and bounded set k of rd a the first player picks at each round t an action at a possibly at random according to some mixed action xt xt a we denote by a the set of all such mixed actions she then receives mt at as a vector payoff we can also assume that mt at is the only feedback she gets on mt and that she does not see the other components of mt than the one she chose this is called bandit monitoring but can and will be relaxed to a full monitoring as we explain below remark we will not assume that the first player knows k or any bound on the maximal norm of its elements put differently the scaling of the problem is unknown mannor perchet stoltz the terminology of unknown game was introduced in the machine learning literature see and lugosi sections and for a survey a game is unknown to the when she not only does not observe the vector payoffs she would have received has she chosen a different pure action bandit monitoring but also when she does not even know the underlying structure of the game if any such structure exists section will make the latter point clear by explaining how the classical setting of approachability introduced by blackwell is a particular case of the setting described above some payoff function r exists therein and the knows the strategy proposed by blackwell crucially relies on the knowledge of in our setting r is unknown and even worse might not even exist section and section a will recall how a particular case of approachability known as minimization of the regret could be dealt with for unknown games formulation of the approachability aim the is interested in controlling her average payoff t ret mt at t she wants it to approach an as small as possible neighborhood of a given target set c which we assume to be closed this concept of neighborhood could be formulated in terms of a general filtration see remark below for the sake of concreteness we resort rather to expansions of a base set c in some p which we denote by k k for p formally we denote by the closed in p of c c rd c kc kp c rd dp c c here and in the sequel dp s denotes the distance in p to a set as is traditional in the literature of approachability and regret minimization we consider the smallest set that would have been approachable in hindsight 
that is had the averages of the vectors of vector payoffs be known in advance mt t mt t whose components equal a a mt a t mt a t this notion of smallest set is somewhat tricky and the first part of this article will be devoted to discuss it the model we will consider is the following one we fix a target function k it takes mt as argument section will indicate reasonable such choices of it associates with it the mt of our aim is then to ensure the convergence dp ret mt as t as in the definition of classic approachability uniformity will be required with respect to the strategies of the opponent the should construct strategies such that for all there exists a time such that for all strategies of the opponent with probability at least sup dp ret mt t approachability in unknown games remark more general filtrations could have been considered than expansions in some norm by filtration we mean that for all a for instance if c one could have considered shrinkages and that is and c for or given some compact set b with interior c for but for the sake of clarity and simplicity we restrict the exposition to the more concrete case of expansions of a base set c in some p summary the two sources of unknowness as will become clearer in the concrete examples presented in section not only the structure of the game is unknown and might even not exist first source of unknownness but also the target is unknown this second source arises also in known games in the following cases when some natural target some target is proven to be unachievable or when some feasible target is not ambitious enough the least approachable uniform expansion of c as will be discussed in section what to aim for then convex relaxations are often considered more manageable and ambitious enough targets but we will show that they can be improved upon in general see the paragraph discussion on page for more details on these two sources of unknownness in the concrete example of global costs two classical relaxations mixed actions and full monitoring we present two extremely classical relaxations of the general setting described above they come at no cost but simplify the exposition of our general theory the can play mixed actions first because of martingale convergence results for instance the inequality controlling ret is equivalent to controlling the averages rt of the conditionally expected payoffs rt where rt xt mt x xt a mt a and rt t t rt xt t t mt indeed the boundedness of k and a application of the said inequality ensure that there exists a constant c such that for all for all t for all strategies of the opponent with probability at least r w w wret rt w c ln p t given we use these inequalities each with replaced by a union bound entails that choosing sufficiently large so that r ln dt sup c t t we then have for all strategies of the opponent with probability at least w w sup wret rt wp t therefore we may focus on rt instead of ret in the sequel and consider equivalently the aim discussed below mannor perchet stoltz the can enjoy a full monitoring second the assumption can be relaxed to a full monitoring at least under some regularity assumptions uniform continuity of the target function indeed we assumed that the only gets to observe mt at after choosing the component at a of mt however standard estimation techniques presented by auer et al and mertens et al sections and provide accurate and unbiased estimators m b t of the whole vectors mt at least in the case when the latter only depends on what happened in the past and on the 
opponent s strategy but not on the s choice of an action at the components of these estimators m b t equal for a a m b t a mt a i xt a with the constraint on mixed actions that xt a for all t the should then base her decisions and apply her strategy on m b t and eventually choose as a mixed action the convex combination of the mixed action she would have freely chosen based on the m b t with weight and of the uniform distribution with weight indeed by the inequality the averages of the vector payoffs and of the vectors of vector payoffs based respectively on the mt a and on the m b t a as well as the corresponding average payoffs obtained by the differ by something of the order of u t t u x x p ln t t for each t with probability at least and uniformly over the opponent s strategies these differences vanish as t at a t rate when the are of the order of a treatment similar to the one performed to obtain can also be applied to obtain statements with uniformities both with respect to time t and to the strategies of opponent because our aim involves the average payoffs mt via the target function as in mt we require the uniform continuity of for technical reasons to carry over the negligible differences between the average payoffs and their estimation in the approachability aim this assumption of uniform continuity can easily be dropped based on the result of theorem details are omitted conclusion approachability aim the enjoying a full monitoring should construct a strategy such that almost surely and uniformly over the opponent s strategies dp rt mt as t that is for all there exists such that for all strategies of the opponent with probability at least sup dp rt mt t however such a dependency can still be dealt with in some cases see the case of regret minimization in section and section a when the dependency on the s present action at comes only through an additive term equal to the obtained payoff which is known approachability in unknown games we note that we will often be able to provide stronger uniform and deterministic controls of the form there exists a function such that t and for all strategies of the opponent dp rt mt t to conclude this section we point out again that the two relaxations considered come at no cost in the generality of setting they are only intended to simplify and clarify the exposition full details of this standard reduction from the case of bandit monitoring to full monitoring are omitted because they are classical though lengthy and technical to expose link with approachability in known finite games we link here our general setting above with the classical setting considered by blackwell therein the and the opponent have finite sets of actions a and b and choose at each round t respective pure actions at a and bt b possibly at random according to some mixed actions xt xt a a and yt yt b b a payoff function r a b rd is given and is multilinearly extended to a b according to xx x y a b r x y xa yb r a b from the viewpoint the game takes place as if the opponent was choosing at each round the vector of vector payoffs mt r bt r a bt a target set c is to be approached that is the convergence ret t r at bt c t should hold uniformly over the opponent s strategies of course as recalled above we can equivalently require the uniform convergence of rt to a necessary and sufficient condition for this when c is closed and convex is that for all y b there exists some x a such that r x y of course this condition called the dual condition for approachability is not always met 
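To make the full-monitoring relaxation concrete, the following minimal Python sketch implements the importance-weighted estimation step described above for a finite action set: the played component of the vector of vector payoffs is divided by its selection probability, the unobserved components are set to zero, and a uniform-mixing floor keeps every selection probability bounded away from zero so that the estimate is unbiased. The function names and the toy numbers are ours for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_with_uniform(x, gamma):
    """Mix a mixed action x with the uniform distribution so that
    every component is at least gamma / |A| (exploration floor)."""
    A = len(x)
    return (1.0 - gamma) * x + gamma * np.ones(A) / A

def estimate_full_vector(m_t, x_t, a_t):
    """Importance-weighted estimate of the whole vector of vector payoffs:
    only the played component m_t[a_t] is observed; all other components
    are set to 0.  The estimate is unbiased whenever x_t[a] > 0 for all a."""
    m_hat = np.zeros_like(m_t)
    m_hat[a_t] = m_t[a_t] / x_t[a_t]
    return m_hat

# toy illustration: 3 actions, vector payoffs in R^2, one round of play
m_t = rng.uniform(-1.0, 1.0, size=(3, 2))    # hidden vector of vector payoffs
x_t = mix_with_uniform(np.array([0.7, 0.2, 0.1]), gamma=0.1)
a_t = rng.choice(3, p=x_t)                   # action actually drawn
m_hat_t = estimate_full_vector(m_t, x_t, a_t)
# E[m_hat_t] = m_t, so averages of the m_hat_t track the averages of the m_t
```

In words, the averages of these estimates stay within a vanishing distance of the true averages of the vectors of vector payoffs, which is why the full-monitoring analysis carries over to the bandit case up to negligible terms.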
however in view of the dual condition the least approachable in p of such a closed and convex set c is given by max min dp r x y c b a approaching corresponds to considering the constant target function in better uniformly smaller choices of target functions exist as will be discussed in section this will be put in correspondence therein with what is called opportunistic mannor perchet stoltz the knowledge of r is crucial a first strategy the general strategies used to approach c or when c is not approachable and p rely crucially on the knowledge of indeed the original strategy of blackwell proceeds as follows at round t it first computes the projection e ct of ret onto then it picks at random according to a mixed action such that ret e ct r y e ct y b when c is approachable such a mixed action always exists one can take for instance arg min max ret e ct r x y a b in general the strategy thus heavily depends on the knowledge of when c is not approachable and p the set is the target and the choice right above is still suitable to approach in indeed the projection det of ret onto is such that ret det is proportional to ret e ct thus arg min max ret e ct r x y arg min max ret det r x y a b a b the knowledge of r is crucial a second strategy there are other strategies to perform approachability in known finite games though the one described above may be the most popular one for instance bernstein and shimkin propose a strategy based on condition for approachability which still performs approachability at the optimal t rate we discuss it in greater details and generalize it to the case of unknown games in section for now we describe it shortly only to show how heavily it relies on the game r being known assume that c is approachable at round t choose an arbitrary mixed action to draw and choose an arbitrary mixed action b for rounds t assume that mixed actions yet b have been chosen by the in addition to the pure actions bt actually played by the opponent and that corresponding mixed actions x x et such that r x es yes c have been chosen as well denoting t x r the strategy selects arg min max r x y i a b and t x r x arg max min r x y i b a exists since c is as well as x a such that r x c where such an x approachable thus it is crucial that the strategy knows r however that c be approachable is not essential in case it is not approachable and is to be approached instead it suffices to pick x es arg min dp r x yes c a so that r x es yes any p is suitable for this argument approachability in unknown games link with regret minimization in unknown games the problem of regret minimization can be encompassed as an instance of approachability for the sake of completeness we recall in appendix a why the knowledge of the payoff structure is not crucial for this very specific problem this of course is not the case at all for general approachability problems two toy examples to develop some intuition the examples presented below will serve as guides to determine suitable target functions k that is target functions for which the convergence can be guaranteed and that are ambitious small enough in a sense that will be made formal in the next section example minimize several costs at a time the following example is a toy modeling of a case when the first player has to perform several tasks simultaneously and incurs a loss or a cost for each of them we assume that her overall loss is the worst the largest of the losses thus suffered for simplicity and because it will be enough for our purpose we will assume 
that the only has two actions that is a while the opponent is restricted to only pick convex combinations of the following vectors of vector payoffs and m m a m with with m and and m the opponent s actions can thus be indexed by where the latter corresponds to the vector of vectors m m the base target set c is the negative orthant c and its in the supremum norm p are a graphical representation of these expansions and of the vectors and m is provided in figure example control absolute values in this example the still has only two actions a and gets scalar rewards d the aim is to minimize the absolute value of the average payoff to control the latter from above and from below for instance because these payoffs measure deviations in either direction from a desired situation formally the opponent chooses vectors mt which we assume to actually lie in k the product is then simply the standard inner product over we consider c as a base target set to be approached its expansions in any p are c for min min mannor perchet stoltz if if if this last set of equalities can be seen to hold true by contemplating figure m c m figure graphical of and m ofand different figurerepresentation graphical representation of and m andexpansions of left graph of the functions bold solid line of and dotted lines and of some fo x thin solid line the smallest set in hindsight can not be achieved in general we denote by k the function that associates with a vector of vector payoffs proof of lemma assume contradiction that combination is achievableofinitsthe above example an m k the index of the smallest p of cbycontaining a convex consider any strategy of the decision maker to do so which we denote by imagine in a first tim components that nature chooses at every stage t othe vectors mt which amounts to playing n m min given the x average a m min dp x m c the ai equals and that its image by equals a is then to converge to but this can only be guaranteed if the average of the chosen xt converge the infimum being achieved byis continuity this defines function x large integer such that to that given there exists asome possibly m k x m a x m m m lemma in examples and the convergence can not be achieved for against all strategies of the opponent now consider a second scenario during the first stages nature chooses the vectors mt m by construction as is fixed is ensured now for the next stages for t the proofs located in appendix reveal difficulty in is that it should assume that nature chooses the vectors mt m and denote by the average of th hold along a whole path while the value of mt can change more rapidly than the x selected by the average of the played vectors is corresponds to whos average payoff vectors trt do image by equals therefore the target set is however by definition of they will formalize the following proof scheme to accommodate a first situation which we have lasts a large number t of stages the should play in a given way but then is she from where the x the opponent changes drastically his and the m situation repeated can not catch up and is far the target at stage a concave relaxation is not ambitious enough a classical relaxation in the literature for unachievable targets see how mannor et proceed is to consider concavifications can the convergence hold with cav the concavification of the latter is defined as the least concave function k above the next section will show that it is indeed always the case but we illustrate on our examples why such a goal is not ambitious enough the proof of the lemma below 
can be found in appendix lemma in examples and the has a mixed action x x that she can play at each round to ensure the convergence for a target function that is uniformly smaller than and even strictly smaller at some points approachability in unknown games a general class of ambitious enough target functions the previous section showed on examples that the target function was too ambitious a goal while its concavification cav seemed not ambitious enough in this section based on the intuition given by the formula for concavification we provide a whole class of achievable target functions relying on a parameter a response function in the definition below by uniformity over strategies of the opponent player we mean the uniform convergence stated right after we denote by the graph of the mapping m k m n o m r k rd r m rd a rd definition a continuous target function k is achievable if the decisionmaker has a strategy ensuring that uniformly over all strategies of the opponent player dp rt mt as t more generally a possibly target function k is achievable if is approachable for the game with payoff function x m a k m x m that is if uniformly over all strategies of the opponent player mt rt as t we always have that entails with or without continuity of the condition is however less restrictive in general and it is useful in the case of target functions to avoid lack of convergence due to errors at early stages but for continuous target functions the two definitions and are equivalent we prove these two facts in section in the appendix the defining equalities for show that this function is continuous it is even a lipschitz function with constant in the p we already showed in section that the target function is not achievable in general to be able to compare target functions we consider the following definition and notation definition a target function k is strictly smaller than another target function if and there exists m k with m m we denote this fact by for instance in lemma we had the target function cav is always achievable we show below that the target function cav is always achievable but of course section already showed that cav is not ambitious enough in examples and there exist achievable target functions with cav we however provide here a general study of the achievability of cav as it sheds light on how to achieve more ambitious target functions so we now only ask for convergence of mt rt to the convex hull of not to itself indeed this convex hull is exactly the graph gcav where cav is the mannor perchet stoltz concavification of defined as the least concave function k above its variational expression reads x cav m sup mi n and mi m for all m k where the supremum is over all finite convex decompositions of m as elements of k the mi belong to k and the factors are nonnegative and sum up to by a theorem of fenchel and bunt see and theorem we could actually further impose that n da in general cav is not continuous it is however so when k is a polytope lemma the target function cav is always achievable proof sketch when k is known when the knows k and only in this case she can compute cav and its graph gcav as indicated after definition it suffices to show that the convex set gcav is approachable for the game with payoffs x m a k m x m the should then play any strategy approaching gcav note that is continuous that is thus a closed set and that gcav is a closed convex set containing now the characterization of approachability by blackwell for closed convex sets recalled already in section 
states that for all m k there should exist x a such that m x m gcav but by the definition we even have m x m m which concludes the proof we only proved lemma under the assumption that the knows k a restriction which we are however not ready to consider as indicated in remark indeed she needs to know k to compute and the needed projections onto this set to implement blackwell s approachability strategy some other approachability strategies may not require this knowledge a generalized version of the one of bernstein and shimkin based on the dual condition for approachability see section for their original version see section for our generalization but anyway we chose not to go into these details now because at least in the case when c is convex lemma will anyway follow from lemmas and or theorem below which are proved independently and wherein no knowledge of k is even better they prove the strongest notion of convergence of definition irrespectively of the continuity or lack of continuity of cav an example of a more ambitious target function by we can rewrite as cav m sup dp x mi mi c n and x mi m indeed the functions x and at hand therein are independent of k as they are defined for each m k as the solutions of some optimization program that only depends on this specific m and on c but not on approachability in unknown games now whenever c is convex the function dp c is convex as well over rd see boyd and vandenberghe example therefore denoting by the function defined as x x m sup dp x mi mi c n and mi m for all m k we have cav the two examples considered in section actually show that this inequality can be strict at some points we summarize these facts in the lemma below whose proof can be found in appendix that is achievable is a special case of lemma stated in the next subsection where a class generalizing the form of will be discussed lemma the inequality cav always holds when c is convex for examples and we even have cav a general class of achievable target functions the class is formulated by generalizing the definition we call response function any function k a and we replace in the specific response function x by any response function definition the target function based on the response function is defined for all m k as x x m sup dp mi mi c n and mi m lemma for all response functions the target functions are achievable the lemma actually follows from theorem below which provides an explicit and efficient strategy to achieve any in the stronger sense irrespectively of the continuity or lack of continuity of for now we provide a sketch of proof under an additional assumption of lipschitzness for based on calibration because it further explains the intuition behind it also advocates why the functions are reasonable targets resorting to some auxiliary calibrated strategy outputting accurate predictions in the sense of calibration of the vectors mt almost amounts to knowing in advance the mt and with such a knowledge what can we get proof sketch when is a lipschitz function we will show below that there exists a constant ensuring the following given any there exists randomized strategy of the such that for all there exists a time such that for all strategies of the opponent with probability at least sup dp rt mt t mannor perchet stoltz in terms of approachability theory see perchet for a survey this means that is in particular an set for all thus a set but approachability and approachability are two equivalents notions a fact when the sets at hand are not closed convex sets that is is 
approachable or put differently is achievable indeed fixing there exists a randomized strategy picking predictions m b t among j finitely many elements m k where j so that the calibration score is controlled for all there exists a time such that for all strategies of the opponent with probability at least w w t x x i sup w m b t j wt t w w m b t mt w w p foster and vohra now the main strategy based on such an auxiliary calibrated strategy is to play m b t at each round the average payoff of the is thus rt t m bt t mt we decompose it depending on the predictions m b t made for each j the av j erage number of times m was predicted and the average vectors of vector payoffs obtained on the corresponding rounds equal bj t t i m b t j t and m b j t pt mt i m b t j pt i m b t j j t bj t otherwise we take an arbitrary value for m whenever b in particular mt x j t bj t m b rt and x bj t m j j t m b using this convex decomposition of mt in terms of elements of k the very definition of leads to j t x j t bj t m b m b mt actually the latter reference only considers the case of calibrated predictions of elements in some simplex but it is clear from the method used in mannor and stoltz a reduction to a problem of approachability that this can be performed for all subsets of compact sets such as k here with the desired uniformity over the opponent s strategies see also mannor et al appendix b the result holds for any p by equivalence of norms on vector spaces of finite dimension even if the original references considered the or only approachability in unknown games hence dp rt mt w w w w j t x w j t w b w wrt t m b m b w w w w p w wx j t w bj t m j m b w w w w j t w m b w w w p we denote by bp max a bound on the maximal p of an element in the bounded set a triangular equality shows that w w wx w j t j t w j t w x x w j t w w w j bj t m j m bj t w w b m b m b ba w m wm w w a a p w w p bp max x bj t x m j j t m b a a where m a refers to the probability mass put on a a by m as indicated above we assume for this sketch of proof that is a lipschitz function with lipschitz constant l with respect to the over a and the p over we get bp max x bj t x m j j t m b a a bp max l x w w j t w w j b b w t wm m p w w t x x bp max l i w m b t j wt w w m b t mt w w p substituting we proved for bp max l which concludes the proof some thoughts on the optimality of target functions the previous subsections showed that target functions of the form were achievable unlike the target function and that they were more ambitious than the concavification cav the question of their optimality can be raised a question to which we will not be able to answer in general our thoughts are gathered in appendix a strategy by regret minimization in blocks in this section we exhibit a strategy to achieve the stronger notion of convergence with the target functions advocated in section irrespectively of the continuity or lack of continuity of the algorithm is efficient as long as calls to are a full discussion of the complexity issues will be provided for each application studied in section mannor perchet stoltz description and analysis of the strategy as in abernethy et al the considered strategy see figure relies on some auxiliary strategy r namely a strategy with the following property assumption the strategy r sequentially outputs mixed actions ut a such that for all ranges b not necessarily known in advance for all t not necessarily known in advance for all sequences of vectors ra of payoffs lying in the bounded interval b possibly chosen 
online by some opponent player where t t t t x x max u t ln a ut a note in particular that the auxiliary strategy r automatically adapts to the range b of the payoffs and to the number of rounds t and has a sublinear guarantee the adaptation to b will be needed because k is unknown such auxiliary strategies indeed exist for instance the polynomially weighted average forecaster of and lugosi other ones with a possibly larger constant factor in front of the b t ln a term also exist for instance exponentially weighted average strategies with learning rates carefully tuned over time as described by et al or de rooij et al for the sake of elegance but maybe at the cost of not providing all the intuitions that led us to this result we only provide in figure the version of our strategy which does not need to know the time horizon t in advance the used blocks are of increasing lengths simpler versions with fixed block length l would require a tuning of l in terms of t pick l of the order of t to optimize the theoretical bound theorem for all response functions the strategy of figure is such that for all t for all strategies of the opponent there exists ct mt ensuring w w wrt ct w t ln a max t where max max is the maximal euclidean norm of elements in in particular denoting by a constant such that k kp k for all t and all strategies of the opponent dp rt mt t ln a max t remark with the notation of figure denoting in addition by nt the largest integer such that nt nt t by mpart t nt nt t x mt nt the partial average of the vectors of vector payoffs mt obtained during the last and nt block when nt nt t and an arbitrary element of k otherwise we can take nt x nt nt k k part part ct m m m m t approachability in unknown games parameters a strategy r with initial action and a response function k a initialization play and observe rd a this is block n for blocks n compute the total discrepancy at the beginninga of block n that is till the end of block n n x where m k xt mt x k mk k k m k m k rd is the average vector of vector payoffs obtained in block k n run a fresh instance rn of r for n rounds as follows set then for t n a play xn un t and observe mn rd a b feed rn with the vector payoff t ra with components given for a a by t a mn a i r where h i denotes the inner product in rd c obtain from rn a mixed action un a block n starts at round n n n n is of length n thus lasts till round figure the proposed strategy which plays in blocks of increasing lengths important comments on the result the strategy itself does not rely on the knowledge of k as promised in remark only its performance bound does via the max term also the convexity of c is not required the convergence rates are independent of the ambient dimension concerning the norms even if the strategy and its bound are based on the euclidean norm the set mt is defined in terms of the p as in the constant exists by equivalence of the norms on a space finally we note that we obtained the uniformity requirement stated after in the deterministic form with a function where t o t proof the convergence follows from the bound via the equivalence between p and that the stated ct in belongs to mt where the latter set is defined in terms of the p as in is by construction of as a supremum it thus suffices to prove with the ct defined in which we do by induction the induction is on the index n of the blocks and the quantities to control are the squared euclidean norms of the discrepancies at the end of these blocks we recall mannor perchet stoltz that denotes the 
discrepancy at the end of block we have that is a difference between two elements of k thus that max we use a approach we consider a function to be defined by the analysis and assume that we have proved that our strategy is such that for some n and for all sequences of vectors of vector payoffs mt k possibly chosen by some opponent for all strategies of the opponent w wn w n x x w w k k w w w xt mt m m w n w w max for instance we define we then study what we can guarantee for n we have w w w x w w w xt mt n m m w w w x m xt mt n m w w x w xt w mt n m w w w m w w max we upper bound the two squared norms by n and n respectively using the inner product can be rewritten with the the notation u m notation of figure as x xt mt n m m x t t x u t now the inequality indicates that for all a and t p t a kmn a max n where we used again the induction hypothesis assumption therefore indicates that the p p quantity can be bounded by max n n ln a putting everything together we have proved that the induction holds provided that n is defined for instance as p p n n max n n ln a max n by the lemma in appendix taking max ln a and max we thus get first n n max ln a then n max ln a approachability in unknown games it only remains to relate the quantity at hand in and to the by separating time till the end of the nt and starting from the beginning of block nt should the latter start strictly before t we get t rt ct xt t mt t t t nt x k m t x nt k m xt mpart nt nt m part part m mt the second sum contains at most nt elements as the nt regime is incomplete a triangular inequality thus shows that q nt nt krt ct max max ln a max t t t t t ln a max t where we used the inequality nt ntp t its implication nt as well as for the sake of readability the bounds and discussion in this section we gather comments remarks and pointers to the literature we discuss in particular the links and improvements over the concurrent and independent works by bernstein and shimkin and azar et al do we have to play in blocks is the obtained t rate optimal our strategy proceeds in blocks unlike the ones exhibited for the case of known games as the original strategy by blackwell or the more recent one by bernstein and shimkin see section the strategy considered in the proof of lemma also performed some grouping according to the finitely many possible values of the predicted vectors of vector payoffs this is because the target set to approach is unknown the approaches a sequence of expansions of this set where the sizes mt of the expansions vary depending on the sequence of realized averages mt of vectors of vector payoffs when an approachable target set c is given the strategies by blackwell or bernstein and shimkin do not need to perform any grouping actually it is easy to prove that the following quantity which involves no grouping in rounds can not be minimized in general krt where kp t xt t t mt t mt t mt mt t mt p mannor perchet stoltz indeed consider a toy case where the mt gt a have scalar components gt a r the negative orthant c is to be approached whose expansions are given by for considering the response function ga arg ga we see that boils down to controlling t t xx max t xt a gt a t t t which is this is in contrast with the regret which can be minimized the most severe issue here is not really the absolute value taken but the fact that we are comparing the s payoff to the sum of the instantaneous maxima of the payoffs ga t instead of being interesting in the maximum of their sums as in so the answer to the first question would be 
yes we have to play in blocks given that is the obtained t rate optimal we can answer this question in the positive by considering the same toy case as above with this example the bound given the definition of ct rewrites k nt t t x x xx xt a gt a max gt max gt t t an t nt which corresponds to the control from above and from below of what is called a tracking regret for nt shifts this notion was introduced by helmbold and see also and lugosi chapter for a review of the results known for tracking regret in particular the examples used therein to show the optimality of the bounds which are of the form of the one considered in footnote can be adapted in our context so thatpthe lower bound on tracking regret with nt shifts applies in our case it is of the order of nt thus of t in a nutshell what we proved in these paragraphs is that if we are to ensure the convergence by controlling a quantity of the form and then we have to proceed in blocks and convergence can not hold at a faster rate than t however the associated strategy is computationally efficient also neither the convexity of c nor the continuity of or of are required yet the stronger convergence is achieved not only trading efficiency for a better rate an interpretation of the different rates theorem shows that some set is approachable here namely the set defined in it is thus a in the terminology of spinat see also hou as well as a remark by blackwell therefore there exists some possibly computationally extremely inefficient strategy which approaches it at a t indeed the proof of existence of such a strategy does not rely on any constructive argument this can be seen by taking a and binary payoffs gt a the expectation of the regret is larger than a positive constant when the gt a are realizations of independent random variables gt a identically distributed according to a symmetric bernoulli distribution in particular the regret is larger than this constant for some sequence of binary payoffs gt a approachability in unknown games based on all remarks above we may provide an intuitive interpretation of the t rate obtained in theorem versus the t rate achieved either in our context by the abstract strategy mentioned right above or associated with blackwell s original strategy or variations of it as the one by bernstein and shimkin in the classical case of known games and sets c being known to be approachable the interpretation is in terms of the number of significant costly computational units ncomp projections solutions of convex or linear programs etc to be performed the strategies with the faster rate t perform at least one or two of these units at each round while our strategy does it only of the order of t times during t are encompassed into the calls to and take p place at times t k k for k in all these cases the rate is proportional to ncomp on the related framework of azar et al the setting considered therein is exactly the one described in section our works are concurrent and independent crucial differences lie however in the aims pursued and in the nature of the results obtained the quality of a strategy is evaluated by azar et al based on some and lipschitz function f rd with the notation of theorem the straightforward extension to an unknown horizon t of their aim is to guarantee that lim inf t f t xt t mt min m k max f x nt a where we recall that nt is of order t azar et al mention that this convergence can take place at an optimal t rate satisfying and recovering this optimal rate is actually a direct consequence of our 
theorem and of the assumptions on f indeed and together with the lipschitz assumption on f entail that lim inf t f t xt t mt nt x f o t k m k t m k the of f implies that the image by f of a convex combination is larger than the minimum of the images by f of the convex combinations thus yields in particular lim inf t f t xt t mt min nt f m k m k the convergence rate is the same as for thus is of order at least t defining the response function by m arg max f x m we get a however we need to underline that the aim is extremely weak assume for instance that during some block nature chooses m k with identical components such that x a f x m k min f mannor perchet stoltz then is satisfied irrespectively of the algorithm on the contrary the more demanding aim that we consider is not necessarily satisfied and an appropriate our be used in addition the strategy designed by azar et al still requires some the set k of vectors of vector payoffs needs to be known which is a severe restriction uses projections onto convex sets the rate they obtain for their weaker aim is o t as we get for our improved aim links with the strategy of bernstein and shimkin in this final paragraph of our discussion of theorem we review the strategy of bernstein and shimkin and extend it as much as it can be extended to a setting as close as possible to our setting of unknown games see figure the extension however requires that the set k of possible vectors of vector payoff is known to the an assumption that we would not be ready to make parameters the set k a response function k a initialization play an arbitrary a pick an arbitrary m k for rounds t update the discrepancy x play a mixed action xt arg min max x a m e t arg max min x compute a x m es mi m es mi figure a generalization of the strategy of bernstein and shimkin theorem for all response functions the strategy of figure is such that for all t for all sequences mt rd a of vectors of vector payoffs possibly chosen by an opponent player w w t t w x max w w xt mt m et m e tw w w wt t t the obtained bound is deterministic and uniform over all strategies of the opponent just as the bound of theorem was of course the control is a much weaker statement than trying to force the convergence of the quantity towards to which set can we guarantee that t x m et m et belongs it seems difficult to relate this quantity to the set mt and get the convergence dp rt mt except in some special cases the applications of section will further underline this limitation approachability in unknown games one of these special cases is when the set c is approachable that the null target function is achievable this assumption of approachability translates in our more general case into the existence of a response function such that m m c for all m as advocated by bernstein and shimkin in such settings it is often computationally feasible to access to m and less costly than performing projections onto in a nutshell the strategy of bernstein and shimkin can be extended to the setting of almost unknown games the set k needs to be known but the obtained convergence guarantees are meaningful only under an assumption of approachability of the target set one of the two sources of unknownness of our setting is then almost dealt with the fact that the underlying structure of the game is unknown but not the fact that the target is unknown as well proof of theorem the construction of the strategy at hand and the proof of its performance bound also follow some approach as for theorem however no blocks are 
needed we proceed as in by developing the square euclidian norm of to relate it to the one of where t m e m e z w m e z max m e we show below that the inner product is which after an immediate recurrence shows that max t and concludes the proof indeed by von neumann s minmax theorem using the definitions of and m e mi min x a mi max min x a min a x in particular for all k and a i mi min a choosing and m e entails x m e as used above to complete the induction mi m e m e m e m e link with classical approachability opportunistic approachability we recall that in the setting of known finite games described in section vectors of vector payoffs m actually correspond to the r b this defines the closed convex set k as the set mannor perchet stoltz of the r y for all mixed actions y b of the opponent both strategies considered therein relied on a response function x defined as y b r y x r y arg min dp r x y c a accessing to a value of this response function amounts to solving the convex program min x xa r a y c x a c c which can be done efficiently it even reduces to a quadratic problem when c is a polytope our algorithm based on this response function approaches the set where the quantity is defined in it is not required to compute the said quantity the same guarantee with the same remark apply to the two strategies presented in section blackwell s strategy for the case p only and the strategy by bernstein and shimkin for all p these three algorithms ensure in particular that the average payoffs rt are asymptotically inside of or on the border of the set now that is null or positive indicates whether a convex set c is approachable or not but the problem of determining the approachability of a set is actually an extremely difficult problem as even the determination of the approachability of the singleton set c in known games is to perform see mannor and tsitsiklis to see that there is no contradiction between being able to approach and not being able to say that or not note that none of the algorithms discussed above does neither in advance nor in retrospect issue any statement on the value of they happen to perform approachability to for the specific sequence of actions chosen by the opponent but do not determine a minimal approachable set which would be suited for all sequences of actions in particular they do not provide a certificate of whether a given convex set c is approachable or not opportunistic approachability in general in known games one has that the target function considered above satisfies that is sequences of vectors r bt can lead to an average payoff rt being much closer to c than the uniform distance we get some pathwise refinement of classical approachability this should be put in correspondence with the recent but different notion of opportunistic approachability see bernstein et however quantifying exactly what we gain here with the pathwise refinement would require much additional work maybe a complete paper as the one mentioned above and this is why we do not explore further this issue applications in this section we work out two applications learning while being evaluated with global cost functions and approachability under sample path constraints global cost functions this problem was introduced by et al and slightly generalized by bernstein and shimkin we first extend it to our setting of unknown games and describe what approachability in unknown games theorem guarantees in our case and then compare our approach and results to the ones of the two mentioned references we 
keep the original terminology of global costs thus to be minimized and do not switch to global gains to be maximized but such a substitution would be straightforward description of the problem in the case of unknown games we denote by kproj rd the closed convex and bounded set formed by the ma when m k and a a a global cost function is a mapping c kproj r measuring the quality of any vector in kproj for instance the choice of a mixed action x a given a vector of vector payoffs m k is evaluated by c x m or the performance of the average payoff rt is equal to c rt some regret is to be controlled to ensure that the latter quantity is small as well et al and bernstein and shimkin defined this regret as c rt where c m k inf a inf c x a c x mt c rt c mt m assuming that c is continuous the infimum in the defining equation of c is achieved and we can thus construct a response function k a such that k c m m min c x m c m a actually the proof techniques developed in the latter references see the discussion below only ensure a vanishing regret for the convexification vex c of c and the concavification cav c of c they can only issue statements of the form lim sup vex c rt cav c mt t they additionally get convergence rates when vex c is a lipschitz function we recall that vex c c and that cav c c so that the statements of the form above are much weaker than the original aim at least when c is not convex or c is not concave a natural case when the latter assumptions are however satisfied is when c cp is the p for p including the supremum norm p d x cp ud upj and ud max d our main contribution a better notion of regret we will directly bound c rt whether c is convex or not and will similarly relax the assumption of concavity of c needed in all mentioned references to tackle the desired regret to that end we propose a notion of regret that is better in all cases whether c and c are respectively convex and concave or not more precisely we compare c rt to a quantity mt based on any response function and which generalizes the definition for all m k x x m sup c mi mi n and mi m mannor perchet stoltz the extended notion of regret is then defined as c rt mt we now explain why this new definition is always more ambitious than what could be guaranteed so far by the literature namely indeed when c is convex and by definition of we have in particular x x m sup c mi mi n and mi m cav c m the inequality stated above can be strict for instance as indicated in section when c dp c where c is convex the global cost function c is indeed convex we then have cav c cav and and thus we possibly have cav c as stated in lemma the function c dp c is also a lipschitz function which illustrates the interest of the second part of the following corollary we recall that max denotes the maximal euclidean norm of elements in corollary for all response functions when c is continuous and convex the strategy of figure ensures that uniformly over all strategies of the opponent n o lim sup c rt mt t when c is in addition a lipschitz function with constant l for the on kproj we more precisely have c rt mt t ln a max l t proof we apply theorem and use its notation the function c is continuous thus uniformly continuous on the compact set kproj thus w w wrt ct w entails c rt c ct both convergences toward being uniform over all strategies of the opponent now by definition of ct as a convex combination of elements of the form mi mi we have c ct mt which concludes the first part of the corollary the second part is proved in the same manner simply 
by taking into account the bound and the fact that c is a lipschitz function discussion as indicated in general in section we offered two extensions to the setting of global costs first we explained how to deal with unknown games and second indicated what to aim for given that the natural target is not necessarily approachable and that sharper targets as the ones traditionally considered can be reached the second contribution is perhaps the most important one indeed the natural target corresponds to ensuring the following convergence to a set where h r m c r c m rt mt h approachability in unknown games this target set h is not necessarily a closed convex and approachable set but its convex hull co h is so as proved by et al and bernstein and shimkin this convex hull is exactly equal to co h r m vex c r cav c m we replace the convergence of rt mt to the above convex hull co h by a convergence to the smaller set n r m c r m such a convergence is ensured by and the continuity of c and this set is smaller than co h as follows from the discussion before corollary et al use directly blackwell s approachability strategy to approach co h which requires the computation of projections onto co h a possibly computationally delicate task we thus only focus on how bernstein and shimkin proceed and will explain why the obtained guarantee of convergence to co h can not be easily improved with their strategy we apply theorem to a lifted space of payoffs k rd rd a namely with each m k we associate m k defined as ma rd rd a a a ma m that is the component a a of m contains the corresponding component ma of m as well as the vector m itself in particular t xt t mt rt mt we pick the response function k a corresponding to the base response function defined in m m then the convergence reads w w w t t t w w x w x x w rt w w w m et m e tw w xt mt m et m e t w w w mt w wt w t t e in by definition of k and for all t for some m e m m et m et r m c r c m h et m et m m et thus the convex combination of the m et m e t belongs to co h and the convergence is achieved under additional regularity assumptions continuity of vex c and cav c the stronger convergence holds as can be seen by adapting the arguments used in the second part of section however the limitations of the approach of bernstein and shimkin are twofold first as already underline in section the sets k or equivalently k need to be known to the strategy thus the game is not fully unknown second there is no control on where the m e t or m e t lie and therefore there is no reasonable hope to refine the convergence to a convergence to a set smaller than co h and defined in terms of mt as in our approach mannor perchet stoltz approachability under sample path constraints we generalize here the setting of regret minimization in known finite games under sample path constraints as introduced by mannor et al and further studied by bernstein and shimkin the straightforward enough generalization is twofold we deal with approachability rather than just with regret we consider unknown games description of the problem in the case of unknown games a vector in kproj rd now not only represents some payoff but also some cost the aim of the player here is to control the average payoff vector to have it converge to the smallest expansion of a given closed convex target set p while abiding by some cost constraints ensuring that the average cost vector converges to a prescribed closed convex set formally two matrices g and c of respective sizes g d and d associate with a vector ma 
kproj rd a payoff vector gma rg and a cost vector cma for instance when the chooses a mixed action x a and the vector of vector payoffs is m k she gets an instantaneous payoff g x m and suffers an instantaneous cost c x m the admissible costs are represented by a closed convex set while some closed convex payoff set p rg is to be approached the question is in particular what the should aim for the target is unknown following the general aim and generalizing the aims of mannor et al and bernstein and shimkin we assume that she wants the following convergences to take place uniformly over all strategies of the opponent as t dp grt mt and dp crt for some target function to be defined being as small as possible that is she wants to control her average payoff grt as well as she can while ensuring that asymptotically her average cost crt lies in the set of admissible costs to make the problem meaningful and as in the original references we assume that the cost constraint is feasible assumption for all m k there exists x a such that g x m what the general result of theorem states we consider mostly the following response function x for all m k n o x m arg min dp g x m p x a g x m which provides the and response the defining minimum is indeed achieved by continuity as both p and are closed sets since in addition p and are convex the defining equation of x is a convex optimization problem under a convex constraint and can be solved efficiently of course more general preferably also response functions can be considered by a response function we mean any response function such that m k c m m this property is indeed satisfied by x approachability in unknown games we adapt the definition of the target function based on some response function to only consider payoffs for all m k x x m sup dp mi mi p n and mi m a discussion below will explain why such goals with are more ambitious than the aims targeted in the original references which essentially consisted of shooting for with cav only and in restricted cases ones g where for all m k m dp g x m m p corollary for all response functions the strategy of figure ensures that for all t and for all strategies of the opponent dp grt mt lg t ln a max t and dp crt lc t ln a max t where lg respectively lc is a norm on g respectively c seen as a linear function from rd equipped with the to rg respectively equipped with the p in particular the aim is achieved proof we apply theorem and use its notation by and by definition of lc w w wcrt cct w lc t ln a max t p because was assumed to be and in view of the form of ct we have cct and we thus have proved dp crt lc t ln a max t a similar argument based on the fact that gct mt by definition of yields the stated bound for dp grt mt what the extension of earlier results theorem yields as indicated several times already mannor et al and bernstein and shimkin only considered the case of regret minimization a special case of approachability when g is a linear form g and p is an interval of the form where is a bound on the values taken by we will discuss this special case below the strategies considered by mannor et al were not efficient they relied on being able to project on complicated sets or resorted to calibrated auxiliary strategies unlike the one studied by bernstein and shimkin we will thus focus on the latter the not necessarily convex target set considered therein is h r m cr and dp gr p m mannor perchet stoltz where was defined in because p is convex and g is linear the function r rd dp gr p is convex see boyd and 
vandenberghe example the convex hull of h thus equals co h r m cr and dp gr p cav m to be able to compare the merits of the strategy by bernstein and shimkin to corollary we first extend it to the case of unknown games based on theorem to that end we consider the same lifting as in and apply similarly theorem to get as well for the response function x using that in this case by definition of x m et m et et m et m h m et the convergence rewrites t w x x m et w rt w m et w mt t z co h m et w w w w and entails the convergence of rt mt to co h in particular crt under an additional regularity assumption the continuity of cav we also get by adapting the arguments used in the second part of section the stronger convergence n o lim sup dp grt p cav mt t that is lim sup dp grt pcav mt t summarizing the convergence is guaranteed with cav an inspection of the arguments above shows that cav being actually uniformly continuous the desired uniformity over the strategies of the opponent is achieved the same limitations to this approach as mentioned at the end of the previous section arise as far as the concepts of unknown game and unknown target are concerned first the set k needs to be known to the strategy and the game is not fully unknown second there is no control on where the m e t lie and therefore there is no reasonable hope to refine the convergence with cav into a convergence with a smaller target function in contrast corollary provided such a refinement with which by convexity of r rd dp gr p is smaller and possibly strictly smaller than cav adapt lemma to prove the strict inequality a note on known games however mannor et al section exhibit a class of cases when cav is the optimal target function in known games with scalar payoffs and scalar constraints and with set of constraints of the form this amounts to minimizing some constrained regret we thus briefly indicate what known games are in this context as defined by mannor et al and bernstein and shimkin some linear scalar payoff function u approachability in unknown games a b and some linear cost function v a b are given with no loss of generality we can assume that the payoff function takes values in a bounded nonnegative interval the set k of our general formulation corresponds to the vectors as y describes b u y v y r the matrices g and c extract respectively the first component and all but the first component regret is considered that is the payoff set p to be be approached given the constraints is the expansions are the distance of some r r to some equals r in this context convergences of the form thus read t v xt bt t and lim inf t where ut vt t u xt bt ut v t t t x u bt v bt t and thus correspond to some constrained problems indeed denoting yt t t the empirical frequency of actions bt b taken by the opponent and recalling that u is bounded by we have for instance when ut v t u y t where u y max u x y x a v x y the convergence finally reads when t v xt bt t and lim inf t t u xt bt y t t just as we showed in section that in general the target function is not achievable mannor et al section showed that the constrained regret with respect to u y t defined in can not be minimized the proposed relaxation was to consider its convexification vex u instead in which corresponds to cav in in this specific setting the target function equals cav our general theory provides no improvement this is in line with the optimality result for cav exhibited by mannor et al section in this case mannor perchet stoltz approachability of an approachable set at 
a minimal cost this is the dual problem of the previous problem have the payoffs approach an approachable convex set while suffering some costs and trying to control the overall cost in this case the set p is fixed and the are in terms of the set of constraints actually this is a problem symmetric to the previous one when the roles of g and p are exchanged with c and acknowledgments vianney perchet acknowledges funding from the anr under grants and shie mannor was partially supported by the isf under contract gilles stoltz would like to thank investissements d avenir labex ecodec for financial support an extended abstract of this article appeared in the proceedings of the annual conference on learning theory colt jmlr workshop and conference proceedings volume pages approachability in unknown games references abernethy bartlett and hazan blackwell approachability and learning are equivalent in proceedings of colt pages auer freund and schapire the nonstochastic multiarmed bandit problem siam journal on computing azar feige feldman and tennenholtz sequential decision making with vector outcomes in proceedings of itcs bernstein and shimkin approachability with applications to generalized problems journal of machine learning research apr bernstein mannor and shimkin opportunistic strategies for generalized problems in proceedings of colt pages blackwell an analog of the minimax theorem for vector payoffs pacific journal of mathematics boyd and vandenberghe convex optimization cambridge university press cambridge uk and lugosi algorithms in prediction and game theory machine learning and lugosi prediction learning and games cambridge university press mansour and stoltz improved bounds for prediction with expert advice machine learning de rooij van erven and koolen follow the leader if you can hedge if you must journal of machine learning research apr kleinberg mannor and mansour online learning for global cost functions in proceedings of colt foster and vohra asymptotic calibration biometrika helmbold and tracking the best expert machine learning and fundamentals of convex analysis hou approachability in a game the annals of mathematical statistics mannor and stoltz a geometric proof of calibration mathematics of operations research mannor perchet stoltz mannor and tsitsiklis approachability in repeated games computational aspects and a stackelberg variant games and economic behavior mannor tsitsiklis and yu online learning with sample path constraints journal of machine learning research mannor perchet and stoltz approachability and online learning with partial monitoring journal of machine learning research oct mertens sorin and zamir repeated games core discussion papers belgium perchet approachability regret and calibration implications and equivalences journal of dynamics and games spinat a necessary and sufficient condition for approachability mathematics of operations research approachability in unknown games appendix a link with regret minimization in unknown games the problem of regret minimization can be encompassed as an instance of approachability we recall here why the knowledge of the payoff structure is not crucial for this very specific problem this of course is not the case at all for general approachability problems indeed with the notation of section the aim of regret minimization in a known finite game with payoff function s a b is for the to ensure that lim sup t t t s at bt max s a bt t t this can be guaranteed by approaching a with the vector payoff function r a b ra 
defined by r a b s b s a b the necessary and sufficient condition for approachability of the closed convex set a is satisfied for the condition rewrites in our case d e e d y b ret ct r y ct ret r y ret ret r y where and denote respectively the vectors formed by taking the nonnegative and parts of the original components of the vector of interest now using the specific form of r we see that d e x x ret r y ret s y ret s y either all components of ret are ret is already in a or we can choose the mixed distribution defined by ret a a a a p et r in the latter case we then get y b ret e ct r y e ct and is in particular satisfied d ret e r x y the knowledge of s or r is not crucial here comments have to be made on the specific choice of it is independent of the payoff structure s or r it only depends on the past payoff vectors s where in particular the strategy above to minimize the regret can be generalized in a straightforward way to the case of games with full monitoring but whose payoff structure is unknown in these games at each round the opponent chooses a payoff vector gt gt a mannor perchet stoltz the chooses an action at a and observes the entire vector gt while wanting to ensure that the regret vanishes t t lim sup gt at max gt a t t t it suffices to replace all occurrences of s bt above by gt in particular the payoff function r defined in is to be replaced by the vectors of vector payoffs mt whose components equal a a mt a gt gt a a note on the bandit monitoring the case of unknown games in the case of an unknown game when the payoff structure is unknown and when only bandit monitoring is available the generic trick presented around should be adapted as indicated by footnote indeed the only feedback available at the end of each round is gt at and not mt at the estimation to be performed is rather on the vectors gt than on the mt for all a a gt a gbt a i xt a with the same constraints xt a for all t from which we define m b t a gbt gbt a substituting the estimates gbt in the strategy defined around in lieu of the vectors s bt ensures that the regret vanishes approachability in unknown games appendix b calculations associated with examples and proof of lemma proof for example assume by contradiction that the convergence can be achieved and consider any strategy of the decision maker to do so which we denote by it suffices to consider the almost sure convergence the stronger uniformity requirements stated after it will not be invoked all statements in the sequel hold almost surely and quantities like and should be thought of as random variables imagine in a first time that opponent chooses at every stage t the vectors mt we have the smallest of the supremum norms of and the aim is then that the average payoffs rt converge to but this can be guaranteed only if the averages of the chosen mixed actions xt converge to that is given there exists some possibly large integer such that now consider a second scenario during the first stages the opponent chooses the vectors mt by construction as the strategy is fixed is ensured now in the next stages for t assume that the opponent chooses the vectors mt m and denote by the average of the first components of the mixed actions xt selected by in this second set of stages we have m where and m m m has components therefore the target set is however by definition of we have r xt m and therefore because of this entails that this construction can be repeated again after stage by choosing mt till a stage is reached when such a stage exists by the 
assumption that the convergence is achieved by the strategy one can then similarly see that mannor perchet stoltz by repeating this over again and again one proves that lim sup rt mt t which contradicts the assumption that ensures the convergence the claim follows proof for example sketch the same construction as for the previous example holds by switching between a first regime when mt is chosen and at the end of which the average payoff should be close to null then another regime of the same length starts with mt and no matter what the does she will get an average payoff of in this regime in total at the end of the second regime while the target set is given by as this can be repeated over and over again proof of lemma proof for example we have cav to prove this fact we first compute for the components of m m equal and therefore m min max max if if if if we note that m and that so that cav is identically equal to on the set k defined as the convex hull of and m smaller target functions such that the convergence holds can be considered this proves in particular that can also be guaranteed for the larger cav indeed m max is smaller than cav and even strictly smaller when and see figure in addition the convergence can hold for it indeed if the plays xt at each round always picks the first component of mt then her average payoff equals rt where mt m t for some t by definition of the distance of if this last set of equalities can be seen to hold true by contemplating figure approachability in unknown games m c m figure graphs of the functions bold solid line and dotted line figure graphical representation of and m and of different expansions left graphs of the functions bold solid of and is dotted lines for we even have in to in theline supremum norm precisely and m of therefore x this thin casesolid line rt mt which proves by in particular that convergence for proof of lemma assume contradiction that is achievable in holds the above and consider any strategy of the decision maker to do so which we denote by imagine in a first time that nature chooses at every t the vectors mt which amounts to playing seemingly proof for stage example the computations are more involved in this simpler given that the average of the equals and that its image by equals the aim t t example as before we start by computing we refer to vectors m chosen is then to converge opponent but this can guaranteed if the the chosen xt by converges bytothe asonly m v w and to theaverage mixed of actions picked the by to that is given there exists some possibly large integer t such that x where the value of a convex combination of v and w is to be minimized this achieved with if v w or v w if w v or w v stages nature chooses x v w first now consider a second scenario during the vectors mt next stages if for v t now for the by construction as is fixed is ensured assume that nature chooses the vectors mt m and denote by the average of the which to xt selected by the average of the played corresponds to whose min if v w image by equals therefore thew target v set is however by definition of if v w we have all v w the concavification of thus admits expression xt m cav v w rt we replace a lengthy expression by the graphical illustrations proof provided by figure now we consider the target function defined as v w we denote mt v t wt by playing xt at each round the ensures that v t wt v t wt rt while mt thus for this example again in which the p can be chosen freely we have dp rt mt v w v w v w if if if if if and v and v w w and v w v 
and v w mannor stoltz w and v w which shows that cav admittedly a picture would help we provide one as figure v v w w cav cav left right figure of and of its concavification cav right representations of left its concavification cav figure figure representations in ofrepresentations figure representations of left of the alternative target function center and of the concavification cav right x can be improved indeed we conclude our discussion of example by construction showing thatlemma even proof of previous the sameexample construction the previous proof of lemma the same as holds asbyforswitching example holds by switching be that is somehow by having the unwas constructedtween by choosing the response function x tween regimes mend isthe chosen andpayoff at the should end of which the average payoff should regimes when mt is chosen andwhen at the which average t of in mind this response function as the main algorithm below shows chievable target function be close to null rt the then another regime oft the same length be close null rt then another regime same length with and no starts with mt and no m the convergence ofholds for and also forstarts cav since the latter is larger than s a reply to awhat localthe average of vectors payoffs reasonable quantities matter whatwill theget does of she getregime an average payoff of in this regime in total matter does vector she average payoff will in this in total strictly at some points indeed follows from equals hould be targeted rtquite surprisingly we that relaxing the rtbelow while the image of by equals the fact this whilelarger theillustrate target that the image of m the that inequality byis expectations cav this target lso results in better payoffs for instance if the decision maker knows in advance that that can be repeated over and over again can be repeated over and over again thus he next vectors will be mt she should not worry about getting and max laying xt she could well be satisfied with and thus play xt proof of the second consider w which is an achievthe second part of lemma we v lemma w we which an v achievturns out that theproof same of argument can be performed at each v consider w part that the inequality can be strict seen at again an illustration target function it indeed suffices to play x at each able target function it indeed able suffices to cav play xt atiseach round the inequality t round the inequality follows from the fact that max that this follows from the cav cav fact that max that this roof of the second lemma by thefigure function already considered above is thiswe is seen at a conclusion inequality can be strict is seen inequality at can be as strict a conclusion we have cav is have cav this is x as uch that as can be seen on a picture illustrated by figures and illustrated by figures and what we could a signvalues the absolute values of the proof below illustrates whatthe weproof call a illustrates sign thecall absolute of proof of lemma the convex in can be much smallerofthan the convex combinations of the convex combinations considered in combinations can be much considered smaller than the convex combinations it can indeed be it in the absolute values their elements as expression cav absolute values of their elements considered the expression of considered cav indeed be a of sign proof for example proofofinbelow illustrates what wecanthe could call seenfor in our example given seen in our example form of the that we given have the here allsets v cw r the form of the sets that n n n n x x x x n n sup vi wi 
v vii w wii v n w and vi wi v w cav v w sup xcav vi x w v i w i w i n cav v w sup x vi i vi i n and i vi wi v w proof of second to part lemma according cand given the form of the sets the proof of the second part of lemma the according of and given the form of thetosets the target function is defined astarget function is defined as n n n x x n n x x x x vi wi xi vn w v w v w sup i w sup vi wi v vii w wiin and and v w xw v v v i vii wii v w i w i iw by a tedious case study consisting of identifying the worst convex decompositions one then gets the explicit expression v w v w v w if if if if if and v and v w w and v w v and v w w and v w p vi wi approachability in unknown games to be compared to the expression obtained earlier for cav namely cav v w admittedly a picture would help we provide one as figure we see on the picture or v w cav of figure representations in and and figure of left of concavification cav figure representations in representations of left and center representation of right cav proof of lemma the same construction as for the previous example holds by switching bex cav and even that cav by considering the by direct calculations that tween regimes when mt is chosen and at the end of which the average payoff should respective values be and to null at r then another regime of the same length starts with m and no t t matter what the does she will get an average payoff of in this regime in total x is rt the target that the image will by equals this proof for example while will prove that thatofthe result follow from the so already be repeated over and over again inequality cav proved in section indeed as can be seen in the computations leading to the expression of we have proof of the second part of lemma we consider v w which is an if able target x function m it indeed suffices to play xt at each round the inequality the if cav follows from that max that this inequality can be strict is seen at as a conclusion we have cav this is therefore illustrated by and if x m m if the proof below illustrates what we could call a sign the absolute values of but for wecombinations have the inequality convex considered in can be much smaller than the convex combinations of the absolute values of their elements as considered in the expression of cav it can indeed be given the form of the seen in our example n n x x which entails that for all m m substicav v w supagain x vi wi vxi wm vi wi v w i n and tuting in and using that the supremum distance to the negative orthant is increasing with respect to inequalities proof of the second part of lemma according to and given the form of the sets the x v w r d v w max v w target function as n n x x x x vi wi vi wi n and vi wi v w v w sup mannor perchet stoltz we get that x m sup sup n x n x x m m i n and n and m n x n x the converse inequality follows from the decomposition of any as the convex combination of with weight and with weight in particular m x m m x m m m as both x m x m as indicated above approachability in unknown games appendix other technical proofs proof of two facts related to definition the comments after definition mentioned two facts that we now prove first that condition is less restrictive in general than second that for continuous target functions the two definitions and coincide that that entails condition is less restrictive in general than target functions need to be considered to that end we consider a toy case when a is reduced to one element so that the has no decision to make and has to play this action and the 
opponent player chooses elements in r d and more precisely k the target set equals c and the target function is defined as if m m if m since d a we can identify mt and rt we consider the sequence mt we have mt rt since m m on the contrary for all t we have mt and therefore dp rt mt rt does not converge to as t this sequence converges to actually proof that entails under a continuity assumption we consider a continuous function k to show that entails it suffices to show that there exists a function f with f as such that for all m r k dp r m f dp m r the required uniformities with respect to the strategies of the opponent will be carried over to that end the continuity of will be exploited through the following two properties first is closed second since k is bounded is actually uniformly continuous we denote by its modulus of continuity in the p which is a function that satisfies as we denote by mg rg the projection in p of m r k rd onto the closed set by definition rg mg and w w dp m r w m r mg rg wp km mg kp we also define an element m as follows if rg m then we let rg otherwise mg m and as rg mg there exists an element c such that krg kp mg we recall that c is a closed set and its expansions are closed expansions we denote by d the vector rg mg mannor perchet stoltz by construction kdkp we introduce a new point and provide a rewriting of rg m d m and rg mg d these two equalities yield that krg kp mg m kdkp mg m summarizing we have in all cases whether rg belongs to m or not w krg wp mg m since m we get by a triangle inequality w w w dp r m kr wp w m r m wp w w w w w w w m r mg rg wp w mg rg m rg wp w m rg m wp dp m r krg kp where the last inequality follows from by and the uniform continuity of the last term in the side of the display above can be bounded as krg kp mg m kmg mk dp m r where for the last inequality we used again and the fact that is putting all pieces together we proved with f x x a lemma used in the proof of theorem lemma consider two positive numbers and form the positive sequence un defined by and p un n un n for all n then n un max proof we proceed by induction and note that the relation is satisfied by construction for n assuming now that it holds for some n we show that it is also true for n denoting c max we get p p un n un n c c n n it suffices to show that the latter upper bound is smaller than c n which follows from p c n n n n c indeed the first inequality comes from bounding by and expanding the term while the second inequality holds because c and by definition of approachability in unknown games appendix some thoughts on the optimality of target functions we first define a notion of optimality based on the classical theory of mathematical orderings with see definition being seen as a strict partial order with associated partial order denoted by corresponding to the standard pointwise inequality for functions on the existence of admissible target functions definition a target function is admissible if it is achievable and if there exists no other achievable target function such that there might exist several even an infinite number of admissible target functions as we will show below for example but there exists always at least one such admissible function as we show below in a way we unfortunately were unable to exhibit general concrete and admissible target functions lemma in any unknown game there exists at least one admissible mapping proof the proof is based on an application of zorn s lemma we prove below that the set t of all achievable target functions 
k which is partially ordered for has the property that every totally ordered subset has a lower bound in t in that case zorn s lemma ensures that the set t contains at least one minimal element an element such that no other element t satisfies given a totally ordered subset we can define the target function m k inf m is of course smaller than any element of the point is to show that t that is still achievable a property that we will use repeatedly below is that if two target functions are such that then now by definition the fact that the are achievable means that the compact sets are each approachable for the game with payoffs x m a k m x m in particular they are non empty the compact set can not be empty indeed if it were fixing any we would have that the subsets cover the compact topological space as these subsets are open sets in the topological space only finitely many of them would be needed for the covering call them with j n since is totally ordered one of the sets is minimal for inclusion and therefore one of the sets is maximal for the inclusion say the one corresponding to j therefore we would have as is totally ordered we would either have and or and this would lead to in the former case and in the latter case in both cases to a contradiction mannor perchet stoltz in addition we now prove that for all there exists such that is included in the open of which we denote by indeed denote by the compact sets we have that therefore by the same argument as above we see that there must exist some such that which is exactly what we wanted to prove so summarizing we proved that is non empty and that each of its expansion is approachable as it contains an approachable set as in the proof of lemma this means that is a set thus an approachable set or put differently that is achievable illustration on examples and which response function should we choose in practice and are target functions always admissible a convenient and natural choice in practice is x but example shows that unfortunately is not always admissible example shows that many different target functions may be admissible it is thus difficult to issue any general theory on how to choose and even on the optimality of the class of target functions example unfortunately is not admissible indeed we have as can be seen by carefully comparing the expressions and on the other hand is achievable it suffices to play xt at each round actually is of the form for the constant response function example all the target functions associated with the x x are admissible we illustrate the general existence result of lemma by showing that in example the target functions associated with the constant response functions x x are admissible for all x this corresponds to the case when the chooses the mixed action x x at all rounds in particular the proof of lemma indicates that the latter is thus admissible unlike in example expressions of these target functions will be needed for when the plays x x while the vector vector payoffs is m she gets an average payoff which we denote by r x x m and which equals r x x m x m x m x x x x x x the underlying response function being constant no convex decomposition needs to be considered in the defining supremum for m and the latter equals m r x x m max x approachability in unknown games since x is decreasing and is increasing and both functions take the same value at we get x if m if our proof follows the methodology used to prove lemma we fix any strategy of the achieving a target function for some fixed x and we 
show that necessarily we provide a detailed proof of the equality only for m where lies in the interval but this proof can be adapted in a straightforward manner to prove the equality as well the intervals and as in the proof of lemma it suffices to consider the almost sure statement of convergence as in the uniformity with respect to strategies of the opponent is not needed all statements below hold almost surely and the times t and t should be thought of as random variables our argument for is based on three sequences of mixed actions for the first one assume that the opponent chooses corresponding to during t stages where t can be made arbitrarily large we denote by vt vt the average of the mixed actions xt xt played by the during these rounds the average payoff vector received equals vt whose distance to the negative orthant is vt since x and the strategy achieves where by assumption it holds that lim sup vt x as t for the sake of compactness we will denote this fact by vt x this entails that lim inf vt x as t a fact that we denote by vt x during the next t stages we assume that the opponent chooses m which corresponds to and denote by wt wt the average of the mixed actions xt xt played by the during these rounds the average payoff vectors received between rounds t to on the one hand and during rounds to on the other hand are therefore respectively equal to wt wt and vt so that the distance of the latter to the negative orthant is given by max wt vt which we know is asymptotically smaller than m by achievability of where by assumption m m x we thus obtained the following system of equations vt x wt mannor perchet stoltz the sum of the last two inequalities is vt wt together with the first inequality vt x it leads to wt x substituting in the second inequality we get wt where the symbol denotes a convergence as t summing the proved limits and yields thus from the latter limit and wt we finally get vt x and wt x consider now some we show that to that end assume that after the t stages of the opponent switches instead to m m during t rounds note that in this case the average values of the coefficients for and m used in the first t t rounds are proportional to and that is mt m m was played we perform first some auxiliary calculations by multiplying the equalities in by t we see that the total number t t of rounds equals t t t in particular we have t t t and t t t finally denoting by ut the average mixed action played by the in rounds t to t t we have that the average vector payoffs during rounds t to t t and during rounds to t t are respectively equal to ut r ut ut and the overall average payoff is given by the distance of this vector in the supremum norm to the negative orthant and must be smaller than m in the limit by achievability of however the said distance of to the negative orthant is bound to be larger than the second component of which equals t t vt t vt vt x m as t where we substituted the above limit vt x we thus proved m m as claimed
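The explicit case analyses above can also be checked numerically. The following is a rough sketch (not taken from the paper) of the defining supremum phi(m) = sup { c( sum_i lambda_i r(x(m_i), m_i) ) : sum_i lambda_i m_i = m }, restricted to one- and two-point decompositions on a finite grid, so it only lower-bounds the target function. The two-action scalar-payoff setting, the absolute-value cost, and every name in the code are illustrative assumptions introduced here, not objects from the paper.

```python
import itertools
import numpy as np

GRID = [np.array(p) for p in itertools.product(np.linspace(0.0, 1.0, 11), repeat=2)]
XS = np.linspace(0.0, 1.0, 101)

def payoff(x, m):          # r(x, m): mixed action x is the probability of the first action
    return x * m[0] + (1.0 - x) * m[1]

def cost(r):               # c(r): toy global cost, distance of the average payoff to 0
    return abs(r)

def response(m):           # x(m): mixed action minimising the cost against m
    return min(XS, key=lambda x: cost(payoff(x, m)))

def phi_lower_bound(m, n_lambda=11):
    """Best value of c over one- and two-point decompositions of m on the grid."""
    best = cost(payoff(response(m), m))              # one-point decomposition
    for lam in np.linspace(0.01, 0.99, n_lambda):
        for m1 in GRID:
            m2 = (np.asarray(m) - lam * m1) / (1.0 - lam)
            if np.all((m2 >= 0.0) & (m2 <= 1.0)):    # keep m2 inside K = [0, 1]^2
                r = lam * payoff(response(m1), m1) + (1.0 - lam) * payoff(response(m2), m2)
                best = max(best, cost(r))
    return best
```

Such a brute-force search is only a sanity check: it explores a restricted family of decompositions, whereas the proofs above identify the worst decompositions exactly.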
| 10 |
journal of machine learning research x manuscript under review submitted published expected policy gradients for reinforcement learning kamil ciosek shimon whiteson jan department of computer science university of oxford wolfson building parks road oxford editor david blei and bernhard abstract we propose expected policy gradients epg which unify stochastic policy gradients spg and deterministic policy gradients dpg for reinforcement learning inspired by expected sarsa epg integrates or sums across actions when estimating the gradient instead of relying only on the action in the sampled trajectory for continuous action spaces we first derive a practical result for gaussian policies and quadric critics and then extend it to an analytical method for the universal case covering a broad class of actors and critics including gaussian exponential families and reparameterised policies with bounded support for gaussian policies we show that it is optimal to explore using covariance proportional to eh where h is the scaled hessian of the critic with respect to the actions epg also provides a general framework for reasoning about policy gradient methods which we use to establish a new general policy gradient theorem of which the stochastic and deterministic policy gradient theorems are special cases furthermore we prove that epg reduces the variance of the gradient estimates without requiring deterministic policies and with little computational overhead finally we show that epg outperforms existing approaches on six challenging domains involving the simulated control of physical systems keywords policy gradients exploration bounded actions reinforcement learning markov decision process mdp introduction in reinforcement learning an agent aims to learn an optimal behaviour policy from trajectories sampled from the environment in settings where it is feasible to explicitly represent the policy policy gradient methods sutton et peters and schaal silver et which optimise policies by gradient ascent have enjoyed great success especially with large or continuous action spaces the archetypal algorithm optimises an actor a policy by following a policy gradient that is estimated using a critic a value function the policy can be stochastic or deterministic yielding stochastic policy gradients spg sutton et or deterministic policy gradients dpg silver et the theory underpinning these methods is quite fragmented as each approach has a separate policy gradient theorem guaranteeing the policy gradient is unbiased under certain conditions furthermore both approaches have significant shortcomings for spg variance in the gradient estimates means that many trajectories are usually needed for learning since gathering trajectories is typically expensive there is a great need for more sample efficient methods c x kamil ciosek and shimon whiteson license see https attribution requirements are provided at http ciosek and whiteson dpg s use of deterministic policies mitigates the problem of variance in the gradient but raises other difficulties the theoretical support for dpg is limited since it assumes a critic that approximates q when in practice it approximates q instead in addition dpg learns which is undesirable when we want learning to take the cost of exploration into account more importantly learning necessitates designing a suitable exploration policy which is difficult in practice in fact efficient exploration in dpg is an open problem and most applications simply use independent gaussian noise or the heuristic 
uhlenbeck and ornstein lillicrap et this article which extends our previous work ciosek and whiteson proposes a new approach called expected policy gradients epg that unifies policy gradients in a way that yields both theoretical and practical insights inspired by expected sarsa sutton and barto van seijen et the main idea is to integrate across the action selected by the stochastic policy when estimating the gradient instead of relying only on the action selected during the sampled trajectory the contributions of this paper are threefold first epg enables two general theoretical contributions section a new general policy gradient theorem of which the stochastic and deterministic policy gradient theorems are special cases and a proof that section epg reduces the variance of the gradient estimates without requiring deterministic policies and for the gaussian case with no computational overhead over spg second we define practical policy gradient methods for the gaussian case section the epg solution is not only analytically tractable but also leads to a principled exploration strategy section for continuous problems with an exploration covariance that is proportional to eh where h is the scaled hessian of the critic with respect to the actions we present empirical results section confirming that this new approach to exploration substantially outperforms dpg with exploration in six challenging mujoco domains third we provide a way of deriving tractable epg methods for the general case of policies coming from a certain exponential family section and for critics that can be reparameterised as polynomials thus yielding analytic epg solutions that are tractable for a broad class of problems and essentially making epg a universal method finally in section we relate epg to other rl approaches background a markov decision process puterman is a tuple s a r p where s is a set of states a is a set of actions in practice either a rd or a is finite r s a is a reward function p a s is a transition kernel is an initial state distribution and is a discount factor a policy a s is a distribution over actions given a state we denote trajectories as where at and rt is a sample reward a policy induces a markov process with transition kernel r s s a a s p a s where we use the symbol a s to denote lebesgue integration against the measure a s where s is fixed we assume the induced markov process is ergodic with a single p invariant measure defined for the whole state space the value function is v i ri where actions are sampled from the is we show in this article that in certain settings dpg is equivalent to epg our method expected policy gradients a s er r s a v s and the advantage function ris a s a s v s an optimal policy maximises the total return j s s v s since we consider only learning with just one current policy we drop the where it is redundant if is parameterised by then stochastic policy gradients spg sutton et peters and schaal perform gradient ascent on the gradient of j with respect to gradients without a subscript are always with respect to for stochastic policies we have r r s s a a s log a s q a s b s where is the occupancy measure defined in the appendix and b s is a r baseline which can be any function that depends on the state but not the action since a a s log a s b s typically because of ergodicity and lemma see appendix we can approximate from samples from a trajectory of length t pt t log at st at st b st where at st is a critic discussed below if the policy is deterministic we denote it s 
we can use deterministic policy gradients silver et instead r s s s q a s s this update is then approximated using samples h i pt t s a st st since the policy is deterministic the problem of exploration is addressed using an external source of noise typically modelled using a ou process uhlenbeck and ornstein lillicrap et parameterised by and ni n a s ni in and is a critic that approximates q and can be learned by sarsa rummery and niranjan sutton at st at st at st alternatively we can use expected sarsa sutton and barto van seijen et which marginalises out the distribution over which is specified by the known policy to reduce the variance in the update r st at at st a a s a at st we could also use advantage learning baird et or lstdq lagoudakis and parr if the critic s function approximator is compatible then the actor converges sutton et ciosek and whiteson instead of learning we can set b s s so that q a s b s a s a and then use the td error r s r v s as an estimate of a s a bhatnagar et pt t log at st r s where s is an approximate value function learned using any policy evaluation algorithm works because e r s a s a s a the td error is an unbiased estimate of the advantage function the benefit of this approach is that it is sometimes easier to approximate v than q and that the return in the td error is unprojected it is not distorted by function approximation however the td error is noisy introducing variance in the gradient to cope with this variance we can reduce the learning rate when the variance of the gradient would otherwise explode using adam kingma and ba natural policy gradients kakade amari peters and schaal or newton s method furmston and barber however this results in slow learning when the variance is high see section for further discussion on variance reduction techniques expected policy gradients in this section we propose expected policy gradients epg first we introduce s to denote the inner integral in z z s a s log a s q a s b s s z s z z a s log a s q a s s zs a s s s this suggests a new way to write the approximate using lemma see appendix t x t st z where s z a s log a s a s a gt this approach makes explicit that one step in estimating the gradient is to evaluate an integral included in the term s the main insight behind epg is that given a state s is expressed fully in terms of known quantities hence we can manipulate it analytically to obtain a formula or we can just compute the integral using numerical quadrature if an analytical solution is impossible in section we show that this is rare for a discrete action space st becomes a sum over actions the idea behind epg was also independently and concurrently developed as mean actor critic asadi et though only for discrete actions and without a supporting theoretical analysis expected policy gradients spg as given in performs this quadrature using a simple monte carlo method as follows using the action at st s z a s log a s a s log at st at st b st a moreover spg assumes that the action at used in the above estimation is the same action that is executed in the environment however relying on such a method is unnecessary in fact the actions used to interact with the environment need not be used at all in the evaluation of s since a is a bound variable in the definition of s the motivation is thus similar to that of expected sarsa but applied to the actor s gradient estimate instead of the critic s update rule epg shown in algorithm uses to form a policy gradient algorithm that repeatedly estimates s with an 
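To make the discrete-action case concrete, the sketch below computes the inner term I(s) exactly as a sum over actions and, for contrast, the single-sample Monte Carlo estimate that SPG uses. It is an illustration only, not the authors' code: the softmax policy with one weight row per action, and the q_hat and baseline callables, are assumptions introduced here.

```python
import numpy as np

def softmax_policy(theta, s):
    """pi(.|s) for an illustrative softmax policy; theta has one weight row per action."""
    logits = theta @ s
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_log_pi(theta, s, a):
    """grad_theta log pi(a|s) for the softmax policy above (same shape as theta)."""
    pi = softmax_policy(theta, s)
    g = -np.outer(pi, s)   # row b: -pi(b|s) * s
    g[a] += s              # plus s on the chosen action's row
    return g

def epg_inner_term(theta, s, q_hat):
    """Exact I(s) = sum_a pi(a|s) grad log pi(a|s) Q_hat(a, s): no action sampling."""
    pi = softmax_policy(theta, s)
    return sum(pi[a] * grad_log_pi(theta, s, a) * q_hat(a, s) for a in range(len(pi)))

def spg_inner_term(theta, s, q_hat, baseline, rng=np.random):
    """Single-sample Monte Carlo estimate of I(s), as used by SPG."""
    pi = softmax_policy(theta, s)
    a = rng.choice(len(pi), p=pi)
    return grad_log_pi(theta, s, a) * (q_hat(a, s) - baseline(s))
```

Both estimators have the same expectation; the exact sum simply removes the variance caused by sampling an action at each state, which is the effect quantified in the variance analysis below.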
integration subroutine algorithm expected policy gradients s t initialise optimiser initialise policy parameterised by while not converged do gt t s gt is the estimated policy gradient as per gt a s r a s a r s end while one of the motivations of dpg was precisely that the simple quadrature implicitly used by spg often yields high variance gradient estimates even with a good baseline to see why consider figure left a simple monte carlo method evaluates the integral by sampling one or more times from a s blue and evaluating log a s q a s red as a function of a a baseline can decrease the variance by adding a multiple of log a s to the red curve but the problem remains that the red curve has high values where the blue curve is almost zero consequently substantial variance persists whatever the baseline even with a simple linear as shown in figure right dpg addressed this problem for deterministic policies but epg extends it to stochastic ones we show in section that an analytical epg solution and thus the corresponding reduction in the variance is possible for a wide array of critics we also discuss the rare case where numerical quadrature is necessary in section we now provide our most general results which apply to epg in any setting general policy gradient theorem we begin by stating our most general result showing that epg can be seen as a generalisation of both spg and dpg to do this we first state a new general policy gradient theorem ciosek and whiteson spg update variance of mc policy pdf action baseline figure at left a s for a gaussian policy with mean at a given state and constant blue and log a s q a s for q a red at right the variance of a simple monte carlo estimator as a function of the baseline in a simple monte carlo method the variance would go down as the number of samples theorem general policy gradient theorem if s is a normalised lebesgue measure for all s then z z s s a s s a z ig s proof we begin by expanding the following expression r r r r r s s a a s rs s ra r a s dp s s a v s s s a a s dp s a z r r s s s r r s s s s s s z r s s s the first equality follows by expanding the definition of q and the penultimate one follows from lemma in the appendix then the theorem follows by rearranging terms the crucial benefit of theorem is that it works for all policies both stochastic and deterministic unifying previously separate derivations for the two settings to show this in the following two corollaries we use theorem to recover the stochastic policy gradient theorem sutton et and the deterministic policy gradient theorem silver et in each case by introducing additional assumptions to obtain a formula for ig s expressible in terms of known quantities corollary stochastic policy gradient theorem if s is differentiable then r r r s s ig s s s a a s log a s q a s expected policy gradients proof we obtain the following by expanding r r r a q a s a da q a s a a s r we obtain ig s a a s log a s q a s s by plugging this into the definition of ig s we obtain by invoking theorem and plugging in the above expression for ig s we now recover the dpg update introduced in corollary deterministic policy gradient theorem if s is a measure a deterministic policy and q s is differentiable then r r s s ig s s s s q a s s we overload the notation of slightly we denote by s the action taken at state s r s a a s where s is the corresponding measure proof we begin by expanding the term for s which will be useful later on s s a a s s q a s s s the above results from applying the multivariate 
chain that both s and q a s depend on the policy parameters hence the dependency appears twice in q s s we proceed to obtain an expression for ig s r ig s s a a s s a s s s q a s s here the second equality follows by observing that the policy is a and the third one follows from using we can then obtain by invoking theorem and plugging in the above expression for ig s these corollaries show that the choice between deterministic and stochastic policy gradients is fundamentally a choice of quadrature method hence the empirical success of dpg relative to spg silver et lillicrap et can be understood in a new light in particular it can be attributed not to a fundamental limitation of stochastic policies indeed stochastic policies are sometimes preferred but instead to superior quadrature dpg integrates over measures which is known to be easy while spg typically relies on simple monte carlo integration thanks to epg a deterministic approach is no longer required to obtain a method with low variance variance analysis we now prove that for any policy the epg estimator of has lower variance than the spg estimator of ciosek and whiteson lemma if for all s s the random variable log a s a s where a has nonzero variance then hp i hp i t log a s a s b s v t i s t t t t t t proof both random variables have the same mean so we need only show that t t log at st at st b st st we start by applying lemma to the lefthand side and setting x st t log at st at st b st where at at this shows that t log at st at st b st is the total return of the mrp p where x x x ep v likewise applying lemma again to the instantiating as a deterministic t random variable st st we have that is the total return st of the mrp p where x x ep v note that x x and therefore furthermore by assumption of the lemma the inequality is strict the lemma then follows by applying observation for convenience lemma also assumes infinite length trajectories however this is not a practical limitation since all policy gradient methods implicitly assume trajectories are long enough to be modelled as infinite furthermore a finite trajectory variant also holds though the proof is messier lemma s assumption is reasonable since the only way a random variable log a s a s could have zero variance is if it were the same for all actions in the policy s support except for sets of measure zero in which case optimising the policy would be unnecessary since we know that both the estimators of and are unbiased the estimator with lower variance has lower mse moreover we observe that lemma holds for the case where the computation of is exact section shows that this is often possible expected policy gradients for gaussian policies epg is particularly useful when we make the common assumption of a gaussian policy we can then perform the integration analytically under reasonable conditions we show below expected policy gradients algorithm gaussian policy gradients s t initialise optimiser while not converged do gt t s gt policy parameters are updated using gradient s computed from scratch a s s n r a s a r s end while algorithm gaussian integrals function s q s a s s use lemma q s s return end function function s h s return ech end function use lemma see corollary that the update to the policy mean computed by epg is equivalent to the dpg update moreover we derive a simple formula for the covariance see lemma algorithms and show the resulting special case of epg which we call gaussian policy gradients gpg surprisingly gpg is but nonetheless fully equivalent to dpg an 
method with a particular form of exploration hence gpg by specifying the policy s covariance can be seen as a derivation of an exploration strategy for dpg in this way gpg addresses an important open question as we show in section this leads to improved performance in practice the computational cost of gpg is small while it must store a hessian matrix h a s a s its size is only d d where a rd which is typically small d for one of the mujoco tasks we use for our experiments in section this hessian is the same size as the policy s covariance matrix which any policy gradient must store anyway and should not be confused with the hessian with respect to the parameters of the neural network as used with newton s or natural gradient methods peters and schaal furmston et which can easily have thousands of entries hence gpg obtains epg s variance reduction essentially for free ciosek and whiteson analytical quadrature for gaussian policies we now derive a lemma supporting gpg lemma gaussian policy gradients if the policy is gaussian n with and parameterised by and the critic is of the form q a s a a s a a b s const then q q q s s i s where the mean and covariance components are given by q s b s s iq s s a s proof for ease of presentation we prove the lemma for a action space where a r and is the standard deviation we drop the suffix s in and in the subsequent formulae first note that the constant term in the critic does not influence the value of s since it depends only on the state and not on the action and can be treated as a baseline observe that q s log q a s h i h log a b s log a a s a we consider the linear term and the quadric term separately for the linear term we have log ab s b s a b s b s b s b s a b s b s b s for the quadric term we have log a s a s a a s a s s summing the two terms yields q s s b s expected policy gradients we now calculate the integrals for the standard deviation again beginning with the linear term a log ab s b s a b s a b s a a a b s b s for the quadric term we have log a s a a s a a s a a s a a a s s summing the two terms yields q s s the multivariate case with a action space can be obtained using the method developed in section later in the paper by observing that the multivariate normal distribution is in the parametric family given by with the sufficient statistic vector t containing the vector a and the vectorised matrix aa both of which are polynomial in a and hence lemma is applicable while lemma requires the critic to be quadric in the actions this assumption is not very restrictive since the coefficients b s and a s can be arbitrary continuous functions of the state a neural network exploration using the hessian equation suggests that we can include the covariance in the actor network and learn it along with the mean using the update rule s s h s however another option is to compute the covariance from scratch at each iteration by analytically computing the result of applying infinitely many times as in the following lemma lemma exploration limit the iterative procedure defined by applied n times using the diminishing learning rate converges to eh s as n ciosek and whiteson eigenvalue increases sharp maximum very little exploration sharp minimum lots of exploration moderate exploration figure the parabolas show different possible curvatures of the critic s we set exploration to be the strongest for sharp mimima on the left side of the figure the exploration strength then decreases as we move towards the right there is almost no exploration to the far right 
where we have a sharp maximum proof consider the sequence i n h s we diagonalise the hessian as h s u for some orthonormal matrix u and obtain the following expression for the element of the sequence s i h s n u i n u n n since we have n for each eigenvalue of the hessian we obtain the identity lim u i n u eh s n the practical implication of lemma is that in a policy gradient method it is to use gaussian exploration with covariance proportional to ech for some reward scaling constant thus by exploring with scaled covariance ech we obtain a principled alternative to the heuristic of our results below show that it also performs much better in practice lemma has an intuitive interpretation if h s has a large positive eigenvalue then s has a sharp minimum along the corresponding eigenvector and the corresponding eigenvalue of is also large this is easiest to see with a action space where the hessian and its only eigenvalue are just the same scalar the exploration mechanism in the case is illustrated in figure the idea is simple the larger the eigenvalue the worse the minimum we are in and the more exploration we need to leave it on the other hand if is negative then s has a maximum and so is small since exploration is not needed in the case the critic can have saddle points as shown in figure for the case shown in the figure we explore little along the blue eigenvector since the intersection of q s with the blue plane shows a maximum and much more along the red lemma relies crucially on the use of step sizes diminishing in the length of the trajectory rather than finite step sizes therefore the step sequence serves as a useful intermediate stage between simply taking one pg step of and using finite step sizes which would mean that the covariance would converge either to zero or diverge to infinity expected policy gradients q s figure in action spaces the critic s can have saddle points in this case we define exploration along each eigenvector separately eigenvector since the intersection of q s with the red plane shows a minimum which we want to escape in essence we apply the reasoning shown in figure to each plane separately where the planes are spanned by the corresponding eigenvector and the this way we can escape saddle points and action clipping we now describe how gpg works in environments where the action space has bounded this setting occurs frequently in practice since systems often have physical constraints such as a bound on how fast a robot arm can accelerate the typical solution to this problem is simply to start with a policy with unbounded support and then when an action is to be taken clip it to the desired range as follows a a s equivalent to a max min b with b b s the justification for this process is that we can simply treat the clipping operation max min b as part of the environment specification formally this means that we transform the original mdp m defined as m s a r p with a d into another mdp m s r where rd and is defined as s p max min b s since m has an unbounded action space we can use the rl machinery for unbounded actions to solve it since any mdp is guaranteed to have an optimal deterministic policy s a now can be transformed into a policy we call this deterministic solution d for m of the form max min s in practice the mdp m is never constructed described process in equivalent to using an rl algorithm meant for a rd and then when the action is generated simply clipping it algorithm of course the optimisation is still local and there is no guarantee of finding a 
global can merely increase our chances we assume without loss of generality that the support interval is ciosek and whiteson algorithm policy gradients with clipped actions s t initialise optimiser initialise policy parameterised by while not converged do gt t s gt b s s a c b clipping function c b max min s r a s b r update using the action b s end while g b b g b bl b figure vanishing gradients when using hard clipping the agent can not determine whether b is too small or too large from and alone it is necessary to sample from the interval b g b in order to obtain a meaningful policy update but this is unlikely for the current policy shown as the red curve however while such an algorithm does not introduce new bias in the sense that reward obtained in m and m will be the same it can lead to problems with slow convergence in the policy gradient settings to see why consider figure with hard clipping the agent can not distinguish between and since squashing reduces them both to the same value g g hence the corresponding q values are identical and based on trajectories using and there is no way of knowing how the mean of the policy should be adjusted in order to get a useful gradient a b has to be chosen which falls into the interval bl since the b s are samples from a gaussian with infinite support it will eventually happen and a nonzero gradient will be obtained however if this interval falls into a distant part of the tail of convergence will be slow however this problem is mitigated with gpg to see why consider figure once the policy shifts into the flat area the critic becomes constant a constant critic has a zero hessian generating a boost to exploration by increasing the standard deviation of the policy making it much more likely that a point b bl is sampled and a useful gradient is obtained expected policy gradients g b b g b bl b figure gpg avoids the vanishing gradient problem once a policy denoted in red enters the flat area entering the flat area b bl exploration immediately increases the new distribution is in blue another way of mitigating the hard clipping problem is to use a differentiable squashing function which we describe in section quadric critics and their approximations gaussian policy gradients require a quadric critic given the state this assumption which is different from assuming a quadric dependency on the state is typically sufficient for two reasons first linear quadratic regulators lqr with feedback a class of problems widely studied in classical control theory are known to have a that is quadric in the action vector given the state crassidis and junkins equation second it is often assumed li and todorov that a quadric critic or a quadric approximation to a general critic is enough to capture enough local structure to preform a policy optimisation step in much the same way as newton s method for deterministic unconstrained optimisation which locally approximates a function with a quadric can be used to optimise a function across several iterations in corollary below we describe such an approximation method applied to gpg where we approximate q with a quadric function in the neighbourhood of the policy mean corollary approximate gaussian policy gradients with an arbitrary critic if the policy is gaussian n with and parameterised by as in lemma and any critic q a s doubly differentiable with respect to actions for each state q then s q a s and i q h s where h s s s is the hessian of q with respect to a evaluated at for a fixed proof we begin by approximating the 
critic for a given s using the first two terms of the taylor expansion of q in q a s q s a q a s a h s a a h s a a q a s h s const indeed the hessian discussed in section can be considered a type of reward model ciosek and whiteson because of the series truncation the function on the righthand side is quadric and we can then use lemma q s h s q a s h s s q a s i q s h s h s s s to actually obtain the hessian we could use automatic differentiation to compute it analytically sometimes this may not be example when relu units are used the hessian is always zero in these cases we can approximate the hessian by generating a number of random around computing the q values and locally fitting a quadric akin to methods in control roth et universal expected policy gradients having covered the most common case of continuous gaussian policies we now extend the analysis to other policy classes we provide two cases of such results in the following sections exponential family policies with multivariate polynomial critics of arbitrary order and arbitrary policies possessing a mean with linear critics our main claim is that an analytic solution to the epg integral is possible for almost any system hence we describe epg as a universal exponential family policies and polynomial critics we now describe a general technique to obtain analytic epg updates for the case when the policy belongs to a certain exponential family and the critic is an arbitrary polynomial this result is significant since polynomials can approximate any continuous function on a bounded interval with arbitrary accuracy weierstrass stone since our result holds for a nontrivial class of distributions in the exponential family it implies that analytic solutions for epg can almost always be obtained in practice and hence that the monte carlo sampling to estimate the inner integral that is typical in spg is rarely necessary lemma epg for exponential families with polynomial sufficient statistics consider the class of policies parameterised by where a s t a a where each entry in the vector t a is a possibly multivariate polynomial in the entries of the vector a moreover assume that the critic a is a possibly multivariate polynomial of course no method can be truly universal for a completely arbitrary problem our claim is that epg is universal for the class of systems arising from lemmas in this section however this class is so broad that we feel the term universal is justified this is similar to the claim that neural networks based on sigmoid nonlinearities are universal even though then can only approximate continuous functions as opposed to completely arbitrary ones expected policy gradients in the entries of a then the policy gradient update is a closed form expression in terms of the uncentered moments of s s ct q cq where cq is the vector containing the coefficients of the polynomial q ct q is the vector containing the coefficients of the polynomial t a q a a multiplication of t and q and is a vector of uncentered moments of in the order matching the polynomials proof we first rewrite the inner integral as an expectation z q s a s log a s q a a log a s q a h i t a w a q a i h t a q a q a t a q a q a since t a and q a are polynomials and the multiplication of polynomials is still polynomial both expectations are expectations of polynomials to compute the second expectation we exploit the fact that since q is a polynomial it is a sum of monomial terms d d d d x y x y p j p j q a ci aj i ci aj i z of hq i pi j d on the right the terms a are 
the uncentered pi pi pi d j moments of if we arrange the coefficients ci into the vector cq and the into the vector we obtain the right term in we can apply the same reasoning to the product of t and q to obtain the left term the themselves can be obtained from the moment generating function mgf of indeed for a distribution of the form of the mgf of t a is guaranteed to exist and has a closed form bickel and doksum hence the computation of the moments reduces to the computation of derivatives see details in appendix note that the assumption that t and q are polynomial is with respect to the action a the dependence on the state only appears in and and can be arbitrary a neural network of course while polynomials are universal approximators they may not be the most efficient or stable ones the importance of lemma is currently mainly epg is possible for a universal class of approximators polynomials which shows that epg ciosek and whiteson is analytically tractable in principle for any continuous it is an open research question whether more suitable universal approximators admitting analytic epg solutions can be identified reparameterised exponential families and reparameterised critics in lemma we assumed that the function t a called the sufficient statistic of the exponential family is polynomial we now relax this assumption our approach is to start with a policy which does have a polynomial sufficient statistic and then introduce a suitable reparameterisation function g rd a the policy is then defined as equivalent to a g b with b b s t b b a a s where b is the random variable representing the action before the squashing assuming that g exists and the jacobian is almost everywhere the of the policy can be written as a s g a s b s det g a det b the following lemma develops an epg method for such policies lemma consider an invertible and differentiable function define a policy as in assume that the jacobian of g is nonsingular except on a set of zero consider a critic denote as qb a reparameterised critic such that for all a qb g a q a then the policy gradient update is given by the formula s s proof s z a s log a s q a za g b s log g b s q g b det b zr d b s log g b s qb b d zr rd b s log b s log det b qb b s z in the second equality we perform the variable substitution a g b in the third equality we use and the fact that qb g a q a in the fourth equality we again use and the fact that log det b since g is not parameterised by the universality of polynomials holds only for bounded intervals weierstrass while the support of the policy may be unbounded we do not address the unbounded approximation case here other than by saying that in practice the critic is learned from samples and is thus typically only accurate on a bounded interval anyway we abuse notation slightly by using a s for both the probability distribution and its pdf expected policy gradients we are now ready to state our universality result the idea is to obtain a reparameterised version of epg and lemma by reparameterising the critic and the policy using the same transformation we do so in the following corollary which is the most general constructive result in this article corollary epg for exponential families with reparameterisation consider the class of policies parameterised by defined as in consider reparameterisation function g and define tb vb and qb as tb g a t a wb g a w a and qb g a q a for every a assume the following g is invertible the jacobian of g exists and is nonsingluar except on a set of zero where is the 
reparameterised policy as in and tb and qb are polynomial as in lemma then a policy gradient update can be obtained as follows s ct b qb cq b proof apply lemmas and then lemma also has a practical application in case we want to deal with bounded action spaces as we discussed in section hard clipping can cause the problem of vanishing gradients and the default solution should be to use gpg in case we can t use gpg for instance when the dimensionality of the action space is so large that computing the covariance of the policy is too costly we can alleviate the vanishing gradients problem by using a strictly monotonic squashing function one implication of lemma is that if we set to be gaussian we can invoke lemma to obtain exact analytic updates for useful policy classes such as and obtained by setting g to the sigmoid and the exponential function respectively as long as we choose our critic q to be quadric in g a qb is quadric in b the reparameterised version of epg is the same as algorithm except it uses a squashing function g instead of the clipping function aribtrary policies and linear critics next we consider the case where the stochastic policy is almost completely arbitrary it only has to possess a mean and need not even be in the already general exponential family of policies used in lemma and corollary but the critic is constrained to be linear in the actions we have the following lemma which is a slight modification of an observation made in connection with the algorithm gu et eq lemma epg for arbitrary stochastic policies and linear critics consider an arbitrary nondegenerate probability distribution s which has a mean assume that the critic a is of the form a s a for some coefficient vector as then the r policy gradient q update is given by s as where denotes the integral a a s the mean ciosek and whiteson proof s z a s q a s da za a s a s ada a z a s ada a s a z a s since dpg already provides the same result for policies see corollary we conclude that using linear critics means we can have an analytic solution for any reasonable policy class to see why the above lemma is useful first consider systems that arise as a discretisation of continuous time systems with a time scale if we assume that the true q is smooth in the actions and that the magnitude of the allowed action goes to zero as the time step decreases then a linear critic is sufficient as an approximation of q because we can approximate any smooth function with a linear function in any sufficiently small neighbourhood of a given point and then choose the time step to be small enough so an action does not leave that neighbourhood we can then use lemma to perform policy gradients with any if all else fails epg with numerical quadrature if despite the broad framework shown in this article an analytical solution is impossible we can still perform integration numerically epg can still be beneficial in these cases if the action space is low dimensional numerical quadrature is cheap if it is high dimensional it is still often worthwhile to balance the expense of simulating the system with the cost of quadrature actually even in the extreme case of expensive quadrature but cheap simulation the limited resources available for quadrature could still be better spent on epg with smart quadrature than spg with simple monte carlo the crucial insight behind numerical epg is that the integral given as z a s log a s a s a only depends on two fully known quantities the current policy and the current approximate critic therefore we can 
use any standard numerical integration method to compute it the actions at which the integrand is evaluated do not have to be can also use a method such as the quadrature where the abscissae are designed of course the update derived in lemma only provides a direction in which to change the policy mean which means that exploration has to be performed using some other mechanism this is because a linear critic does not contain enough information to determine exploration expected policy gradients experiments while epg has many potential uses we focus on empirically evaluating one particular application exploration driven by the hessian exponential as introduced in algorithm and lemma replacing the standard ou exploration in continuous action domains to this end we apply epg to five domains modelled with the mujoco physics simulator todorov et and and compare its performance to dpg and spg the experiments described here extend our previous conference work ciosek and whiteson in two ways we added the domain and used it for a detailed comparison with the ppo algorithm schulman et in practice epg differs from deep dpg lillicrap et silver et only in the exploration strategy though their theoretical underpinnings are also different the hyperparameters for dpg and those of epg that are not related to exploration were taken from an existing benchmark islam et brockman et the exploration hyperparameters for epg were and c where the exploration covariance is ech these values were obtained using a grid search from the set for and for c over the domain since c is just a constant scaling the rewards it is reasonable to set it to whenever reward scaling is already used hence our exploration strategy has just one hyperparameter as opposed specifying a pair of parameters standard deviation and mean reversion constant for ou we used the same learning parameters for the other domains for we used ou exploration and a constant diagonal covariance of in the actor update this approximately corresponds to the average variance of the ou process over time the other parameters for spg are the same as for the rest of the algorithm for the learning curves we obtained confidence intervals and show results of independent evaluation runs that used actions generated by the policy mean without any exploration noise the hessian in gpg is obtained using a method as follows at each step the agent samples action values from s and a quadric is fit to them in the norm since this is a problem it can be accomplished by solving a linear system the hessian computation could be greatly sped up by using an approximate method or even skipped completely if we used a quadric critic however we did not optimise this part of the algorithm since the core message of gpg is that a hessian is useful not how to compute it efficiently the results in figure show that epg s exploration strategy yields much better performance than dpg with ou furthermore spg does poorly solving only the easiest domain reasonably quickly achieving slow progress on and failing entirely on the other domains this is not surprising since dpg was introduced precisely to solve the problem of high variance spg estimates on this type of task in spg initially learns quickly outperforming the other methods this is because noisy gradient updates provide a crude indirect form of exploration that happens to suit this problem clearly this is inadequate for more complex domains even for this simple domain it leads to subpar performance late in learning we tried learning the covariance 
for spg but the covariance estimate was unstable no regularisation hyperparameters we tested matched spg s performance with ou even on the simplest domain ciosek and whiteson a b epg runs dpg runs spg runs epg runs dpg runs spg runs c d epg runs dpg runs spg runs epg runs dpg runs spg runs e epg runs dpg runs spg runs figure learning curves mean and interval returns for are clipped at the number of independent training runs is in parentheses horizontal axis is scaled in thousands of steps in addition epg typically learns more consistently than dpg with ou in three tasks the empirical standard deviation across runs of epg was substantially lower than that of dpg at the end of learning as shown in table for the other two domains the confidence intervals around the empirical standard deviations for dpg and epg were too wide to draw conclusions surprisingly for dpg s learning curve declines late in learning the reason can be seen in the individual runs shown in figure both dpg and spg suffer from severe unlearning this unlearning can not be explained by exploration noise since the expected policy gradients domain table estimated standard deviation mean and interval across runs after learning figure three runs for epg left dpg middle and spg right for the domain demonstrating that epg shows much less unlearning evaluation runs just use the mean action without exploring instead ou exploration in dpg may be too coarse causing the optimiser to exit good optima while spg unlearns due to noise in the gradients the noise also helps speed initial learning as described above but this does not transfer to other domains epg avoids this problem by automatically reducing the noise when it finds a good optimum a hessian with large negative eigenvalues as described is section the fact that epg is stable in this way raises the question whether the instability of an algorithm an inverted or oscillating learning curve is caused primarily by inefficient exploration or by excessivly large differences between subsequent policies to address it we compare our results with proximal policy pptimisation ppo schulman et a policy gradient algorithm designed specifically to include a term penalising the difference between successive policies comparing our epg result for in figure with ppo schulman et figure first row third plot from left blue ppo curve it is clear that epg is more stable this suggests that efficient adaptive exploration of the type used by epg is important for stability even in this relatively simple domain related work in this section we discuss the relationship between epg and several other methods ciosek and whiteson sampling methods for spg epg has some similarities with vine sampling schulman et which uses an intrinsically noisy monte carlo quadrature with many samples however there are important differences first vine relies entirely on reward rollouts and does not use an explicit critic this means that vine has to perform many independent rollouts of q s for each s requiring a simulator with reset a second related difference is that vine uses the same actions in the estimation of that it executes in the environment while this is necessary with purely monte carlo rollouts section shows that there is no such need in general if we have an explicit critic ultimately the main weakness of vine is that it is a purely monte carlo method however the example in figure section shows that even with a computationally expensive monte carlo method the problem of variance in the gradient estimator remains 
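The sampling-based Hessian estimate described in the experiments above (sample Q-values around the policy mean, fit a quadric by least squares, read off the curvature) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions rather than the authors' implementation: the critic interface `q(s, a)`, the sample count, the sampling radius, and the scaling constants `sigma0` and `c` in the exploration covariance proportional to exp(c*H) are all placeholders.

```python
# Minimal sketch (not the authors' code) of the quadric-fit Hessian estimate
# and the exponential-of-Hessian exploration covariance discussed above.
import numpy as np
from scipy.linalg import expm


def quadric_hessian(q, s, mu, n_samples=100, radius=0.1):
    """Estimate the Hessian of a -> q(s, a) at a = mu via a least-squares quadric fit."""
    d = mu.shape[0]
    actions = mu + radius * np.random.randn(n_samples, d)

    # Design matrix of monomials [1, a_1..a_d, a_i*a_j for i <= j] per sampled action.
    iu = np.triu_indices(d)
    X = np.stack([np.concatenate(([1.0], a, np.outer(a, a)[iu])) for a in actions])
    y = np.array([q(s, a) for a in actions])

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least-squares quadric fit
    quad = coef[1 + d:]                            # coefficients of the a_i * a_j monomials

    # Rebuild the Hessian: mixed coefficients fill the off-diagonal entries,
    # and the pure-square coefficients are doubled on the diagonal.
    H = np.zeros((d, d))
    H[iu] = quad
    return H + H.T


def exploration_covariance(H, c=1.0, sigma0=1.0):
    """Gaussian exploration covariance proportional to exp(c * H), as described above."""
    return sigma0 * expm(c * H)
```

Because the quadric fit is an ordinary least-squares problem, the per-step cost reduces to solving one small linear system, consistent with the description above; `np.linalg.lstsq` plays that role in this sketch.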
regardless of the baseline epg is also related to variance minimisation techniques that interpolate between two estimators gu et however epg uses a quadric not linear critic which is crucial for exploration furthermore it completely eliminates variance in the inner integral as opposed to just reducing it a more direct way of coping with variance in policy gradients is to simply reduce the learning rate when the variance of the gradient would otherwise explode using adam kingma and ba natural policy gradients kakade amari peters and schaal trust region policy optimisation schulman et proximal policy optimisation schulman et the adaptive step size method pirotta et or newton s method furmston and barber furmston et parisi et however this results in slow learning when the variance is high sarsa and it has been known since the introduction of policy gradient methods sutton et that they represent a kind of policy improvement as opposed to a greedy improvement performed by methods such as expected sarsa the two main reasons for the improvement are that a greedy maximisation operator may not be available for continuous or large discrete action spaces and that a greedy step may be too large because the critic only approximates the value function the argument for a method is that it may converge faster and does not need an additional optimisation for the actor recently approaches combining the features of both methods have been investigated newton s method for that are quadric in the actions has been used to produce a algorithm for continuous domains gu et previously only tractable with policy gradient methods for discrete action spaces softmax a family of methods with a hybrid loss combining sarsa and has recently been linked to policy gradients via an entropy term o donoghue et in this paper gpg with exploration section can be seen as another kind of hybrid specifically it changes the mean of the policy slowly similar to a vanilla policy gradient method and computes the covariance greedily similar to sarsa expected policy gradients dpg the update for the policy mean obtained in corollary is the same as the dpg update linking the two methods s q a s we now formalise the equivalences between epg and dpg first any epg method with a linear critic or an arbitrary critic approximated by the first term in the taylor expansion is equivalent to dpg with actions from a given state s drawn from an exploration policy of the form a s n where a s here the pdf of the exploration noise n must not depend on the policy parameters this fact follows directly from lemma which says that in essence a linear critic only gives information on how to shift the mean of the policy and no information about other moments second gpg with a quadric critic or an arbitrary critic approximated by the first two terms in the taylor expansion is equivalent to dpg with a gaussian exploration policy where the covariance is computed as in section this follows from corollary third and most generally for any critic at all not necessarily quadric dpg is a kind of epg for a particular choice of quadrature using a dirac measure this follows from theorem surprisingly this means that dpg normally considered to be can also be seen as when exploring with gaussian noise defined as above for the quadric critic or any noise for the linear critic furthermore the compatible critic for dpg silver et is indeed linear in the actions hence this relationship holds whenever dpg uses a compatible furthermore lemma lends new legitimacy to the common practice 
of replacing the critic required by the dpg theory which approximates q with one that approximates q itself as done in spg and epg methods spg sometimes includes an entropy term peters et in the gradient in order to aid exploration by making the policy more stochastic the gradient of the differential entropy h s of the policy at state s is defined as r s a log r r a log a log r r a log a r r a log a z r r a log a log log the notion of compatibility of a critic is different for stochastic and deterministic policy gradients for discrete action spaces the same derivation with integrals replaced by sums holds for the entropy ciosek and whiteson typically we add the entropy update to the policy gradient update with a weight e ig s ig s s r a log q a s log this equation makes clear that performing entropy regularisation is equivalent to using a different critic with shifted by log this holds for both epg and spg including spg with discrete actions where the integral over actions is replaced with a sum this follows because adding entropy regularisation to the objective of optimising the total discounted reward in an rl setting corresponds to shifting the reward function by a term proportional to log neu et nachum et indeed the path consistency learning algorithm nachum et contains a formula similar to though we obtained ours independently next we derive a further specialisation of for the case where the parameters are shared between the actor and the critic we start with the policy gradient identity given by and replace the true critic q with the approximate critic since this holds for any stochastic policy we choose one of the form a s e z s z where z s a s da a for the continuous case we assume that the integral in converges for each state here we assume that the approximate critic is parameterised by because of the form of the policy is parameterised by as well now for the policy class given by we can simplify the gradient update even further obtaining e ig s r a log a s r a log s a log log a s log z s z a s r a log s a s in the above derivation we could drop the term log z s since it does not depend on a as with a baseline this shows that in the case of sharing parameters between the critic and the policy as above methods such as mnih et which have both an entropy loss and a policy gradient loss are redundant since entropy regularisation does nothing except scale the learning alternatively for this shared parameterisation a policy gradient method simply subtracts entropy from the policy in practice this means that a policy gradient method with this kind of parameter sharing is quite similar to learning the critic alone and simply acting according to the argmax of the q values rather than representing the policy explicitly producing a method similar to sarsa in this argument we ignore the effects of sampling on exploration expected policy gradients learning with policy gradients typically follows the framework of actorcritic degris et denote the behaviour policy as b a s and the corresponding measure as the method uses the following reweighting approximation z z s a s log a s q a s za zs s a s log a s q a s s a the approximation is necessary since as the samples are generated using the policy b it is not known how to approximate the integral with from samples while it is easy to do so for an integral with a natural version of epg emerges from this approximation see algorithm which simply replaces the inner integral with z z z s a s log a s q a s s s s a s here we use an analytic solution to s 
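To make the phrase "analytic solution" concrete, the following sketch computes the inner integral in closed form for a one-dimensional Gaussian policy and a quadric critic Q(a, s) = A a^2 + b a, and contrasts it with the score-function Monte Carlo estimator that stochastic policy gradients would otherwise use. It is an illustration, not the paper's code; the toy coefficients are assumptions, and the closed-form derivatives follow directly from E[Q] = A (mu^2 + sigma^2) + b mu.

```python
# Minimal sketch: analytic inner integral for a 1-D Gaussian policy and a
# quadric critic, versus the noisy score-function estimator it replaces.
import numpy as np


def analytic_inner_gradient(mu, sigma, A, b):
    """Closed-form d/dmu and d/dsigma of E_{a ~ N(mu, sigma^2)}[A*a^2 + b*a]."""
    # E[Q] = A*(mu**2 + sigma**2) + b*mu, so both derivatives are exact:
    return 2.0 * A * mu + b, 2.0 * A * sigma


def monte_carlo_inner_gradient(mu, sigma, A, b, n=100):
    """Score-function estimate of the same mean-gradient: unbiased but noisy."""
    a = np.random.normal(mu, sigma, size=n)
    q = A * a**2 + b * a
    return np.mean((a - mu) / sigma**2 * q)


mu, sigma, A, b = 0.3, 0.5, -1.0, 0.8                      # assumed toy values
print(analytic_inner_gradient(mu, sigma, A, b))            # exact, identical on every run
print([monte_carlo_inner_gradient(mu, sigma, A, b) for _ in range(3)])  # fluctuates run to run
```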
as before the importance sampling term b does not appear because as the integral is computed analytically there is no sampling in s much less sampling with an importance correction of course the algorithm also requires an critic for which an importance sampling correction is typically necessary indeed makes clear that differs from spg in two places the use of as in and the use of an monte carlo estimator rather than regular monte carlo for the inner integral algorithm expected policy gradients with reweighting approximation s t initialise optimiser initialise policy parameterised by while not converged do gt t s gt is the estimated policy gradient as per gt a b s r a s a r b critic algorithm s end while value gradient methods value gradient methods fairbank fairbank and alonso heess et assume the same parameterisation of the policy as policy gradients is parameterised by and maximise j by recursively computing the gradient of the value function in our notation the policy gradient has the following connection with the value gradient of the ciosek and whiteson initial state z value gradient methods use a recursive equation that computes s using where is the successor state in practice this means that a trajectory is truncated and the computation goes backward from the last state all the way to where is applied so that the resulting estimate of can be used to update the policy the recursive formulae for s are based on the differentiated bellman equation z z r s a p s s v s a different value gradient methods differ in the form of the recursive update for the value gradient obtained from for example stochastic value gradients svg introduce a reparameterisation both of and p s p s f a s with a a h s with here we denote the base noise distributions as and while f and h are deterministic functions the function f can be thought of as an mdp transition model svg rewrites using the reparameterisation as follows r r r s h s r s h s r v f h s s f h s s z here the quantities s h s and f h s s can be computed by the chain rule from the known reward model r transition model f svg learns the approximate from samples and using a approximation to to obtain the model value gradient recursion by contrast we now derive a related but simpler value gradient method that does not require a model or a reparameterised starting with r r s a r s a p s v r r r a r s a p s v p s v r a p s log r s a log v now can be approximated from samples s log r s a log svg and svg require a model an a policy reparameterisation while svg requires only a policy reparameterisation however svg is inefficient since it does not directly use the reward in the computation of the value gradient expected policy gradients policy class normal a rd a d a d any policy squashing none a expit b a eb none analytic update q a as a a bs s bs b as b b bs iq as b as b b b s s bs a s bs table a summary of the most useful analytic results for expected policy gradients for bounded action spaces we assume that the bounding interval is or here the pair a corresponds to the action taken at s and the successor state this method requires learning a critic while svg requires a model an additional connection between value gradient methods and policy gradients r is that since the quantity ig s in theorem can be written as ig s s s we can think of this theorem as showing how to obtain a policy gradient from a value gradient without backwards iteration conclusions this paper proposed a new framework for reasoning about policy gradient methods called expected policy 
gradients epg that integrates across the action selected by the stochastic policy thus reducing variance compared to existing stochastic policy gradient methods we proved a new general policy gradient theorem subsuming the stochastic and deterministic policy gradient theorems which covers any reasonable class of policies we showed that analytical results for the policy update exist and in the most common cases lead to a practical algorithm the analytic updates are summarised in table we also gave universality results which state that under certain broad conditions the quadrature required by epg can be performed analytically for gaussian policies we also developed a novel approach to exploration that infers the exploration covariance from the hessian of the critic the analysis of epg yielded new insights about dpg and delineated the links between the two methods we also discussed the connections between epg and other common rl techniques notably sarsa and entropy regularisation finally we evaluated the gpg algorithm in six practical domains showing that it outperforms existing techniques acknowledgments this project has received funding from the european research council erc under the european union s horizon research and innovation programme grant agreement number the experiments were made possible by a generous equipment grant from nvidia ciosek and whiteson appendix proofs and detailed definitions first we prove two lemmas concerning the measure s which have been implicitly realised for some time but as far as we could find never proved explicitly definition occupancy p s t s z p t i p s p s t i for s definition truncated trajectory define the trajectory truncated after n steps as n sn observation expectation wrt truncated trajectory since sn qn is associated with the density p si we have that hp i n i f s i r n n i f s ds ds ds sn p s s p s i i n p r qn i n sn p si f si dsn p r i n s p s t i f s ds for any function f definition expectation with respect to infinite trajectory for any bounded function f we have n x x i i f si lim f si n here the sum on the side is part of the symbol being defined observation property of expectation with respect to infinite trajectory hp i i n i f s f s lim e i i n pn r limn s p s t i i f s ds z x dp s t i i f s s for any bounded function f definition occupancy measure s x i p s t i expected policy gradients the measure is not normalised in general intuitively it can be thought of as marginalising out the time in the system dynamics lemma property for any bounded function f z s f s s x i f si proof x i f si x i z p s t i f s ds s z x s i p s t i f s ds z s here the first equality follows from observation this property is useful since the expression on the left can be easily manipulated while the expression on the right can be estimated from samples using monte carlo lemma generalised eigenfunction property for any bounded function f z z dp s s f s s s s s f s s f s s proof r s s r r p i dp s f s p s t i p s s f s dsds r dp s t i f s i r s dp s t i f r i r dp s f s dp s t i f s s r r s s f s s s f s here the first equality follows form definition the second one from definition the last equality follows again from definition definition markov reward process a markov reward process is a tuple p r where p is a transition kernel is the distribution over initial states r is a reward distribution conditioned on the state and is the discount constant an mrp can be thought of as an mdp r with a fixed policy and dynamics given by marginalising out the actions s a a 
s p a s since this paper considers the case of one policy we abuse notation slightly by using the same symbol to denote trajectories including actions and without them ciosek and whiteson lemma second moment bellman equation consider a markov reward process p x where p s is a markov process and x s is some probability density denote the value function of the mrp as v denote the second moment function s as x s s t xt xt x st then s is the value function of the mrp p u where u s is a deterministic random variable given by u s vx x ex x x ep v proof i tx s s t h i p t tx x s s t t h p i t tx s s s x s s e t t z z s s h u s ep s this is exactly the bellman equation of the mrp p u the theorem follows since the bellman equation uniquely determines the value function observation dominated value functions consider two markov reward processes p and p where p s is a markov process common to both mrps and s s are some deterministic random variables meeting the condition s s for every then the value functions and of the respective mrps satisfy s s for every moreover if we have that s s for all states then the inequality between value functions is strict proof follows trivially by expanding the value function as a series and comparing series elementwise computation of moments for an exponential family consider the moment generating function of t a which we denote as mt for the exponential family of the form given in equation mt v note that while x occupies a place in the definition of the mrp usually called reward distribution we are using the symbol x not r since we shall apply the lemma to xes which are constructions distinct from the reward of the mdp we are solving expected policy gradients it is that mt is finite in a neighbourhood of the origin bickel and doksum and hence the cross moments can be obtained as k y p j mt v t a j p p p k vk here we denoted as k the size of the sufficient statistic the length of the vector t a however we seek the of a not t a if t a contains a subset of indices which correspond to the vector a then we can simply use the corresponding indices in the above equation on the other hand if this is not the case we can introduce an extended distribution a s t a a where t is a vector concatenation of t and a we can then use the mgf of t a restricted to a suitable set of indices to get the moments references amari natural gradient works efficiently in learning neural computation asadi allen roderick mohamed konidaris and littman mean actor critic arxiv september leemon baird et al residual algorithms reinforcement learning with function approximation in proceedings of the twelfth international conference on machine learning pages shalabh bhatnagar mohammad ghavamzadeh mark lee and richard s sutton incremental natural algorithms in advances in neural information processing systems pages peter bickel and kjell doksum mathematical statistics basic ideas and selected topics vol edition prentice hall edition isbn greg brockman vicki cheung ludwig pettersson jonas schneider john schulman jie tang and wojciech zaremba openai gym arxiv preprint kamil ciosek and shimon whiteson expected policy gradients in aaai proceedings of the aaai conference on artificial intelligence february john l crassidis and john l junkins optimal estimation of dynamic systems crc press thomas degris martha white and richard s sutton arxiv preprint michael fairbank learning phd thesis city university london michael fairbank and eduardo alonso learning in neural networks ijcnn the international joint 
conference on pages ieee ciosek and whiteson thomas furmston and david barber a unifying perspective of parametric policy search methods for markov decision processes in advances in neural information processing systems pages thomas furmston guy lever and david barber approximate newton methods for policy search in markov decision processes journal of machine learning research shixiang gu timothy lillicrap zoubin ghahramani richard e turner and sergey levine policy gradient with an critic arxiv preprint shixiang gu timothy lillicrap ilya sutskever and sergey levine continuous deep qlearning with acceleration in international conference on machine learning pages nicolas heess gregory wayne david silver tim lillicrap tom erez and yuval tassa learning continuous control policies by stochastic value gradients in advances in neural information processing systems pages riashat islam peter henderson maziar gomrokchi and doina precup reproducibility of benchmarked deep reinforcement learning tasks for continuous control arxiv preprint sham m kakade a natural policy gradient in advances in neural information processing systems pages diederik kingma and jimmy ba adam a method for stochastic optimization arxiv preprint michail g lagoudakis and ronald parr policy iteration journal of machine learning research dec weiwei li and emanuel todorov iterative linear quadratic regulator design for nonlinear biological movement systems in icinco pages timothy p lillicrap jonathan j hunt alexander pritzel nicolas heess tom erez yuval tassa david silver and daan wierstra continuous control with deep reinforcement learning arxiv preprint volodymyr mnih adria puigdomenech badia mehdi mirza alex graves timothy lillicrap tim harley david silver and koray kavukcuoglu asynchronous methods for deep reinforcement learning in international conference on machine learning pages ofir nachum mohammad norouzi kelvin xu and dale schuurmans bridging the gap between value and policy based reinforcement learning arxiv preprint expected policy gradients gergely neu anders jonsson and a unified view of markov decision processes arxiv preprint brendan o donoghue remi munos koray kavukcuoglu and volodymyr mnih combining policy gradient and simone parisi matteo pirotta and marcello restelli reinforcement learning through continuous pareto manifold approximation journal of artificial intelligence research jan peters and stefan schaal policy gradient methods for robotics in intelligent robots and systems international conference on pages ieee jan peters and stefan schaal natural neurocomputing jan peters and stefan schaal reinforcement learning of motor skills with policy gradients neural networks jan peters katharina and yasemin altun relative entropy policy search in aaai pages atlanta matteo pirotta marcello restelli and luca bascetta adaptive for policy gradient methods in advances in neural information processing systems pages martin l puterman markov decision processes discrete stochastic dynamic programming john wiley sons michael roth gustaf hendeby and fredrik gustafsson nonlinear kalman filters explained a tutorial on moment computations and sigma point methods journal of advances in information fusion gavin a rummery and mahesan niranjan using connectionist systems university of cambridge department of engineering john schulman sergey levine pieter abbeel michael jordan and philipp moritz trust region policy optimization in proceedings of the international conference on machine learning pages john schulman filip wolski 
prafulla dhariwal alec radford and oleg klimov proximal policy optimization algorithms arxiv preprint david silver guy lever nicolas heess thomas degris daan wierstra and martin riedmiller deterministic policy gradient algorithms in icml marshall h stone the generalized weierstrass approximation theorem mathematics magazine richard s sutton generalization in reinforcement learning successful examples using sparse coarse coding advances in neural information processing systems pages ciosek and whiteson richard s sutton and andrew g barto reinforcement learning an introduction volume mit press cambridge richard s sutton david a mcallester satinder p singh and yishay mansour policy gradient methods for reinforcement learning with function approximation in advances in neural information processing systems pages emanuel todorov tom erez and yuval tassa mujoco a physics engine for modelbased control in intelligent robots and systems iros international conference on pages ieee george e uhlenbeck and leonard s ornstein on the theory of the brownian motion physical review harm van seijen hado van hasselt shimon whiteson and marco wiering a theoretical and empirical analysis of expected sarsa in adprl proceedings of the ieee symposium on adaptive dynamic programming and reinforcement learning pages march url http karl weierstrass die analytische darstellbarkeit sogenannter functionen einer reellen sitzungsberichte der akademie der wissenschaften zu berlin
| 2 |