Dataset fields: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
@cite_4 were the first to suggest the CRDT approach. They give the example of an array with a slot assignment operation. To make concurrent assignments commute, they propose a deterministic procedure (based on vector clocks) whereby one takes precedence over the other.
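To make the precedence idea concrete, here is a minimal sketch (not the cited paper's actual algorithm; the class name, the merge API, and the tie-break on replica id are illustrative assumptions). Each replica tags its current slot value with a vector clock; a merge keeps the causally newer value, and when two assignments are concurrent a deterministic rule picks the same winner everywhere, so all replicas converge.

```python
# Minimal sketch (not the cited paper's exact algorithm): one array slot whose
# concurrent assignments are made to commute by a deterministic precedence rule
# based on vector clocks; the remote replica's id breaks ties between
# concurrent assignments.

def dominates(vc_a, vc_b):
    """True if vector clock vc_a happened-after (or equals) vc_b."""
    keys = set(vc_a) | set(vc_b)
    return all(vc_a.get(k, 0) >= vc_b.get(k, 0) for k in keys)

class SlotRegister:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.value = None
        self.clock = {}          # vector clock of the current value

    def assign(self, value):
        # Local assignment: advance our own entry in the vector clock.
        self.clock = dict(self.clock)
        self.clock[self.replica_id] = self.clock.get(self.replica_id, 0) + 1
        self.value = value

    def merge(self, other_value, other_clock, other_id):
        if dominates(self.clock, other_clock):
            return                                  # remote assignment is older
        if dominates(other_clock, self.clock):
            self.value, self.clock = other_value, dict(other_clock)
            return                                  # remote assignment is newer
        # Concurrent assignments: deterministic precedence (here, the larger
        # replica id wins), so every replica picks the same winner.
        if other_id > self.replica_id:
            self.value = other_value
        # Keep the pointwise-max clock so later updates dominate both writes.
        merged = dict(self.clock)
        for k, v in other_clock.items():
            merged[k] = max(merged.get(k, 0), v)
        self.clock = merged
```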
{ "cite_N": [ "@cite_4" ], "mid": [ "2080920242" ], "abstract": [ "As collaboration over the Internet becomes an everyday affair, it is increasingly important to provide high quality of interactivity. Distributed applications can replicate collaborative objects at every site for the purpose of achieving high interactivity. Replication, however, has a fatal weakness that it is difficult to maintain consistency among replicas. This paper introduces operation commutativity as a key principle in designing operations in order to manage distributed replicas consistent. In addition, we suggest effective schemes that make operations commutative using the relations of objects and operations. Finally, we apply our approaches to some simple replicated abstract data types, and achieve their consistency without serialization and locking." ] }
This is similar to the well-known Last-Writer Wins algorithm, used in shared file systems. Each file replica is timestamped with the time it was last written. Timestamps are consistent with happens-before @cite_13 . When comparing two versions of the file, the one with the highest timestamp takes precedence. This is correct with respect to successive writes related by happens-before, and constitutes a simple precedence rule for concurrent writes.
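A companion sketch of last-writer-wins with a Lamport clock (again illustrative, not any specific system's code): because the logical clock is advanced past every timestamp a replica receives, a write that happens-after another always carries a larger timestamp, and ties between concurrent writes are broken deterministically by replica id.

```python
# Illustrative last-writer-wins file replica: the timestamp is a Lamport
# logical clock, so a write that happened-after another always gets a larger
# timestamp; ties between concurrent writes are broken by the writer's id.

class LWWFile:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.clock = 0                      # Lamport clock
        self.content = ""
        self.stamp = (0, replica_id)        # (timestamp, writer id)

    def write(self, content):
        self.clock += 1
        self.content = content
        self.stamp = (self.clock, self.replica_id)

    def receive(self, content, stamp):
        # Lamport rule: move our clock past the incoming timestamp.
        self.clock = max(self.clock, stamp[0]) + 1
        # Precedence rule: the highest (timestamp, writer id) pair wins.
        if stamp > self.stamp:
            self.content, self.stamp = content, stamp
```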
{ "cite_N": [ "@cite_13" ], "mid": [ "1973501242" ], "abstract": [ "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become." ] }
In Lamport's replicated state machine approach @cite_13 , every replica executes the same operations in the same order. This total order is computed either by a consensus algorithm such as Paxos @cite_18 or, equivalently, by using an atomic broadcast mechanism @cite_0 . Such algorithms can tolerate faults. However, they are complex and scale poorly: consensus occurs within the critical execution path, adding latency to every operation.
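The state-machine idea itself is simple once a total order is given; the toy sketch below assumes the ordered log is already produced by some consensus or atomic-broadcast layer (the `apply_log` helper and the operation format are made up for illustration).

```python
# Toy replicated state machine: given the same initial state and the same
# totally ordered log of deterministic operations (as produced by consensus
# or atomic broadcast), every replica computes the same final state.

def apply_log(initial_state, ordered_log):
    state = dict(initial_state)
    for op, key, value in ordered_log:      # each entry is deterministic
        if op == "set":
            state[key] = value
        elif op == "del":
            state.pop(key, None)
    return state

log = [("set", "x", 1), ("set", "y", 2), ("del", "x", None)]
assert apply_log({}, log) == apply_log({}, log)   # identical on every replica
```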
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_13" ], "mid": [ "2130264930", "2075854425", "1973501242" ], "abstract": [ "Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.", "Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.", "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become." ] }
In the treedoc design, common edit operations execute optimistically, with no latency; it uses consensus in the background only. Previously, Golding relied on background consensus for garbage collection @cite_21 . We are not aware of previous instances of background consensus for structural operations, nor of aborting consensus when it conflicts with essential operations.
{ "cite_N": [ "@cite_21" ], "mid": [ "1490833583" ], "abstract": [ "Many distributed systems for wide-area networks can be built conveniently, and operate efficiently and correctly, using a weak consistency group communication mechanism. This mechanism organizes a set of principals into a single logical entity, and provides methods to multicast messages to the members. A weak consistency distributed system allows the principals in the group to differ on the value of shared state at any given instant, as long as they will eventually converge to a single, consistent value. A group containing many principals and using weak consistency can provide the reliability, performance, and scalability necessary for wide-area systems. I have developed a framework for constructing group communication systems, for classifying existing distributed system tools, and for constructing and reasoning about a particular group communication model. It has four components: message delivery, message ordering, group membership, and the application. Each component may have a different implementation, so that the group mechanism can be tailored to application requirements. The framework supports a new message delivery protocol, called timestamped anti-entropy, which provides reliable, eventual message delivery; is efficient; and tolerates most transient processor and network failures. It can be combined with message ordering implementations that provide ordering guarantees ranging from unordered to total, causal delivery. A new group membership protocol completes the set, providing temporarily inconsistent membership views resilient to up to k simultaneous principal failures. The Refdbms distributed bibliographic database system, which has been constructed using this framework, is used as an example. Refdbms databases can be replicated on many different sites, using the group communication system described here." ] }
0710.0528
2035639486
In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin^ω which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin^ω makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin^ω and we lift it to two well-known abstractions of ShLin^ω . Namely, to the classical Sharing × Lin abstract domain and to the more precise ShLin^2 abstract domain by Andy King. In the case of
In most of the work combining sharing and linearity, freeness information is included in the abstract domain. In fact, freeness may improve the precision of the aliasing component and it is also interesting by itself, for example in the parallelization of logic programs @cite_18 . In this comparison, we do not consider the freeness component.
{ "cite_N": [ "@cite_18" ], "mid": [ "1999281702" ], "abstract": [ "Abstract This paper presents some fundamental properties of independent and- parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed and issues of correctness and efficiency are discussed in the light of this model. Two conditions, “strict” and “nonstrict” independence, are defined and then proved sufficient to ensure correctness and efficiency of parallel execution: If goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD resolution), while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slowdown will occur. For strict independence, the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: Compiletime conditions to efficiently check goal independence at run time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler." ] }
The following is a counterexample to the optimality of the abstract unification in @cite_12 , in the case of finite trees, when pair sharing is equipped with @math or @math .
{ "cite_N": [ "@cite_12" ], "mid": [ "1970730703" ], "abstract": [ "Abstract Sharing information is useful in specialising, optimising and parallelising logic programs and thus sharing analysis is an important topic of both abstract interpretation and logic programming. Sharing analyses infer which pairs of program variables can never be bound to terms that contain a common variable. We generalise a classic pair-sharing analysis from Herbrand unification to trace sharing over rational tree constraints. This is useful for reasoning about programs written in SICStus and Prolog-III because these languages use rational tree unification as the default equation solver." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
Kuhn et al. @cite_14 present a distributed approximation scheme for the packing LP and the covering LP. The algorithm provides a local approximation scheme for some families of packing and covering LPs. For example, let @math for all @math . Then for each @math , @math and @math , there is a local algorithm with some constant horizon @math which achieves an @math -approximation. Our work shows that such local approximation schemes do not exist for max-min LPs.
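For reference, the packing and covering LPs referred to here form the standard primal–dual pair; the generic forms below are shown only for orientation and are not the exact formulation of @cite_14 .

```latex
% Generic packing/covering LPs (standard forms, shown for orientation only).
\begin{align*}
\text{packing:}\quad  &\max \sum_{v} c_v x_v &&\text{s.t. } \sum_{v} a_{iv} x_v \le b_i \ \ \forall i, \quad x_v \ge 0,\\
\text{covering:}\quad &\min \sum_{i} b_i y_i &&\text{s.t. } \sum_{i} a_{iv} y_i \ge c_v \ \ \forall v, \quad y_i \ge 0.
\end{align*}
```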
{ "cite_N": [ "@cite_14" ], "mid": [ "1998137177" ], "abstract": [ "Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains an (ρΔ)1 k-approximation for general covering and packing problems in time O(k2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n1 k) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood." ] }
Another distributed approximation scheme by Kuhn et al. @cite_14 forms several decompositions of @math into subgraphs, solves the optimisation problem optimally for each subgraph, and combines the solutions. However, the algorithm is not a local approximation algorithm in the strict sense used here: to obtain any constant approximation ratio, the local horizon must grow (logarithmically) as the number of variables increases. Bartal et al. @cite_2 also present a distributed but not local approximation scheme for the packing LP.
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "1998137177", "2129295346" ], "abstract": [ "Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains an (ρΔ)1 k-approximation for general covering and packing problems in time O(k2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n1 k) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.", "Flow control in high speed networks requires distributed routers to make fast decisions based only on local information in allocating bandwidth to connections. While most previous work on this problem focuses on achieving local objective functions, in many cases it may be necessary to achieve global objectives such as maximizing the total flow. This problem illustrates one of the basic aspects of distributed computing: achieving global objectives using local information. Papadimitriou and Yannakakis (1993) initiated the study of such problems in a framework of solving positive linear programs by distributed agents. We take their model further, by allowing the distributed agents to acquire more information over time. We therefore turn attention to the tradeoff between the running time and the quality of the solution to the linear program. We give a distributed algorithm that obtains a (1+ spl epsiv ) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds. While comparable in running time, our results exhibit a significant improvement on the logarithmic ratio previously obtained by Awerbuch and Azar (1994). Our algorithm, which draws from techniques developed by Luby and Nisan (1993) is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings." ] }
Kuhn and Wattenhofer @cite_9 present a family of local, constant-factor approximation algorithms for the covering LP that is obtained as an LP relaxation of the minimum dominating set problem. Kuhn et al. @cite_3 present a local, constant-factor approximation of the packing and covering LPs in unit-disk graphs.
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2127622665", "2114986147" ], "abstract": [ "Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.", "Many large-scale networks such as ad hoc and sensor networks, peer-to-peer networks, or the Internet have the property that the number of independent nodes does not grow arbitrarily when looking at neighborhoods of increasing size. Due to this bounded \"volume growth,\" one could expect that distributed algorithms are able to solve many problems more efficiently than on general graphs. The goal of this paper is to help understanding the distributed complexity of problems on \"bounded growth\" graphs. We show that on the widely used unit disk graph, covering and packing linear programs can be approximated by constant factors in constant time. For a more general network model which is based on the assumption that nodes are in a metric space of constant doubling dimension, we show that in O(log*!n) rounds it is possible to construct a (O(1), O(1))-network decomposition. This results in asymptotically optimal O(log*!n) time algorithms for many important problems." ] }
There are few examples of local algorithms which approximate linear problems beyond packing and covering LPs. Kuhn et al. @cite_11 study an LP relaxation of the @math -fold dominating set problem and obtain a local constant-factor approximation for bounded-degree graphs.
{ "cite_N": [ "@cite_11" ], "mid": [ "2149477108" ], "abstract": [ "In this paper, we study distributed approximation algorithms for fault-tolerant clustering in wireless ad hoc and sensor networks. A k-fold dominating set of a graph G = (V,E) is a subset S of V such that every node v V S has at least k neighbors in S. We study the problem in two network models. In general graphs, for arbitrary parameter t, we propose a distributed algorithm that runs in time O(t^2) and achieves an approximation ratio of O(t ^2 t log ), where n and denote the number of nodes in the network and the maximal degree, respectively. When the network is modeled as a unit disk graph, we give a probabilistic algorithm that runs in time O(log log n) and achieves an O(1) approximation in expectation. Both algorithms require only small messages of size O(log n) bits." ] }
For combinatorial problems, there are both negative @cite_10 @cite_0 and positive @cite_7 @cite_11 @cite_9 @cite_16 @cite_1 results on the applicability of local algorithms.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_0", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "1547520624", "2127622665", "2075105933", "2054910423", "2017345786", "2592539685", "2149477108" ], "abstract": [ "We study fractional scheduling problems in sensor networks, in particular, sleep scheduling (generalisation of fractional domatic partition) and activity scheduling (generalisation of fractional graph colouring). The problems are hard to solve in general even in a centralised setting; however, we show that there are practically relevant families of graphs where these problems admit a local distributed approximation algorithm; in a local algorithm each node utilises information from its constant-size neighbourhood only. Our algorithm does not need the spatial coordinates of the nodes; it suffices that a subset of nodes is designated as markers during network deployment. Our algorithm can be applied in any marked graph satisfying certain bounds on the marker density; if the bounds are met, guaranteed near-optimal solutions can be found in constant time, space and communication per node.We also show that auxiliary information is necessary--no local algorithm can achieve a satisfactory approximation guarantee on unmarked graphs.", "Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.", "In this paper, we review a recently developed class of algorithms that solve global problems in unit distance wireless networks by means of local algorithms. A local algorithm is one in which any node of a network only has information on nodes at distance at most k from itself, for a constant k. For example, given a unit distance wireless network N, we want to obtain a planar subnetwork of N by means of an algorithm in which all nodes can communicate only with their neighbors in N, perform some operations, and then halt. We review algorithms for obtaining planar subnetworks, approximations to minimum weight spanning trees, Delaunay triangulations, and relative neighbor graphs. Given a unit distance wireless network N, we present new local algorithms to solve the following problems:1.Calculate small dominating sets (not necessarily connected) of N. 2.Extract a bounded degree planar subgraph H of N and obtain a proper edge coloring of H with at most 12 colors. The second of these algorithms can be used in the channel assignment problem.", "This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . 
This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math .", "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).", "", "In this paper, we study distributed approximation algorithms for fault-tolerant clustering in wireless ad hoc and sensor networks. A k-fold dominating set of a graph G = (V,E) is a subset S of V such that every node v V S has at least k neighbors in S. We study the problem in two network models. In general graphs, for arbitrary parameter t, we propose a distributed algorithm that runs in time O(t^2) and achieves an approximation ratio of O(t ^2 t log ), where n and denote the number of nodes in the network and the maximal degree, respectively. When the network is modeled as a unit disk graph, we give a probabilistic algorithm that runs in time O(log log n) and achieves an O(1) approximation in expectation. Both algorithms require only small messages of size O(log n) bits." ] }
0710.2296
2124886183
We say that a graph G=(V,E) on n vertices is a β-expander for some constant β>0 if every U ⊆ V of cardinality |U| ≤ n/2 satisfies |N_G(U)| ≥ β|U|, where N_G(U) denotes the neighborhood of U. In this work we explore the process of deleting vertices of a β-expander independently at random with probability n^{-α} for some constant α>0, and study the properties of the resulting graph. Our main result states that as n tends to infinity, the deletion process performed on a β-expander graph of bounded degree will result with high probability in a graph composed of a giant component containing n-o(n) vertices that is in itself an expander graph, and constant size components. We proceed by applying the main result to expander graphs with a positive spectral gap. In the particular case of (n,d,λ)-graphs, that are such expanders, we compute the values of α, under additional constraints on the graph, for which with high probability the resulting graph will stay connected, or will be composed of a giant component and isolated vertices. As a graph sampled from the uniform probability space of d-regular graphs with high probability is an expander and meets the additional constraints, this result strengthens a recent result due to Greenhill, Holt and Wormald about vertex percolation on random d-regular graphs. We conclude by showing that performing the above described deletion process on graphs that expand sub-linear sets by an unbounded expansion ratio, with high probability results in a connected expander graph.
The process of randomly deleting vertices of a graph has received rather limited attention, mainly in the context of faulty storage (see e.g. @cite_0 ), communication networks, and distributed computing. For instance, the main motivation of @cite_12 is the SWAN peer-to-peer network @cite_7 , whose topology possesses some properties of @math -regular graphs and may have faulty nodes. Other works are mainly interested in connectivity and routing in the resulting graph after performing (possibly adversarial) vertex deletions on some prescribed graph topologies.
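A quick way to build intuition for this deletion process is simulation; the sketch below is illustrative only (the parameter values are arbitrary and it assumes the networkx library is available): it deletes each vertex of a random d-regular graph independently with probability n^{-α} and reports the size of the largest surviving component.

```python
# Illustrative simulation of vertex percolation on a random d-regular graph:
# delete each vertex independently with probability p = n**(-alpha) and
# measure the largest connected component of what remains.
import random
import networkx as nx

def percolate(n=10000, d=4, alpha=0.5, seed=0):
    random.seed(seed)
    g = nx.random_regular_graph(d, n, seed=seed)
    p = n ** (-alpha)
    survivors = [v for v in g.nodes if random.random() >= p]
    h = g.subgraph(survivors)
    giant = max((len(c) for c in nx.connected_components(h)), default=0)
    return len(survivors), giant

kept, giant = percolate()
print(f"kept {kept} vertices; largest component has {giant} of them")
```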
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_12" ], "mid": [ "2048114999", "2477953794", "2058466367" ], "abstract": [ "In this paper, we provide a method to safely store a document in perhaps the most challenging settings, a highly decentralized replicated storage system where up to half of the storage servers may incur arbitrary failures, including alterations to data stored in them. Using an error correcting code (ECC), e.g., a Reed?Solomon code, one can take n pieces of a document, replace each piece with another piece of size larger by a factor of nn?2t+1 such that it is possible to recover the original set even when up to t of the larger pieces are altered. For t close to n 2 the space blowup factor of this scheme is close to n, and the overhead of an ECC such as the Reed?Solomon code degenerates to that of a trivial replication code. We show a technique to reduce this large space overhead for high values of t. Our scheme blows up each piece by a factor slightly larger than two using an erasure code which makes it possible to recover the original set using n 2?O(n d) of the pieces, where d?80 is a fixed constant. Then we attach to each piece O(d log n log d) additional bits to make it possible to identify a large enough set of unmodified pieces, with negligible error probability, assuming that at least half the pieces are unmodified and with low complexity. For values of t close to n 2 we achieve a large asymptotic space reduction over the best possible space blowup of any ECC in deterministic setting. Our approach makes use of a d-regular expander graph to compute the bits required for the identification of n 2?O(n d) good pieces.", "", "We investigate the following vertex percolation process. Starting with a random regular graph of constant degree, delete each vertex independently with probability p, where p=n^-^@a and @[email protected](n) is bounded away from 0. We show that a.a.s. the resulting graph has a connected component of size n-o(n) which is an expander, and all other components are trees of bounded size. Sharper results are obtained with extra conditions on @a. These results have an application to the cost of repairing a certain peer-to-peer network after random failures of nodes." ] }
0709.2252
2953139907
We propose Otiy, a node-centric location service that limits the impact of location updates generated by mobile nodes in IEEE802.11-based wireless mesh networks. Existing location services use node identifiers to determine the locator (aka anchor) that is responsible for keeping track of a node's location. Such a strategy can be inefficient because: (i) identifiers give no clue on the node's mobility and (ii) locators can be far from the source-destination shortest path, which increases both location delays and bandwidth consumption. To solve these issues, Otiy introduces a new strategy that identifies nodes to play the role of locators based on the likelihood of a destination being close to these nodes, i.e., locators are identified depending on the mobility pattern of nodes. Otiy relies on the cyclic mobility patterns of nodes and creates a slotted agenda composed of a set of predicted locations, defined according to the past and present patterns of mobility. Correspondent nodes fetch this agenda only once and use it as a reference for identifying which locators are responsible for the node at different points in time. Over a period of about one year, the weekly proportion of nodes having at least 50% of exact location predictions is on average about 75%. This proportion increases by 10% when nodes also consider their closeness to the locator from only what they know about the network.
Tabbane was one of the first to introduce node (mobility) profiling to improve location management. In @cite_14 the profiling is performed by the network and shared with the node's subscriber identity module. Thanks to this profiling, for any period of time @math the system can produce a list of areas where the node could be, ordered by decreasing probability of the node being in each area. Each probability is given by a function that takes several parameters, such as the time, the mobility pattern, the last known location, the weather, etc. As long as the node stays in one of these areas, it does not update its location. When the system needs to locate it, it pages the areas of the list sequentially. Two notions are shared with our approach: (i) node profiling, although in our case the nodes perform their own profiling, and (ii) the dependence on time. However, in Otiy the time periods are predefined and only one area (anchor) is assigned to each time slot.
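The sequential paging step described above can be sketched as follows (purely illustrative; the `locate` function, the ordered area list, and the `is_in_area` paging callback are assumptions, not the cited scheme's API): areas are tried in decreasing order of probability, and the cost is the number of areas paged before the node is found.

```python
# Illustrative sequential paging: areas are ordered by decreasing probability
# of containing the node; page them one by one until the node answers.
def locate(areas_by_probability, is_in_area):
    """areas_by_probability: areas sorted by decreasing likelihood.
    is_in_area(area): pages an area, returns True if the node answers there."""
    for cost, area in enumerate(areas_by_probability, start=1):
        if is_in_area(area):
            return area, cost                       # found after `cost` pages
    return None, len(areas_by_probability)          # not found: paged everything

# Usage sketch: a node whose profile says it is most likely in area "B".
areas = ["B", "A", "C"]
area, cost = locate(areas, lambda a: a == "C")
print(area, cost)                                   # -> C 3
```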
{ "cite_N": [ "@cite_14" ], "mid": [ "2156585334" ], "abstract": [ "Mobile radio communications raise two major problems. First: a very poor radio link quality. Second: the users' mobility, which requires the management of their position, is resource consuming (especially radio bandwidth). This paper focuses on the second issue and proposes an intelligent method for users locating: the alternative strategy (AS). Our proposal is based on the observation that the mobility behavior of a majority of people can be foretold. If taken into consideration by the system, this characteristic can save signaling messages due to mobility management procedures, leading thus to savings in the system resources. Several versions of the AS are described: a basic version for long term events (i.e., received calls and registrations), and versions with increased memory for short and medium term events. The evaluation of the basic versions was performed using analytic and simulation approaches. It shows that storing the mobility related information brings great savings in system resources when the users have medium or high predictable mobility patterns. More generally speaking, this work points out the fact that the future systems will have to integrate users related information in order: firstly: to provide customized services and secondly: to save system resources. On the other hand, current trends in mobile communications show that adaptive and dynamic system capabilities require that more information to be collected and computed. >" ] }
In @cite_1 , Wu et al. mine each node's mobility behavior from its long-term mobility history (the mining is performed by the node itself). From this information they evaluate the time-varying probability of the node being in each of the node-defined regions. The prevalence of a region over time is captured by a cost model. Finally, they obtain a mobility vector @math which defines the region to be paged as a function of time. The location-update and paging schemes are roughly the same as in the proposals presented above. Here, the way the prevalence of a particular area varies over time is more flexible and accurate than in our approach, but the algorithm is more complex. Furthermore, the mobility vector can be much longer than a division of time into slots of equal duration.
{ "cite_N": [ "@cite_1" ], "mid": [ "1533191377" ], "abstract": [ "We propose a new location tracking strategy called behavior-based strategy (BBS) based on each mobile's moving behavior. With the help of data mining technologies the moving behavior of each mobile could be mined from long-term collection of the mobile's moving logs. From the moving behavior of each mobile, we first estimate the time-varying probability of the mobile and then the optimal paging area of each time region is derived. To reduce unnecessary computation, we consider the location tracking and computational cost and then derive a cost model. A heuristics is proposed to minimize the cost model through finding the appropriate moving period checkpoints of each mobile. The experimental results show our strategy outperforms fixed paging area strategy currently used in the GSM system and time-based strategy for highly regular moving mobiles." ] }
0709.0170
2919403758
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ_G of G with @math and an n-vertex outerplanar graph H and a drawing δ_H of H with @math . Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Untangling was first investigated for the @math , following the question by Watanabe @cite_1 of whether @math . The answer turned out to be negative: Pach and Tardos @cite_10 showed, by a probabilistic argument, that @math . They also showed that @math by applying the Erdős–Szekeres theorem to the sequence of the indices of the vertices of @math in clockwise order around some specific point. Cibulka @cite_3 recently improved that lower bound to @math by applying the Erdős–Szekeres theorem not once but @math times.
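The combinatorial core of these lower-bound arguments is finding a long monotone subsequence of vertex indices; the sketch below shows only that step, as a longest-increasing-subsequence computation, and is an illustration rather than the papers' full construction (roughly, vertices whose indices form a monotone subsequence in the clockwise order can be kept fixed).

```python
# Longest increasing subsequence in O(n log n) via patience sorting.
# In untangling arguments for the cycle, a long monotone subsequence of the
# vertex indices (taken in clockwise order around a point) identifies vertices
# that can, roughly speaking, stay fixed; this sketch shows only the LIS step.
import bisect

def longest_increasing_subsequence_length(seq):
    tails = []              # tails[k] = smallest tail of an increasing run of length k+1
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(longest_increasing_subsequence_length([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4
```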
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_3" ], "mid": [ "2758789266", "", "2014466831" ], "abstract": [ "The following problem was raised by M. Watanabe. Let P be a self-intersecting closed polygon with n vertices in general position. How manys steps does it take to untangle P , i.e., to turn it into a simple polygon, if in each step we can arbitrarily relocate one of its vertices. It is shown that in some cases one has to move all but at most O((n log n) 2 3 ) vertices. On the other hand, every polygon P can be untangled in at most @math steps. Some related questions are also considered.", "", "Untangling is a process in which some vertices in a drawing of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C n while keeping Ω(n 2 3) vertices fixed. For any connected graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree, and diameter of G. One consequence is that every 3-connected planar graph has a drawing δ such that at most O((nlog n)2 3) vertices are fixed in every untangling of δ." ] }
Verbitsky @cite_22 investigated planar graphs of higher connectivity. He proved linear upper bounds on @math for three- and four-connected planar graphs. Cibulka @cite_3 gave, for any planar graph @math , an upper bound on @math that is a function of the number of vertices, the maximum degree, and the diameter of @math . This latter bound implies, in particular, that @math for any three-connected planar graph @math and that any graph @math such that @math for some @math must have a vertex of degree @math .
{ "cite_N": [ "@cite_22", "@cite_3" ], "mid": [ "2080201242", "2014466831" ], "abstract": [ "Being motivated by John Tantalo's Planarity Game, we consider straight line plane drawings of a planar graph G with edge crossings and wonder how obfuscated such drawings can be. We define obf(G), the obfuscation complexity of G, to be the maximum number of edge crossings in a drawing of G. Relating obf(G) to the distribution of vertex degrees in G, we show an efficient way of constructing a drawing of G with at least obf(G) 3 edge crossings. We prove bounds (@d(G)^2 24-o(1))n^2@?obf(G) =2. The shift complexity of G, denoted by shift(G), is the minimum number of vertex shifts sufficient to eliminate all edge crossings in an arbitrarily obfuscated drawing of G (after shifting a vertex, all incident edges are supposed to be redrawn correspondingly). If @d(G)>=3, then shift(G) is linear in the number of vertices due to the known fact that the matching number of G is linear. However, in the case @d(G)>=2 we notice that shift(G) can be linear even if the matching number is bounded. As for computational complexity, we show that, given a drawing D of a planar graph, it is NP-hard to find an optimum sequence of shifts making D crossing-free.", "Untangling is a process in which some vertices in a drawing of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C n while keeping Ω(n 2 3) vertices fixed. For any connected graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree, and diameter of G. One consequence is that every 3-connected planar graph has a drawing δ such that at most O((nlog n)2 3) vertices are fixed in every untangling of δ." ] }
The hardness of computing @math given @math and @math was obtained independently by Verbitsky @cite_22 by a reduction from independent set in line-segment intersection graphs. While our proof is more complicated than his, it is stronger as it also yields hardness of approximation and extends to the problem with given vertex--point correspondence.
{ "cite_N": [ "@cite_22" ], "mid": [ "2080201242" ], "abstract": [ "Being motivated by John Tantalo's Planarity Game, we consider straight line plane drawings of a planar graph G with edge crossings and wonder how obfuscated such drawings can be. We define obf(G), the obfuscation complexity of G, to be the maximum number of edge crossings in a drawing of G. Relating obf(G) to the distribution of vertex degrees in G, we show an efficient way of constructing a drawing of G with at least obf(G) 3 edge crossings. We prove bounds (@d(G)^2 24-o(1))n^2@?obf(G) =2. The shift complexity of G, denoted by shift(G), is the minimum number of vertex shifts sufficient to eliminate all edge crossings in an arbitrarily obfuscated drawing of G (after shifting a vertex, all incident edges are supposed to be redrawn correspondingly). If @d(G)>=3, then shift(G) is linear in the number of vertices due to the known fact that the matching number of G is linear. However, in the case @d(G)>=2 we notice that shift(G) can be linear even if the matching number is bounded. As for computational complexity, we show that, given a drawing D of a planar graph, it is NP-hard to find an optimum sequence of shifts making D crossing-free." ] }
Finally, a somewhat related problem is that of morphing, or isotopy, between two plane drawings @math and @math of the same graph @math , that is, to define for each vertex @math of @math a movement from @math to @math such that at any time during the move the drawing defined by the current vertex positions is plane. We refer the interested reader to the survey by @cite_25 .
{ "cite_N": [ "@cite_25" ], "mid": [ "2118893502" ], "abstract": [ "We give an algorithm to morph between two planar orthogonal drawings of a graph, preserving planarity and orthogonality. The morph uses a polynomial number of discrete steps. Each step is either a linear morph that moves a set of vertices horizontally or vertically; or a \"twist\" that introduces new bends in the edges incident with one vertex. Our morph can be implemented so that inter-vertex distances are well-behaved. This is the first algorithm to provide planarity-preserving morphs with well-behaved complexity for a significant class of graph drawings." ] }
0709.1909
1718410566
Given a sequence of complex square matrices, @math , consider the sequence of their partial products, defined by @math . What can be said about the asymptotics as @math of the sequence @math , where @math is a continuous function? A special case of our most general result addresses this question under the assumption that the matrices @math are an @math perturbation of a sequence of matrices with bounded partial products. We apply our theory to investigate the asymptotics of the approximants of continued fractions. In particular, when a continued fraction is @math limit 1-periodic of elliptic or loxodromic type, we show that its sequence of approximants tends to a circle in @math , or to a finite set of points lying on a circle. Our main theorem on such continued fractions unifies the treatment of the loxodromic and elliptic cases, which are convergent and divergent, respectively. When an approximating sequence tends to a circle, we obtain statistical information about the limiting distribution of the approximants. When the circle is the real line, the points are shown to have a Cauchy distribution with parameters given in terms of modifications of the original continued fraction. As an example of the general theory, a detailed study of a @math -continued fraction in five complex variables is provided. The most general theorem in the paper holds in the context of Banach algebras. The theory is also applied to @math -matrix continued fractions and recurrence sequences of Poincaré type and compared with closely related literature.
We are aware of four other places where work related to the results of this section was given previously. Two of these were motivated by the identity of Ramanujan. The first paper is @cite_46 , which gave the first proof of . The proof in @cite_46 is particular to the continued fraction . However, Section 3 of @cite_46 studied the recurrence @math and showed that when @math , the sequence @math has six limit points and, moreover, that a continued fraction whose convergents satisfy this recurrence under the @math assumption tends to three limit points (Theorem 3.3 of @cite_46 ). The paper does not consider other numbers of limits, however. Moreover, the role of the sixth roots of unity in the recurrence is not revealed. In Section 6 of the present paper, we treat the general case, in which recurrences can have a finite or uncountable number of limits. Previously, in @cite_20 , we treated such recurrences with a finite number @math of limits, as well as the associated continued fractions.
{ "cite_N": [ "@cite_46", "@cite_20" ], "mid": [ "2036434814", "2073569022" ], "abstract": [ "Abstract On page 45 in his lost notebook, Ramanujan asserts that a certain q -continued fraction has three limit points. More precisely, if A n B n denotes its n th partial quotient, and n tends to ∞ in each of three residue classes modulo 3, then each of the three limits of A n B n exists and is explicitly given by Ramanujan. Ramanujan's assertion is proved in this paper. Moreover, general classes of continued fractions with three limit points are established.", "For integers m⩾2, we study divergent continued fractions whose numerators and denominators in each of the m arithmetic progressions modulo m converge. Special cases give, among other things, an infinite sequence of divergence theorems, the first of which is the classical Stern–Stolz theorem. We give a theorem on a class of Poincare-type recurrences which shows that they tend to limits when the limits are taken in residue classes and the roots of their characteristic polynomials are distinct roots of unity. We also generalize a curious q-continued fraction of Ramanujan's with three limits to a continued fraction with k distinct limit points, k⩾2. The k limits are evaluated in terms of ratios of certain q-series. Finally, we show how to use Daniel Bernoulli's continued fraction in an elementary way to create analytic continued fractions with m limit points, for any positive integer m⩾2." ] }
0708.3834
1482001270
We study an evolutionary game of chance in which the probabilities for different outcomes (e.g., heads or tails) depend on the amount wagered on those outcomes. The game is perhaps the simplest possible probabilistic game in which perception affects reality. By varying the reality map', which relates the amount wagered to the probability of the outcome, it is possible to move continuously from a purely objective game in which probabilities have no dependence on wagers, to a purely subjective game in which probabilities equal the amount wagered. The reality map can reflect self-reinforcing strategies or self-defeating strategies. In self-reinforcing games, rational players can achieve increasing returns and manipulate the outcome probabilities to their advantage; consequently, an early lead in the game, whether acquired by chance or by strategy, typically gives a persistent advantage. We investigate the game both in and out of equilibrium and with and without rational players. We introduce a method of measuring the inefficiency of the game and show that in the large time limit the inefficiency decreases slowly in its approach to equilibrium as a power law with an exponent between zero and one, depending on the subjectivity of the game.
There has been considerable past work on situations where subjective factors influence objective outcomes. Some examples include Hommes's studies of cobweb models @cite_8 @cite_3 , studies of increasing returns @cite_1 , Arthur's El Farol model and its close relative the minority game @cite_2 @cite_9 , Blume and Easley's model of the influence of capital markets on natural selection in an economy @cite_5 @cite_12 , and Akiyama and Kaneko's example of a game that changes due to the players' behaviors and states @cite_11 . The model we introduce here has the advantage of being very general yet very simple, providing a tunable way to study this phenomenon under varying levels of feedback.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1529312799", "128085809", "2009202666", "2615072452", "1575989700", "2041636470", "2570384471", "2140643197" ], "abstract": [ "Abstract We investigate the dynamics of the cobweb model with adaptive expectations, a linear demand curve, and a nonlinear, S-shaped, increasing supply curve. Both stable periodic and chaotic price behaviour can occur. We investigate, how the dynamics of the model depend on the parameters. Both infinitely many period doubling and period halving bifurcations can occur, when the demand curve is shifted upwards. The same result holds with respect to the expectations weight factor.", "", "", "Abstract The price-quantity dynamics of the cobweb model with adaptive expectations and nonlinear supply and demand curves is analysed. We prove that chaotic dynamical behaviour can occur, even if both the supply and demand curves are monotonic. The introduction of adaptive expectations into the cobweb model leads to price-quantity fluctuations with a smaller amplitude. However, at the same time the price-quantity cycles may become unstable and chaotic oscillations may arise. We present a geometric explanation why chaos can occur for a large class of nonlinear, monotonic supply and demand curves.", "", "Abstract In a conventional asset market model we study the evolutionary process generated by wealth flows between investors. Asymptotic behavior of our model is completely determined by the investors' expected growth rates of wealth share. Investment rules are more or less “fit” depending upon the value of this expectation, and more fit rules survive in the market at the expense of the less fit. Using this criterion we examine the long run behavior of asset prices and the common belief that the market selects for rational investors. We find that fit rules need not be rational, and rational rules not be fit. Finally, we investigate how the market selects over various adaptive decision rules.", "Evolutionary arguments are often used to justify the fundamental behavioral postulates of competive equilibrium. Economists such as Milton Friedman have argued that natural selection favors profit maximizing firms over firms engaging in other behaviors. Consequently, producer efficiency, and therefore Pareto efficiency, are justified on evolutionary grounds. We examine these claims in an evolutionary general equilibrium model. If the economic environment were held constant, profitable firms would grow and unprofitable firms would shrink. In the general equilibrium model, prices change as factor demands and output supply evolves. Without capital markets, when firms can grow only through retained earnings, our model verifies Friedman's claim that natural selection favors profit maximization. But we show through examples that this does not imply that equilibrium allocations converge over time to efficient allocations. Consequently, Koopmans critique of Friedman is correct. When capital markets are added, and firms grow by attracting investment, Friedman's claim may fail. In either model the long-run outcomes of evolutionary market models are not well described by conventional General Equilibrium analysis with profit maximizing firms. Submitted to Journal of Economic Theory. 
", "A theoretical framework we call dynamical systems game is presented, in which the game itself can change due to the influence of players’ behaviors and states. That is, the nature of the game itself is described as a dynamical system. The relation between game dynamics and the evolution of strategies is discussed by applying this framework. Computer experiments are carried out for simple one-person games to demonstrate the evolution of dynamical systems with the effective use of dynamical resources. © 2000 Elsevier Science B.V. All rights reserved. PACS: 02.50.Le" ] }
0708.1211
2949126942
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math . More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) CMDetCS3,CMDetCS1,CMDetCS2 in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
Compressed Sensing (CS) methods @cite_2 @cite_20 @cite_22 @cite_15 @cite_27 provide a robust framework for reducing the number of measurements required to estimate a sparse signal. For this reason, CS methods are useful in areas such as MR imaging @cite_8 @cite_6 and analog-to-digital conversion @cite_10 @cite_12 where measurement costs are high. The general CS setup is as follows: Let @math be an @math -length signal vector with complex-valued entries and @math be a full rank @math change of basis matrix. Furthermore, suppose that @math is sparse (i.e., only @math entries of @math are significant, or large in magnitude). CS methods deal with generating a @math measurement matrix, @math , with the smallest number of rows possible (i.e., @math minimized) so that the @math significant entries of @math can be well approximated from the @math -element vector result of Equation . Note that CS is inherently algorithmic since a procedure for recovering @math 's largest @math entries from the result of Equation must be specified.
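As a concrete illustration of this setup (a minimal NumPy sketch: the random Gaussian measurement matrix and the Orthogonal Matching Pursuit loop below are generic stand-ins, not the deterministic constructions of Cormode and Muthukrishnan discussed in this paper):

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy Orthogonal Matching Pursuit: estimate a k-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual, support, x_hat = y.copy(), [], np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, k = 256, 5                        # signal length and sparsity (illustrative values)
m = 4 * k * int(np.log2(n))          # number of measurements, a rough heuristic
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random (not deterministic) measurement matrix
y = Phi @ x                                      # the m-element measurement vector
print("recovery error:", np.linalg.norm(omp(Phi, y, k) - x))
```

The deterministic constructions referenced above aim to replace the random matrix in this sketch with an explicitly constructed one while keeping comparable recovery guarantees.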
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_6", "@cite_27", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "", "2061171222", "", "", "2145096794", "2973707709", "2542170364", "", "2127271355" ], "abstract": [ "", "An efficient method for the calculation of the interactions of a 2' factorial ex- periment was introduced by Yates and is widely known by his name. The generaliza- tion to 3' was given by (1). Good (2) generalized these methods and gave elegant algorithms for which one class of applications is the calculation of Fourier series. In their full generality, Good's methods are applicable to certain problems in which one must multiply an N-vector by an N X N matrix which can be factored into m sparse matrices, where m is proportional to log N. This results inma procedure requiring a number of operations proportional to N log N rather than N2. These methods are applied here to the calculation of complex Fourier series. They are useful in situations where the number of data points is, or can be chosen to be, a highly composite number. The algorithm is here derived and presented in a rather different form. Attention is given to the choice of N. It is also shown how special advantage can be obtained in the use of a binary computer with N = 2' and how the entire calculation can be performed within the array of N data storage locations used for the given Fourier coefficients. Consider the problem of calculating the complex Fourier series N-1 (1) X(j) = EA(k)-Wjk, j = 0 1, * ,N- 1, k=0", "", "", "This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.", "In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝn from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. 
Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k logn)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k logn) ×n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying Mathematics and because of its potential applications In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, logn, 1 e, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + e and improves the reconstruction time from poly(n) to poly(k logn) Our second result is a randomized construction of O(kpolylog (n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement", "We develop a framework for analog-to-information conversion that enables sub-Nyquist acquisition and processing of wideband signals that are sparse in a local Fourier representation. The first component of the framework is a random sampling system that can be implemented in practical hardware. The second is an efficient information recovery algorithm to compute the spectrogram of the signal, which we dub the sparsogram. A simulated acquisition of a frequency hopping signal operates at 33times sub-Nyquist average sampling rate with little degradation in signal quality", "", "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems." ] }
0708.1211
2949126942
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math . More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) CMDetCS3,CMDetCS1,CMDetCS2 in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
For the remainder of this paper we will consider the special CS case where @math is the @math Discrete Fourier Transform matrix. Hence, we have . Our problem of interest is to find, and estimate the coefficients of, the @math significant entries (i.e., frequencies) of @math given a frequency-sparse (i.e., smooth) signal . In this case the deterministic Fourier CS measurement matrices, @math , produced by @cite_20 @cite_22 @cite_15 @cite_27 require super-linear @math -time to multiply by in Equation . Similarly, the energetic frequency recovery procedure of @cite_2 @cite_13 requires super-linear time in @math . Hence, none of @cite_2 @cite_20 @cite_13 @cite_22 @cite_15 @cite_27 achieves both sub-linear measurement and reconstruction time.
{ "cite_N": [ "@cite_22", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "", "", "2145096794", "2973707709", "2156043924", "2127271355" ], "abstract": [ "", "", "This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.", "In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝn from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k logn)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k logn) ×n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying Mathematics and because of its potential applications In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, logn, 1 e, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. 
In addition, our result improves the error guarantee from O(1) to 1 + e and improves the reconstruction time from poly(n) to poly(k logn) Our second result is a randomized construction of O(kpolylog (n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement", "Compressed sensing is a new area of signal processing. Its goal is to minimize the number of samples that need to be taken from a signal for faithful reconstruction. The performance of compressed sensing on signal classes is directly related to Gelfand widths. Similar to the deeper constructions of optimal subspaces in Gelfand widths, most sampling algorithms are based on randomization. However, for possible circuit implementation, it is important to understand what can be done with purely deterministic sampling. In this note, we show how to construct sampling matrices using finite fields. One such construction gives cyclic matrices which are interesting for circuit implementation. While the guaranteed performance of these deterministic constructions is not comparable to the random constructions, these matrices have the best known performance for purely deterministic constructions.", "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems." ] }
0708.0961
2950164889
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
Most isosurface extraction methods operate only on structured data, usually a structured or unstructured grid @cite_10 . Livnat @cite_10 and Sutton et al. @cite_22 provide overviews of popular isosurface extraction techniques.
{ "cite_N": [ "@cite_10", "@cite_22" ], "mid": [ "2125039738", "2165729714" ], "abstract": [ "The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address issues of storage requirements, and operations other than the location of cells, whose impact is relevant in the whole isosurface extraction task. In the case of unstructured grids, the overhead, due to the search structure, is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization is adopted, called the chess-board approach, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by a precomputation of the gradients at the vertices of the mesh. Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality.", "Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analysis of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern Computer architectures" ] }
0708.0961
2950164889
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
Value-space decomposition techniques, such as NOISE @cite_13 and interval trees @cite_15 @cite_39 , can extract isosurfaces from datasets that lack structure, as can the various techniques of Co et al. @cite_11 @cite_36 and Rosenthal et al. @cite_27 . Unfortunately, implementations of these techniques are usually not freely available.
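As a toy illustration of the value-space idea (a hypothetical Python sketch, not any of the cited implementations): each cell is reduced to its (min, max) scalar range, and an isosurface at a given isovalue can only cross cells whose range straddles that value. Real interval-tree and span-space structures answer the same query in roughly logarithmic time plus output size, rather than by the near-linear scan used here.

```python
import bisect

class ValueSpaceIndex:
    """Toy value-space index: each cell is represented only by its (min, max) scalar range."""
    def __init__(self, cell_ranges):
        # Sort cells by their minimum value so a binary search can discard every
        # cell whose minimum already exceeds the query isovalue.
        self.cells = sorted(cell_ranges, key=lambda c: c[1])   # entries are (cell_id, lo, hi)
        self.mins = [c[1] for c in self.cells]

    def active_cells(self, isovalue):
        """Cells whose scalar range straddles the isovalue (candidates for triangulation)."""
        cut = bisect.bisect_right(self.mins, isovalue)
        return [cid for cid, lo, hi in self.cells[:cut] if hi >= isovalue]

# Example: four cells with their scalar ranges.
index = ValueSpaceIndex([("c0", 0.1, 0.4), ("c1", 0.3, 0.9), ("c2", 0.5, 0.7), ("c3", 0.8, 1.2)])
print(index.active_cells(0.6))   # -> ['c1', 'c2']
```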
{ "cite_N": [ "@cite_36", "@cite_39", "@cite_27", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "", "", "2167302705", "2151005058", "2111734174", "2156548284" ], "abstract": [ "", "", "Isosurface extraction is a standard visualization method for scalar volume data and has been subject to research for decades. Nevertheless, to our knowledge, no isosurface extraction method exists that directly extracts surfaces from scattered volume data without 3D mesh generation or reconstruction over a structured grid. We propose a method based on spatial domain partitioning using a kd-tree and an indexing scheme for efficient neighbor search. Our approach consists of a geometry extraction and a rendering step. The geometry extraction step computes points on the isosurface by linearly interpolating between neighboring pairs of samples. The neighbor information is retrieved by partitioning the 3D domain into cells using a kd-tree. The cells are merely described by their index and bitwise index operations allow for a fast determination of potential neighbors. We use an angle criterion to select appropriate neighbors from the small set of candidates. The output of the geometry step is a point cloud representation of the isosurface. The final rendering step uses point-based rendering techniques to visualize the point cloud. Our direct isosurface extraction algorithm for scattered volume data produces results of quality close to the results from standard isosurface extraction algorithms for gridded volume data (like marching cubes). In comparison to 3D mesh generation algorithms (like Delaunay tetrahedrization), our algorithm is about one order of magnitude faster for the examples used in this paper.", "A method is proposed which supports the extraction of isosurfaces from irregular volume data, represented by tetrahedral decomposition, in optimal time. The method is based on a data structure called interval tree, which encodes a set of intervals on the real line, and supports efficient retrieval of all intervals containing a given value. Each cell in the volume data is associated with an interval bounded by the extreme values of the field in the cell. All cells intersected by a given isosurface are extracted in O(m+log h) time, with m the output size and h the number of different extreme values (min or max). The implementation of the method is simple. Tests have shown that its practical performance reflects the theoretical optimality.", "Presents the \"Near Optimal IsoSurface Extraction\" (NOISE) algorithm for rapidly extracting isosurfaces from structured and unstructured grids. Using the span space, a new representation of the underlying domain, we develop an isosurface extraction algorithm with a worst case complexity of o( spl radic n+k) for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n) while the preprocessing step is O(n log n). We utilize the span space representation as a tool for comparing isosurface extraction methods on structured and unstructured grids. We also present a fast triangulation scheme for generating and displaying unstructured tetrahedral grids.", "We propose a meshless method for the extraction of high-quality continuous isosurfaces from volumetric data represented by multiple grids, also called \"multiblock\" data sets. Multiblock data sets are commonplace in computational mechanics applications. 
Relatively little research has been performed on contouring multiblock data sets, particularly when the grids overlap one another. Our algorithm proceeds in two steps. In the first step, we determine a continuous interpolant using a set of locally defined radial basis functions (RBFs) in conjunction with a partition of unity method to blend smoothly between these functions. In the second step, we extract isosurface geometry by sampling points on Marching Cubes triangles and projecting these point samples onto the isosurface defined by our interpolant. A surface splatting algorithm is employed for visualizing the resulting point set representing the isosurface. Because of our method's generality, it inherently solves the \"crack problem\" in isosurface generation. Results using a set of synthetic data sets and a discussion of practical considerations are presented. The importance of our method is that it can be applied to arbitrary grid data regardless of mesh layout or orientation." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Several scholarly ontologies are available in the DAML Ontology Library (available at http://www.daml.org/ontologies ). While they focus on bibliographic constructs, they do not model usage events. The same is true of the Semantic Community Web Portal ontology @cite_9 , which, in addition, maintains many detailed classes whose instantiation is unrealistic given what is recorded by modern scholarly information systems.
{ "cite_N": [ "@cite_9" ], "mid": [ "2013848664" ], "abstract": [ "Abstract Community Web portals serve as portals for the information needs of particular communities on the Web. We here discuss how a comprehensive and flexible strategy for building and maintaining a high-value community Web portal has been conceived and implemented. The strategy includes collaborative information provisioning by the community members. It is based on an ontology as a semantic backbone for accessing information on the portal, for contributing information, as well as for developing and maintaining the portal. We have also implemented a set of ontology-based tools that have facilitated the construction of our show case — the community Web portal of the knowledge acquisition community." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
The ScholOnto ontology was developed as part of an effort aimed at enabling researchers to describe and debate, via a semantic network, the contributions of a document, and its relationship to the literature @cite_4 . While this ontology supports the concept of a scholarly document and a scholarly agent, it focuses on formally summarizing and interactively debating claims made in documents, not on expressing the actual use of documents. Moreover, support for bibliographic data is minimal whereas support for discourse constructs, not required for MESUR, is very detailed.
{ "cite_N": [ "@cite_4" ], "mid": [ "2130307546" ], "abstract": [ "The internet is rapidly becoming the first place for researchers to publish documents, but at present they receive little support in searching, tracking, analysing or debating concepts in a literature from scholarly perspectives. This paper describes the design rationale and implementation of ScholOnto, an ontology-based digital library server to support scholarly interpretation and discourse. It enables researchers to describe and debate via a semantic network the contributions a document makes, and its relationship to the literature. The paper discusses the computational services that an ontology-based server supports, alternative user interfaces to support interaction with a large semantic network, usability issues associated with knowledge formalisation, new work practices that could emerge, and related work." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
The ABC ontology @cite_25 was primarily engineered as a common conceptual model for the interoperability of a variety of metadata ontologies from different domains. Although the ABC ontology is able to represent bibliographic and usage concepts by means of constructs such as artifact (e.g. article), agent (e.g. author), and action (e.g. use), it is designed at a level of generality that does not directly support the granularity required by the MESUR project.
{ "cite_N": [ "@cite_25" ], "mid": [ "1552027408" ], "abstract": [ "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Finally, in the realm of usage data representation, no ontology-based efforts were found. Nevertheless, the following existing schema-driven approaches were explored and served as inspiration: the OpenURL ContextObject approach to facilitate OAI-PMH-based harvesting of scholarly usage events @cite_2 , the XML Log standard to represent digital library logs @cite_8 , and the COUNTER schema to express journal level usage statistics @cite_16 .
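To make the kind of semantic-network model and inference rule being described concrete, here is a deliberately tiny, hypothetical sketch; the node names, predicates, and the rolled-up metric are invented for illustration and are not the actual MESUR ontology or its rules:

```python
from collections import Counter, defaultdict

# Hypothetical, minimal semantic network: nodes are scholarly artifacts and agents;
# typed edges record authorship, publication venue, and individual usage events.
edges = [
    ("alice",     "authorOf",     "article-1"),
    ("bob",       "authorOf",     "article-2"),
    ("article-1", "publishedIn",  "journal-X"),
    ("article-2", "publishedIn",  "journal-X"),
    ("event-1",   "usedArtifact", "article-1"),
    ("event-2",   "usedArtifact", "article-1"),
    ("event-3",   "usedArtifact", "article-2"),
]

# A toy "inference rule": roll per-article usage events up to journal-level counts.
published_in = {s: o for s, p, o in edges if p == "publishedIn"}
usage_per_article = Counter(o for s, p, o in edges if p == "usedArtifact")

usage_per_journal = defaultdict(int)
for article, count in usage_per_article.items():
    usage_per_journal[published_in[article]] += count

print(dict(usage_per_article))    # {'article-1': 2, 'article-2': 1}
print(dict(usage_per_journal))    # {'journal-X': 3}
```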
{ "cite_N": [ "@cite_8", "@cite_16", "@cite_2" ], "mid": [ "1572538753", "", "2084177123" ], "abstract": [ "Log analysis can be a primary source of knowledge about how digital library patrons actually use DL systems and services and how systems behave while trying to support user information seeking activities. Log recording and analysis allow evaluation assessment, and open opportunities to improvements and enhanced new services. In this paper, we propose an XML-based digital library log format standard that captures a rich, detailed set of system and user behaviors supported by current digital library services. The format is implemented in a generic log component tool, which can be plugged into any digital library system. The focus of the work is on interoperability, reusability, and completeness. Specifications, implementation details, and examples of use within the MARIAN digital library system are described.", "", "Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service. This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20 month period." ] }
0708.1337
2164790530
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersely-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
There has also been work on Quantum Markov networks within the quantum probability literature @cite_35 @cite_5 @cite_52 , although Belief Propagation has not been investigated in this literature. This is closer to the spirit of the present work, in the sense that it is based on the generalization of classical probability to a noncommutative, operator-valued probability theory. These works are primarily concerned with defining the Markov condition in such a way that it can be applied to systems with an infinite number of degrees of freedom, and hence an operator algebraic formalism is used. This is important for applications to statistical physics because the thermodynamic limit can be formally defined as the limit of an infinite number of systems, but it is not so important for numerical simulations, since these necessarily operate with a finite number of discretized degrees of freedom. Also, conditional independence is defined in a different way, via quantum conditional expectations rather than the approach based on conditional mutual information and conditional density operators used in the present work. Nevertheless, it seems likely that there are connections to our approach that should be investigated in future work.
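For reference, the conditional-mutual-information notion of quantum conditional independence used in the present work can be written in the standard form (reproduced here for convenience):
\[
I(A : C \mid B)_\rho \;=\; S(\rho_{AB}) + S(\rho_{BC}) - S(\rho_{B}) - S(\rho_{ABC}),
\qquad S(\rho) = -\operatorname{Tr}(\rho \log \rho),
\]
and the tripartite state \(\rho_{ABC}\) is said to form a quantum Markov chain \(A - B - C\) when \(I(A : C \mid B)_\rho = 0\). By strong subadditivity \(I(A : C \mid B)_\rho \ge 0\) always holds, so the Markov condition singles out the states that saturate this inequality; the operator-algebraic works cited above instead impose the existence of a suitable quantum conditional expectation.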
{ "cite_N": [ "@cite_35", "@cite_5", "@cite_52" ], "mid": [ "139328743", "2095079284", "2017632243" ], "abstract": [ "", "Abstract The program relative to the investigation of quantum Markov states for general one-dimensional spin models is carried on, following a strategy developed in the last years. In such a way, the emerging structure is fully clarified. This analysis is a starting point for the solution of the basic (still open) problem concerning the construction of a satisfactory theory of quantum Markov fields, i.e. quantum Markov processes with multi-dimensional indices.", "We review recent developments in the theory of quantum Markov states on the standard @math --spin lattice. A Dobrushin theory for quantum Markov fields is proposed. In the one--di -men -sional case where the order plays a crucial role, the structure arising from a quantum Markov state is fully understood. In this situation we obtain a splitting of a Markov state into a classical part, and a purely quantum part. This result allows us to provide a reconstruction theorem for quantum Markov states on chains" ] }
0708.1337
2164790530
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersely-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
Lastly, during the final stage of preparation of this manuscript, two related papers have appeared on the physics archive. An article by Laumann, Scardicchio and Sondhi @cite_15 used a QBP-like algorithm to solve quantum models on sparse graphs. Hastings @cite_20 proposed a QBP algorithm for the simulation of quantum many-body systems based on ideas similar to the ones presented here. The connection between the two approaches, and in particular the application of the Lieb-Robinson bound @cite_7 to conditional mutual information, is worthy of further investigation.
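For orientation, a commonly quoted form of the Lieb-Robinson bound @cite_7 reads (constants and conventions vary between references):
\[
\bigl\| [\,\tau_t^{\Phi}(A),\, B\,] \bigr\| \;\le\; C \,\|A\|\,\|B\|\, e^{-\mu\,\left( d(X,Y) - v\,|t| \right)},
\]
where \(A\) and \(B\) are observables supported on regions \(X\) and \(Y\) and \(d(X,Y)\) is the distance between them; outside the effective light cone \(d(X,Y) > v|t|\) the commutator is exponentially small, which is what makes locality-based truncations, and potentially bounds on conditional mutual information, plausible.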
{ "cite_N": [ "@cite_15", "@cite_7", "@cite_20" ], "mid": [ "2003272677", "2011290825", "1514195450" ], "abstract": [ "We propose a generalization of the cavity method to quantum spin glasses on fixed connectivity lattices. Our work is motivated by the recent refinements of the classical technique and its potential application to quantum computational problems. We numerically solve for the phase structure of a connectivity @math transverse field Ising model on a Bethe lattice with @math couplings and investigate the distribution of various classical and quantum observables.", "It is shown that if Ф is a finite range interaction of a quantum spin system, τ t Ф the associated group of time translations, τ x the group of space translations, and A, B local observables, then @math (1) whenever v is sufficiently large (ν > V Ф ,) where μ(ν) > 0. The physical content of the statement is that information can propagate in the system only with a finite group velocity.", "We present an accurate numerical algorithm, called quantum belief propagation, for simulation of one-dimensional quantum systems at nonzero temperature. The algorithm exploits the fact that quantum effects are short-range in these systems at nonzero temperature, decaying on a length scale inversely proportional to the temperature. We compare to exact results on a spin- @math Heisenberg chain. Even a very modest calculation, requiring diagonalizing only ten-by-ten matrices, reproduces the peak susceptibility with a relative error of less than @math , while more elaborate calculations further reduce the error." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Another idea is to use light instead of electrical power. It is hoped that optical computing could advance computer architecture and improve the speed of data input and output by several orders of magnitude @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1672611793" ], "abstract": [ "Optical Computers provides the first in-depth review of the possibilities and limitations of optical data processing." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Many theoretical and practical light-based devices have been proposed for dealing with various problems. Optical computation has some advantages, one of them being the fact that it can perform some operations faster than conventional devices. An example is the @math -point discrete Fourier transform computation which can be performed in only unit time @cite_1 @cite_8 .
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "36503682", "2139842859" ], "abstract": [ "An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral.", "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14]." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
A recent paper @cite_25 introduces the idea of sorting by using some properties of light. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength (see Figure ). For implementing the Rainbow Sort one needs to perform the following steps:
{ "cite_N": [ "@cite_25" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Naughton et al. proposed and investigated @cite_14 @cite_2 a model called the continuous space machine, which operates in discrete time-steps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. The (constant-time) operations on images include Fourier transformation, multiplication, addition, thresholding, copying and scaling.
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2027561983", "2101615041" ], "abstract": [ "We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.", "We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
There are also devices that take into account the quantum properties of light. This idea has been used for solving the Traveling Salesman Problem @cite_26 @cite_3 with a special-purpose device.
{ "cite_N": [ "@cite_26", "@cite_3" ], "mid": [ "2169306227", "2137342394" ], "abstract": [ "In this paper we discuss physical aspects of intractable (NP-complete) computing problems. We show, using a specibc model, that a quantum-mechanical computer can in principle solve an NP-complete problem in polynomial time; however, it would use an exponentially large energy for that computation. We conjecture that our model reflects a complementarity principle concerning the time and the energy needed to perform an NP-complete computation", "This paper uses instances of SAT, 3SAT and TSP to describe how evolutionary search (running on a classical computer) differs from quantum search (running on a quantum computer) for solving NP problems." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
Using light instead of electric power for performing computations is not a new idea. Optical Character Recognition (OCR) machines @cite_4 were among the first modern devices to use light for solving a difficult problem. Later, various researchers showed how light can solve certain problems faster than conventional computers. An example is the @math -point discrete Fourier transform, which can be computed optically in only unit time @cite_1 @cite_8 .
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_8" ], "mid": [ "36503682", "", "2139842859" ], "abstract": [ "An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral.", "", "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14]." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
A new, principally non-dissipative digital logic architecture was presented in @cite_19 ; it involves a distributed and parallel input scheme in which logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. This logic paradigm was developed specifically with optical implementation in mind.
{ "cite_N": [ "@cite_19" ], "mid": [ "2027217447" ], "abstract": [ "Conventional architectures for the implementation of Boolean logic are based on a network of bistable elements assembled to realize cascades of simple Boolean logic gates. Since each such gate has two input signals and only one output signal, such architectures are fundamentally dissipative in information and energy. Their serial nature also induces a latency in the processing time. In this paper we present a new, principally non-dissipative digital logic architecture which mitigates the above impediments. Unlike traditional computing architectures, the proposed architecture involves a distributed and parallel input scheme where logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. The architecture employs a novel conception of cascading which utilizes the strengths of both optics and electronics while avoiding their weaknesses. It is inherently non-dissipative, respects the linear nature of interactions in pure optics, and harnesses the control advantages of electrons without reducing the speed advantages of optics. This new logic paradigm was specially developed with optical implementation in mind. However, it is suitable for other implementations as well, including conventional electronic devices." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
In @cite_29 the idea of sorting by using some properties of light is introduced. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. For implementing the Rainbow Sort one needs to perform the following steps:
{ "cite_N": [ "@cite_29" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
An optical solution to the traveling salesman problem (TSP) was proposed in @cite_9 . The power of optics in this method comes from a fast matrix-vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication was performed optically by using an optical correlator. An efficient algorithm was provided for synthesizing the initial binary matrix representing all feasible tours. However, since the number of tours is exponential, the method is difficult to implement even for small instances.
{ "cite_N": [ "@cite_9" ], "mid": [ "2061959054" ], "abstract": [ "We present a new optical method for solving bounded (input-length-restricted) NP-complete combinatorial problems. We have chosen to demonstrate the method with an NP-complete problem called the traveling salesman problem (TSP). The power of optics in this method is realized by using a fast matrix-vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication is performed optically by using an optical correlator. To synthesize the initial binary matrix representing all feasible tours, an efficient algorithm is provided. Simulations and experimental results prove the validity of the new method." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
An optical system that finds solutions to the 6-city TSP using a Kohonen-type network was proposed in @cite_7 . In simulations, the system shows robustness with respect to light intensity fluctuations and weight discretization. Using such heuristic methods, a relatively large number of TSP cities can be handled.
{ "cite_N": [ "@cite_7" ], "mid": [ "1552943972" ], "abstract": [ "A systems is described which finds solutions to the 6-city TSP using a Kohonen-type network. The system shows robustness with regard to the light intensity fluctuations and weight discretization which have been simulated. Scalability to larger size problems appears straightforward." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
A similar idea was used in @cite_24 for solving the TSP. The device uses white light interferometry to find the shortest TSP path.
{ "cite_N": [ "@cite_24" ], "mid": [ "2169259398" ], "abstract": [ "We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to NN for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
Another idea is to use light instead of electrical power. It is hoped that optical computing could advance computer architecture and improve the speed of data input and output by several orders of magnitude @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1672611793" ], "abstract": [ "Optical Computers provides the first in-depth review of the possibilities and limitations of optical data processing." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
Many theoretical and practical light-based devices have been proposed for dealing with various problems. Optical computation has some advantages, one of them being that it can perform some operations faster than conventional devices. An example is the @math -point discrete Fourier transform, which can be performed optically in only unit time @cite_1 @cite_5 . Based on that, a solution to the subset sum problem can be obtained by discrete convolution. The idea is that the convolution of two functions corresponds to the product of their frequency-domain representations @cite_9 .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_1" ], "mid": [ "2139842859", "2169259398", "36503682" ], "abstract": [ "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14].", "We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to NN for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment.", "An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
A recent paper @cite_28 introduces the idea of sorting by using some properties of light. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength (see Figure (b)). For implementing the Rainbow Sort one needs to perform the following steps:
{ "cite_N": [ "@cite_28" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
Naughton et al. proposed and investigated @cite_14 @cite_2 a model called the continuous space machine, which operates in discrete time-steps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. The (constant-time) operations on images include Fourier transformation, multiplication, addition, thresholding, copying and scaling.
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2027561983", "2101615041" ], "abstract": [ "We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.", "We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
A system that solves the Hamiltonian path problem (HPP) @cite_4 by using light and its properties was proposed in @cite_21 @cite_27 . The device has the same structure as the graph in which the solution is to be found. The light is delayed within nodes, whereas the delays introduced by arcs are constant. Because the problem requires that each node be visited exactly once, a special delaying system was designed. At the destination node one searches for a ray that has visited each node exactly once; this is very easy due to the special properties of the delaying system.
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "2081798814", "1532824777", "141549700" ], "abstract": [ "In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.", "In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.", "" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Paper @cite_12 was the first of a series of papers offering justification for the thesis, for particular classes of algorithms. They all follow the same general pattern: (i) describe axiomatically a class A of algorithms; (ii) define behavioral equivalence of A algorithms; (iii) define a class M of abstract state machines; (iv) prove the following characterization theorem for A : @math , and every @math is behaviorally equivalent to some @math . The characterization provides a theoretical programming language for A and opens the way for more practical languages for A . The justification of the ASM Thesis thus obtained is speculative in two ways: the claim that A captures the intuitive class of intended algorithms is open to criticism, and the definition of behavioral equivalence is open to criticism. But the characterization of A by M is precise, and in this sense the procedure proves the ASM thesis for the class of algorithms A modulo the chosen behavioral equivalence.
{ "cite_N": [ "@cite_12" ], "mid": [ "1998333922" ], "abstract": [ "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In this subsection we briefly overview the realization of this program for isolated small-step algorithms in @cite_12 and ordinary interactive small-step algorithms in @cite_27 @cite_21 @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4", "@cite_12" ], "mid": [ "197061278", "", "2142611800", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "", "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The algorithms of @cite_12 are executed by a single sequential agent and are isolated in the following sense: there is no interaction with the environment during the execution of a step. The environment can intervene between the algorithm's steps. But we concentrate on step-for-step simulation, and so inter-step interaction with the environment can be ignored. This class of algorithms is axiomatized by three simple postulates.
{ "cite_N": [ "@cite_12" ], "mid": [ "1998333922" ], "abstract": [ "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Sequential Time Postulate says that an algorithm defines a deterministic transition system, a (not necessarily finite-state) automaton. More explicitly, the algorithm determines a nonempty collection of states, a nonempty subcollection of initial states, and a state-transition function. The algorithm is presumed to be deterministic. Nondeterministic choices involve interaction with the environment; see @cite_15 @cite_12 @cite_27 for discussion. The term state is used in a comprehensive way. For example, in the case of a Turing machine, a state would include not only the control state but also the head position and the tape contents.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_12" ], "mid": [ "197061278", "1969402593", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "2 Static Algebras and Updates 4 2.1 Static Algebras: Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Vocabularies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Definition of Static Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.5 Locations and Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.6 Update Sets and Families of Update Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 2.7 Conservative Determinism vs. Local Nondeterminism . . . . . . . . . . . . . . . . . . 7", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Abstract State Postulate says that all states are first-order structures of a fixed vocabulary, the transition function does not change the base set of a state, and isomorphism of structures preserves everything, which here means states, initial states and the transition function. It reflects the vast experience of mathematics and mathematical logic according to which every static mathematical situation can be adequately represented as a first-order structure. The idea behind the second requirement is that, even when the base set seems to increase with the creation of new objects, those objects can be regarded as having been already present in a "reserve" part of the state. What looks like creation is then regarded as taking an element from out of the reserve and into the active part of the state. (The nondeterministic choice of the element is made by the environment.) See @cite_15 @cite_12 @cite_27 @cite_21 and the next section for discussion. The idea behind the third requirement is that all relevant state information is reflected in the vocabulary: if your algorithm can distinguish red integers from green integers, then it is not just about integers.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_21", "@cite_12" ], "mid": [ "197061278", "1969402593", "", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "2 Static Algebras and Updates 4 2.1 Static Algebras: Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Vocabularies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Definition of Static Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.5 Locations and Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.6 Update Sets and Families of Update Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 2.7 Conservative Determinism vs. Local Nondeterminism . . . . . . . . . . . . . . . . . . 7", "", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Bounded Exploration Postulate expresses the idea that a sequential algorithm (in the traditional meaning of the term) computes in "steps of bounded complexity" @cite_7 . More explicitly, it asserts that the values of a finite set @math of terms (also called expressions), which depends only on the algorithm and not on the input or state, determine the state change (more exactly the set of location updates) for every step; see @cite_15 @cite_12 or the next section for precise definitions of locations and updates.
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_7" ], "mid": [ "1969402593", "1998333922", "" ], "abstract": [ "2 Static Algebras and Updates 4 2.1 Static Algebras: Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Vocabularies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Definition of Static Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.5 Locations and Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.6 Update Sets and Families of Update Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 2.7 Conservative Determinism vs. Local Nondeterminism . . . . . . . . . . . . . . . . . . 7", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.", "" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The characterization theorem of @cite_12 establishes the ASM thesis for the class A of algorithms defined by Sequential Time, Abstract State, and Bounded Exploration, and the class M of machines defined by the basic ASM language of update rules, parallel rules and conditional rules @cite_3 @cite_15 @cite_12 .
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_12" ], "mid": [ "1969402593", "121166499", "1998333922" ], "abstract": [ "2 Static Algebras and Updates 4 2.1 Static Algebras: Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Vocabularies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Definition of Static Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.5 Locations and Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.6 Update Sets and Families of Update Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 2.7 Conservative Determinism vs. Local Nondeterminism . . . . . . . . . . . . . . . . . . 7", "A firing kiln, especially for use as a vacuum firing kiln for dental ceramic purposes, having a lower portion having a fixed firing platform, an upper portion raisable from the lower portion and in abutting relationship therewith, the lower portion having a hollow firing chamber in facing relationship with a fixed firing platform. The firing chamber includes means for emitting heat into the hollow chamber and toward the fixed firing platform, the surface of the fixed firing platform being at or above the level of the upper edge of the lower portion.", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
While the intent in @cite_12 is to capture algorithms executing steps in isolation from the environment, a degree of interaction is accommodated in the ASM literature since @cite_15 : (i) using the import command to create new elements, and (ii) marking certain functions as external and allowing the environment to provide the values of external functions. One pretends that the interaction is inter-step. This requires the environment to anticipate some actions of the algorithm. Also, in @cite_15 , nesting of external functions was prohibited; the first study of ASMs with nested external functions was @cite_21 . The notion of determines whether interaction is allowed; see @cite_15 @cite_12 for precise definitions and discussion.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_12" ], "mid": [ "1969402593", "", "1998333922" ], "abstract": [ "2 Static Algebras and Updates 4 2.1 Static Algebras: Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Vocabularies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Definition of Static Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.4 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.5 Locations and Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.6 Update Sets and Families of Update Sets . . . . . . . . . . . . . . . . . . . . . . . . 6 2.7 Conservative Determinism vs. Local Nondeterminism . . . . . . . . . . . . . . . . . . 7", "", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In @cite_27 it is argued at length why the inter-step form of interaction cannot suffice for all modeling needs. As a small example, take the computation of @math , where @math is an external call (a query) with argument 7, whose result @math is used as the argument for a new query @math . An attempt to model this as inter-step interaction would force splitting the computation into substeps. But at some level of abstraction we may want to evaluate @math within a single step. Limiting interaction to the inter-step mode would necessarily lower the abstraction level.
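To make the example concrete, here is a minimal Python sketch (purely illustrative, not part of the cited formalism): the environment is modelled as a callback that answers queries, and both queries are issued and answered inside a single step -- exactly the intra-step interaction that an inter-step model would have to split into substeps. The query names f and g, the location x, and the example environment are all hypothetical.

# A hypothetical encoding: `ask` is the environment, answering queries of the
# form (name, argument); both calls below happen within one step of the algorithm.
def step(state, ask):
    r = ask(("f", 7))      # first query; its reply is needed before the next query exists
    s = ask(("g", r))      # second query is built from the first reply (nesting)
    state["x"] = s         # store g(f(7)) in a hypothetical location x
    return state

# Example environment: f doubles its argument, g adds one.
env = lambda q: {"f": lambda a: 2 * a, "g": lambda a: a + 1}[q[0]](q[1])
print(step({}, env))       # {'x': 15}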
{ "cite_N": [ "@cite_27" ], "mid": [ "197061278" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Thus @cite_27 sets modeling interaction as its goal. Different forms of interaction, such as message-passing, database queries, remote procedure calls, inputs, outputs, and signals, all reduce to a single universal form: a single-reply, zero-or-more-arguments, not-necessarily-blocking query. All arguments and the reply (if any) should be elements of the state if they are to make sense to the algorithm. For a formal definition of queries see @cite_27 ; a reminder is given in the next section. For a detailed discussion of, and arguments for, the universality of the query-reply approach see @cite_27 .
{ "cite_N": [ "@cite_27" ], "mid": [ "197061278" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Articles @cite_27 @cite_21 @cite_4 limit themselves to interactive algorithms which are ordinary in the sense that they obey the following two restrictions: the actions of the algorithm depend only on the state and the replies to queries, and not on other aspects such as the relative timing of replies, and the algorithm cannot complete its step unless it has received replies to all queries issued. The first restriction means that the algorithm can be seen as operating on pairs of the form @math where @math is a state and @math an answer function over @math : a partial function mapping queries over @math to their replies. The second restriction means that all queries issued are blocking; the algorithm cannot complete its step without a reply. (Some uses of non-blocking, asynchronous queries can still be modeled, by assuming that some forms of queries always obtain a default answer; but this is an assumption on environment behavior.) The present paper lifts both restrictions, and thus extends the theory to interactive algorithms that are not necessarily ordinary.
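The following is a minimal sketch of what the two restrictions amount to, under our own illustrative encoding (queries and replies as Python values, two hypothetical queries f(7) and g(...)): the behaviour is a function of the state and of a finite partial map from queries to replies, and the step may end only once every issued query has a reply.

# Hypothetical encoding of an "ordinary" step: the only extra input is `answers`,
# a finite partial map from queries to replies; unanswered queries block the step.
def ordinary_step(state, answers):
    if ("f", 7) not in answers:
        return ("issue", ("f", 7))                       # blocking query: wait for a reply
    if ("g", answers[("f", 7)]) not in answers:
        return ("issue", ("g", answers[("f", 7)]))
    return ("done", {"x": answers[("g", answers[("f", 7)])]})   # update for a location x

print(ordinary_step({}, {}))                                     # ('issue', ('f', 7))
print(ordinary_step({}, {("f", 7): 14}))                         # ('issue', ('g', 14))
print(ordinary_step({}, {("f", 7): 14, ("g", 14): 15}))          # ('done', {'x': 15})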
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "197061278", "", "2142611800" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "", "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Several of the postulates of @cite_27 cover the same ground as the postulates of @cite_12 , but of course taking answer functions into account. The most important new postulate governs interaction: it says that the algorithm, for each state @math , determines a causality relation @math between finite answer functions and queries. The intuition behind @math is this: if, over state @math , the environment behaves according to answer function @math , then the algorithm issues @math . The causality relation is an abstract representation of the potential interaction of the algorithm with the environment.
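Purely as an illustration of the shape of this postulate (not the paper's definition), the causality relation for one fixed state can be pictured as explicit data: a set of pairs (finite answer function, query), where a query is issued as soon as the replies received so far contain one of its causes. The queries f(7) and g(14) are hypothetical.

# Illustrative data for one state: a frozenset of (query, reply) pairs stands for a
# finite answer function; `causes` relates such answer functions to the queries they cause.
causes = {
    (frozenset(), ("f", 7)),                         # with no replies yet, issue f(7)
    (frozenset({(("f", 7), 14)}), ("g", 14)),        # once f(7) = 14 is known, issue g(14)
}

def issued(answers):
    """Queries caused by some answer function contained in `answers` (a dict)."""
    items = set(answers.items())
    return {q for ctx, q in causes if set(ctx) <= items}

print(issued({}))                  # {('f', 7)}
print(issued({("f", 7): 14}))      # {('f', 7), ('g', 14)}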
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "197061278", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
This refines the transition relation of the Sequential Time Postulate of @cite_12 . The possibility of explicit failure is new here: the algorithm may obtain replies that are absurd or inconsistent from its point of view, and it can fail in such a case. The next state, if there is one, is defined by an update set, which can also contain trivial updates: "updating" a location to its old value. Trivial updates do not contribute to the next state, but in composition with other algorithms they can contribute to a clash; see @cite_27 and also the next section for discussion.
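For concreteness, here is a small sketch (our own encoding, with a dict as the state) of how an update set yields the next state and how a clash arises; a trivial update changes nothing on its own, yet it can still clash with a different update to the same location contributed by a parallel component.

# Sketch: an update set is a set of (location, value) pairs; two different values for
# the same location form a clash and the step fails (here: returns None).
def apply_updates(state, updates):
    chosen = {}
    for loc, val in updates:
        if loc in chosen and chosen[loc] != val:
            return None                                   # clash
        chosen[loc] = val
    next_state = dict(state)
    next_state.update(chosen)
    return next_state

print(apply_updates({"x": 1}, {("x", 1)}))                # trivial update: {'x': 1}
print(apply_updates({"x": 1}, {("x", 1), ("x", 2)}))      # trivial vs. non-trivial: None (clash)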
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "197061278", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The inductive character of the context definition is unwound and analyzed in detail in @cite_27 . The answer functions which can occur as stages in the inductive construction of contexts are called well-founded. This captures the intuition of answer functions which can actually arise as records of the interaction of an algorithm with its environment. Two causality relations (over the same state) are equivalent if they make the same answer functions well-founded. Equivalent causality relations have the same contexts, but the converse is not in general true: intermediate intra-step behavior matters.
{ "cite_N": [ "@cite_27" ], "mid": [ "197061278" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The analogous postulate of @cite_27 extends the Bounded Exploration Postulate of @cite_12 to queries. As a consequence, every well-founded answer function is finite. Furthermore, there is a uniform bound on the size of well-founded answer functions.
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "197061278", "1998333922" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The characterization theorem for the class A of algorithms defined in @cite_27 and the class M of machines defined in @cite_21 is proved in @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "197061278", "", "2142611800" ], "abstract": [ "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "", "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Andrei N. Kolmogorov @cite_7 gave another mathematical description of computation, presumably motivated by the physics of computation rather than by an analysis of the actions of a human computer. For a detailed presentation of Kolmogorov's approach, see @cite_10 . See also @cite_17 and the references there for information about research on pointer machines. Like Turing's model, these computation models lower the abstraction level of algorithms.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_17" ], "mid": [ "2008838944", "", "143607683" ], "abstract": [ "From the Publisher: Mobile systems, whose components communicate and change their structure, now pervade the informational world and the wider world of which it is a part. The science of mobile systems is as yet immature, however. This book presents the pi-calculus, a theory of mobile systems. The pi-calculus provides a conceptual framework for understanding mobility, and mathematical tools for expressing systems and reasoning about their behaviors. The book serves both as a reference for the theory and as an extended demonstration of how to use pi-calculus to describe systems and analyze their properties. It covers the basic theory of pi-calculus, typed pi-calculi, higher-order processes, the relationship between pi-calculus and lambda-calculus, and applications of pi-calculus to object-oriented design and programming. The book is written at the graduate level, assuming no prior acquaintance with the subject, and is intended for computer scientists interested in mobile systems.", "", "What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church’s and Turing’s approaches, and we finish with some recent investigations." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Yiannis Moschovakis @cite_23 proposed that the informal notion of algorithm be identified with the formal notion of recursor. A recursor is a monotone operator over partial functions whose least fixed point includes (as one component) the function that the algorithm computes. The approach does not seem to scale to algorithms interacting with an unknown environment. See [abs, Section 4.3] for a critique of Moschovakis's computation model.
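To illustrate the flavour of the recursor view, here is a toy example of ours (not Moschovakis's formal definition): a monotone operator on partial functions, iterated from the empty partial function, whose least fixed point is the factorial function.

# Toy recursor: Phi maps a partial function (a dict on the naturals) to a larger one;
# Kleene iteration from the empty partial function approximates its least fixed point,
# which here is the factorial function.
def Phi(f):
    g = {0: 1}
    for n, v in f.items():
        g[n + 1] = (n + 1) * v
    return g

f = {}
for _ in range(6):
    f = Phi(f)
print(f)   # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120}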
{ "cite_N": [ "@cite_23" ], "mid": [ "2116849571" ], "abstract": [ "When algorithms are defined rigorously in Computer Science literature (which only happens rarely), they are generally identified with abstract machines, mathematical models of computers, sometimes idealized by allowing access to “unbounded memory”.1 My aims here are to argue that this does not square with our intuitions about algorithms and the way we interpret and apply results about them; to promote the problem of defining algorithms correctly; and to describe briefly a plausible solution, by which algorithms are recursive definitions while machines model implementations, a special kind of algorithms." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
An approach to interactive computing was pioneered by Peter Wegner and developed in particular in @cite_0 . The approach is based on special interactive variants of Turing machines called persistent Turing machines, in short PTMs. Interactive ASMs can simulate PTMs step for step. Goldin and Wegner assert that "any sequential interactive computation can be performed by a persistent Turing machine" @cite_19 . But this is not so if one intends to preserve the abstraction level of the given interactive algorithm. In particular, PTMs cannot simulate interactive ASMs step for step @cite_25 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_25" ], "mid": [ "2048671682", "2152271719", "2152271719" ], "abstract": [ "This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of ''normal'' Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.", "A sequential algorithm just follows its instructions and thus cannot make a nondeterministic choice all by itself, but it can be instructed to solicit outside help to make a choice. Similarly, an object-oriented program cannot create a new object all by itself; a create-a-new-object command solicits outside help. These are but two examples of intrastep interaction of an algorithm with its environment. Here we motivate and survey recent work on interactive algorithms within the Behavioral Computation Theory project.", "A sequential algorithm just follows its instructions and thus cannot make a nondeterministic choice all by itself, but it can be instructed to solicit outside help to make a choice. Similarly, an object-oriented program cannot create a new object all by itself; a create-a-new-object command solicits outside help. These are but two examples of intrastep interaction of an algorithm with its environment. Here we motivate and survey recent work on interactive algorithms within the Behavioral Computation Theory project." ] }
0707.4333
2082245279
We prove a nearly optimal bound on the number of stable homotopy types occurring in a k-parameter semi-algebraic family of sets in @math , each defined in terms of m quadratic inequalities. Our bound is exponential in k and m, but polynomial in @math . More precisely, we prove the following. Let @math be a real closed field and let [ P = P_1,...,P_m [Y_1,...,Y_ ,X_1,...,X_k], ] with @math . Let @math be a semi-algebraic set, defined by a Boolean formula without negations, whose atoms are of the form, @math . Let @math be the projection on the last k co-ordinates. Then, the number of stable homotopy types amongst the fibers @math is bounded by [ (2^m k d)^ O(mk) . ]
In another direction, Agrachev @cite_5 studied the topology of semi-algebraic sets defined by quadratic inequalities, and he defined a certain spectral sequence converging to the homology groups of such sets. A parametrized version of Agrachev's construction is in fact a starting point of our proof of the main theorem in this paper.
{ "cite_N": [ "@cite_5" ], "mid": [ "1972610374" ], "abstract": [ "In this paper we study sets of real solutions of systems of quadratic equations and inequalities. The results are used for the local study of more general systems of smooth equations and inequalities." ] }
0705.1364
1773390463
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No ecient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n 2 X †) Steiner points so that after an O ‡ n 2 X † log i nX
One generalization of the Weighted Region Problem is finding a shortest anisotropic path @cite_18 , where the weight assigned to a region depends on the direction of travel. The weights in this problem capture, for example, the effect of gravity and friction on a vehicle moving on a slope. The authors of @cite_12 , Sun and Reif @cite_16 , and Sun and Bu @cite_0 solved this problem by placing Steiner points along the edges.
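The common pattern behind these Steiner-point schemes can be sketched as follows (a generic illustration under our own simplifying assumptions -- uniform spacing and a user-supplied direction-dependent cost function -- not the specific placements or weights of the cited papers): sample points on the triangle edges, connect samples lying on a common face by directed edges whose cost reflects that face and the direction of travel, and run Dijkstra on the resulting graph.

# Generic sketch: discretize triangle edges with Steiner points, build a weighted
# directed graph inside each face, and search it with Dijkstra.  `cost(face, p, q)`
# is assumed to encode the (possibly anisotropic) cost of travelling from p to q.
import heapq, itertools, math

def build_graph(triangles, cost, k=4):
    graph = {}
    for tri in triangles:
        samples = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            p, q = tri[a], tri[b]
            for i in range(k + 1):
                t = i / k
                samples.append(tuple(p[j] + t * (q[j] - p[j]) for j in range(3)))
        for u, v in itertools.permutations(samples, 2):   # directed: cost may be anisotropic
            if u != v:
                graph.setdefault(u, []).append((v, cost(tri, u, v)))
    return graph

def dijkstra(graph, s, t):
    dist, heap = {s: 0.0}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return math.inf

In practice the source and target would be snapped to nearby sample points, and the cited papers choose the number and spacing of Steiner points non-uniformly so as to control the approximation error rather than using the fixed uniform spacing assumed here.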
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_16", "@cite_12" ], "mid": [ "2048876543", "2050710547", "2109032771", "1480698831" ], "abstract": [ "The optimal path planning problems are very difficult in the case where the cost metric varies not only in different regions of the space, but also in different directions inside the same region. If the classic discretization approach is adopted to compute an @?-approximation of the optimal path, the size of the discretization (and thus the complexity of the approximation algorithm) is usually dictated by a number of geometric parameters and thus can be very large. In this paper we show a general method for choosing the variables of the discretization to maximally reduce the dependency of the size of the discretization on various geometric parameters. We use this method to improve the previously reported results on two optimal path problems with direction-dependent cost metrics.", "The authors address anisotropic friction and gravity effects as well as ranges of impermissible-traversal headings due to overturn danger or power limitations. The method does not require imposition of a uniform grid, nor does it average effects in different directions, but reasons about a polyhedral approximation of terrain. It reduces the problem to a finite but provably optimal set of possibilities and then uses A* search to find the cost-optimal path. However, the possibilities are not physical locations but path subspaces. The method also exploits the insight that there are only four ways to optimally traverse an anisotropic homogeneous region: (1) straight across without braking, which is the standard isotropic-weighted-region traversal; (2) straight across without braking but as close as possible to a desired impermissible heading; (3) making impermissibility-avoiding switchbacks on the path across a region; and (4) straight across with braking. The authors prove specific optimality criteria for transitions on the boundaries of regions for each combination of traversal types. >", "We discuss the problem of computing optimal paths on terrains for a mobile robot, where the cost of a path is defined to be the energy expended due to both friction and gravity. The physical model used by this problem allows for ranges of impermissible traversal directions caused by overturn danger or power limitations. The model is interesting and challenging, as it incorporates constraints found in realistic situations, and these constraints affect the computation of optimal paths. We give some upper- and lower-bound results on the combinatorial size of optimal paths on terrains under this model. With some additional assumptions, we present an efficient approximation algorithm that computes for two given points a path whose cost is within a user-defined relative error ratio. Compared with previous results using the same approach, this algorithm improves the time complexity by using 1) a discretization with reduced size, and 2) an improved discrete algorithm for finding optimal paths in the discretization. We present some experimental results to demonstrate the efficiency of our algorithm. We also provide a similar discretization for a more difficult variant of the problem due to less restricted assumptions.", "We discuss the problem of computing shortest anisotropic paths on terrains. Anisotropic path costs take into account the length of the path traveled, possibly weighted, and the direction of travel along the faces of the terrain. 
Considering faces to be weighted has added realism to the study of (pure) Euclidean shortest paths. Parameters such as the varied nature of the terrain, friction, or slope of each face, can be captured via face weights. Anisotropic paths add further realism by taking into consideration the direction of travel on each face thereby e.g., eliminating paths that are too steep for vehicles to travel and preventing the vehicles from turning over. Prior to this work an O(nn) time algorithm had been presented for computing anisotropic paths. Here we present the first polynomial time approximation algorithm for computing shortest anisotropic paths. Our algorithm is simple to implement and allows for the computation of shortest anisotropic paths within a desired accuracy. Our result addresses the corresponding problem posed in [12]." ] }
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
The most closely related work is that of Yao, Leonard et al. @cite_6 . They model heterogeneous user churn and the local resilience of unstructured P2P networks. They also concede early on that balancing model complexity against fidelity is required to make advances in this area. They examine both the Poisson and the Pareto distribution for user churn and provide a deep analysis on this front. Their work focuses on how churn affects connectivity in the network; we have separated this aspect from our work and concentrated on message throughput.
{ "cite_N": [ "@cite_6" ], "mid": [ "2119971988" ], "abstract": [ "Previous analytical results on the resilience of un-structured P2P systems have not explicitly modeled heterogeneity of user churn (i.e., difference in online behavior) or the impact of in-degree on system resilience. To overcome these limitations, we introduce a generic model of heterogeneous user churn, derive the distribution of the various metrics observed in prior experimental studies (e.g., lifetime distribution of joining users, joint distribution of session time of alive peers, and residual lifetime of a randomly selected user), derive several closed-form results on the transient behavior of in-degree, and eventually obtain the joint in out degree isolation probability as a simple extension of the out-degree model in [13]." ] }
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Other closely related work concerns mobile and ad hoc networks, and sensor networks, because these applications require robust communication techniques and tend to have limited buffer space at each node. The recent work of Lindemann and Waldhorst @cite_9 considers the use of epidemic techniques on mobile devices with finite buffers, following the seven degrees of separation system @cite_3 . In particular, they use models for "power conservation" in which each mobile device is ON with probability @math and OFF with probability @math . Their analytical model gives predictions very close to their simulation results. In our work we describe these states using an arrival rate @math and a departure rate @math , which allows us to relate them naturally to a rate of message arrivals @math . We focus solely on these parameters so that we can show precisely how they affect the message coverage rate.
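The two parameterizations are straightforward to relate under the usual memoryless assumptions (a standard fact about a two-state on/off Markov process, stated here only for orientation and not as a claim about either paper's exact model): a peer that comes online at rate \(\lambda\) and goes offline at rate \(\mu\) is online with stationary probability

\[ p = \frac{\lambda}{\lambda + \mu}, \qquad 1 - p = \frac{\mu}{\lambda + \mu}, \]

so the ON/OFF probability of the power-conservation model and the rate pair \((\lambda, \mu)\) carry the same stationary information, while the rates additionally fix the time scale of the churn.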
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2144588366", "2014306135" ], "abstract": [ "Epidemic algorithms have recently been proposed as an effective solution for disseminating information in large-scale peer-to-peer (P2P) systems and in mobile ad hoc networks (MANET). In this paper, we present a modeling approach for steady-state analysis of epidemic dissemination of information in MANET. As major contribution, the introduced approach explicitly represents the spread of multiple data items, finite buffer capacity at mobile devices and a least recently used buffer replacement scheme. Using the introduced modeling approach, we analyze seven degrees of separation (7DS) as one well-known approach for implementing P2P data sharing in a MANET using epidemic dissemination of information. A validation of results derived from the analytical model against simulation shows excellent agreement. Quantitative performance curves derived from the analytical model yield several insights for optimizing the system design of 7DS.", "This paper presents 7DS, a novel peer-to-peer data sharing system. 7DS is an architecture, a set of protocols and an implementation enabling the exchange of data among peers that are not necessarily connected to the Internet. Peers can be either mobile or stationary. It anticipates the information needs of users and fulfills them by searching from information among peers. We evaluate via extensive simulations the effectiveness of our system for data dissemination among mobile devices with a large number of user mobility scenarios. We model several general data dissemination approaches and investigate the effect of the wireless converage range, 7DS, host density, query interval and cooperation strategy among the mobile hosts. Using theory from random walks, random environments and diffusion of controlled processes, we model one of these data dissemination schemes and show that the analysis confirms the simulation results for scheme" ] }
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Other closely related work, such as @cite_4 , looks at the rate of file transmission in a file-sharing system based on epidemics. The use of epidemics for large-scale communication is also reviewed in @cite_0 . The probabilistic multicast technique in @cite_2 attempts to increase the probability that peers receive messages in which they are interested and to decrease the probability that they receive messages in which they are not interested. Hence it introduces a notion of membership which is not too different from being online or offline. Autonomous Gossiping, presented in @cite_5 , provides further examples of using epidemics for selective information dissemination.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_2" ], "mid": [ "2139022689", "2249121382", "2152948161", "" ], "abstract": [ "Building very large computing systems is extremely challenging, given the lack of robust scalable communication technologies. This threatens a new generation of mission-critical but very large computing systems. Fortunately, a new generation of \"gossip-based\" or epidemic communication primitives can overcome a number of these scalability problems, offering robustness and reliability even in the most demanding settings. Epidemic protocols emulate the spread of an infection in a crowded population, and are both reliable and stable under forms of stress that will disable most traditional protocols. This paper describes some of the common problems that arise in scalable group communication systems and how epidemic techniques have been used to successfully address these problems.", "We introduce autonomous gossiping (A G), a new genre epidemic algorithm for selective dissemination of information in contrast to previous usage of epidemic algorithms which flood the whole network. A G is a paradigm which suits well in a mobile ad-hoc networking (MANET) environment because it does not require any infrastructure or middleware like multicast tree and (un)subscription maintenance for publish subscribe, but uses ecological and economic principles in a self-organizing manner in order to achieve any arbitrary selectivity (flexible casting). The trade-off of using a stateless self-organizing mechanism like A G is that it does not guarantee completeness deterministically as is one of the original objectives of alternate selective dissemination schemes like publish subscribe. We argue that such incompleteness is not a problem in many non-critical real-life civilian application scenarios and realistic node mobility patterns, where the overhead of infrastructure maintenance may outweigh the benefits of completeness, more over, at present there exists no mechanism to realize publish subscribe or other paradigms for selective dissemination in MANET environments.", "Peer-to-peer applications have become highly popular in today's pervasive environments due to the spread of different file sharing platforms. In such a multiclient environment, if users have mobility characteristics, asymmetry in communication causes a degradation of reliability. This work proposes an approach based on the advantages of epidemic selective resource placement through mobile Infostations. Epidemic placement policy combines the strengths of both proactive multicast group establishment and hybrid Infostation concept. With epidemic selective placement we face the flooding problem locally (in geographic region landscape) and enable end to end reliability by forwarding requested packets to epidemically 'selected' mobile users in the network on a recursive basis. The selection of users is performed based on their remaining capacity, weakness of their signal and other explained mobility limitations. Examination through simulation is performed for the response and reliability offered by epidemic placement policy which reveals the robustness and reliability in file sharing among mobile peers.", "" ] }
0705.1309
2953319296
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
The work by Gordon and Bentley @cite_5 differs from previous approaches by considering only communication and differentiation in the substrata. The grid starts with a cell at every available grid point, and cells communicate by diffusing chemicals to neighboring cells only. Each cell then receives as input one chemical concentration, computed as the average of the concentrations of all neighboring cells: hence no orientation information is available. In the Cellular Automata context, such a system is called a totalistic automaton. One drawback of this approach is that it requires some cells to have different chemical concentrations at start-up. Furthermore, it biases the whole model toward symmetrical patterns ("four-fold dihedral symmetry"). The controller is a set of 20 rules, each of which produces one of the four chemicals and sends it toward the neighboring cells. The set of rules is represented by a bit vector and evolved using a classical bitstring GA. The paper ends with some comparisons with previous works, namely @cite_8 @cite_2 , demonstrating comparable and sometimes better results; a possible explanation for this success, however, is the above-mentioned bias of the method toward symmetrical patterns.
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "2054742532", "1546662014", "1958651193" ], "abstract": [ "Today's software is brittle. A tiny corruption in an executable will normally result in terminal failure of that program. But nature does not seem to suffer from the same problems. A multicellular organism, its genes evolved and developed, shows graceful degradation: should it be damaged, it is designed to continue to work. This paper describes an investigation into software with the same properties. Three programs, one human-designed, one evolved using genetic programming, and one evolved and developed using a fractal developmental system are compared. All three calculate the square root of a number. The programs are damaged by corrupting their compiled executable code, and the ability for each of them to survive such damage is assessed. Experiments demonstrate that only the evolutionary developmental code shows graceful degradation after damage.", "A method for evolving programs that construct multicellular structures (organisms) is described. The paper concentrates on the difficult problem of evolving a cell program that constructs a fixed size French flag. We obtain and analyze an organism that shows a remarkable ability to repair itself when subjected to severe damage. Its behaviour resembles the regenerative power of some living organisms.", "" ] }
0705.1309
2953319296
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state); others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
However, there are even greater similarities between the present work and that in @cite_5 . In both works, the grid is filled with cells at iteration 0 of the growth process (i.e. no replication is allowed) and chemicals are propagated only in a cell-to-cell fashion, without the diffusion mechanisms used in @cite_8 @cite_2 . Indeed, pure cell-to-cell communication is theoretically sufficient for modelling any kind of temporal diffusion function, since diffusion in the substrata is the result of successive transformations by non-linear functions (such as the ones implemented by sigmoidal neural networks with hidden neurons). However, this means that the optimization algorithm must tune both the diffusion reaction and the differentiation of the cells. On the other hand, whereas @cite_5 only considers the average of the chemical concentrations of the neighboring cells (i.e. it is totalistic in Cellular Automata terminology), our approach does take into account the topology of the organism at the controller level, de facto benefiting from orientation information. This results in a more general approach, though probably less efficient at reaching symmetrical targets. Here again, further experiments must be run to give a solid answer.
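A small sketch may help contrast the two input encodings discussed here: the totalistic averaging of @cite_5 versus the orientation-preserving, per-neighbor inputs of the present approach. The fixed N/S/W/E ordering and the wrap-around boundary are illustrative assumptions of this sketch.

```python
import numpy as np

def totalistic_input(chems, i, j):
    """Gordon-and-Bentley-style input: one value per chemical, the average over
    the four neighbors, so the controller cannot tell directions apart."""
    h, w, _ = chems.shape
    nbrs = np.stack([chems[(i - 1) % h, j], chems[(i + 1) % h, j],
                     chems[i, (j - 1) % w], chems[i, (j + 1) % w]])
    return nbrs.mean(axis=0)                  # shape: (n_chemicals,)

def oriented_input(chems, i, j):
    """Per-neighbor input as sketched here: neighbors are kept in a fixed
    N, S, W, E order, so orientation information is preserved."""
    h, w, _ = chems.shape
    return np.concatenate([chems[(i - 1) % h, j], chems[(i + 1) % h, j],
                           chems[i, (j - 1) % w], chems[i, (j + 1) % w]])
```

Two configurations that look identical under the totalistic encoding (same neighbor average) can still be distinguished by the oriented encoding, which is what gives the controller access to directional information.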
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "2054742532", "1546662014", "1958651193" ], "abstract": [ "Today's software is brittle. A tiny corruption in an executable will normally result in terminal failure of that program. But nature does not seem to suffer from the same problems. A multicellular organism, its genes evolved and developed, shows graceful degradation: should it be damaged, it is designed to continue to work. This paper describes an investigation into software with the same properties. Three programs, one human-designed, one evolved using genetic programming, and one evolved and developed using a fractal developmental system are compared. All three calculate the square root of a number. The programs are damaged by corrupting their compiled executable code, and the ability for each of them to survive such damage is assessed. Experiments demonstrate that only the evolutionary developmental code shows graceful degradation after damage.", "A method for evolving programs that construct multicellular structures (organisms) is described. The paper concentrates on the difficult problem of evolving a cell program that constructs a fixed size French flag. We obtain and analyze an organism that shows a remarkable ability to repair itself when subjected to severe damage. Its behaviour resembles the regenerative power of some living organisms.", "" ] }
0705.1999
1634556542
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
Javier Pinto has extended situation calculus in order to integrate time @cite_3 . He keeps the framework of situation calculus and introduces a notion of time. Intuitively, every situation @math has a starting time and an ending time, where @math , meaning that situation @math ends when the succeeding situation @math is reached. The end of the situation @math is the same time point as the beginning of the next situation resulting from the occurrence of action @math in @math . The obvious asymmetry of the @math and @math functions is due to the fact that the situation space has the form of a tree whose root is the beginning state @math . Thus, every state has a unique preceding state but possibly more than one succeeding state.
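A minimal sketch of the resulting situation tree, with an explicit start time per situation and an action-dependent end, may clarify the asymmetry mentioned above. The class and method names and the durations are hypothetical illustrations, not taken from Pinto's formalism.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Situation:
    """A node in the situation tree: a unique parent, possibly many children."""
    start: float                                    # time at which the situation begins
    parent: Optional["Situation"] = None
    children: dict = field(default_factory=dict)    # action name -> successor situation

    def do(self, action: str, duration: float) -> "Situation":
        # The end of this situation w.r.t. `action` is the start of the successor
        # reached by performing `action`; different actions give different ends,
        # hence the asymmetry between the start and end functions.
        succ = Situation(start=self.start + duration, parent=self)
        self.children[action] = succ
        return succ

s0 = Situation(start=0.0)                           # root: the initial situation
s1 = s0.do("pick_up", duration=2.0)
s2 = s0.do("wait", duration=5.0)                    # branching: two successors of s0
print(s1.start, s2.start)                           # 2.0 5.0
```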
{ "cite_N": [ "@cite_3" ], "mid": [ "1992170615" ], "abstract": [ "The Situation Calculus is a logic of time and change in which there is a distinguished initial situation and all other situations arise from the different sequences of actions that might be performed starting in the initial one. Within this framework, it is difficult to incorporate the notion of an occurrence, since all situations after the initial one are hypothetical. These occurrences are important, for instance, when one wants to represent narratives. There have been proposals to incorporate the notion of an action occurrence in the language of the Situation Calculus, namely Miller and Shanahan’s work on narratives [22] and Pinto and Reiter’s work on actual lines of situations [27, 29]. Both approaches have in common the idea of incorporating a linear sequence of situations into the tree described by theories written in the Situation Calculus language. Unfortunately, several advantages of the Situation Calculus are lost when reasoning with a narrative line or with an actual line of occurrences. In this paper we propose a different approach to dealing with action occurrences and narratives, which can be seen as a generalization of narrative lines to narrative trees. In this approach we exploit the fact that, in the discrete Situation Calculus [13], each situation has a unique history. Then, occurrences are interpreted as constraints on valid histories. We argue that this new approach subsumes the linear approaches of Miller and Shanahan’s, and Pinto and Reiter’s. In this framework, we are able to represent various kinds of occurrences; namely, conditional, preventable and non-preventable occurrences. Other types of occurrences, not discussed in this article, can also be accommodated." ] }
0705.1999
1634556542
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
Paolo Terenziani proposes in @cite_2 a system that can handle both temporal constraints between classes of events and temporal constraints between instances of events.
{ "cite_N": [ "@cite_2" ], "mid": [ "2101006942" ], "abstract": [ "Representing and reasoning with both temporal constraints between classes of events (e.g., between the types of actions needed to achieve a goal) and temporal constraints between instances of events (e.g., between the specific actions being executed) is a ubiquitous task in many areas of computer science, such as planning, workflow, guidelines and protocol management. The temporal constraints between the classes of events must be inherited by the instances, and the consistency of both types of constraints must be checked. We propose a general-purpose domain-independent knowledge server dealing with these issues. In particular, we propose a formalism to represent temporal constraints, we show two algorithms to deal with inheritance and to perform temporal consistency checking, and we study the properties of the algorithms." ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
How often do people create blog posts and links? Extensive work has been published on patterns relating to human behavior, which often generates bursty traffic. Disk accesses, network traffic, and web-server traffic all exhibit burstiness. The authors of @cite_15 provide fast algorithms for modeling such burstiness. Burstiness is often related to self-similarity, which was studied in the context of World Wide Web traffic @cite_17 . The authors of @cite_22 demonstrate the bursty behavior in web page visits and the corresponding response times.
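As a rough illustration of the one-parameter approach of @cite_15 , the sketch below generates a bursty trace by recursively splitting a total volume into fractions b and 1-b. The randomized left/right choice and the specific parameter values are assumptions of this sketch, not the exact algorithm of the paper.

```python
import random

def b_model_trace(total, levels, b=0.7, randomize=True):
    """Generate a bursty trace of length 2**levels by recursively splitting
    `total` into fractions b and 1 - b (the one-parameter 'b-model' idea;
    the randomized left/right choice here is an assumption of this sketch)."""
    trace = [total]
    for _ in range(levels):
        nxt = []
        for v in trace:
            f = b if (not randomize or random.random() < 0.5) else 1 - b
            nxt.extend([v * f, v * (1 - f)])
        trace = nxt
    return trace

trace = b_model_trace(total=1_000_000, levels=12, b=0.7)
print(max(trace) / (sum(trace) / len(trace)))   # burstiness: peak vs. mean load
```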
{ "cite_N": [ "@cite_15", "@cite_22", "@cite_17" ], "mid": [ "2146603609", "2073689275", "" ], "abstract": [ "Network, Web, and disk I O traffic are usually bursty and self-similar and therefore cannot be modeled adequately with Poisson arrivals. However, we wish to model these types of traffic and generate realistic traces, because of obvious applications for disk scheduling, network management, and Web server design. Previous models (like fractional Brownian motion and FARIMA, etc.) tried to capture the 'burstiness'. However, the proposed models either require too many parameters to fit and or require prohibitively large (quadratic) time to generate large traces. We propose a simple, parsimonious method, the b-model, which solves both problems: it requires just one parameter, and can easily generate large traces. In addition, it has many more attractive properties: (a) with our proposed estimation algorithm, it requires just a single pass over the actual trace to estimate b. For example, a one-day-long disk trace in milliseconds contains about 86 Mb data points and requires about 3 minutes for model fitting and 5 minutes for generation. (b) The resulting synthetic traces are very realistic: our experiments on real disk and Web traces show that our synthetic traces match the real ones very well in terms of queuing behavior.", "terized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can hadle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution Pw w with =3 2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display = 1, the surface mail based communication belongs to the =3 2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.", "" ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
Most work on modeling link behavior in large-scale on-line data has been done in the blog domain @cite_21 @cite_12 @cite_11 . The authors note that, while information propagates between blogs, examples of genuine cascading behavior appeared relatively rare. This may, however, be due in part to the Web-crawling and text analysis techniques used to infer relationships among posts @cite_12 @cite_1 . Our work differs in that we concentrate solely on the propagation of links and do not infer additional links from the text of the posts, which gives us more accurate information.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_12", "@cite_11" ], "mid": [ "2152284345", "", "2107666336", "2150739536" ], "abstract": [ "In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.", "", "Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of \"infection.\" Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.", "We propose two new tools to address the evolution of hyperlinked corpora. First, we define time graphs to extend the traditional notion of an evolving directed graph, capturing link creation as a point phenomenon in time. Second, we develop definitions and algorithms for time-dense community tracking, to crystallize the notion of community evolution. We develop these tools in the context of Blogspace , the space of weblogs (or blogs). Our study involves approximately 750K links among 25K blogs. We create a time graph on these blogs by an automatic analysis of their internal time stamps. We then study the evolution of connected component structure and microscopic community structure in this time graph. We show that Blogspace underwent a transition behavior around the end of 2001, and has been rapidly expanding over the past year, not just in metrics of scale, but also in metrics of community structure and connectedness. This expansion shows no sign of abating, although measures of connectedness must plateau within two years. By randomizing link destinations in Blogspace, but retaining sources and timestamps, we introduce a concept of randomized Blogspace . Herein, we observe similar evolution of a giant component, but no corresponding increase in community structure. Having demonstrated the formation of micro-communities over time, we then turn to the ongoing activity within active communities. We extend recent work of Kleinberg [11] to discover dense periods of \"bursty\" intra-community link creation." ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
Information cascades are phenomena in which an action or idea becomes widely adopted due to the influence of others, typically neighbors in some network @cite_0 @cite_2 @cite_8 . Cascades on random graphs using a threshold model have been analyzed theoretically @cite_5 . Empirical analyses of the topological patterns of cascades in the context of a large product recommendation network are given in @cite_19 and @cite_14 .
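The following sketch simulates the kind of threshold cascade analyzed in @cite_5 : starting from a single seed on a sparse random graph, a node adopts once the fraction of its adopting neighbors reaches its threshold. The uniform threshold, the graph size, and the average degree are illustrative choices, not values from the cited work.

```python
import random

def random_graph(n, avg_deg, seed=0):
    """Erdos-Renyi-style random graph as an adjacency list."""
    random.seed(seed)
    p = avg_deg / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def threshold_cascade(adj, phi=0.18, seed_node=0):
    """Each node adopts once the fraction of its adopting neighbors reaches its
    threshold phi (uniform here; heterogeneous thresholds are a common variant)."""
    n = len(adj)
    active = [False] * n
    active[seed_node] = True
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if active[v] or not adj[v]:
                continue
            frac = sum(active[u] for u in adj[v]) / len(adj[v])
            if frac >= phi:
                active[v] = True
                changed = True
    return sum(active)

adj = random_graph(n=2000, avg_deg=4)
print(threshold_cascade(adj))   # cascade size triggered by a single seed
```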
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_0", "@cite_19", "@cite_2", "@cite_5" ], "mid": [ "2105535951", "2041157860", "2091087160", "", "1495750374", "2114696370" ], "abstract": [ "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.", "Models of collective behavior are developed for situations where actors have two alternatives and the costs and or benefits of each depend on how many other actors choose which alternative. The key concept is that of \"threshold\": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or \"equilibrium\" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ...", "An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual without regard to his own information. We argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades.", "", "Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties).", "The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. 
Two regimes are identified in which the network is susceptible to very large cascades—herein called global cascades—that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős-Rényi random graphs, comparison and block dynamics arguments, and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our results also apply in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
Much work has been focused on the problem of understanding the mixing time of the Ising model in various contexts. In a series of results @cite_11 @cite_16 @cite_3 culminating in @cite_21 , it was shown that the Gibbs sampler on the integer lattice mixes rapidly when the model has the strong spatial mixing property. In @math , strong spatial mixing, and therefore rapid mixing, holds in the entire uniqueness regime (see e.g. @cite_7 ). On the regular tree the mixing time is always polynomial, but it is only @math up to the threshold for extremality @cite_18 . For completely general graphs the best known results are given by the Dobrushin condition, which establishes rapid mixing when @math , where @math is the maximum degree.
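For concreteness, here is a minimal single-site Glauber (heat-bath) update for the Ising model on an arbitrary graph, the dynamics whose mixing time is discussed above. The toy graph, the inverse temperature, and the number of steps are illustrative only.

```python
import math
import random

def glauber_step(spins, adj, beta, h=0.0):
    """One single-site heat-bath (Glauber) update for the Ising measure
    mu(sigma) ~ exp(beta * sum_edges sigma_u sigma_v + h * sum_v sigma_v)."""
    v = random.randrange(len(spins))
    local_field = beta * sum(spins[u] for u in adj[v]) + h
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * local_field))   # P(sigma_v = +1 | rest)
    spins[v] = 1 if random.random() < p_plus else -1

def run_chain(adj, beta, steps, h=0.0, seed=0):
    random.seed(seed)
    spins = [random.choice([-1, 1]) for _ in range(len(adj))]
    for _ in range(steps):
        glauber_step(spins, adj, beta, h)
    return spins

# Toy example: a cycle on 10 vertices at inverse temperature 0.3.
adj = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]
print(sum(run_chain(adj, beta=0.3, steps=10_000)))   # total magnetization
```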
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_21", "@cite_3", "@cite_16", "@cite_11" ], "mid": [ "2611336766", "2011373957", "2088794759", "", "", "1770973266" ], "abstract": [ "We study discrete time Glauber dynamics for random configurations with local constraints (e.g. proper coloring, Ising and Potts models) on finite graphs with n vertices and of bounded degree. We show that the relaxation time (defined as the reciprocal of the spectral gap 1 - _2 for the dynamics on trees and on certain hyperbolic graphs is polynomial in n. For these hyperbolic graphs, this yields a general polynomial sampling algorithm for random configurations. We then show that if the relaxation time T2 satisfies T2 = O(n), then the correlation coefficient, and the mutual information, between any local function (which dependsonly on the configuration in a fixed window) and the boundary conditions, decays exponentially in the distance between the window and the boundary. For the Ising model on a regular tree, this condition is sharp.", "Various finite volume mixing conditions in classical statistical mechanics are reviewed and critically analyzed. In particular somefinite size conditions are discussed, together with their implications for the Gibbs measures and for the approach to equilibrium of Glauber dynamics inarbitrarily large volumes. It is shown that Dobrushin-Shlosman's theory ofcomplete analyticity and its dynamical counterpart due to Stroock and Zegarlinski, cannot be applied, in general, to the whole one phase region since it requires mixing properties for regions ofarbitrary shape. An alternative approach, based on previous ideas of Oliveri, and Picco, is developed, which allows to establish results on rapid approach to equilibrium deeply inside the one phase region. In particular, in the ferromagnetic case, we considerably improve some previous results by Holley and Aizenman and Holley. Our results are optimal in the sene that, for example, they show for the first time fast convergence of the dynamicsfor any temperature above the critical one for thed-dimensional Ising model with or without an external field. In part II we extensively consider the general case (not necessarily attractive) and we develop a new method, based on renormalizations group ideas and on an assumption of strong mixing in a finite cube, to prove hypercontractivity of the Markov semigroup of the Glauber dynamics.", "For finite range lattice gases with a finite spin space, it is shown that the Dobrushin-Shlosman mixing condition is equivalent to the existence of a logarithmic Sobolev inequality for the associated (unique) Gibbs state. In addition, implications of these considerations for the ergodic properties of the corresponding Glauber dynamics are examined.", "", "", "We show that, under the conditions of the Dobrushin Shlosman theorem for uniqueness of the Gibbs state, the reversible stochastic Ising model converges to equilibrium exponentially fast on the L2 space of that Gibbs state. For stochastic Ising models with attractive interactions and under conditions which are somewhat stronger than Dobrushin’s, we prove that the semi-group of the stochastic Ising model converges to equilibrium exponentially fast in the uniform norm. We also give a new, much shorter, proof of a theorem which says that if the semi-group of an attractive spin flip system converges to equilibrium faster than 1 td where d is the dimension of the underlying lattice, then the convergence must be exponentially fast." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős-Rényi random graphs, comparison and block dynamics arguments, and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our results also apply in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
Previous attempts at studying this problem, for sampling uniform colorings on graphs of bounded average degree but with some large degrees, yielded weaker results. In @cite_10 it is shown that Gibbs sampling mixes rapidly on @math if @math , where @math , and that a variant of the algorithm mixes rapidly if @math . Indeed, the main open problem of @cite_10 is to determine whether one can take @math to be a function of @math only. Our results here provide a positive answer to the analogous question for the Ising model. We further note that other results where the conditions on the degree are relaxed @cite_13 do not apply in our setting.
{ "cite_N": [ "@cite_13", "@cite_10" ], "mid": [ "1999150260", "1978769275" ], "abstract": [ "Spin systems are a general way to describe local interactions between nodes in a graph. In statistical mechanics, spin systems are often used as a model for physical systems. In computer science, they comprise an important class of families of combinatorial objects, for which approximate counting and sampling algorithms remain an elusive goal. The Dobrushin condition states that every row sum of the \"influence matrix\" for a spin system is less than 1 - epsiv, where epsiv > 0. This criterion implies rapid convergence (O(n log n) mixing time) of the single-site (Glauber) dynamics for a spin system, as well as uniqueness of the Gibbs measure. The dual criterion that every column sum of the influence matrix is less than 1 - epsiv has also been shown to imply the same conclusions. We examine a common generalization of these conditions, namely that the maximum eigenvalue of the influence matrix is less than 1 epsiv. Our main result is that this criterion implies O(n log n) mixing time for the Glauber dynamics. As applications, we consider the Ising model, hard-core lattice gas model, and graph colorings, relating the mixing time of the Glauber dynamics to the maximum eigenvalue for the adjacency matrix of the graph. For the special case of planar graphs, this leads to improved bounds on mixing time with quite simple proofs", "We analyze Markov chains for generating a random k-coloring of a random graph Gn,d n. When the average degree d is constant, a random graph has maximum degree Θ(log n log log n), with high probability. We show that, with high probability, an efficient procedure can generate an almost uniformly random k-coloring when k = Θ(log log n log log log n), i.e., with many fewer colors than the maximum degree. Previous results hold for a more general class of graphs, but always require more colors than the maximum degree. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006" ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős-Rényi random graphs, comparison and block dynamics arguments, and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our results also apply in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
It is natural to conjecture that properties of the Ising model on the branching process with @math offspring distribution determine the mixing time of the dynamics on @math . In particular, it is natural to conjecture that the critical point for uniqueness of Gibbs measures plays a fundamental role @cite_19 @cite_12 , as results of a similar flavor were recently obtained for the hard-core model on random bipartite @math -regular graphs @cite_14 .
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_12" ], "mid": [ "", "2048759201", "2143695333" ], "abstract": [ "", "We consider local Markov chain Monte–Carlo algorithms for sampling from the weighted distribution of independent sets with activity λ, where the weight of an independent set I is λ|I|. A recent result has established that Gibbs sampling is rapidly mixing in sampling the distribution for graphs of maximum degree d and λ λ c it is NP-hard to approximate the above weighted sum over independent sets to within a factor polynomial in the size of the graph.", "We study several statistical mechanical models on a general tree. Particular attention is devoted to the classical Heisenberg models, where the state space is the d-dimensional unit sphere and the interactions are proportional to the cosines of the angles between neighboring spins. The phenomenon of interest here is the classification of phase transition (non-uniqueness of the Gibbs state) according to whether it is robust. In many cases, including all of the Heisenberg and Potts models, occurrence of robust phase transition is determined by the geometry (branching number) of the tree in a way that parallels the situation with independent percolation and usual phase transition for the Ising model. The critical values for robust phase transition for the Heisenberg and Potts models are also calculated exactly. In some cases, such as the q > 3 Potts model, robust phase transition and usual phase transition do not coincide, while in other cases, such as the Heisenberg models, we conjecture that robust phase transition and usual phase transition are equivalent. In addition, we show that symmetry breaking is equivalent to the existence of a phase transition, a fact believed but not known for the rotor model on Z 2 ." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős-Rényi random graphs, comparison and block dynamics arguments, and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our results also apply in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
After proposing the conjecture, we have recently learned that Antoine Gerschenfeld and Andrea Montanari have found an elegant proof for estimating the partition function (that is, the normalizing constant @math ) of the Ising model on random @math -regular graphs @cite_8 . Their result, together with a standard conductance argument, shows exponentially slow mixing above the uniqueness threshold, which in the context of random regular graphs is @math .
{ "cite_N": [ "@cite_8" ], "mid": [ "2950361113" ], "abstract": [ "Consider a collection of random variables attached to the vertices of a graph. The reconstruction problem requires to estimate one of them given far away' observations. Several theoretical results (and simple algorithms) are available when their joint probability distribution is Markov with respect to a tree. In this paper we consider the case of sequences of random graphs that converge locally to trees. In particular, we develop a sufficient condition for the tree and graph reconstruction problem to coincide. We apply such condition to colorings of random graphs. Further, we characterize the behavior of Ising models on such graphs, both with attractive and random interactions (respectively, ferromagnetic' and spin glass')." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
After Fischler and Susskind exposed the problematic application of the holographic principle to spatially closed models @cite_23 , and R. Easther and D. Lowe confirmed these difficulties @cite_28 , several authors proposed feasible solutions. Kalyana Rama @cite_26 proposed a two-fluid cosmological model and found that, when one of the fluids was of quintessence type, the FS prescription would be verified under some additional conditions. N. Cruz and S. Lepe @cite_22 studied cosmological models with spatial dimension @math and also found that models with negative pressure could verify the FS prescription. There are also alternative approaches, such as @cite_8 , that are worth mentioning. All these authors mathematically analyzed the functional behavior of the relation @math ; our work, however, aims to support the mathematical analysis with a simple picture: ever-expanding spatially closed cosmological models could verify the FS holographic prescription since, due to the cosmological acceleration, future light cones could not reconverge into focal points and so the particle horizon area would never shrink to zero.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_28", "@cite_23" ], "mid": [ "1977747956", "2061462698", "1631663583", "2060766232", "1492093992" ], "abstract": [ "Abstract A closed universe containing pressureless dust, a more generally perfect fluid matter with pressure-to-density ratio w in the range ( 1 3 ,− 1 3 ) , violates the holographic principle applied according to the Fischler–Susskind proposal. We show, first for a class of two-fluid solutions and then for the general multifluid case, that the closed universe will obey the holographic principle if it also contains matter with w 1 3 , and if the present value of its total density is sufficiently close to the critical density. It is possible that such matter can be realised by some form of quintessence', much studied recently.", "Abstract We examine in details Friedmann–Robertson–Walker models in 2+1 dimensions in order to investigate the cosmic holographic principle suggested by Fischler and Susskind. Our results are rigorously derived differing from the previous one found by Wang and Abdalla. We discuss the erroneous assumptions done in this work. The matter content of the models is composed of a perfect fluid, with a γ -law equation of state. We found that closed universes satisfy the holographic principle only for exotic matter with a negative pressure. We also analyze the case of a collapsing flat universe.", "The holographic bound states that the entropy in a region cannot exceed one quarter of the area (in Planck units) of the bounding surface. A version of the holographic principle that can be applied to cosmological spacetimes has recently been given by Fischler and Susskind. This version can be shown to fail in closed spacetimes and they concluded that the holographic bound may rule out such universes. In this paper I give a modified definition of the holographic bound that holds in a large class of closed universes. Fischler and Susskind also showed that the dominant energy condition follows from the holographic principle applied to cosmological spacetimes with @math . Here I show that the dominant energy condition can be violated by cosmologies satisfying the holographic principle with more general scale factors.", "We propose that the holographic principle be replaced by the generalized second law of thermodynamics when applied to time-dependent backgrounds. For isotropic open and flat universes with a fixed equation of state, this agrees with the cosmological holographic principle proposed by Fischler and Susskind (hep-th 9806039). However, in more general situations, it does not. copyright ital 1999 ital The American Physical Society", "A cosmological version of the holographic principle is proposed. Various consequences are discussed including bounds on equation of state and the requirement that the universe be infinite." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
As one can imagine, by virtue of the previous argument there are many spatially closed cosmological models which fulfill the FS holographic prescription; ensuring a sufficiently accelerated final era is enough. Examples other than quintessence concern spatially closed models with conventional matter and a positive cosmological constant, the so-called @cite_30 . In fact, the late evolution of this family of models is dominated by the cosmological constant, which is compatible with @math , and this value verifies ). Roughly speaking, an asymptotically exponential expansion provides enough acceleration to avoid the reconvergence of future light cones.
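The remark about exponential expansion can be made quantitative with a short estimate: during a cosmological-constant-dominated era the comoving distance that light can still cover is finite, so the particle horizon need not reach the antipode of the closed spatial sections. The symbols a_*, t_*, and H below are illustrative and not taken from the paper's notation.

```latex
% Comoving distance covered by light after time t_* in an exponential era a(t) = a_* e^{H(t - t_*)}:
\Delta\chi \;=\; \int_{t_*}^{\infty} \frac{c\,\mathrm{d}t}{a(t)}
           \;=\; \lim_{T\to\infty} \frac{c}{a_* H}\left(1 - e^{-H(T - t_*)}\right)
           \;=\; \frac{c}{a_* H} \;<\; \infty .
% If the comoving particle horizon at t_* plus c/(a_* H) stays below \pi (the antipodal point of
% the closed sections), future light cones never reconverge and the horizon area never shrinks to zero.
```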
{ "cite_N": [ "@cite_30" ], "mid": [ "1658962388" ], "abstract": [ "Foreword by Professor Sir Fred Hoyle 1. The large-scale structure of the universe 2. General relativity 3. From relativity to cosmology 4. The Friedman models 5. Relics of the Big Bang 6. The very early universe 7. The formation of structures in the universe 8. Alternative cosmologies 9. Local observations of cosmological significance 10. Observations of distant parts of the universe 11. A critical overview." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
One more remark about observational results supports the study of quintessence models. If the fundamental character of the Holographic Principle as a primary principle guiding the behavior of our universe is assumed, it looks reasonable to suppose the saturation of the holographic limit. This is one of the arguments used by T. Banks and W. Fischler @cite_4 @cite_16 to propose a holographic cosmology based on an early universe, spatially flat, dominated by a fluid with @math (Banks and Fischler propose a scenario where black holes of the maximum possible size, the size of the particle horizon, coalesce, saturating the holographic limit; this "fluid" evolves according to @math ). According to ) this value saturates the FS prescription for spatially flat FRW models, but it seems fairly incompatible with observational results. However, for spatially closed FRW cosmological models, it has been found that the saturation of the Holographic Principle is related to the value @math , which is compatible with current observations (according to @cite_12 , @math at the 95% confidence level). Taking @math gives @math , in agreement with the measured value @cite_32 .
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_32", "@cite_12" ], "mid": [ "2046493255", "2088673235", "2951034705", "2139440433" ], "abstract": [ "We present a complete quantum mechanical description of a flat Friedmann-Robertson-Walker universe with equation of state p= rho . We find a detailed correspondence with our heuristic picture of such a universe as a dense black hole fluid. Features of the geometry are derived from purely quantum input.", "We present a new version of holographic cosmology, which is compatible with present observations. A primordial p = ? phase of the universe is followed by a brief matter dominated era and a brief period of inflation, whose termination heats the universe. The flatness and horizon problems are solved by the p = ? dynamics. The model is characterized by two parameters, which should be calculated in a more fundamental approach to the theory. For a large range in the phenomenologically allowed parameter space, the observed fluctuations in the cosmic microwave background were generated during the p = ? era, and are exactly scale invariant. The scale invariant spectrum cuts off sharply at both upper and lower ends, and this may have observational consequences. We argue that the amplitude of fluctuations is small but cannot yet calculate it precisely.", "A simple cosmological model with only six parameters (matter density, mh 2 , baryon density, bh 2 , Hubble Constant, H0, amplitude of fluctua", "We have discovered 16 Type Ia supernovae (SNe Ia) with the Hubble Space Telescope (HST) and have used them to provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration. These objects, discovered during the course of the GOODS ACS Treasury program, include 6 of the 7 highest redshift SNe Ia known, all at z > 1.25, and populate the Hubble diagram in unexplored territory. The luminosity distances to these objects and to 170 previously reported SNe Ia have been determined using empirical relations between light-curve shape and luminosity. A purely kinematic interpretation of the SN Ia sample provides evidence at the greater than 99 confidence level for a transition from deceleration to acceleration or, similarly, strong evidence for a cosmic jerk. Using a simple model of the expansion history, the transition between the two epochs is constrained to be at z = 0.46 ± 0.13. The data are consistent with the cosmic concordance model of ΩM ≈ 0.3, ΩΛ ≈ 0.7 (χ = 1.06) and are inconsistent with a simple model of evolution or dust as an alternative to dark energy. For a flat universe with a cosmological constant, we measure ΩM = 0.29 ± (equivalently, ΩΛ = 0.71). When combined with external flat-universe constraints, including the cosmic microwave background and large-scale structure, we find w = -1.02 ± (and w < -0.76 at the 95 confidence level) for an assumed static equation of state of dark energy, P = wρc2. Joint constraints on both the recent equation of state of dark energy, w0, and its time evolution, dw dz, are a factor of 8 more precise than the first estimates and twice as precise as those without the SNe Ia discovered with HST. Our constraints are consistent with the static nature of and value of w expected for a cosmological constant (i.e., w0 = -1.0, dw dz = 0) and are inconsistent with very rapid evolution of dark energy. We address consequences of evolving dark energy for the fate of the universe." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
Finally, two recent conjectures concerning holography in spatially closed universes deserve some comments. W. Zimdahl and D. Pavon @cite_17 claim that the dynamics of the holographic dark energy in a spatially closed universe could solve the coincidence problem; however, the cosmological scale necessary for the definition of the holographic dark energy seems to be incompatible with the particle horizon @cite_24 @cite_2 @cite_9 . In a more recent paper, F. Simpson @cite_31 proposed an imaginative mechanism in which the non-monotonic evolution of the particle horizon over a spatially closed universe controls the equation of state of the dark energy. The abundant work along these lines is still inconclusive, but it seems a fairly promising direction.
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "2046990005", "2022232745", "1981945064", "2157971336", "2207073528" ], "abstract": [ "We employ the holographic model of interacting dark energy to obtain the equation of state for the holographic energy density in non-flat (closed) universe enclosed by the event horizon measured from the sphere of horizon named L.", "Abstract Entropy bounds render quantum corrections to the cosmological constant Λ finite. Under certain assumptions, the natural value of Λ is of order the observed dark energy density ∼10 −10 eV 4 , thereby resolving the cosmological constant problem. We note that the dark energy equation of state in these scenarios is w ≡ p ρ =0 over cosmological distances, and is strongly disfavored by observational data. Alternatively, Λ in these scenarios might account for the diffuse dark matter component of the cosmological energy density.", "A model for holographic dark energy is proposed, following the idea that the short distance cut-off is related to the infrared cut-off. We assume that the infrared cut-off relevant to the dark energy is the size of the event horizon. With the input Omega(Lambda) = 0.73, we predict the equation of state of the dark energy at the present time be characterized by w = -0.90. The cosmic coincidence problem can be resolved by inflation in our scenario, provided we assume the minimal number of e-foldings. (C) 2004 Elsevier B.V. All rights reserved.", "Here we consider a scenario in which dark energy is associated with the apparent area of a surface in the early universe. In order to resemble the cosmological constant at late times, this hypothetical reference scale should maintain an approximately constant physical size during an asymptotically de Sitter expansion. This is found to arise when the particle horizon—anticipated to be significantly greater than the Hubble length—is approaching the antipode of a closed universe. Depending on the constant of proportionality, either the ensuing inflationary period prevents the particle horizon from vanishing, or it may lead to a sequence of 'big rips'.", "" ] }
math0703675
2950427145
Let @math be a product of @math independent, identically distributed random matrices @math , with the properties that @math is bounded in @math , and that @math has a deterministic (constant) invariant vector. Assuming that the probability of @math having only the simple eigenvalue 1 on the unit circle does not vanish, we show that @math is the sum of a fluctuating and a decaying process. The latter converges to zero almost surely, exponentially fast as @math . The fluctuating part converges in Cesaro mean to a limit that is characterized explicitly by the deterministic invariant vector and the spectral data of @math associated to 1. No additional assumptions are made on the matrices @math ; they may have complex entries and not be invertible. We apply our general results to two classes of dynamical systems: inhomogeneous Markov chains with random transition matrices (stochastic matrices), and random repeated interaction quantum systems. In both cases, we prove ergodic theorems for the dynamics, and we obtain the form of the limit states.
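A quick numerical sketch illustrates the decaying-plus-fluctuating picture for the special case of inhomogeneous Markov chains: multiplying i.i.d. random stochastic matrices makes the rows of the product collapse onto a common row (the decaying part vanishes), while that common row itself depends on the realization of the matrices (the fluctuating part). The Dirichlet row distribution, the dimension, and the number of factors are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_stochastic_matrix(d):
    """Random transition matrix: each row drawn from a Dirichlet distribution,
    so the all-ones vector is a deterministic (constant) invariant right vector."""
    return rng.dirichlet(np.ones(d), size=d)

d, n = 4, 200
prod = np.eye(d)
for _ in range(n):
    prod = prod @ random_stochastic_matrix(d)

# The rows of the product become numerically identical: the 'decaying' part of
# the process vanishes, while the common row itself is random, since it depends
# on the realization of the matrices (the 'fluctuating' part).
print(np.max(prod.max(axis=0) - prod.min(axis=0)))   # spread across rows, close to 0
print(prod[0])                                        # the (random) limiting row
```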
Obtaining information on the fluctuations of the process around its limiting value is certainly an interesting and important issue. It amounts to obtaining information about the law of the vector-valued random variable @math of Theorem , which is quite difficult in general. There are recent partial results about aspects of the law of such random vectors in the case where they are obtained by means of matrices belonging to some subgroups of @math satisfying certain irreducibility conditions, see e.g. @cite_20 . However, these results do not apply to our situation.
{ "cite_N": [ "@cite_20" ], "mid": [ "168790500" ], "abstract": [ "Abstract We study the behavior at infinity of the tail of the stationary solution of a multidimensional linear auto-regressive process with random coefficients. We exhibit an extended class of multiplicative coefficients satisfying a condition of irreducibility and proximality that yield to a heavy tail behavior. To cite this article: B. de , C. R. Acad. Sci. Paris, Ser. I 339 (2004)." ] }
cs0703042
1981868151
Users of online dating sites are facing information overload that requires them to manually construct queries and browse huge amount of matching user profiles. This becomes even more problematic for multimedia profiles. Although matchmaking is frequently cited as a typical application for recommender systems, there is a surprising lack of work published in this area. In this paper we describe a recommender system we implemented and perform a quantitative comparison of two collaborative filtering (CF) and two global algorithms. Results show that collaborative filtering recommenders significantly outperform global algorithms that are currently used by dating sites. A blind experiment with real users also confirmed that users prefer CF based recommendations to global popularity recommendations. Recommender systems show a great potential for online dating where they could improve the value of the service to users and improve monetization of the service.
Recommender systems @cite_8 are a popular and successful way of tackling information overload. They have been popularized by applications such as the Amazon @cite_17 and Netflix recommenders (http://amazon.com, http://netflix.com). The most widely used recommender systems are based on collaborative filtering algorithms. One of the first collaborative filtering systems was Tapestry @cite_1 . Other notable CF systems include Jester @cite_18 , Ringo @cite_14 , MovieLens, and Launch.com.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_1", "@cite_17" ], "mid": [ "2117354486", "1608058569", "", "1966553486", "2159094788" ], "abstract": [ "Eigentaste is a collaborative filtering algorithm that uses i>universal queries to elicit real-valued user ratings on a common set of items and applies principal component analysis (PCA) to the resulting dense subset of the ratings matrix. PCA facilitates dimensionality reduction for offline clustering of users and rapid computation of recommendations. For a database of i>n users, standard nearest-neighbor techniques require i>O(i>n) processing time to compute recommendations, whereas Eigentaste requires i>O(1) (constant) time. We compare Eigentaste to alternative algorithms using data from i>Jester, an online joke recommending system. Jester has collected approximately 2,500,000 ratings from 57,000 users. We use the Normalized Mean Absolute Error (NMAE) measure to compare performance of different algorithms. In the Appendix we use Uniform and Normal distribution models to derive analytic estimates of NMAE when predictions are random. On the Jester dataset, Eigentaste computes recommendations two orders of magnitude faster with no loss of accuracy. Jester is online at: http: eigentaste.berkeley.edu", "Thesis (M. Eng)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.", "", "The Tapestry experimental mail system developed at the Xerox Palo Alto Research Center is predicated on the belief that information filtering can be more effective when humans are involved in the filtering process. Tapestry was designed to support both content-based filtering and collaborative filtering, which entails people collaborating to help each other perform filtering by recording their reactions to documents they read. The reactions are called annotations; they can be accessed by other people’s filters. Tapestry is intended to handle any incoming stream of electronic documents and serves both as a mail filter and repository; its components are the indexer, document store, annotation store, filterer, little box, remailer, appraiser and reader browser. Tapestry’s client server architecture, its various components, and the Tapestry query language are described.", "Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. There are three common approaches to solving the recommendation problem: traditional collaborative filtering, cluster models, and search-based methods. Here, we compare these methods with our algorithm, which we call item-to-item collaborative filtering. Unlike traditional collaborative filtering, our algorithm's online computation scales independently of the number of customers and number of items in the product catalog. Our algorithm produces recommendations in real-time, scales to massive data sets, and generates high quality recommendations." ] }
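As an aside on the collaborative-filtering family surveyed in the paragraph above: the core idea of user-based CF is to score an item for a user by looking at how similar users rated it. The Python sketch below is a minimal, generic illustration under invented data (the users, ratings, and neighbourhood size k are all made up); it is not the recommender evaluated in the cited paper.

```python
# Minimal user-based collaborative-filtering sketch (illustrative only).
import math

ratings = {  # user -> {item: rating}; toy data, not from any real system
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "d": 5},
    "u3": {"b": 5, "c": 1, "d": 2},
}

def cosine(u, v):
    """Cosine similarity computed over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (math.sqrt(sum(ratings[u][i] ** 2 for i in common))
           * math.sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item, k=2):
    """Similarity-weighted average of the k most similar users who rated the item."""
    neighbours = sorted(
        ((cosine(user, v), v) for v in ratings if v != user and item in ratings[v]),
        reverse=True,
    )[:k]
    norm = sum(abs(sim) for sim, _ in neighbours)
    return sum(sim * ratings[v][item] for sim, v in neighbours) / norm if norm else None

print(predict("u1", "d"))  # blend of u2's and u3's ratings for item "d" (~3.7)
```

A global-popularity baseline, by contrast, would simply rank items by their average rating across all users, ignoring who is asking.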
cs0703074
2949893529
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time-errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
Instead of relying on the structure of C types, we chose to represent the memory as flat sequences of bytes. This allows shifting to a representation of pointers as pairs: a symbolic base and a numeric offset. This is a common practice---it is used, for instance, by Wilson and Lam in @cite_8 . It also suggests combining the pointer and value analyses into a single one, with offsets treated as integer variables. There is experimental evidence @cite_25 that this is more precise than a pointer analysis followed by a value analysis. Some authors rely on non-relational abstractions of offsets--- e.g. , a reduced product of intervals and congruences @cite_21 , or intervals together with byte-size factors @cite_24 . Others, such as @cite_18 @cite_12 or ourselves, permit more precise, relational offset abstractions.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_24", "@cite_25", "@cite_12" ], "mid": [ "1840408880", "2087612811", "1582456956", "", "153574562", "2107742417" ], "abstract": [ "In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.", "This paper proposes an efficient technique for context-sensitive pointer analysis that is applicable to real C programs. For efficiency, we summarize the effects of procedures using partial transfer functions . A partial transfer function (PTF) describes the behavior of a procedure assuming that certain alias relationships hold when it is called. We can reuse a PTF in many calling contexts as long as the aliases among the inputs to the procedure are the same. Our empirical results demonstrate that this technique is successful—a single PTF per procedure is usually sufficient to obtain completely context-sensitive results. Because many C programs use features such as type casts and pointer arithmetic to circumvent the high-level type system, our algorithm is based on a low-level representation of memory locations that safely handles all the features of C. We have implemented our algorithm in the SUIF compiler system and we show that it runs efficiently for a set of C benchmarks.", "This paper concerns static-analysis algorithms for analyzing x86 executables. The aim of the work is to recover intermediate representations that are similar to those that can be created for a program written in a high-level language. Our goal is to perform this task for programs such as plugins, mobile code, worms, and virus-infected code. For such programs, symbol-table and debugging information is either entirely absent, or cannot be relied upon if present; hence, the technique described in the paper makes no use of symbol-table debugging information. Instead, an analysis is carried out to recover information about the contents of memory locations and how they are manipulated by the executable.", "", "", "This article presents a novel framework for the symbolic bounds analysis of pointers, array indices, and accessed memory regions. Our framework formulates each analysis problem as a system of inequality constraints between symbolic bound polynomials. It then reduces the constraint system to a linear program. The solution to the linear program provides symbolic lower and upper bounds for the values of pointer and array index variables and for the regions of memory that each statement and procedure accesses. This approach eliminates fundamental problems associated with applying standard fixed-point approaches to symbolic analysis problems. 
Experimental results from our implemented compiler show that the analysis can solve several important problems, including static race detection, automatic parallelization, static detection of array bounds violations, elimination of array bounds checks, and reduction of the number of bits used to store computed values." ] }
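A small sketch of the "symbolic base + numeric offset" pointer representation discussed in the related-work paragraph above may help. This is only an illustration of the general idea behind such analyses, not the Astrée or Wilson–Lam implementation; the class and method names are invented.

```python
# Illustrative base-plus-offset abstraction of a pointer (names invented).
# Because the base is symbolic and the offset is a plain integer, pointer
# arithmetic reduces to integer arithmetic on the offset, which is what
# lets a numerical abstract domain track pointers like other variables.
from dataclasses import dataclass

@dataclass(frozen=True)
class AbstractPointer:
    base: str     # symbolic memory block, e.g. a variable or allocation site
    offset: int   # byte offset into that block

    def add(self, delta_bytes: int) -> "AbstractPointer":
        """Models p + k: the base stays fixed, only the offset moves."""
        return AbstractPointer(self.base, self.offset + delta_bytes)

    def in_bounds(self, block_size: int) -> bool:
        """The kind of safety check an analyzer could emit at a dereference."""
        return 0 <= self.offset < block_size

p = AbstractPointer(base="buf", offset=0).add(3)   # roughly: char *p = buf + 3;
print(p, p.in_bounds(8))  # AbstractPointer(base='buf', offset=3) True
```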
cs0703074
2949893529
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time-errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
Finally, note that most articles--- @cite_18 being a notable exception---directly leap from a memory model informally described in English to the formal description of a static analysis. Following the Abstract Interpretation framework, we give a full mathematical description of the memory model before presenting computable abstractions proved correct with respect to the model.
{ "cite_N": [ "@cite_18" ], "mid": [ "1840408880" ], "abstract": [ "In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications." ] }
cs0703083
1561850421
Search engines provide cached copies of indexed content so users will have something to "click on" if the remote resource is temporarily or permanently unavailable. Depending on their proprietary caching strategies, search engines will purge their indexes and caches of resources that exceed a threshold of unavailability. Although search engine caches are provided only as an aid to the interactive user, we are interested in building reliable preservation services from the aggregate of these limited caching services. But first, we must understand the contents of search engine caches. In this paper, we have examined the cached contents of Ask, Google, MSN and Yahoo to profile such things as overlap between index and cache, size, MIME type and "staleness" of the cached resources. We also examined the overlap of the various caches with the holdings of the Internet Archive.
Besides a study by @cite_13 , which examined the freshness of 38 German web pages in SE caches, we are unaware of any research that has characterized SE caches or attempted to measure the overlap of SE caches with the IA.
{ "cite_N": [ "@cite_13" ], "mid": [ "2141138294" ], "abstract": [ "This study measures the frequency with which search engines update their indices. Therefore, 38 websites that are updated on a daily basis were analysed within a time-span of six weeks. The analysed search engines were Google, Yahoo and MSN. We find that Google performs best overall with the most pages updated on a daily basis, but only MSN is able to update all pages within a time-span of less than 20 days. Both other engines have outliers that are older. In terms of indexing patterns, we find different approaches at the different engines. While MSN shows clear update patterns, Google shows some outliers and the update process of the Yahoo index seems to be quite chaotic. Implications are that the quality of different search engine indices varies and more than one engine should be used when searching for current content." ] }
cs0703133
2952084784
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy , which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg , it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Our approximation scheme (Theorem and Theorem ) shows a contrast between the games that we study and two-player @math -action games, for which the corresponding problems are usually intractable. For two-player @math -action games, the problem of finding Nash equilibria with special properties is typically NP-hard. In particular, this is the case for Nash equilibria that maximize the social welfare @cite_15 @cite_5 . Moreover, it is likely to be intractable even to approximate such equilibria: Chen, Deng and Teng @cite_9 show that there exists some @math , inverse polynomial in @math , for which computing an @math -Nash equilibrium in two-player games with @math actions per player is PPAD-complete.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_9" ], "mid": [ "1807884544", "2021939019", "" ], "abstract": [ "Noncooperative game theory provides a normative framework for analyzing strategic interactions. However, for the toolbox to be operational, the solutions it defines will have to be computed. In this paper, we provide a single reduction that 1) demonstrates NP-hardness of determining whether Nash equilibria with certain natural properties exist, and 2) demonstrates the NP-hardness of counting Nash equilibria (or connected sets of Nash equilibria). We also show that 3) determining whether a purestrategy Bayes-Nash equilibrium exists is NP-hard, and that 4) determining whether a pure-strategy Nash equilibrium exists in a stochastic (Markov) game is PSP ACE-hard even if the game is invisible (this remains NP-hard if the game is finite). All of our hardness results hold even if there are only two players and the game is symmetric.", "This paper deals with the complexity of computing Nash and correlated equilibria for a finite game in normal form. We examine the problems of checking the existence of equilibria satisfying a certain condition, such as “Given a game G and a number r, is there a Nash (correlated) equilibrium of G in which all players obtain an expected payoff of at least r?” or “Is there a unique Nash (correlated) equilibrium in G?” etc. We show that such problems are typically “hard” (NP-hard) for Nash equilibria but “easy” (polynomial) for correlated equilibria.", "" ] }
cs0703133
2952084784
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy , which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg , it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Lipton and Markakis @cite_14 study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to compute them. Note that these algorithms are not polynomial-time in general. The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily involve mixed strategies of high algebraic degree.
{ "cite_N": [ "@cite_14" ], "mid": [ "1576554202" ], "abstract": [ "We consider the problem of computing a Nash equilibrium in multiple-player games. It is known that there exist games, in which all the equilibria have irrational entries in their probability distributions [19]. This suggests that either we should look for symbolic representations of equilibria or we should focus on computing approximate equilibria. We show that every finite game has an equilibrium such that all the entries in the probability distributions are algebraic numbers and hence can be finitely represented. We also propose an algorithm which computes an approximate equilibrium in the following sense: the strategies output by the algorithm are close with respect to l ∞ -norm to those of an exact Nash equilibrium and also the players have only a negligible incentive to deviate to another strategy. The running time of the algorithm is exponential in the number of strategies and polynomial in the digits of accuracy. We obtain similar results for approximating market equilibria in the neoclassical exchange model under certain assumptions." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Dynamic scheduling of parallel applications has been an active area of research for several years. Much of the early work targets shared-memory architectures, although several recent efforts focus on grid environments. @cite_1 propose an algorithm for dynamic scheduling on parallel machines under a PRAM programming model. @cite_7 propose a dynamic processor allocation policy for shared-memory multiprocessors and study space-sharing vs. time-sharing in this context. @cite_3 present a scheduling policy for shared-memory systems that allocates processors based on the performance of the application.
{ "cite_N": [ "@cite_3", "@cite_1", "@cite_7" ], "mid": [ "", "2127960094", "2094587335" ], "abstract": [ "", "We study the problem of on-line job-scheduling on parallel machines with different network topologies. An on-line scheduling algorithm schedules a collection of parallel jobs with known resource requirements but unknown running times on a parallel machine. We give an O(log log N)-competitive algorithm for on-line scheduling on a two-dimensional mesh of N processors and we prove a matching lower bound of Ω(log log N) on the competitive ratio. Furthermore, we show tight constant bounds of 2 for PRAMs and hypercubes, and present a 2.5-competitive algorithm for lines. We also generalize our two-dimensional mesh result to higher dimensions. Surprisingly, our algorithms become less and less greedy as the geometric structure of the network topology becomes more complicated. The proof of our lower bound for the two- dimensional mesh actually shows that no greedy-like algorithm can perform well.", "We propose and evaluate empirically the performance of a dynamic processor-scheduling policy for multiprogrammed shared-memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. The policy is suitable for implementation in production systems in that: —It interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention. —It deals with thread blocking due to user I O and page faults. —It ensures fairness in delivering resources to jobs. —Its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems. It provides good performance to very short, sequential (e.g., interactive) requests. We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, and the importance of cooperation between the kernel and the application in reallocating processors between jobs. We have also compared the policies according to other criteia important in real implementations, in particular, fairness and respone time to short, sequential requests. We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Moreira and Naik @cite_17 propose a technique for dynamic resource management on distributed systems using a checkpointing framework called the Distributed Resource Management System (DRMS). The framework supports jobs that can change their active number of tasks during program execution, map the new set of tasks to execution units, and redistribute data among the new set of tasks. DRMS does not make reconfiguration decisions based on application performance, however, and it uses file-based checkpointing for data redistribution. More recent work by Kale @cite_2 achieves reconfiguration of MPI-based message-passing programs. However, the reconfiguration is achieved using Adaptive MPI (AMPI), which in turn relies on Charm++ @cite_5 for the processor virtualization layer, and requires that the application be run with many more threads than processors.
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_17" ], "mid": [ "2079577430", "2140085532", "2089818961" ], "abstract": [ "We describe Charm++, an object oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects, all of which are more difficult problems in a parallel context. Charm++ provides specific modes for sharing information between parallel objects. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.", "Malleable jobs are parallel programs that can change the number of processors on which they are executing at run time in response to an external command. One of the advantages of such jobs is that a job scheduler for malleable jobs can provide improved system utilization and average response time over a scheduler for traditional jobs. In this paper, we present a programming system for creating malleable jobs that is more general than other current malleable systems. In particular, it is not limited to the master-worker paradigm or the Fortran SPMD programming model, but can also support general purpose parallel programs including those written in MPI and Charm++, and has built-in migration and load-balancing, among other features.", "Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
@cite_11 describe an application-aware job scheduler that dynamically controls resource allocation among concurrently executing jobs. The scheduler implements policies for adding or removing resources from jobs based on performance predictions from the Prophet system @cite_18 . All processors send data to the root node for data redistribution. The authors present simulated results based on supercomputer workload traces. Cirne and Berman @cite_9 use the term moldable to describe jobs that can adapt to different processor sizes. In their work, the application scheduler AppLeS selects the job with the least estimated turn-around time out of a set of moldable jobs, based on the current state of the parallel computer. Possible processor configurations are specified by the user, and the number of processors assigned to a job does not change after job-initiation time.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_11" ], "mid": [ "2000300079", "", "2084814071" ], "abstract": [ "Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated.", "", "This paper presents a new paradigm for parallel job scheduling called integrated scheduling or iScheduling. The iScheduler is an application-aware job scheduler as opposed to a general-purpose system scheduler. It dynamically controls resource allocation among a set of competing applications, but unlike a traditional job scheduler, it can interact directly with an application during execution to optimize resource allocation. An iScheduler may add or remove resources from a running application to improve the performance of other applications. Such fluid resource management can support both improved application and system performance. We propose a framework for building iSchedulers and evaluate the concept on several workload traces obtained both from supercomputer centers and from a set of real parallel jobs. The results indicate that iScheduling can improve both waiting time and overall turnaround time substantially for these workload classes, outperforming standard policies such as backfilling and moldable job scheduling." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Vadhiyar and Dongarra @cite_13 @cite_8 describe a user-level checkpointing framework called Stop Restart Software (SRS) for developing malleable and migratable applications for distributed and Grid computing systems. The framework implements a rescheduler which monitors application progress and can migrate the application to a better resource. Data redistribution is done via user-level file-based checkpointing.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2014594876", "2129222772" ], "abstract": [ "The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to SRS library, were able to achieve reconfigurability incurring about 15-35 overhead.", "At least three factors in the existing migration frameworks make them less suitable in Grid systems especially when the goal is to improve the response times for individual applications. These factors are the separate policies for suspension and migration of executing applications employed by these migration frameworks, the use of pre-defined conditions for suspension and migration and the lack of knowledge of the remaining execution time of the applications. In this paper we describe a migration framework for performance oriented Grid systems that implements tightly coupled policies for both suspension and migration of executing applications and takes into account both system load and application characteristics. The main goal of our migration framework is to improve the response times for individual applications. We also present some results that demonstrate the usefulness of our migration framework." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
The framework described in this paper has several aspects that differentiate it from the above work. ReSHAPE is designed for applications running on distributed-memory clusters. As in @cite_9 @cite_11 , applications must be moldable in order to take advantage of ReSHAPE; but in our case the user is not required to specify the legal partition sizes ahead of time. Instead, ReSHAPE can dynamically calculate partition sizes based on the run-time performance of the application. Our framework uses neither file-based checkpointing nor a single node for redistribution. Instead, we use an efficient data redistribution algorithm that remaps data on-the-fly using message passing over the high-performance cluster interconnect. Finally, we evaluate our system using experimental data from a real cluster, allowing us to investigate potential benefits both for individual job turn-around time and for overall system utilization and throughput.
{ "cite_N": [ "@cite_9", "@cite_11" ], "mid": [ "2000300079", "2084814071" ], "abstract": [ "Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated.", "This paper presents a new paradigm for parallel job scheduling called integrated scheduling or iScheduling. The iScheduler is an application-aware job scheduler as opposed to a general-purpose system scheduler. It dynamically controls resource allocation among a set of competing applications, but unlike a traditional job scheduler, it can interact directly with an application during execution to optimize resource allocation. An iScheduler may add or remove resources from a running application to improve the performance of other applications. Such fluid resource management can support both improved application and system performance. We propose a framework for building iSchedulers and evaluate the concept on several workload traces obtained both from supercomputer centers and from a set of real parallel jobs. The results indicate that iScheduling can improve both waiting time and overall turnaround time substantially for these workload classes, outperforming standard policies such as backfilling and moldable job scheduling." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Marbach, Mihatsch and Tsitsiklis @cite_20 have applied an actor-critic (value-search) algorithm to address resource allocation within communication networks, tackling both routing and call admission control. They adopt a decompositional approach, representing the network as consisting of link processes, each with its own differential reward. Unfortunately, the empirical results, even on small networks of @math and @math nodes, show little advantage over heuristic techniques.
{ "cite_N": [ "@cite_20" ], "mid": [ "2125132331" ], "abstract": [ "In integrated service communication networks, an important problem is to exercise call admission control and routing so as to optimally use the network resources. This problem is naturally formulated as a dynamic programming problem, which, however, is too complex to be solved exactly. We use methods of reinforcement learning (RL), together with a decomposition approach, to find call admission control and routing policies. The performance of our policy for a network with approximately 1045 different feature configurations is compared with a commonly used heuristic policy." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Carlström @cite_5 introduces another RL strategy based on decomposition, called predictive gain scheduling. The control problem of admission control is decomposed into a time-series prediction of near-future call arrival rates and precomputation of control policies for Poisson call arrival processes. This approach results in faster learning without performance loss. The online convergence rate increases 50-fold on a simulated link with a capacity of @math units/sec.
{ "cite_N": [ "@cite_5" ], "mid": [ "1593397173" ], "abstract": [ "When a user requests. a connection to another user or a computer in a communications network, a routing algorithm selects a path for transferring the resulting data stream. If all suitable paths ar ..." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Generally speaking, value-search algorithms have been more extensively investigated than policy-search ones in the domain of communications. Value-search (Q-learning) algorithms have produced promising results. Boyan and Littman's @cite_9 algorithm, Q-routing, proves superior to non-adaptive techniques based on shortest paths, and robust with respect to dynamic variations in the simulation, on a variety of network topologies, including an irregular @math grid and a 116-node LATA phone network. It regulates the trade-off between the number of nodes a packet has to traverse and the possibility of congestion.
{ "cite_N": [ "@cite_9" ], "mid": [ "2156666755" ], "abstract": [ "This paper describes the Q-routing algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used by each node to keep accurate statistics on which routing decisions lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route efficiently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically. The paper concludes with a discussion of the tradeoff between discovering shortcuts and maintaining stable policies." ] }
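To make the Q-routing description in the last related-work paragraph more concrete, the sketch below shows the core update: after node x forwards a packet bound for dest to neighbour y, it nudges its estimate Q_x(dest, y) toward (time the packet spent in x's queue) + (transmission time) + (y's own best estimate of the remaining trip). The toy topology, timings, and learning rate are invented for illustration and are not taken from the cited experiments.

```python
# Minimal sketch of the Q-routing delivery-time update (illustrative only).
Q = {}  # Q[(node, dest, neighbour)] -> estimated remaining delivery time

def q(node, dest, neighbour):
    return Q.get((node, dest, neighbour), 0.0)

def best_estimate(node, dest, neighbours):
    """min_z Q_node(dest, z): node's best current guess of time remaining to dest."""
    return min(q(node, dest, z) for z in neighbours[node]) if neighbours[node] else 0.0

def update(x, dest, y, queue_time, tx_time, neighbours, eta=0.5):
    """Move Q_x(dest, y) toward queue_time + tx_time + y's best remaining estimate."""
    target = queue_time + tx_time + best_estimate(y, dest, neighbours)
    Q[(x, dest, y)] = q(x, dest, y) + eta * (target - q(x, dest, y))

# Tiny line network A - B - C, packets destined for C:
neighbours = {"A": ["B"], "B": ["A", "C"], "C": []}
Q[("B", "C", "A")] = 5.0  # pretend B already learned that going back via A is slow
update("B", "C", "C", queue_time=0.2, tx_time=1.0, neighbours=neighbours)  # B -> C hop
update("A", "C", "B", queue_time=0.1, tx_time=1.0, neighbours=neighbours)  # A -> B hop
print(q("A", "C", "B"))  # 0.85: halfway toward 0.1 + 1.0 + Q_B(C, C) = 1.7
```

The routing decision itself is then simply to forward each packet to the neighbour with the smallest current estimate, which is what yields the congestion-aware behaviour described above.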