id: stringlengths (1 to 5)
document_id: stringlengths (1 to 5)
text_1: stringlengths (78 to 2.56k)
text_2: stringlengths (95 to 23.3k)
text_1_name: stringclasses (1 value)
text_2_name: stringclasses (1 value)
29601
29600
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
As collaboration over the Internet becomes an everyday affair, it is increasingly important to provide a high quality of interactivity. Distributed applications can replicate collaborative objects at every site for the purpose of achieving high interactivity. Replication, however, has a fatal weakness: it is difficult to maintain consistency among replicas. This paper introduces operation commutativity as a key principle for designing operations so that distributed replicas remain consistent. In addition, we suggest effective schemes that make operations commutative using the relations of objects and operations. Finally, we apply our approaches to some simple replicated abstract data types, and achieve their consistency without serialization and locking.
Abstract of query paper
Cite abstracts
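A minimal Python sketch (illustrative only; this is not the shared edit buffer or any data type from the abstracts above) of why commuting operations give replica convergence: increments commute, so a replica that receives the same update operations in any delivery order reaches the same state.

```python
import itertools
import random

# A replicated counter whose only update is increment(amount).
# Increments commute (integer addition is commutative and associative),
# so replicas that receive the same operations in any order converge.

class CounterReplica:
    def __init__(self):
        self.value = 0

    def apply(self, op):
        # op is ("inc", amount); concurrent "inc" operations commute.
        kind, amount = op
        assert kind == "inc"
        self.value += amount

ops = [("inc", random.randint(1, 5)) for _ in range(6)]

states = set()
for order in itertools.permutations(ops):
    replica = CounterReplica()
    for op in order:
        replica.apply(op)
    states.add(replica.value)

# Every delivery order yields the same state -> replicas converge.
print(states)
assert len(states) == 1
```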
29602
29601
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.
Abstract of query paper
Cite abstracts
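A minimal sketch of the logical-clock mechanism described in the cited abstract, under the usual simplifications (two in-memory processes, one message): each process increments its clock on local and send events, takes max(local, received) + 1 on receive, and events are then totally ordered by the (timestamp, process id) pair.

```python
# Lamport logical clocks: the (clock, pid) pairs define one total order of
# events that is consistent with the happened-before partial order.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0
        self.events = []          # (timestamp, pid, label)

    def local_event(self, label):
        self.clock += 1
        self.events.append((self.clock, self.pid, label))

    def send(self, label):
        self.clock += 1
        self.events.append((self.clock, self.pid, "send " + label))
        return self.clock         # timestamp piggybacked on the message

    def receive(self, msg_ts, label):
        self.clock = max(self.clock, msg_ts) + 1
        self.events.append((self.clock, self.pid, "recv " + label))

p, q = Process("P"), Process("Q")
p.local_event("a")
ts = p.send("m")
q.local_event("b")
q.receive(ts, "m")

for event in sorted(p.events + q.events):   # total order by (clock, pid)
    print(event)
```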
29603
29602
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order. Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems. The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.
Abstract of query paper
Cite abstracts
29604
29603
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Many distributed systems for wide-area networks can be built conveniently, and operate efficiently and correctly, using a weak consistency group communication mechanism. This mechanism organizes a set of principals into a single logical entity, and provides methods to multicast messages to the members. A weak consistency distributed system allows the principals in the group to differ on the value of shared state at any given instant, as long as they will eventually converge to a single, consistent value. A group containing many principals and using weak consistency can provide the reliability, performance, and scalability necessary for wide-area systems. I have developed a framework for constructing group communication systems, for classifying existing distributed system tools, and for constructing and reasoning about a particular group communication model. It has four components: message delivery, message ordering, group membership, and the application. Each component may have a different implementation, so that the group mechanism can be tailored to application requirements. The framework supports a new message delivery protocol, called timestamped anti-entropy, which provides reliable, eventual message delivery; is efficient; and tolerates most transient processor and network failures. It can be combined with message ordering implementations that provide ordering guarantees ranging from unordered to total, causal delivery. A new group membership protocol completes the set, providing temporarily inconsistent membership views resilient to up to k simultaneous principal failures. The Refdbms distributed bibliographic database system, which has been constructed using this framework, is used as an example. Refdbms databases can be replicated on many different sites, using the group communication system described here.
Abstract of query paper
Cite abstracts
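A toy pairwise anti-entropy exchange in the spirit of the weak-consistency abstract above; this is not Golding's timestamped anti-entropy protocol or Refdbms, just an illustration that repeated pairwise exchanges of timestamped entries eventually deliver every message to every replica.

```python
import random

# Each replica stores messages keyed by (origin, sequence number). During an
# anti-entropy session two replicas exchange the entries the other is missing;
# repeated random pairings spread every message everywhere (eventual delivery).

class Replica:
    def __init__(self, name):
        self.name = name
        self.seq = 0
        self.store = {}                       # (origin, seq) -> payload

    def publish(self, payload):
        self.seq += 1
        self.store[(self.name, self.seq)] = payload

    def anti_entropy(self, other):
        for key, payload in list(self.store.items()):
            other.store.setdefault(key, payload)
        for key, payload in list(other.store.items()):
            self.store.setdefault(key, payload)

replicas = [Replica(n) for n in "ABCD"]
for r in replicas:
    r.publish("update from " + r.name)

rounds = 0
while len({frozenset(r.store) for r in replicas}) > 1:
    a, b = random.sample(replicas, 2)
    a.anti_entropy(b)
    rounds += 1
print("converged after", rounds, "sessions")
```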
29605
29604
In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin^ω which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin^ω makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin^ω and we lift it to two well-known abstractions of ShLin^ω. Namely, to the classical Sharing × Lin abstract domain and to the more precise ShLin^2 abstract domain by Andy King. In the case of
Abstract This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed and issues of correctness and efficiency are discussed in the light of this model. Two conditions, “strict” and “nonstrict” independence, are defined and then proved sufficient to ensure correctness and efficiency of parallel execution: If goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD resolution), while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slowdown will occur. For strict independence, the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: Compile-time conditions to efficiently check goal independence at run time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
Abstract of query paper
Cite abstracts
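A toy computation of concrete sharing groups for a given substitution (terms encoded as nested tuples; all names illustrative). This is the concrete property that domains such as Sharing × Lin abstract over; it is not the ShLin^ω machinery of the query paper.

```python
# Variables are strings, compound terms are tuples such as ("f", "U", ("g", "V")).
# Two program variables share if their bindings contain a common free variable.

def vars_of(term):
    if isinstance(term, str):
        return {term}
    return set().union(*(vars_of(arg) for arg in term[1:])) if len(term) > 1 else set()

def sharing_groups(substitution, program_vars):
    """For each runtime variable U, collect the set of program variables whose
    binding mentions U; the resulting sets are the sharing groups."""
    groups = set()
    runtime_vars = set().union(*(vars_of(substitution.get(x, x)) for x in program_vars))
    for u in runtime_vars:
        group = frozenset(x for x in program_vars if u in vars_of(substitution.get(x, x)))
        if group:
            groups.add(group)
    return groups

theta = {"X": ("f", "U", "V"), "Y": ("g", "V"), "Z": "W"}
print(sharing_groups(theta, ["X", "Y", "Z"]))
# {frozenset({'X'}), frozenset({'X', 'Y'}), frozenset({'Z'})}  -> X and Y share via V
```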
29606
29605
In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin^ω which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin^ω makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin^ω and we lift it to two well-known abstractions of ShLin^ω. Namely, to the classical Sharing × Lin abstract domain and to the more precise ShLin^2 abstract domain by Andy King. In the case of
Abstract Sharing information is useful in specialising, optimising and parallelising logic programs and thus sharing analysis is an important topic of both abstract interpretation and logic programming. Sharing analyses infer which pairs of program variables can never be bound to terms that contain a common variable. We generalise a classic pair-sharing analysis from Herbrand unification to trace sharing over rational tree constraints. This is useful for reasoning about programs written in SICStus and Prolog-III because these languages use rational tree unification as the default equation solver.
Abstract of query paper
Cite abstracts
29607
29606
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^(1/k)) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.
Abstract of query paper
Cite abstracts
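A purely local and trivially safe assignment for a small max-min LP instance, given only as a point of reference; this is not the approximation algorithm of the query paper. Each agent v looks only at the packing constraints it occurs in and picks x_v small enough that every such constraint is satisfied regardless of the other agents; the instance data below is made up.

```python
# Constraints: sum_v a[i][v] * x[v] <= 1 for each i; objective: min_k sum_v c[k][v] * x[v].
# Agent v only knows the constraints i in I_v it occurs in, and sets
# x[v] = min over those i of 1 / (|V_i| * a[i][v]); then each constraint i sums
# to at most |V_i| * (1 / |V_i|) = 1, so the solution is feasible with no communication.

a = {  # packing constraints, i -> {v: coefficient}
    "i1": {"v1": 1.0, "v2": 1.0},
    "i2": {"v2": 0.5, "v3": 1.0},
}
c = {  # objectives, k -> {v: coefficient}
    "k1": {"v1": 1.0, "v3": 1.0},
    "k2": {"v2": 1.0},
}

agents = {v for row in a.values() for v in row}
x = {}
for v in agents:
    bounds = [1.0 / (len(row) * coef) for row in a.values()
              for u, coef in row.items() if u == v]
    x[v] = min(bounds)

assert all(sum(coef * x[v] for v, coef in row.items()) <= 1.0 + 1e-9 for row in a.values())
objective = min(sum(coef * x.get(v, 0.0) for v, coef in row.items()) for row in c.values())
print(x, "min_k value:", objective)
```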
29608
29607
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^(1/k)-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^(1/k)) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood. Flow control in high speed networks requires distributed routers to make fast decisions based only on local information in allocating bandwidth to connections. While most previous work on this problem focuses on achieving local objective functions, in many cases it may be necessary to achieve global objectives such as maximizing the total flow. This problem illustrates one of the basic aspects of distributed computing: achieving global objectives using local information. Papadimitriou and Yannakakis (1993) initiated the study of such problems in a framework of solving positive linear programs by distributed agents. We take their model further, by allowing the distributed agents to acquire more information over time. We therefore turn attention to the tradeoff between the running time and the quality of the solution to the linear program. We give a distributed algorithm that obtains a (1+ε) approximation to the global optimum solution and runs in a polylogarithmic number of distributed rounds. While comparable in running time, our results exhibit a significant improvement on the logarithmic ratio previously obtained by Awerbuch and Azar (1994). Our algorithm, which draws from techniques developed by Luby and Nisan (1993) is considerably simpler than previous approximation algorithms for positive linear programs, and thus may have practical value in both centralized and distributed settings.
Abstract of query paper
Cite abstracts
29609
29608
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ^(2/k) log Δ · |DS_OPT|) in O(k^2) rounds where each node has to send O(k^2 Δ) messages of size O(log Δ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds. Many large-scale networks such as ad hoc and sensor networks, peer-to-peer networks, or the Internet have the property that the number of independent nodes does not grow arbitrarily when looking at neighborhoods of increasing size. Due to this bounded "volume growth," one could expect that distributed algorithms are able to solve many problems more efficiently than on general graphs. The goal of this paper is to help understanding the distributed complexity of problems on "bounded growth" graphs. We show that on the widely used unit disk graph, covering and packing linear programs can be approximated by constant factors in constant time. For a more general network model which is based on the assumption that nodes are in a metric space of constant doubling dimension, we show that in O(log* n) rounds it is possible to construct a (O(1), O(1))-network decomposition. This results in asymptotically optimal O(log* n) time algorithms for many important problems.
Abstract of query paper
Cite abstracts
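For contrast with the distributed LP-relaxation algorithms above, a centralized greedy baseline for minimum dominating set (the classical ln Δ style approximation); the example graph is made up.

```python
# Greedy dominating set: repeatedly pick the vertex that covers the most
# not-yet-dominated vertices (itself plus its neighbours). This achieves an
# O(log Delta) approximation but is inherently sequential, unlike the
# constant-round distributed LP-based algorithms described above.

graph = {               # adjacency lists of a small example graph
    1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5],
    4: [2, 6], 5: [3, 6], 6: [4, 5],
}

uncovered = set(graph)
dominating = set()
while uncovered:
    best = max(graph, key=lambda v: len(({v} | set(graph[v])) & uncovered))
    dominating.add(best)
    uncovered -= {best} | set(graph[best])

print("dominating set:", dominating)
assert all(v in dominating or any(u in dominating for u in graph[v]) for v in graph)
```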
29610
29609
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
In this paper, we study distributed approximation algorithms for fault-tolerant clustering in wireless ad hoc and sensor networks. A k-fold dominating set of a graph G = (V,E) is a subset S of V such that every node v ∈ V \ S has at least k neighbors in S. We study the problem in two network models. In general graphs, for arbitrary parameter t, we propose a distributed algorithm that runs in time O(t^2) and achieves an approximation ratio of O(tΔ^(2/t) log Δ), where n and Δ denote the number of nodes in the network and the maximal degree, respectively. When the network is modeled as a unit disk graph, we give a probabilistic algorithm that runs in time O(log log n) and achieves an O(1) approximation in expectation. Both algorithms require only small messages of size O(log n) bits.
Abstract of query paper
Cite abstracts
29611
29610
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v α_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, and the support sets V_i = {v : α_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : α_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
We study fractional scheduling problems in sensor networks, in particular, sleep scheduling (generalisation of fractional domatic partition) and activity scheduling (generalisation of fractional graph colouring). The problems are hard to solve in general even in a centralised setting; however, we show that there are practically relevant families of graphs where these problems admit a local distributed approximation algorithm; in a local algorithm each node utilises information from its constant-size neighbourhood only. Our algorithm does not need the spatial coordinates of the nodes; it suffices that a subset of nodes is designated as markers during network deployment. Our algorithm can be applied in any marked graph satisfying certain bounds on the marker density; if the bounds are met, guaranteed near-optimal solutions can be found in constant time, space and communication per node.We also show that auxiliary information is necessary--no local algorithm can achieve a satisfactory approximation guarantee on unmarked graphs. Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds. In this paper, we review a recently developed class of algorithms that solve global problems in unit distance wireless networks by means of local algorithms. A local algorithm is one in which any node of a network only has information on nodes at distance at most k from itself, for a constant k. For example, given a unit distance wireless network N, we want to obtain a planar subnetwork of N by means of an algorithm in which all nodes can communicate only with their neighbors in N, perform some operations, and then halt. We review algorithms for obtaining planar subnetworks, approximations to minimum weight spanning trees, Delaunay triangulations, and relative neighbor graphs. Given a unit distance wireless network N, we present new local algorithms to solve the following problems:1.Calculate small dominating sets (not necessarily connected) of N. 2.Extract a bounded degree planar subgraph H of N and obtain a proper edge coloring of H with at most 12 colors. The second of these algorithms can be used in the channel assignment problem. This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. 
• In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math . The purpose of this paper is a study of computation that can be done locally in a distributed network, where "locally" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs). In this paper, we study distributed approximation algorithms for fault-tolerant clustering in wireless ad hoc and sensor networks. A k-fold dominating set of a graph G = (V,E) is a subset S of V such that every node v ∈ V \ S has at least k neighbors in S. We study the problem in two network models. In general graphs, for arbitrary parameter t, we propose a distributed algorithm that runs in time O(t^2) and achieves an approximation ratio of O(tΔ^(2/t) log Δ), where n and Δ denote the number of nodes in the network and the maximal degree, respectively. When the network is modeled as a unit disk graph, we give a probabilistic algorithm that runs in time O(log log n) and achieves an O(1) approximation in expectation. Both algorithms require only small messages of size O(log n) bits.
Abstract of query paper
Cite abstracts
29612
29611
We say that a graph G=(V,E) on n vertices is a β-expander for some constant β>0 if every U ⊆ V of cardinality |U| ≤ n/2 satisfies |N_G(U)| ≥ β|U|, where N_G(U) denotes the neighborhood of U. In this work we explore the process of deleting vertices of a β-expander independently at random with probability n^(−α) for some constant α>0, and study the properties of the resulting graph. Our main result states that as n tends to infinity, the deletion process performed on a β-expander graph of bounded degree will result with high probability in a graph composed of a giant component containing n−o(n) vertices that is in itself an expander graph, and constant size components. We proceed by applying the main result to expander graphs with a positive spectral gap. In the particular case of (n,d,λ)-graphs, that are such expanders, we compute the values of α, under additional constraints on the graph, for which with high probability the resulting graph will stay connected, or will be composed of a giant component and isolated vertices. As a graph sampled from the uniform probability space of d-regular graphs with high probability is an expander and meets the additional constraints, this result strengthens a recent result due to Greenhill, Holt and Wormald about vertex percolation on random d-regular graphs. We conclude by showing that performing the above described deletion process on graphs that expand sub-linear sets by an unbounded expansion ratio, with high probability results in a connected expander graph.
In this paper, we provide a method to safely store a document in perhaps the most challenging settings, a highly decentralized replicated storage system where up to half of the storage servers may incur arbitrary failures, including alterations to data stored in them. Using an error correcting code (ECC), e.g., a Reed–Solomon code, one can take n pieces of a document, replace each piece with another piece of size larger by a factor of n/(n−2t+1) such that it is possible to recover the original set even when up to t of the larger pieces are altered. For t close to n/2 the space blowup factor of this scheme is close to n, and the overhead of an ECC such as the Reed–Solomon code degenerates to that of a trivial replication code. We show a technique to reduce this large space overhead for high values of t. Our scheme blows up each piece by a factor slightly larger than two using an erasure code which makes it possible to recover the original set using n/2 − O(n/d) of the pieces, where d ≥ 80 is a fixed constant. Then we attach to each piece O(d log n log d) additional bits to make it possible to identify a large enough set of unmodified pieces, with negligible error probability, assuming that at least half the pieces are unmodified and with low complexity. For values of t close to n/2 we achieve a large asymptotic space reduction over the best possible space blowup of any ECC in a deterministic setting. Our approach makes use of a d-regular expander graph to compute the bits required for the identification of n/2 − O(n/d) good pieces. We investigate the following vertex percolation process. Starting with a random regular graph of constant degree, delete each vertex independently with probability p, where p = n^(−α) and α = α(n) is bounded away from 0. We show that a.a.s. the resulting graph has a connected component of size n−o(n) which is an expander, and all other components are trees of bounded size. Sharper results are obtained with extra conditions on α. These results have an application to the cost of repairing a certain peer-to-peer network after random failures of nodes.
Abstract of query paper
Cite abstracts
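A quick empirical check of the vertex-deletion process from the abstracts above on a random d-regular graph, assuming networkx is available; it only reports component sizes and does not verify expansion of the giant component.

```python
import random
import networkx as nx

# Delete each vertex of a random d-regular graph independently with
# probability p = n**(-alpha) and inspect the surviving component sizes.
# The abstracts above predict one giant component of size n - o(n) plus
# small (bounded-size / tree) components.

n, d, alpha = 2000, 5, 0.5
p = n ** (-alpha)

G = nx.random_regular_graph(d, n, seed=1)
survivors = [v for v in G.nodes if random.random() >= p]
H = G.subgraph(survivors)

sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
print("deleted vertices:", n - len(survivors))
print("largest component:", sizes[0], "other components:", sizes[1:10])
```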
29613
29612
We propose Otiy, a node-centric location service that limits the impact of location updates generated by mobile nodes in IEEE802.11-based wireless mesh networks. Existing location services use node identifiers to determine the locator (aka anchor) that is responsible for keeping track of a node's location. Such a strategy can be inefficient because: (i) identifiers give no clue on the node's mobility and (ii) locators can be far from the source-destination shortest path, which increases both location delays and bandwidth consumption. To solve these issues, Otiy introduces a new strategy that identifies nodes to play the role of locators based on the likelihood of a destination being close to these nodes, i.e., locators are identified depending on the mobility pattern of nodes. Otiy relies on the cyclic mobility patterns of nodes and creates a slotted agenda composed of a set of predicted locations, defined according to the past and present patterns of mobility. Correspondent nodes fetch this agenda only once and use it as a reference for identifying which locators are responsible for the node at different points in time. Over a period of about one year, the weekly proportion of nodes having at least 50% of exact location predictions is on average about 75%. This proportion increases by 10% when nodes also consider their closeness to the locator, based only on what they know about the network.
Mobile radio communications raise two major problems. First: a very poor radio link quality. Second: the users' mobility, which requires the management of their position, is resource consuming (especially radio bandwidth). This paper focuses on the second issue and proposes an intelligent method for locating users: the alternative strategy (AS). Our proposal is based on the observation that the mobility behavior of a majority of people can be foretold. If taken into consideration by the system, this characteristic can save signaling messages due to mobility management procedures, leading thus to savings in the system resources. Several versions of the AS are described: a basic version for long term events (i.e., received calls and registrations), and versions with increased memory for short and medium term events. The evaluation of the basic versions was performed using analytic and simulation approaches. It shows that storing the mobility related information brings great savings in system resources when the users have medium or high predictable mobility patterns. More generally speaking, this work points out the fact that future systems will have to integrate user-related information in order, firstly, to provide customized services and, secondly, to save system resources. On the other hand, current trends in mobile communications show that adaptive and dynamic system capabilities require more information to be collected and computed.
Abstract of query paper
Cite abstracts
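A toy slotted-agenda location predictor in the spirit of the Otiy abstract; the slot granularity (weekday, hour), the access-point names and the log format are all made up for illustration.

```python
from collections import Counter, defaultdict

# Training log: (weekday, hour, access_point) observations for one node.
history = [
    (0, 9, "AP-lab"), (0, 9, "AP-lab"), (0, 13, "AP-cafe"),
    (1, 9, "AP-lab"), (1, 13, "AP-cafe"), (1, 13, "AP-lab"),
    (4, 20, "AP-home"), (5, 20, "AP-home"),
]

# Slotted agenda: most frequent location per (weekday, hour) slot.
agenda = defaultdict(Counter)
for day, hour, loc in history:
    agenda[(day, hour)][loc] += 1
predict = {slot: counts.most_common(1)[0][0] for slot, counts in agenda.items()}

# Evaluate exact-prediction rate on a later week of observations.
test = [(0, 9, "AP-lab"), (0, 13, "AP-cafe"), (1, 13, "AP-cafe"), (4, 20, "AP-home")]
hits = sum(predict.get((d, h)) == loc for d, h, loc in test)
print("exact predictions:", hits, "/", len(test))
```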
29614
29613
We propose Otiy, a node-centric location service that limits the impact of location updates generated by mobile nodes in IEEE802.11-based wireless mesh networks. Existing location services use node identifiers to determine the locator (aka anchor) that is responsible for keeping track of a node's location. Such a strategy can be inefficient because: (i) identifiers give no clue on the node's mobility and (ii) locators can be far from the source-destination shortest path, which increases both location delays and bandwidth consumption. To solve these issues, Otiy introduces a new strategy that identifies nodes to play the role of locators based on the likelihood of a destination being close to these nodes, i.e., locators are identified depending on the mobility pattern of nodes. Otiy relies on the cyclic mobility patterns of nodes and creates a slotted agenda composed of a set of predicted locations, defined according to the past and present patterns of mobility. Correspondent nodes fetch this agenda only once and use it as a reference for identifying which locators are responsible for the node at different points in time. Over a period of about one year, the weekly proportion of nodes having at least 50% of exact location predictions is on average about 75%. This proportion increases by 10% when nodes also consider their closeness to the locator, based only on what they know about the network.
We propose a new location tracking strategy called behavior-based strategy (BBS) based on each mobile's moving behavior. With the help of data mining technologies, the moving behavior of each mobile can be mined from a long-term collection of the mobile's moving logs. From the moving behavior of each mobile, we first estimate the time-varying probability of the mobile and then derive the optimal paging area of each time region. To reduce unnecessary computation, we consider the location tracking and computational cost and then derive a cost model. A heuristic is proposed to minimize the cost model by finding the appropriate moving period checkpoints of each mobile. The experimental results show our strategy outperforms the fixed paging area strategy currently used in the GSM system and the time-based strategy for highly regular moving mobiles.
Abstract of query paper
Cite abstracts
29615
29614
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ_G of G with @math and an n-vertex outerplanar graph H and a drawing δ_H of H with @math. Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
The following problem was raised by M. Watanabe. Let P be a self-intersecting closed polygon with n vertices in general position. How many steps does it take to untangle P, i.e., to turn it into a simple polygon, if in each step we can arbitrarily relocate one of its vertices? It is shown that in some cases one has to move all but at most O((n log n)^(2/3)) vertices. On the other hand, every polygon P can be untangled in at most @math steps. Some related questions are also considered. Untangling is a process in which some vertices in a drawing of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C_n while keeping Ω(n^(2/3)) vertices fixed. For any connected graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree, and diameter of G. One consequence is that every 3-connected planar graph has a drawing δ such that at most O((n log n)^(2/3)) vertices are fixed in every untangling of δ.
Abstract of query paper
Cite abstracts
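A sketch of the basic primitive behind obfuscation and untangling: counting pairwise crossings of the straight-line edges of a drawing, using orientation tests for proper segment intersection and ignoring edge pairs that share an endpoint. The 4-cycle example is made up.

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    val = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (val > 0) - (val < 0)

def segments_cross(a, b, c, d):
    """True if the open segments ab and cd properly cross."""
    return (orient(a, b, c) != orient(a, b, d) and
            orient(c, d, a) != orient(c, d, b) and
            0 not in (orient(a, b, c), orient(a, b, d), orient(c, d, a), orient(c, d, b)))

def crossings(pos, edges):
    count = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if len({u1, v1, u2, v2}) == 4:          # skip edges sharing an endpoint
            count += segments_cross(pos[u1], pos[v1], pos[u2], pos[v2])
    return count

# A 4-cycle drawn with its vertices in "crossed" order: exactly one crossing.
pos = {1: (0, 0), 2: (1, 1), 3: (1, 0), 4: (0, 1)}
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(crossings(pos, edges))   # 1
```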
29616
29615
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ_G of G with @math and an n-vertex outerplanar graph H and a drawing δ_H of H with @math. Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Being motivated by John Tantalo's Planarity Game, we consider straight line plane drawings of a planar graph G with edge crossings and wonder how obfuscated such drawings can be. We define obf(G), the obfuscation complexity of G, to be the maximum number of edge crossings in a drawing of G. Relating obf(G) to the distribution of vertex degrees in G, we show an efficient way of constructing a drawing of G with at least obf(G)/3 edge crossings. We prove the bound (δ(G)^2/24 − o(1))n^2 ≤ obf(G) for graphs with δ(G) >= 2. The shift complexity of G, denoted by shift(G), is the minimum number of vertex shifts sufficient to eliminate all edge crossings in an arbitrarily obfuscated drawing of G (after shifting a vertex, all incident edges are supposed to be redrawn correspondingly). If δ(G) >= 3, then shift(G) is linear in the number of vertices due to the known fact that the matching number of G is linear. However, in the case δ(G) >= 2 we notice that shift(G) can be linear even if the matching number is bounded. As for computational complexity, we show that, given a drawing D of a planar graph, it is NP-hard to find an optimum sequence of shifts making D crossing-free. Untangling is a process in which some vertices in a drawing of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C_n while keeping Ω(n^(2/3)) vertices fixed. For any connected graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree, and diameter of G. One consequence is that every 3-connected planar graph has a drawing δ such that at most O((n log n)^(2/3)) vertices are fixed in every untangling of δ.
Abstract of query paper
Cite abstracts
29617
29616
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ_G of G with @math and an n-vertex outerplanar graph H and a drawing δ_H of H with @math. Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Being motivated by John Tantalo's Planarity Game, we consider straight line plane drawings of a planar graph G with edge crossings and wonder how obfuscated such drawings can be. We define obf(G), the obfuscation complexity of G, to be the maximum number of edge crossings in a drawing of G. Relating obf(G) to the distribution of vertex degrees in G, we show an efficient way of constructing a drawing of G with at least obf(G)/3 edge crossings. We prove the bound (δ(G)^2/24 − o(1))n^2 ≤ obf(G) for graphs with δ(G) >= 2. The shift complexity of G, denoted by shift(G), is the minimum number of vertex shifts sufficient to eliminate all edge crossings in an arbitrarily obfuscated drawing of G (after shifting a vertex, all incident edges are supposed to be redrawn correspondingly). If δ(G) >= 3, then shift(G) is linear in the number of vertices due to the known fact that the matching number of G is linear. However, in the case δ(G) >= 2 we notice that shift(G) can be linear even if the matching number is bounded. As for computational complexity, we show that, given a drawing D of a planar graph, it is NP-hard to find an optimum sequence of shifts making D crossing-free.
Abstract of query paper
Cite abstracts
29618
29617
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ_G of G with @math and an n-vertex outerplanar graph H and a drawing δ_H of H with @math. Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
We give an algorithm to morph between two planar orthogonal drawings of a graph, preserving planarity and orthogonality. The morph uses a polynomial number of discrete steps. Each step is either a linear morph that moves a set of vertices horizontally or vertically; or a "twist" that introduces new bends in the edges incident with one vertex. Our morph can be implemented so that inter-vertex distances are well-behaved. This is the first algorithm to provide planarity-preserving morphs with well-behaved complexity for a significant class of graph drawings.
Abstract of query paper
Cite abstracts
29619
29618
Given a sequence of complex square matrices, @math , consider the sequence of their partial products, defined by @math . What can be said about the asymptotics as @math of the sequence @math , where @math is a continuous function? A special case of our most general result addresses this question under the assumption that the matrices @math are an @math perturbation of a sequence of matrices with bounded partial products. We apply our theory to investigate the asymptotics of the approximants of continued fractions. In particular, when a continued fraction is @math limit 1-periodic of elliptic or loxodromic type, we show that its sequence of approximants tends to a circle in @math , or to a finite set of points lying on a circle. Our main theorem on such continued fractions unifies the treatment of the loxodromic and elliptic cases, which are convergent and divergent, respectively. When an approximating sequence tends to a circle, we obtain statistical information about the limiting distribution of the approximants. When the circle is the real line, the points are shown to have a Cauchy distribution with parameters given in terms of modifications of the original continued fraction. As an example of the general theory, a detailed study of a @math -continued fraction in five complex variables is provided. The most general theorem in the paper holds in the context of Banach algebras. The theory is also applied to @math -matrix continued fractions and recurrence sequences of Poincaré type and compared with closely related literature.
Abstract On page 45 in his lost notebook, Ramanujan asserts that a certain q-continued fraction has three limit points. More precisely, if A_n/B_n denotes its nth partial quotient, and n tends to ∞ in each of three residue classes modulo 3, then each of the three limits of A_n/B_n exists and is explicitly given by Ramanujan. Ramanujan's assertion is proved in this paper. Moreover, general classes of continued fractions with three limit points are established. For integers m⩾2, we study divergent continued fractions whose numerators and denominators in each of the m arithmetic progressions modulo m converge. Special cases give, among other things, an infinite sequence of divergence theorems, the first of which is the classical Stern–Stolz theorem. We give a theorem on a class of Poincaré-type recurrences which shows that they tend to limits when the limits are taken in residue classes and the roots of their characteristic polynomials are distinct roots of unity. We also generalize a curious q-continued fraction of Ramanujan's with three limits to a continued fraction with k distinct limit points, k⩾2. The k limits are evaluated in terms of ratios of certain q-series. Finally, we show how to use Daniel Bernoulli's continued fraction in an elementary way to create analytic continued fractions with m limit points, for any positive integer m⩾2.
Abstract of query paper
Cite abstracts
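A toy periodic continued fraction of elliptic type (not Ramanujan's q-continued fraction) that exhibits the phenomenon in the abstracts above: with all partial numerators −1 and denominators 1, the approximants A_n/B_n do not converge, but along each residue class of n modulo 3 they are eventually constant, giving the three limit points −1, ∞ and 0.

```python
from fractions import Fraction

# Continued fraction K(a_n / b_n) with a_n = -1, b_n = 1 for all n.
# Standard three-term recurrences for numerators A_n and denominators B_n:
#   A_n = b_n * A_{n-1} + a_n * A_{n-2},   A_{-1} = 1, A_0 = 0
#   B_n = b_n * B_{n-1} + a_n * B_{n-2},   B_{-1} = 0, B_0 = 1

def approximants(n_max, a=-1, b=1):
    A_prev, A = 1, 0
    B_prev, B = 0, 1
    out = []
    for _ in range(n_max):
        A_prev, A = A, b * A + a * A_prev
        B_prev, B = B, b * B + a * B_prev
        out.append(Fraction(A, B) if B != 0 else "inf")
    return out

vals = approximants(12)
for r in range(3):   # residue classes of n modulo 3 (n = 1, 2, 3, ...)
    print("n =", r + 1, "mod 3:", vals[r::3])
# Each residue class is constant: -1, inf and 0, i.e. three distinct limit points.
```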
29620
29619
We study an evolutionary game of chance in which the probabilities for different outcomes (e.g., heads or tails) depend on the amount wagered on those outcomes. The game is perhaps the simplest possible probabilistic game in which perception affects reality. By varying the 'reality map', which relates the amount wagered to the probability of the outcome, it is possible to move continuously from a purely objective game in which probabilities have no dependence on wagers, to a purely subjective game in which probabilities equal the amount wagered. The reality map can reflect self-reinforcing strategies or self-defeating strategies. In self-reinforcing games, rational players can achieve increasing returns and manipulate the outcome probabilities to their advantage; consequently, an early lead in the game, whether acquired by chance or by strategy, typically gives a persistent advantage. We investigate the game both in and out of equilibrium and with and without rational players. We introduce a method of measuring the inefficiency of the game and show that in the large time limit the inefficiency decreases slowly in its approach to equilibrium as a power law with an exponent between zero and one, depending on the subjectivity of the game.
Abstract We investigate the dynamics of the cobweb model with adaptive expectations, a linear demand curve, and a nonlinear, S-shaped, increasing supply curve. Both stable periodic and chaotic price behaviour can occur. We investigate, how the dynamics of the model depend on the parameters. Both infinitely many period doubling and period halving bifurcations can occur, when the demand curve is shifted upwards. The same result holds with respect to the expectations weight factor. Abstract The price-quantity dynamics of the cobweb model with adaptive expectations and nonlinear supply and demand curves is analysed. We prove that chaotic dynamical behaviour can occur, even if both the supply and demand curves are monotonic. The introduction of adaptive expectations into the cobweb model leads to price-quantity fluctuations with a smaller amplitude. However, at the same time the price-quantity cycles may become unstable and chaotic oscillations may arise. We present a geometric explanation why chaos can occur for a large class of nonlinear, monotonic supply and demand curves. Abstract In a conventional asset market model we study the evolutionary process generated by wealth flows between investors. Asymptotic behavior of our model is completely determined by the investors' expected growth rates of wealth share. Investment rules are more or less “fit” depending upon the value of this expectation, and more fit rules survive in the market at the expense of the less fit. Using this criterion we examine the long run behavior of asset prices and the common belief that the market selects for rational investors. We find that fit rules need not be rational, and rational rules need not be fit. Finally, we investigate how the market selects over various adaptive decision rules. Evolutionary arguments are often used to justify the fundamental behavioral postulates of competitive equilibrium. Economists such as Milton Friedman have argued that natural selection favors profit maximizing firms over firms engaging in other behaviors. Consequently, producer efficiency, and therefore Pareto efficiency, are justified on evolutionary grounds. We examine these claims in an evolutionary general equilibrium model. If the economic environment were held constant, profitable firms would grow and unprofitable firms would shrink. In the general equilibrium model, prices change as factor demands and output supply evolves. Without capital markets, when firms can grow only through retained earnings, our model verifies Friedman's claim that natural selection favors profit maximization. But we show through examples that this does not imply that equilibrium allocations converge over time to efficient allocations. Consequently, Koopmans' critique of Friedman is correct. When capital markets are added, and firms grow by attracting investment, Friedman's claim may fail. In either model the long-run outcomes of evolutionary market models are not well described by conventional General Equilibrium analysis with profit maximizing firms. A theoretical framework we call dynamical systems game is presented, in which the game itself can change due to the influence of players' behaviors and states. That is, the nature of the game itself is described as a dynamical system. The relation between game dynamics and the evolution of strategies is discussed by applying this framework.
Computer experiments are carried out for simple one-person games to demonstrate the evolution of dynamical systems with the effective use of dynamical resources.
Abstract of query paper
Cite abstracts
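An illustrative simulation of a wager-dependent coin game; the specific reality map (a linear mix of an objective fair coin and the wager share) and the fixed-fraction betting strategies are assumptions, not the paper's model. With high subjectivity the game is self-reinforcing and an early lead tends to persist.

```python
import random

def play(subjectivity, rounds=2000, seed=0):
    """Two players repeatedly wager a fixed fraction of wealth on heads/tails.
    P(heads) = (1 - s) * 0.5 + s * (stake on heads / total stake)."""
    rng = random.Random(seed)
    wealth = {"heads_bettor": 1.0, "tails_bettor": 1.0}
    for _ in range(rounds):
        stake = {player: 0.1 * w for player, w in wealth.items()}   # 10% of wealth wagered
        total = sum(stake.values())
        p_heads = (1 - subjectivity) * 0.5 + subjectivity * stake["heads_bettor"] / total
        heads = rng.random() < p_heads
        winner = "heads_bettor" if heads else "tails_bettor"
        loser = "tails_bettor" if heads else "heads_bettor"
        wealth[winner] += stake[loser]          # winner takes the loser's stake
        wealth[loser] -= stake[loser]
    return wealth

for s in (0.0, 0.5, 1.0):        # objective game -> fully subjective (self-reinforcing) game
    print(s, play(s))
```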
29621
29620
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math . More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) CMDetCS3,CMDetCS1,CMDetCS2 in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
An efficient method for the calculation of the interactions of a 2' factorial ex- periment was introduced by Yates and is widely known by his name. The generaliza- tion to 3' was given by (1). Good (2) generalized these methods and gave elegant algorithms for which one class of applications is the calculation of Fourier series. In their full generality, Good's methods are applicable to certain problems in which one must multiply an N-vector by an N X N matrix which can be factored into m sparse matrices, where m is proportional to log N. This results inma procedure requiring a number of operations proportional to N log N rather than N2. These methods are applied here to the calculation of complex Fourier series. They are useful in situations where the number of data points is, or can be chosen to be, a highly composite number. The algorithm is here derived and presented in a rather different form. Attention is given to the choice of N. It is also shown how special advantage can be obtained in the use of a binary computer with N = 2' and how the entire calculation can be performed within the array of N data storage locations used for the given Fourier coefficients. Consider the problem of calculating the complex Fourier series N-1 (1) X(j) = EA(k)-Wjk, j = 0 1, * ,N- 1, k=0 This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f. In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝn from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k logn)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. 
In particular, these results [9, 4, 23] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and give per-instance approximation guarantees rather than guarantees over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement. We develop a framework for analog-to-information conversion that enables sub-Nyquist acquisition and processing of wideband signals that are sparse in a local Fourier representation. The first component of the framework is a random sampling system that can be implemented in practical hardware. The second is an efficient information recovery algorithm to compute the spectrogram of the signal, which we dub the sparsogram. A simulated acquisition of a frequency hopping signal operates at a 33× sub-Nyquist average sampling rate with little degradation in signal quality. This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
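For concreteness, here is a minimal numpy sketch of the greedy orthogonal matching pursuit loop described in the last abstract above; the Gaussian measurement matrix, the problem sizes and the random seed are illustrative assumptions of ours, not taken from the cited work.

import numpy as np

def omp(A, y, m):
    """Greedy orthogonal matching pursuit: recover an m-sparse x with y ~= A @ x."""
    n = A.shape[1]
    support, residual = [], y.copy()
    for _ in range(m):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, k, m = 256, 64, 5                       # illustrative sizes: k ~ O(m log n) measurements
A = rng.standard_normal((k, n)) / np.sqrt(k)
x_true = np.zeros(n)
x_true[rng.choice(n, m, replace=False)] = rng.standard_normal(m)
x_hat = omp(A, A @ x_true, m)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))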
Abstract of query paper
Cite abstracts
29622
29621
We study the problem of estimating the best B-term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math. More explicitly, we investigate how to deterministically identify B of the largest-magnitude frequencies of @math, and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms with a small (controllable) probability of failure for each processed signal exist for this problem. However, for failure-intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) [CMDetCS1, CMDetCS2, CMDetCS3] in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure-intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t−τ) obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1−O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ_1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1−O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f. In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ ℝ^n from linear measurements 〈A, ψ_i〉 with respect to a dictionary of ψ_i's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k log n)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and give per-instance approximation guarantees rather than guarantees over the class of all signals.
Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement. Compressed sensing is a new area of signal processing. Its goal is to minimize the number of samples that need to be taken from a signal for faithful reconstruction. The performance of compressed sensing on signal classes is directly related to Gelfand widths. Similar to the deeper constructions of optimal subspaces in Gelfand widths, most sampling algorithms are based on randomization. However, for possible circuit implementation, it is important to understand what can be done with purely deterministic sampling. In this note, we show how to construct sampling matrices using finite fields. One such construction gives cyclic matrices which are interesting for circuit implementation. While the guaranteed performance of these deterministic constructions is not comparable to the random constructions, these matrices have the best known performance for purely deterministic constructions. This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
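As a companion to the abstracts above, a small sketch of the ℓ_1-minimization ("basis pursuit") decoder analysed in the first abstract, posed as a linear program by splitting x into its positive and negative parts; the dense Gaussian measurement matrix is an illustrative stand-in for the partial Fourier or deterministic ensembles discussed above.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y by splitting x into x_plus - x_minus."""
    k, n = A.shape
    c = np.ones(2 * n)                     # minimise sum(x_plus) + sum(x_minus) = ||x||_1
    A_eq = np.hstack([A, -A])              # A x_plus - A x_minus = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, k, s = 128, 40, 4                       # illustrative sizes only
A = rng.standard_normal((k, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = basis_pursuit(A, A @ x_true)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))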
Abstract of query paper
Cite abstracts
29623
29622
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
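A simplified software stand-in for the pipeline sketched above (the actual work interpolates with a photon-mapping radiance estimate and extracts isosurfaces in ParaView); here a k-nearest-neighbour density estimate and scikit-image's marching cubes play the roles of the two steps, and all sizes are toy assumptions.

import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

rng = np.random.default_rng(2)
particles = rng.normal(0.5, 0.12, size=(200_000, 3))     # toy stand-in for the point data

# Step 1: interpolate the point data onto a regular grid with a kNN density estimate,
# a simplification of the photon-mapping radiance estimate mentioned above.
res, k = 64, 32
axes = np.linspace(0.0, 1.0, res)
gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
grid_pts = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
dist, _ = cKDTree(particles).query(grid_pts, k=k)
r = dist[:, -1]                                           # radius enclosing the k neighbours
density = (k / (4.0 / 3.0 * np.pi * r**3)).reshape(res, res, res)

# Step 2: extract an isosurface from the gridded field.
level = np.percentile(density, 95)
verts, faces, normals, values = marching_cubes(density, level=level)
print(len(verts), "vertices,", len(faces), "triangles")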
The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address issues of storage requirements, and operations other than the location of cells, whose impact is relevant in the whole isosurface extraction task. In the case of unstructured grids, the overhead, due to the search structure, is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization is adopted, called the chess-board approach, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by a precomputation of the gradients at the vertices of the mesh. Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality. Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analysis of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern Computer architectures
Abstract of query paper
Cite abstracts
29624
29623
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
Isosurface extraction is a standard visualization method for scalar volume data and has been subject to research for decades. Nevertheless, to our knowledge, no isosurface extraction method exists that directly extracts surfaces from scattered volume data without 3D mesh generation or reconstruction over a structured grid. We propose a method based on spatial domain partitioning using a kd-tree and an indexing scheme for efficient neighbor search. Our approach consists of a geometry extraction and a rendering step. The geometry extraction step computes points on the isosurface by linearly interpolating between neighboring pairs of samples. The neighbor information is retrieved by partitioning the 3D domain into cells using a kd-tree. The cells are merely described by their index and bitwise index operations allow for a fast determination of potential neighbors. We use an angle criterion to select appropriate neighbors from the small set of candidates. The output of the geometry step is a point cloud representation of the isosurface. The final rendering step uses point-based rendering techniques to visualize the point cloud. Our direct isosurface extraction algorithm for scattered volume data produces results of quality close to the results from standard isosurface extraction algorithms for gridded volume data (like marching cubes). In comparison to 3D mesh generation algorithms (like Delaunay tetrahedrization), our algorithm is about one order of magnitude faster for the examples used in this paper. A method is proposed which supports the extraction of isosurfaces from irregular volume data, represented by tetrahedral decomposition, in optimal time. The method is based on a data structure called interval tree, which encodes a set of intervals on the real line, and supports efficient retrieval of all intervals containing a given value. Each cell in the volume data is associated with an interval bounded by the extreme values of the field in the cell. All cells intersected by a given isosurface are extracted in O(m+log h) time, with m the output size and h the number of different extreme values (min or max). The implementation of the method is simple. Tests have shown that its practical performance reflects the theoretical optimality. Presents the "Near Optimal IsoSurface Extraction" (NOISE) algorithm for rapidly extracting isosurfaces from structured and unstructured grids. Using the span space, a new representation of the underlying domain, we develop an isosurface extraction algorithm with a worst case complexity of o( spl radic n+k) for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n) while the preprocessing step is O(n log n). We utilize the span space representation as a tool for comparing isosurface extraction methods on structured and unstructured grids. We also present a fast triangulation scheme for generating and displaying unstructured tetrahedral grids. We propose a meshless method for the extraction of high-quality continuous isosurfaces from volumetric data represented by multiple grids, also called "multiblock" data sets. Multiblock data sets are commonplace in computational mechanics applications. Relatively little research has been performed on contouring multiblock data sets, particularly when the grids overlap one another. Our algorithm proceeds in two steps. 
In the first step, we determine a continuous interpolant using a set of locally defined radial basis functions (RBFs) in conjunction with a partition of unity method to blend smoothly between these functions. In the second step, we extract isosurface geometry by sampling points on Marching Cubes triangles and projecting these point samples onto the isosurface defined by our interpolant. A surface splatting algorithm is employed for visualizing the resulting point set representing the isosurface. Because of our method's generality, it inherently solves the "crack problem" in isosurface generation. Results using a set of synthetic data sets and a discussion of practical considerations are presented. The importance of our method is that it can be applied to arbitrary grid data regardless of mesh layout or orientation.
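The common primitive behind the interval-tree and span-space techniques above is retrieving the cells whose [min, max] value interval contains the isovalue; the brute-force numpy version of that query, which those structures answer in output-sensitive time after preprocessing, looks roughly as follows (the regular-grid cell layout is our simplifying assumption).

import numpy as np

def active_cells(volume, isovalue):
    """Indices of cells whose [min, max] corner-value interval contains the isovalue.
    This linear scan is the query that interval trees / the span space accelerate."""
    nx, ny, nz = volume.shape
    # Stack the 8 corner samples of every cell of the regular grid.
    corners = np.stack([volume[i:i + nx - 1, j:j + ny - 1, k:k + nz - 1]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    cmin, cmax = corners.min(axis=0), corners.max(axis=0)
    return np.argwhere((cmin <= isovalue) & (isovalue <= cmax))

rng = np.random.default_rng(3)
field = rng.random((32, 32, 32))
print(len(active_cells(field, 0.5)), "of", 31 ** 3, "cells are crossed by the isosurface")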
Abstract of query paper
Cite abstracts
29625
29624
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Community Web portals serve as portals for the information needs of particular communities on the Web. We here discuss how a comprehensive and flexible strategy for building and maintaining a high-value community Web portal has been conceived and implemented. The strategy includes collaborative information provisioning by the community members. It is based on an ontology as a semantic backbone for accessing information on the portal, for contributing information, as well as for developing and maintaining the portal. We have also implemented a set of ontology-based tools that have facilitated the construction of our show case — the community Web portal of the knowledge acquisition community.
Abstract of query paper
Cite abstracts
29626
29625
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
The internet is rapidly becoming the first place for researchers to publish documents, but at present they receive little support in searching, tracking, analysing or debating concepts in a literature from scholarly perspectives. This paper describes the design rationale and implementation of ScholOnto, an ontology-based digital library server to support scholarly interpretation and discourse. It enables researchers to describe and debate via a semantic network the contributions a document makes, and its relationship to the literature. The paper discusses the computational services that an ontology-based server supports, alternative user interfaces to support interaction with a large semantic network, usability issues associated with knowledge formalisation, new work practices that could emerge, and related work.
Abstract of query paper
Cite abstracts
29627
29626
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow.
Abstract of query paper
Cite abstracts
29628
29627
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Log analysis can be a primary source of knowledge about how digital library patrons actually use DL systems and services and how systems behave while trying to support user information seeking activities. Log recording and analysis allow evaluation assessment, and open opportunities to improvements and enhanced new services. In this paper, we propose an XML-based digital library log format standard that captures a rich, detailed set of system and user behaviors supported by current digital library services. The format is implemented in a generic log component tool, which can be plugged into any digital library system. The focus of the work is on interoperability, reusability, and completeness. Specifications, implementation details, and examples of use within the MARIAN digital library system are described. Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service. This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20 month period.
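A toy illustration, with entirely made-up field names, of the kind of inference rule the query abstract alludes to: folding a stream of harvested usage events into per-article and per-journal metrics by following an article-to-journal relation from the bibliographic layer.

from collections import Counter, defaultdict

# Hypothetical, simplified usage events as they might be harvested via OAI-PMH;
# the field names and identifiers are illustrative only.
events = [
    {"article": "doi:10.x/a", "session": "s1", "type": "fulltext"},
    {"article": "doi:10.x/b", "session": "s1", "type": "fulltext"},
    {"article": "doi:10.x/a", "session": "s2", "type": "abstract"},
]
article_journal = {"doi:10.x/a": "J. Foo", "doi:10.x/b": "J. Bar"}   # bibliographic layer

usage = Counter(e["article"] for e in events)     # per-article usage metric
journal_usage = defaultdict(int)                  # roll up along the article -> journal relation
for art, n in usage.items():
    journal_usage[article_journal[art]] += n
print(dict(usage), dict(journal_usage))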
Abstract of query paper
Cite abstracts
29629
29628
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
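For orientation, a compact sketch of the classical sum-product belief propagation that the paper generalises, run on a small chain-structured model where the tree topology makes the computed marginals exact; the potentials are arbitrary illustrative numbers.

import numpy as np

# Unary factors phi[i] and pairwise factors psi[i] (coupling variable i and i+1) of a chain.
n = 5
rng = np.random.default_rng(4)
phi = np.exp(rng.normal(size=(n, 2)))          # unary potentials over states {0, 1}
psi = np.exp(rng.normal(size=(n - 1, 2, 2)))   # pairwise potentials

# Forward/backward sum-product message passes; on a tree/chain one sweep suffices.
fwd = np.ones((n, 2))
for i in range(1, n):
    fwd[i] = (fwd[i - 1] * phi[i - 1]) @ psi[i - 1]      # sum over the left neighbour's state
bwd = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    bwd[i] = psi[i] @ (bwd[i + 1] * phi[i + 1])          # sum over the right neighbour's state

marginals = fwd * phi * bwd
marginals /= marginals.sum(axis=1, keepdims=True)        # exact single-site marginals
print(marginals)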
The program relative to the investigation of quantum Markov states for general one-dimensional spin models is carried on, following a strategy developed in the last years. In such a way, the emerging structure is fully clarified. This analysis is a starting point for the solution of the basic (still open) problem concerning the construction of a satisfactory theory of quantum Markov fields, i.e. quantum Markov processes with multi-dimensional indices. We review recent developments in the theory of quantum Markov states on the standard @math-spin lattice. A Dobrushin theory for quantum Markov fields is proposed. In the one-dimensional case where the order plays a crucial role, the structure arising from a quantum Markov state is fully understood. In this situation we obtain a splitting of a Markov state into a classical part, and a purely quantum part. This result allows us to provide a reconstruction theorem for quantum Markov states on chains.
Abstract of query paper
Cite abstracts
29630
29629
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
We propose a generalization of the cavity method to quantum spin glasses on fixed connectivity lattices. Our work is motivated by the recent refinements of the classical technique and its potential application to quantum computational problems. We numerically solve for the phase structure of a connectivity @math transverse field Ising model on a Bethe lattice with @math couplings and investigate the distribution of various classical and quantum observables. It is shown that if Φ is a finite-range interaction of a quantum spin system, τ_t^Φ the associated group of time translations, τ_x the group of space translations, and A, B local observables, then @math (1) whenever v is sufficiently large (v > V_Φ), where μ(v) > 0. The physical content of the statement is that information can propagate in the system only with a finite group velocity. We present an accurate numerical algorithm, called quantum belief propagation, for simulation of one-dimensional quantum systems at nonzero temperature. The algorithm exploits the fact that quantum effects are short-range in these systems at nonzero temperature, decaying on a length scale inversely proportional to the temperature. We compare to exact results on a spin-@math Heisenberg chain. Even a very modest calculation, requiring diagonalizing only ten-by-ten matrices, reproduces the peak susceptibility with a relative error of less than @math, while more elaborate calculations further reduce the error.
Abstract of query paper
Cite abstracts
29631
29630
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. At each node the rays are uniquely marked so that they can be easily identified. At the destination node we search only for those rays that have passed through each node exactly once. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
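A purely software simulation of the ray-propagation idea, under our simplifying assumption that each ray explicitly carries the sequence of nodes it has visited: all paths are explored in parallel and the check at the destination keeps only rays that visited every node exactly once.

def optical_hamiltonian_path(adj, start, dest):
    """Simulate rays spreading through the digraph; keep only rays that reach
    `dest` having passed through every node exactly once."""
    n = len(adj)
    rays = [(start, (start,))]                     # (current node, visited sequence)
    for _ in range(n - 1):                         # a Hamiltonian path makes n-1 hops
        rays = [(v, path + (v,))
                for node, path in rays
                for v in adj[node] if v not in path]
    return [path for node, path in rays if node == dest and len(path) == n]

adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}        # small illustrative digraph
print(optical_hamiltonian_path(adj, start=0, dest=3))   # -> [(0, 1, 2, 3)]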
Optical Computers provides the first in-depth review of the possibilities and limitations of optical data processing.
Abstract of query paper
Cite abstracts
29632
29631
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. At each node the rays are uniquely marked so that they can be easily identified. At the destination node we search only for those rays that have passed through each node exactly once. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral. Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14].
Abstract of query paper
Cite abstracts
29633
29632
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. At each node the rays are uniquely marked so that they can be easily identified. At the destination node we search only for those rays that have passed through each node exactly once. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this "rainbow effect" that appears in nature has nothing to do with a computation in the classical sense; still, it can be used to design a sorting method that has the potential of running in Θ(n) with a space complexity of Θ(n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of Ω(n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort.
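The O(n + m) electronic analogue of the implementation sketched at the end of the Rainbow Sort abstract (integer keys between 0 and m acting like discrete wavelengths) is counting sort; a minimal version for reference.

def counting_sort(keys, m):
    """Sort integer keys in the range 0..m in O(n + m) time and space,
    the same bound the sketched Rainbow Sort implementation targets."""
    buckets = [0] * (m + 1)
    for key in keys:                      # "disperse" each key into its wavelength bucket
        buckets[key] += 1
    out = []
    for key, cnt in enumerate(buckets):   # read the spectrum back in increasing order
        out.extend([key] * cnt)
    return out

print(counting_sort([5, 3, 9, 3, 0, 7], m=9))   # [0, 3, 3, 5, 7, 9]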
Abstract of query paper
Cite abstracts
29634
29633
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. At each node the rays are uniquely marked so that they can be easily identified. At the destination node we search only for those rays that have passed through each node exactly once. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors. We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model.
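The primitive these optical models treat as a unit-time operation is the Fourier transform, and with it correlation and matched filtering; the same correlation-by-FFT identity, written in numpy over an arbitrary toy signal, looks like this.

import numpy as np

rng = np.random.default_rng(5)
scene = rng.standard_normal(1024)
template = scene[300:364].copy()              # pattern hidden at offset 300

# Correlation theorem: corr(scene, template) = IFFT( FFT(scene) * conj(FFT(template)) ),
# the operation a coherent optical correlator evaluates in a single pass.
n = len(scene)
corr = np.fft.ifft(np.fft.fft(scene) * np.conj(np.fft.fft(template, n))).real
print("best match at offset", int(np.argmax(corr)))    # expected: 300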
Abstract of query paper
Cite abstracts
29635
29634
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. At each node the rays are uniquely marked so that they can be easily identified. At the destination node we search only for those rays that have passed through each node exactly once. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
In this paper we discuss physical aspects of intractable (NP-complete) computing problems. We show, using a specific model, that a quantum-mechanical computer can in principle solve an NP-complete problem in polynomial time; however, it would use an exponentially large energy for that computation. We conjecture that our model reflects a complementarity principle concerning the time and the energy needed to perform an NP-complete computation. This paper uses instances of SAT, 3SAT and TSP to describe how evolutionary search (running on a classical computer) differs from quantum search (running on a quantum computer) for solving NP problems.
Abstract of query paper
Cite abstracts
29636
29635
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
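The delay encoding described above can be mimicked in software: giving element i the value base^i with base = (number of subsets) + 1 rules out digit carries, so a sub-collection is an exact cover exactly when its values sum to 1 + base + ... + base^(n-1); the base choice and the brute-force search below are our illustrative assumptions, not the optical device itself.

from itertools import combinations

def exact_cover_exists(universe, subsets):
    """Exact Cover via the numeric (delay-like) encoding: element i contributes
    base**i, with base = len(subsets) + 1 so that digit sums cannot carry."""
    idx = {x: i for i, x in enumerate(sorted(universe))}
    base = len(subsets) + 1

    def encode(s):
        return sum(base ** idx[x] for x in s)

    target = sum(base ** i for i in range(len(universe)))   # every element covered exactly once
    nums = [encode(s) for s in subsets]
    return any(sum(c) == target                             # the "ray" arriving exactly at
               for r in range(1, len(nums) + 1)             # the target delay
               for c in combinations(nums, r))

U = {1, 2, 3, 4, 5, 6}
S = [{1, 4}, {2, 3, 5}, {6}, {2, 6}]
print(exact_cover_exists(U, S))    # True: {1,4} + {2,3,5} + {6} covers U exactly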
An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral. Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14].
Abstract of query paper
Cite abstracts
29637
29636
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
Conventional architectures for the implementation of Boolean logic are based on a network of bistable elements assembled to realize cascades of simple Boolean logic gates. Since each such gate has two input signals and only one output signal, such architectures are fundamentally dissipative in information and energy. Their serial nature also induces a latency in the processing time. In this paper we present a new, principally non-dissipative digital logic architecture which mitigates the above impediments. Unlike traditional computing architectures, the proposed architecture involves a distributed and parallel input scheme where logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. The architecture employs a novel conception of cascading which utilizes the strengths of both optics and electronics while avoiding their weaknesses. It is inherently non-dissipative, respects the linear nature of interactions in pure optics, and harnesses the control advantages of electrons without reducing the speed advantages of optics. This new logic paradigm was specially developed with optical implementation in mind. However, it is suitable for other implementations as well, including conventional electronic devices.
Abstract of query paper
Cite abstracts
29638
29637
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this "rainbow effect" that appears in nature has nothing to do with a computation in the classical sense; still, it can be used to design a sorting method that has the potential of running in Θ(n) with a space complexity of Θ(n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of Ω(n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort.
Abstract of query paper
Cite abstracts
29639
29638
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
We present a new optical method for solving bounded (input-length-restricted) NP-complete combinatorial problems. We have chosen to demonstrate the method with an NP-complete problem called the traveling salesman problem (TSP). The power of optics in this method is realized by using a fast matrix-vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication is performed optically by using an optical correlator. To synthesize the initial binary matrix representing all feasible tours, an efficient algorithm is provided. Simulations and experimental results prove the validity of the new method.
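The matrix-vector formulation in the abstract above, reproduced electronically for a tiny instance: each row of a binary matrix marks the directed edges used by one feasible tour, so a single multiplication by the edge-weight vector yields every tour length at once (the step the optical correlator performs in parallel); all sizes and weights are illustrative.

import numpy as np
from itertools import permutations

n = 5
rng = np.random.default_rng(6)
W = rng.integers(1, 10, size=(n, n))             # directed edge weights
w = W.ravel()                                     # weight vector, one entry per edge (i, j)

tours = [(0,) + p for p in permutations(range(1, n))]       # fix city 0 as the start
M = np.zeros((len(tours), n * n), dtype=np.uint8)            # binary tour/edge incidence matrix
for r, t in enumerate(tours):
    for a, b in zip(t, t[1:] + (t[0],)):                     # close each cycle back to city 0
        M[r, a * n + b] = 1

lengths = M @ w                                              # all tour lengths in one product
best = int(np.argmin(lengths))
print("best tour:", tours[best], "length:", int(lengths[best]))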
Abstract of query paper
Cite abstracts
29640
29639
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
A system is described which finds solutions to the 6-city TSP using a Kohonen-type network. The system shows robustness with regard to the light-intensity fluctuations and weight discretization which have been simulated. Scalability to larger problems appears straightforward.
Abstract of query paper
Cite abstracts
29641
29640
We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item from the set to be covered a special integer number. These numbers actually represent delays induced on the light when it passes through the arcs. The solution is represented as a subray arriving at a certain moment at the destination node. This tells us whether an exact cover exists or not.
We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to NN for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment.
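The N^N photon-count scaling claimed above becomes concrete very quickly; a back-of-the-envelope comparison against the photon budget of a small visible-light source (the 1 mW and 633 nm figures are our own illustrative assumptions).

# Rough photon budget: a 1 mW source at 633 nm emits about
# P / (h*c/lambda) ~ 3e15 photons per second (illustrative assumption).
h, c = 6.626e-34, 3.0e8
photons_per_second = 1e-3 / (h * c / 633e-9)

for n in (5, 10, 15, 20):
    needed = n ** n                         # photons ~ N^N for an N-city instance
    seconds = needed / photons_per_second
    print(f"N={n:2d}: N^N = {needed:.2e} photons ~ {seconds:.2e} s of emission")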
Abstract of query paper
Cite abstracts
29642
29641
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
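The "time proportional to the target sum" behaviour has a direct software counterpart: the classic pseudo-polynomial dynamic program over achievable sums, where the set of reachable sums after each number plays the role of the set of arrival times of subrays at the destination.

def subset_sum(nums, target):
    """Pseudo-polynomial subset-sum check in O(len(nums) * target) time:
    reachable[s] is True when some subset of the numbers seen so far sums to s,
    mirroring the moments at which subrays can arrive at the destination."""
    reachable = [False] * (target + 1)
    reachable[0] = True
    for x in nums:
        for s in range(target, x - 1, -1):    # backwards so each number is used at most once
            reachable[s] = reachable[s] or reachable[s - x]
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], target=9))   # True (4 + 5)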
Optical Computers provides the first in-depth review of the possibilities and limitations of optical data processing.
Abstract of query paper
Cite abstracts
29643
29642
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14]. We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to NN for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment. An all nonionic liquid shampoo which includes an amine oxide, a polyoxyethylene hexitan mono-higher fatty acid ester, and at least one of a higher alkoxy polyoxyethylene ethanol, an alkyl glycoside and a mixture of glycoside, a higher fatty acid lower alkanolamide and polyacrylamide. Optionally, the mixture of higher fatty acid lower alkanolamide and polyacrylamide may be present in the liquid shampoo containing amine oxide, polyoxyethylene hexitan mono-higher fatty acid ester and the higher alkoxy polyoxyethylene ethanol and or alkyl glycoside. Another optional constituent is a polyethylene glycol higher fatty acid ester. The shampoos are essentially free of ions and are desirably completely free of ionic materials with the pH essentially neutral.
Abstract of query paper
Cite abstracts
29644
29643
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this "rainbow effect" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in Θ(n) with a space complexity of Θ(n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of Ω(n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort.
Abstract of query paper
Cite abstracts
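The Rainbow Sort abstract cited in this record argues for linear-time behaviour by mapping integer keys to wavelengths and letting a prism fan them out to distinct positions. A conventional counting sort over the key range 0..m captures the same O(n + m) bound in software; the sketch below is that analogue, not the optical setup, and the duplicate-counting rule is an assumption.

```python
# Counting-sort analogue of Rainbow Sort: the prism's wavelength fan-out is
# replaced by an array indexed by key value, giving the same O(n + m) bound
# (n keys, maximum key value m). Duplicate keys are simply counted, which
# sidesteps the duplicate-handling question raised in the abstract.

def rainbow_like_sort(keys, m):
    buckets = [0] * (m + 1)       # one "detector position" per possible key
    for k in keys:
        buckets[k] += 1           # a "ray" of wavelength k lands in bucket k
    out = []
    for k, count in enumerate(buckets):
        out.extend([k] * count)   # read the detectors in wavelength order
    return out

if __name__ == "__main__":
    print(rainbow_like_sort([4, 1, 3, 1, 0, 4], m=5))   # [0, 1, 1, 3, 4, 4]
```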
29645
29644
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors. We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model.
Abstract of query paper
Cite abstracts
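The optical models cited in this record take an n-point discrete Fourier transform as a unit-time primitive. For contrast, a sequential machine evaluating the same transform directly from its definition needs Θ(n²) operations; a minimal sketch of that baseline follows (the test vector is arbitrary).

```python
# Textbook O(n^2) discrete Fourier transform, the computation the optical
# models above assume can be done in unit time by a lens-based processor.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

if __name__ == "__main__":
    # a unit impulse has a flat spectrum: every magnitude is 1.0
    print([round(abs(c), 3) for c in dft([1, 0, 0, 0])])
```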
29646
29645
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time. In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Abstract of query paper
Cite abstracts
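The marked-ray device cited in this record filters, at the destination node, for rays that have passed through every node exactly once. The same filtering can be mimicked in software by letting each "ray" carry the set of nodes it has visited; the sketch below does this with a bitmask, and the sample graph is an invented illustration rather than an instance from the papers.

```python
# Software analogue of the marked-ray Hamiltonian path search described above:
# each "ray" carries the set of nodes it has visited (a bitmask standing in
# for the per-node markings); at the destination we keep only rays that have
# visited every node exactly once.

def hamiltonian_path_exists(adj, start, dest):
    n = len(adj)
    full = (1 << n) - 1
    rays = [(start, 1 << start)]               # (current node, visited-node mask)
    while rays:
        node, mask = rays.pop()
        if node == dest and mask == full:
            return True                         # a ray passed through every node once
        for nxt in adj[node]:
            if not (mask & (1 << nxt)):         # never revisit a marked node
                rays.append((nxt, mask | (1 << nxt)))
    return False

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}   # small directed graph (illustrative)
    print(hamiltonian_path_exists(adj, 0, 3))     # True: 0 -> 1 -> 2 -> 3
```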
29647
29646
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
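The postulates quoted in this record describe a sequential algorithm as a state-transition system whose every step reads and writes only a bounded portion of an abstract state. A toy rendering of that picture: a state is a mapping from locations to values, and a step is a rule that yields a finite update set applied atomically. The gcd rule below is an arbitrary illustrative program, not an example taken from the cited papers.

```python
# Toy rendering of a sequential abstract state machine: a state is a mapping
# from locations to values, and one step of the program produces a finite
# update set (location -> new value) that is applied atomically.

def gcd_rule(state):
    """Bounded-exploration step: reads only the locations 'a' and 'b'."""
    a, b = state["a"], state["b"]
    if b != 0:
        return {"a": b, "b": a % b}    # update set for this step
    return {}                           # empty update set: the machine halts

def run(state, rule):
    while True:
        updates = rule(state)
        if not updates:
            return state
        state = {**state, **updates}    # apply the whole update set at once

if __name__ == "__main__":
    print(run({"a": 48, "b": 18}, gcd_rule))   # {'a': 6, 'b': 0}
```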
29648
29647
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, interactive, small-step algorithms: All such algorithms are equivalent to ASMs. We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29649
29648
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29650
29649
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. Only a contents listing is available for one cited reference: Static Algebras and Updates (Static Algebras: Motivation; Vocabularies; Definition of Static Algebras; Terms; Locations and Updates; Update Sets and Families of Update Sets; Conservative Determinism vs. Local Nondeterminism). We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29651
29650
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. Only a contents listing is available for one cited reference: Static Algebras and Updates (Static Algebras: Motivation; Vocabularies; Definition of Static Algebras; Terms; Locations and Updates; Update Sets and Families of Update Sets; Conservative Determinism vs. Local Nondeterminism). We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29652
29651
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Only a contents listing is available for one cited reference: Static Algebras and Updates (Static Algebras: Motivation; Vocabularies; Definition of Static Algebras; Terms; Locations and Updates; Update Sets and Families of Update Sets; Conservative Determinism vs. Local Nondeterminism). We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29653
29652
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Only a contents listing is available for one cited reference: Static Algebras and Updates (Static Algebras: Motivation; Vocabularies; Definition of Static Algebras; Terms; Locations and Updates; Update Sets and Families of Update Sets; Conservative Determinism vs. Local Nondeterminism). We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29654
29653
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Only a contents listing is available for one cited reference: Static Algebras and Updates (Static Algebras: Motivation; Vocabularies; Definition of Static Algebras; Terms; Locations and Updates; Update Sets and Families of Update Sets; Conservative Determinism vs. Local Nondeterminism). We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29655
29654
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies.
Abstract of query paper
Cite abstracts
29656
29655
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies.
Abstract of query paper
Cite abstracts
29657
29656
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, interactive, small-step algorithms: All such algorithms are equivalent to ASMs.
Abstract of query paper
Cite abstracts
29658
29657
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29659
29658
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29660
29659
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies.
Abstract of query paper
Cite abstracts
29661
29660
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate. Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments.
Abstract of query paper
Cite abstracts
29662
29661
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, interactive, small-step algorithms: All such algorithms are equivalent to ASMs.
Abstract of query paper
Cite abstracts
29663
29662
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
From the Publisher: Mobile systems, whose components communicate and change their structure, now pervade the informational world and the wider world of which it is a part. The science of mobile systems is as yet immature, however. This book presents the pi-calculus, a theory of mobile systems. The pi-calculus provides a conceptual framework for understanding mobility, and mathematical tools for expressing systems and reasoning about their behaviors. The book serves both as a reference for the theory and as an extended demonstration of how to use pi-calculus to describe systems and analyze their properties. It covers the basic theory of pi-calculus, typed pi-calculi, higher-order processes, the relationship between pi-calculus and lambda-calculus, and applications of pi-calculus to object-oriented design and programming. The book is written at the graduate level, assuming no prior acquaintance with the subject, and is intended for computer scientists interested in mobile systems. What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church’s and Turing’s approaches, and we finish with some recent investigations.
Abstract of query paper
Cite abstracts
29664
29663
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
When algorithms are defined rigorously in Computer Science literature (which only happens rarely), they are generally identified with abstract machines, mathematical models of computers, sometimes idealized by allowing access to “unbounded memory”. My aims here are to argue that this does not square with our intuitions about algorithms and the way we interpret and apply results about them; to promote the problem of defining algorithms correctly; and to describe briefly a plausible solution, by which algorithms are recursive definitions while machines model implementations, a special kind of algorithms.
Abstract of query paper
Cite abstracts
29665
29664
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of ''normal'' Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation. A sequential algorithm just follows its instructions and thus cannot make a nondeterministic choice all by itself, but it can be instructed to solicit outside help to make a choice. Similarly, an object-oriented program cannot create a new object all by itself; a create-a-new-object command solicits outside help. These are but two examples of intrastep interaction of an algorithm with its environment. Here we motivate and survey recent work on interactive algorithms within the Behavioral Computation Theory project.
Abstract of query paper
Cite abstracts
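The persistent Turing machine abstract cited in this record hinges on one idea: a worktape whose contents survive from one input-output computation to the next, so the machine's answers can depend on its history. A minimal sketch of that idea, with a running sum standing in for the worktape contents (an arbitrary choice, not an example from the paper), follows.

```python
# Toy illustration of the persistence idea behind PTMs as described above:
# the machine runs one "normal" computation per input, but keeps state that
# survives from one computation to the next.

class PersistentMachine:
    def __init__(self):
        self.worktape = 0            # persists across computations

    def step(self, input_value):
        """One complete computation: read input, update worktape, write output."""
        self.worktape += input_value
        return self.worktape

if __name__ == "__main__":
    m = PersistentMachine()
    print([m.step(x) for x in [3, 1, 4]])   # [3, 4, 7]: earlier inputs still matter
```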
29666
29665
We prove a nearly optimal bound on the number of stable homotopy types occurring in a k-parameter semi-algebraic family of sets in @math , each defined in terms of m quadratic inequalities. Our bound is exponential in k and m, but polynomial in @math . More precisely, we prove the following. Let @math be a real closed field and let [ P = P_1,...,P_m [Y_1,...,Y_ ,X_1,...,X_k], ] with @math . Let @math be a semi-algebraic set, defined by a Boolean formula without negations, whose atoms are of the form, @math . Let @math be the projection on the last k co-ordinates. Then, the number of stable homotopy types amongst the fibers @math is bounded by [ (2^m k d)^ O(mk) . ]
In this paper we study sets of real solutions of systems of quadratic equations and inequalities. The results are used for the local study of more general systems of smooth equations and inequalities.
Abstract of query paper
Cite abstracts
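The closing bound in the query abstract of this record is easier to read as a single display. The LaTeX below restates only what is visible in the text, writing S_x for the fiber of S over a parameter point x; the quantities hidden behind the @math placeholders are left unspecified.

```latex
% Restatement of the bound quoted in the abstract above; S_x denotes the
% fiber of the semi-algebraic set S over a parameter value x in R^k.
\[
  \#\{\text{stable homotopy types of } S_x : x \in \mathrm{R}^{k}\}
    \;\le\; (2^{m} k d)^{O(mk)} .
\]
```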
29667
29666
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X/ε) Steiner points so that after an O((n^2 X/ε) log(nX/ε))
The optimal path planning problems are very difficult in the case where the cost metric varies not only in different regions of the space, but also in different directions inside the same region. If the classic discretization approach is adopted to compute an @?-approximation of the optimal path, the size of the discretization (and thus the complexity of the approximation algorithm) is usually dictated by a number of geometric parameters and thus can be very large. In this paper we show a general method for choosing the variables of the discretization to maximally reduce the dependency of the size of the discretization on various geometric parameters. We use this method to improve the previously reported results on two optimal path problems with direction-dependent cost metrics. The authors address anisotropic friction and gravity effects as well as ranges of impermissible-traversal headings due to overturn danger or power limitations. The method does not require imposition of a uniform grid, nor does it average effects in different directions, but reasons about a polyhedral approximation of terrain. It reduces the problem to a finite but provably optimal set of possibilities and then uses A* search to find the cost-optimal path. However, the possibilities are not physical locations but path subspaces. The method also exploits the insight that there are only four ways to optimally traverse an anisotropic homogeneous region: (1) straight across without braking, which is the standard isotropic-weighted-region traversal; (2) straight across without braking but as close as possible to a desired impermissible heading; (3) making impermissibility-avoiding switchbacks on the path across a region; and (4) straight across with braking. The authors prove specific optimality criteria for transitions on the boundaries of regions for each combination of traversal types. > We discuss the problem of computing optimal paths on terrains for a mobile robot, where the cost of a path is defined to be the energy expended due to both friction and gravity. The physical model used by this problem allows for ranges of impermissible traversal directions caused by overturn danger or power limitations. The model is interesting and challenging, as it incorporates constraints found in realistic situations, and these constraints affect the computation of optimal paths. We give some upper- and lower-bound results on the combinatorial size of optimal paths on terrains under this model. With some additional assumptions, we present an efficient approximation algorithm that computes for two given points a path whose cost is within a user-defined relative error ratio. Compared with previous results using the same approach, this algorithm improves the time complexity by using 1) a discretization with reduced size, and 2) an improved discrete algorithm for finding optimal paths in the discretization. We present some experimental results to demonstrate the efficiency of our algorithm. We also provide a similar discretization for a more difficult variant of the problem due to less restricted assumptions. We discuss the problem of computing shortest anisotropic paths on terrains. Anisotropic path costs take into account the length of the path traveled, possibly weighted, and the direction of travel along the faces of the terrain. Considering faces to be weighted has added realism to the study of (pure) Euclidean shortest paths. 
Parameters such as the varied nature of the terrain, friction, or slope of each face, can be captured via face weights. Anisotropic paths add further realism by taking into consideration the direction of travel on each face thereby e.g., eliminating paths that are too steep for vehicles to travel and preventing the vehicles from turning over. Prior to this work an O(nn) time algorithm had been presented for computing anisotropic paths. Here we present the first polynomial time approximation algorithm for computing shortest anisotropic paths. Our algorithm is simple to implement and allows for the computation of shortest anisotropic paths within a desired accuracy. Our result addresses the corresponding problem posed in [12].
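The following is a minimal, self-contained Python sketch of the generic Steiner-point discretization that these approximation results build on (not the algorithm of any of the cited papers): subdivide every terrain edge with extra points, connect points that lie on a common face, and run Dijkstra on the resulting weighted graph. The two-triangle terrain, the number of Steiner points and the uniform face weights are all hypothetical illustration choices.

import heapq
import math

def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_points(p, q, m):
    """m evenly spaced points strictly between p and q."""
    return [tuple(p[i] + (q[i] - p[i]) * t / (m + 1) for i in range(3))
            for t in range(1, m + 1)]

# Hypothetical terrain: two triangles sharing the edge (B, C); z is the height.
A, B, C, D = (0, 0, 0), (1, 0, 0.5), (0, 1, 0.5), (1, 1, 1.0)
faces = [(A, B, C), (B, D, C)]
m = 4  # Steiner points per edge; more points give a better approximation

adj = {}
for face in faces:
    pts = list(face)
    for i in range(3):
        # canonical edge order so the shared edge reuses identical Steiner points
        e = tuple(sorted((face[i], face[(i + 1) % 3])))
        pts += steiner_points(e[0], e[1], m)
    for p in pts:               # complete graph inside each face
        for q in pts:
            if p != q:
                w = math.dist(p, q)   # could be scaled by a per-face weight
                adj.setdefault(p, []).append((q, w))

print("approx. shortest path cost A -> D:", round(dijkstra(adj, A)[D], 3))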
Abstract of query paper
Cite abstracts
29668
29667
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
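Below is a rough Monte Carlo sketch of the setting described above, not the authors' mean-field equations: an Engset-like population of N peers that go online and offline at exponential rates, a single source emitting a message stream, instantaneous delivery to online peers, and a catch-up buffer of the last k messages for returning peers. All parameter values are made up for illustration.

import random

random.seed(1)
N, k = 50, 3                 # peers, per-peer catch-up buffer (last k messages)
lam_on, lam_off = 0.5, 0.5   # offline->online and online->offline rates
msg_rate = 2.0               # source messages per unit time
T, dt = 500.0, 0.01

online = [random.random() < 0.5 for _ in range(N)]
last_seen = [0] * N          # index of newest message each peer has received
published = 0                # messages published so far
delivered = 0                # peer-message deliveries

t = 0.0
while t < T:
    # churn: each peer flips state with the appropriate exponential rate
    for i in range(N):
        rate = lam_off if online[i] else lam_on
        if random.random() < rate * dt:
            online[i] = not online[i]
            if online[i]:
                # a returning peer can only recover the last k messages
                missed = published - last_seen[i]
                delivered += min(missed, k)
                last_seen[i] = published
    # the source publishes a message; online peers receive it instantly
    if random.random() < msg_rate * dt:
        published += 1
        for i in range(N):
            if online[i]:
                last_seen[i] = published
                delivered += 1
    t += dt

print("coverage rate (peer-messages per unit time):", round(delivered / T, 2))
print("upper bound N * msg_rate:", N * msg_rate)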
Previous analytical results on the resilience of unstructured P2P systems have not explicitly modeled heterogeneity of user churn (i.e., difference in online behavior) or the impact of in-degree on system resilience. To overcome these limitations, we introduce a generic model of heterogeneous user churn, derive the distribution of the various metrics observed in prior experimental studies (e.g., lifetime distribution of joining users, joint distribution of session time of alive peers, and residual lifetime of a randomly selected user), derive several closed-form results on the transient behavior of in-degree, and eventually obtain the joint in/out-degree isolation probability as a simple extension of the out-degree model in [13].
Abstract of query paper
Cite abstracts
29669
29668
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Epidemic algorithms have recently been proposed as an effective solution for disseminating information in large-scale peer-to-peer (P2P) systems and in mobile ad hoc networks (MANET). In this paper, we present a modeling approach for steady-state analysis of epidemic dissemination of information in MANET. As a major contribution, the introduced approach explicitly represents the spread of multiple data items, finite buffer capacity at mobile devices and a least recently used buffer replacement scheme. Using the introduced modeling approach, we analyze seven degrees of separation (7DS) as one well-known approach for implementing P2P data sharing in a MANET using epidemic dissemination of information. A validation of results derived from the analytical model against simulation shows excellent agreement. Quantitative performance curves derived from the analytical model yield several insights for optimizing the system design of 7DS. This paper presents 7DS, a novel peer-to-peer data sharing system. 7DS is an architecture, a set of protocols and an implementation enabling the exchange of data among peers that are not necessarily connected to the Internet. Peers can be either mobile or stationary. It anticipates the information needs of users and fulfills them by searching for information among peers. We evaluate via extensive simulations the effectiveness of our system for data dissemination among mobile devices with a large number of user mobility scenarios. We model several general data dissemination approaches and investigate the effect of the wireless coverage range, 7DS host density, query interval and cooperation strategy among the mobile hosts. Using theory from random walks, random environments and diffusion of controlled processes, we model one of these data dissemination schemes and show that the analysis confirms the simulation results for scheme
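The following is a small simulation sketch in the spirit of the epidemic/LRU setting analyzed above, not the authors' analytical model: nodes move randomly in the unit square, exchange data items when within radio range, and keep at most B items under least-recently-used replacement. The parameter values are illustrative only.

import random
from collections import OrderedDict

random.seed(0)
N, B, radius, steps = 40, 4, 0.15, 300
ITEMS = list(range(10))

pos = [(random.random(), random.random()) for _ in range(N)]
buf = [OrderedDict() for _ in range(N)]      # OrderedDict used as an LRU buffer
for item in ITEMS:                           # each item starts at one random node
    buf[random.randrange(N)][item] = True

def insert_lru(b, item):
    if item in b:
        b.move_to_end(item)                  # refresh recency
    else:
        b[item] = True
        if len(b) > B:
            b.popitem(last=False)            # evict the least recently used item

for _ in range(steps):
    # simple random-walk mobility, clipped to the unit square
    pos = [(min(max(x + random.uniform(-0.05, 0.05), 0), 1),
            min(max(y + random.uniform(-0.05, 0.05), 0), 1)) for x, y in pos]
    # epidemic exchange between nodes currently within range
    for i in range(N):
        for j in range(i + 1, N):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy <= radius * radius:
                for item in list(buf[i]):
                    insert_lru(buf[j], item)
                for item in list(buf[j]):
                    insert_lru(buf[i], item)

copies = sum(len(b) for b in buf)
print("average copies per item after the run:", copies / len(ITEMS))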
Abstract of query paper
Cite abstracts
29670
29669
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Building very large computing systems is extremely challenging, given the lack of robust scalable communication technologies. This threatens a new generation of mission-critical but very large computing systems. Fortunately, a new generation of "gossip-based" or epidemic communication primitives can overcome a number of these scalability problems, offering robustness and reliability even in the most demanding settings. Epidemic protocols emulate the spread of an infection in a crowded population, and are both reliable and stable under forms of stress that will disable most traditional protocols. This paper describes some of the common problems that arise in scalable group communication systems and how epidemic techniques have been used to successfully address these problems. We introduce autonomous gossiping (A/G), a new genre of epidemic algorithm for selective dissemination of information, in contrast to previous usage of epidemic algorithms which flood the whole network. A/G is a paradigm which suits well in a mobile ad-hoc networking (MANET) environment because it does not require any infrastructure or middleware like multicast tree and (un)subscription maintenance for publish/subscribe, but uses ecological and economic principles in a self-organizing manner in order to achieve any arbitrary selectivity (flexible casting). The trade-off of using a stateless self-organizing mechanism like A/G is that it does not guarantee completeness deterministically, which is one of the original objectives of alternative selective dissemination schemes like publish/subscribe. We argue that such incompleteness is not a problem in many non-critical real-life civilian application scenarios and realistic node mobility patterns, where the overhead of infrastructure maintenance may outweigh the benefits of completeness; moreover, at present there exists no mechanism to realize publish/subscribe or other paradigms for selective dissemination in MANET environments. Peer-to-peer applications have become highly popular in today's pervasive environments due to the spread of different file sharing platforms. In such a multiclient environment, if users have mobility characteristics, asymmetry in communication causes a degradation of reliability. This work proposes an approach based on the advantages of epidemic selective resource placement through mobile Infostations. The epidemic placement policy combines the strengths of both proactive multicast group establishment and the hybrid Infostation concept. With epidemic selective placement we face the flooding problem locally (in a geographic region landscape) and enable end-to-end reliability by forwarding requested packets to epidemically 'selected' mobile users in the network on a recursive basis. The selection of users is performed based on their remaining capacity, weakness of their signal and other explained mobility limitations. Examination through simulation is performed for the response and reliability offered by the epidemic placement policy, which reveals the robustness and reliability in file sharing among mobile peers.
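A minimal sketch of the plain push-gossip primitive that these protocols build on (not any specific protocol from the cited papers): in each round every informed peer forwards the rumor to one peer chosen uniformly at random, which informs the whole population in roughly a logarithmic number of rounds. The population size is an arbitrary illustrative choice.

import random

random.seed(2)
N = 10_000
informed = {0}                      # a single initial source
rounds = 0
while len(informed) < N:
    new = set()
    for peer in informed:
        new.add(random.randrange(N))    # push the rumor to one random peer
    informed |= new
    rounds += 1
print(f"all {N} peers informed after {rounds} rounds (expected Theta(log N))")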
Abstract of query paper
Cite abstracts
29671
29670
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
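Below is a stripped-down sketch of the growth loop described above. The evolved neural network genotype is replaced by a fixed, hand-written update rule (a hypothetical stand-in), just to show the "iterate the chemical exchange until it stabilises, otherwise declare the organism nonviable" structure.

import random

random.seed(3)
W, H, K = 8, 8, 2          # grid size and number of chemicals per cell
grid = [[[random.random() for _ in range(K)] for _ in range(W)] for _ in range(H)]

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < W and 0 <= y + dy < H:
            yield x + dx, y + dy

def step(grid):
    """One synchronous growth step; returns the new grid and the max change."""
    new = [[cell[:] for cell in row] for row in grid]
    change = 0.0
    for y in range(H):
        for x in range(W):
            for k in range(K):
                nb = [grid[ny][nx][k] for nx, ny in neighbours(x, y)]
                # hypothetical contractive rule: move towards the neighbour mean
                target = 0.5 * grid[y][x][k] + 0.5 * sum(nb) / len(nb)
                change = max(change, abs(target - grid[y][x][k]))
                new[y][x][k] = target
    return new, change

for it in range(1, 1001):
    grid, change = step(grid)
    if change < 1e-6:          # a stable state was reached: this is the phenotype
        print(f"organism stabilised after {it} steps")
        break
else:
    print("no fixed point within 1000 steps: organism declared nonviable")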
Today's software is brittle. A tiny corruption in an executable will normally result in terminal failure of that program. But nature does not seem to suffer from the same problems. A multicellular organism, its genes evolved and developed, shows graceful degradation: should it be damaged, it is designed to continue to work. This paper describes an investigation into software with the same properties. Three programs, one human-designed, one evolved using genetic programming, and one evolved and developed using a fractal developmental system are compared. All three calculate the square root of a number. The programs are damaged by corrupting their compiled executable code, and the ability for each of them to survive such damage is assessed. Experiments demonstrate that only the evolutionary developmental code shows graceful degradation after damage. A method for evolving programs that construct multicellular structures (organisms) is described. The paper concentrates on the difficult problem of evolving a cell program that constructs a fixed size French flag. We obtain and analyze an organism that shows a remarkable ability to repair itself when subjected to severe damage. Its behaviour resembles the regenerative power of some living organisms.
Abstract of query paper
Cite abstracts
29672
29671
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
Today's software is brittle. A tiny corruption in an executable will normally result in terminal failure of that program. But nature does not seem to suffer from the same problems. A multicellular organism, its genes evolved and developed, shows graceful degradation: should it be damaged, it is designed to continue to work. This paper describes an investigation into software with the same properties. Three programs, one human-designed, one evolved using genetic programming, and one evolved and developed using a fractal developmental system are compared. All three calculate the square root of a number. The programs are damaged by corrupting their compiled executable code, and the ability for each of them to survive such damage is assessed. Experiments demonstrate that only the evolutionary developmental code shows graceful degradation after damage. A method for evolving programs that construct multicellular structures (organisms) is described. The paper concentrates on the difficult problem of evolving a cell program that constructs a fixed size French flag. We obtain and analyze an organism that shows a remarkable ability to repair itself when subjected to severe damage. Its behaviour resembles the regenerative power of some living organisms.
Abstract of query paper
Cite abstracts
29673
29672
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
The Situation Calculus is a logic of time and change in which there is a distinguished initial situation and all other situations arise from the different sequences of actions that might be performed starting in the initial one. Within this framework, it is difficult to incorporate the notion of an occurrence, since all situations after the initial one are hypothetical. These occurrences are important, for instance, when one wants to represent narratives. There have been proposals to incorporate the notion of an action occurrence in the language of the Situation Calculus, namely Miller and Shanahan’s work on narratives [22] and Pinto and Reiter’s work on actual lines of situations [27, 29]. Both approaches have in common the idea of incorporating a linear sequence of situations into the tree described by theories written in the Situation Calculus language. Unfortunately, several advantages of the Situation Calculus are lost when reasoning with a narrative line or with an actual line of occurrences. In this paper we propose a different approach to dealing with action occurrences and narratives, which can be seen as a generalization of narrative lines to narrative trees. In this approach we exploit the fact that, in the discrete Situation Calculus [13], each situation has a unique history. Then, occurrences are interpreted as constraints on valid histories. We argue that this new approach subsumes the linear approaches of Miller and Shanahan’s, and Pinto and Reiter’s. In this framework, we are able to represent various kinds of occurrences; namely, conditional, preventable and non-preventable occurrences. Other types of occurrences, not discussed in this article, can also be accommodated.
Abstract of query paper
Cite abstracts
29674
29673
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
Representing and reasoning with both temporal constraints between classes of events (e.g., between the types of actions needed to achieve a goal) and temporal constraints between instances of events (e.g., between the specific actions being executed) is a ubiquitous task in many areas of computer science, such as planning, workflow, guidelines and protocol management. The temporal constraints between the classes of events must be inherited by the instances, and the consistency of both types of constraints must be checked. We propose a general-purpose domain-independent knowledge server dealing with these issues. In particular, we propose a formalism to represent temporal constraints, we show two algorithms to deal with inheritance and to perform temporal consistency checking, and we study the properties of the algorithms.
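The following is a compact sketch of the standard way to check a set of quantitative temporal constraints for consistency (a simple temporal network), not the paper's inheritance machinery: encode each bound as a weighted edge and look for a negative cycle with Floyd-Warshall. The events and numeric bounds below are hypothetical.

import math

events = ["start_A", "end_A", "start_B", "end_B"]
idx = {e: i for i, e in enumerate(events)}
n = len(events)
INF = math.inf
d = [[0 if i == j else INF for j in range(n)] for i in range(n)]

def add_constraint(a, b, lo, hi):
    """Require lo <= time(b) - time(a) <= hi."""
    d[idx[a]][idx[b]] = min(d[idx[a]][idx[b]], hi)
    d[idx[b]][idx[a]] = min(d[idx[b]][idx[a]], -lo)

add_constraint("start_A", "end_A", 2, 5)     # action A lasts between 2 and 5
add_constraint("start_B", "end_B", 1, 3)     # action B lasts between 1 and 3
add_constraint("end_A", "start_B", 0, 10)    # B starts after A ends
add_constraint("start_A", "end_B", 0, 6)     # everything done within 6

# Floyd-Warshall; a negative diagonal entry signals an inconsistent cycle
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

consistent = all(d[i][i] >= 0 for i in range(n))
print("constraint set consistent:", consistent)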
Abstract of query paper
Cite abstracts
29675
29674
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
Network, Web, and disk I/O traffic are usually bursty and self-similar and therefore cannot be modeled adequately with Poisson arrivals. However, we wish to model these types of traffic and generate realistic traces, because of obvious applications for disk scheduling, network management, and Web server design. Previous models (like fractional Brownian motion and FARIMA, etc.) tried to capture the 'burstiness'. However, the proposed models either require too many parameters to fit and/or require prohibitively large (quadratic) time to generate large traces. We propose a simple, parsimonious method, the b-model, which solves both problems: it requires just one parameter, and can easily generate large traces. In addition, it has many more attractive properties: (a) with our proposed estimation algorithm, it requires just a single pass over the actual trace to estimate b. For example, a one-day-long disk trace in milliseconds contains about 86 million data points and requires about 3 minutes for model fitting and 5 minutes for generation. (b) The resulting synthetic traces are very realistic: our experiments on real disk and Web traces show that our synthetic traces match the real ones very well in terms of queuing behavior. The timing of many human activities is non-Poissonian, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experience very long waiting times. In contrast, priority-blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follows a heavy tailed distribution P(τw) ∼ τw^(−α) with α = 3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, surface mail based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
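A small sketch of the multiplicative-cascade idea behind the b-model described above: the total traffic volume is recursively split between the two halves of the time interval in proportions b and 1-b, with the favoured side chosen at random, which yields a bursty, self-similar trace from a single parameter. The volume, bias and recursion depth below are illustrative values.

import random

random.seed(4)

def b_model(total, b, levels):
    """Return a synthetic trace with 2**levels time bins."""
    trace = [total]
    for _ in range(levels):
        nxt = []
        for v in trace:
            if random.random() < 0.5:
                nxt += [b * v, (1 - b) * v]
            else:
                nxt += [(1 - b) * v, b * v]
        trace = nxt
    return trace

trace = b_model(total=1_000_000, b=0.7, levels=12)   # 4096 bins
peak, mean = max(trace), sum(trace) / len(trace)
print(f"bins={len(trace)}  mean={mean:.1f}  peak={peak:.1f}  peak/mean={peak / mean:.1f}")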
Abstract of query paper
Cite abstracts
29676
29675
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 "A-list" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern. Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of "infection." Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow. We propose two new tools to address the evolution of hyperlinked corpora. First, we define time graphs to extend the traditional notion of an evolving directed graph, capturing link creation as a point phenomenon in time. Second, we develop definitions and algorithms for time-dense community tracking, to crystallize the notion of community evolution. We develop these tools in the context of Blogspace , the space of weblogs (or blogs). Our study involves approximately 750K links among 25K blogs. We create a time graph on these blogs by an automatic analysis of their internal time stamps. We then study the evolution of connected component structure and microscopic community structure in this time graph. We show that Blogspace underwent a transition behavior around the end of 2001, and has been rapidly expanding over the past year, not just in metrics of scale, but also in metrics of community structure and connectedness. This expansion shows no sign of abating, although measures of connectedness must plateau within two years. By randomizing link destinations in Blogspace, but retaining sources and timestamps, we introduce a concept of randomized Blogspace . Herein, we observe similar evolution of a giant component, but no corresponding increase in community structure. Having demonstrated the formation of micro-communities over time, we then turn to the ongoing activity within active communities. We extend recent work of Kleinberg [11] to discover dense periods of "bursty" intra-community link creation.
Abstract of query paper
Cite abstracts
29677
29676
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective. Models of collective behavior are developed for situations where actors have two alternatives and the costs and or benefits of each depend on how many other actors choose which alternative. The key concept is that of "threshold": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or "equilibrium" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ... An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual without regard to his own information. We argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades. Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties). The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades—herein called global cascades—that occur very rarely. 
When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
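The following is a bare-bones sketch of the threshold-cascade dynamics discussed above: nodes on a sparse random graph activate once the active fraction of their neighbours reaches their threshold, and a single random seed may or may not trigger a global cascade. The graph size, average degree and (homogeneous) threshold are illustrative choices.

import random

random.seed(5)
N, avg_deg, threshold = 10_000, 4, 0.18

# sparse random graph with roughly avg_deg edges per node
adj = [[] for _ in range(N)]
for _ in range(avg_deg * N // 2):
    i, j = random.randrange(N), random.randrange(N)
    if i != j:
        adj[i].append(j)
        adj[j].append(i)

# activate a single random seed, then propagate by the threshold rule
active = [False] * N
seed = random.randrange(N)
active[seed] = True
frontier = [seed]
while frontier:
    nxt = []
    for u in frontier:
        for v in adj[u]:
            if not active[v]:
                frac = sum(active[w] for w in adj[v]) / len(adj[v])
                if frac >= threshold:
                    active[v] = True
                    nxt.append(v)
    frontier = nxt

size = sum(active)
print(f"cascade size: {size} of {N} nodes ({100.0 * size / N:.1f}%)")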
Abstract of query paper
Cite abstracts
29678
29677
In this work we show that for every @math, such that for all @math where the parameters of the model do not depend on @math. They also provide a rare example where one can prove polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math. Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math. Moreover, our results apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math, where @math is the critical point for decay of correlation for the Ising model on @math.
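Below is a minimal sketch of single-site Glauber dynamics for the Ising model on an arbitrary graph, i.e. the Markov chain whose mixing time is studied above; it is not the paper's non-Markov-chain algorithm. The graph, inverse temperature and external field are arbitrary example values.

import math
import random

random.seed(6)
n, d, beta, h = 200, 3.0, 0.2, 0.0
# sparse Erdős–Rényi-style graph G(n, d/n)
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < d / n:
            adj[i].append(j)
            adj[j].append(i)

spin = [random.choice([-1, 1]) for _ in range(n)]
for _ in range(50_000):                 # single-site heat-bath updates
    v = random.randrange(n)
    s = sum(spin[u] for u in adj[v])    # local field from the neighbours
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * (beta * s + h)))
    spin[v] = 1 if random.random() < p_plus else -1

m = sum(spin) / n
print(f"magnetisation after sampling: {m:+.3f}")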
We study discrete time Glauber dynamics for random configurations with local constraints (e.g. proper coloring, Ising and Potts models) on finite graphs with n vertices and of bounded degree. We show that the relaxation time (defined as the reciprocal of the spectral gap 1 − λ2) for the dynamics on trees and on certain hyperbolic graphs is polynomial in n. For these hyperbolic graphs, this yields a general polynomial sampling algorithm for random configurations. We then show that if the relaxation time T2 satisfies T2 = O(n), then the correlation coefficient, and the mutual information, between any local function (which depends only on the configuration in a fixed window) and the boundary conditions, decays exponentially in the distance between the window and the boundary. For the Ising model on a regular tree, this condition is sharp. Various finite volume mixing conditions in classical statistical mechanics are reviewed and critically analyzed. In particular some finite size conditions are discussed, together with their implications for the Gibbs measures and for the approach to equilibrium of Glauber dynamics in arbitrarily large volumes. It is shown that Dobrushin-Shlosman's theory of complete analyticity and its dynamical counterpart due to Stroock and Zegarlinski, cannot be applied, in general, to the whole one phase region since it requires mixing properties for regions of arbitrary shape. An alternative approach, based on previous ideas of Olivieri and Picco, is developed, which allows one to establish results on rapid approach to equilibrium deeply inside the one phase region. In particular, in the ferromagnetic case, we considerably improve some previous results by Holley and Aizenman and Holley. Our results are optimal in the sense that, for example, they show for the first time fast convergence of the dynamics for any temperature above the critical one for the d-dimensional Ising model with or without an external field. In part II we extensively consider the general case (not necessarily attractive) and we develop a new method, based on renormalization group ideas and on an assumption of strong mixing in a finite cube, to prove hypercontractivity of the Markov semigroup of the Glauber dynamics. For finite range lattice gases with a finite spin space, it is shown that the Dobrushin-Shlosman mixing condition is equivalent to the existence of a logarithmic Sobolev inequality for the associated (unique) Gibbs state. In addition, implications of these considerations for the ergodic properties of the corresponding Glauber dynamics are examined. We show that, under the conditions of the Dobrushin-Shlosman theorem for uniqueness of the Gibbs state, the reversible stochastic Ising model converges to equilibrium exponentially fast on the L2 space of that Gibbs state. For stochastic Ising models with attractive interactions and under conditions which are somewhat stronger than Dobrushin’s, we prove that the semi-group of the stochastic Ising model converges to equilibrium exponentially fast in the uniform norm. We also give a new, much shorter, proof of a theorem which says that if the semi-group of an attractive spin flip system converges to equilibrium faster than 1/t^d, where d is the dimension of the underlying lattice, then the convergence must be exponentially fast.
Abstract of query paper
Cite abstracts
29679
29678
In this work we show that for every @math, such that for all @math where the parameters of the model do not depend on @math. They also provide a rare example where one can prove polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math. Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math. Moreover, our results apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math, where @math is the critical point for decay of correlation for the Ising model on @math.
Spin systems are a general way to describe local interactions between nodes in a graph. In statistical mechanics, spin systems are often used as a model for physical systems. In computer science, they comprise an important class of families of combinatorial objects, for which approximate counting and sampling algorithms remain an elusive goal. The Dobrushin condition states that every row sum of the "influence matrix" for a spin system is less than 1 − ε, where ε > 0. This criterion implies rapid convergence (O(n log n) mixing time) of the single-site (Glauber) dynamics for a spin system, as well as uniqueness of the Gibbs measure. The dual criterion that every column sum of the influence matrix is less than 1 − ε has also been shown to imply the same conclusions. We examine a common generalization of these conditions, namely that the maximum eigenvalue of the influence matrix is less than 1 − ε. Our main result is that this criterion implies O(n log n) mixing time for the Glauber dynamics. As applications, we consider the Ising model, hard-core lattice gas model, and graph colorings, relating the mixing time of the Glauber dynamics to the maximum eigenvalue for the adjacency matrix of the graph. For the special case of planar graphs, this leads to improved bounds on mixing time with quite simple proofs. We analyze Markov chains for generating a random k-coloring of a random graph G(n, d/n). When the average degree d is constant, a random graph has maximum degree Θ(log n / log log n), with high probability. We show that, with high probability, an efficient procedure can generate an almost uniformly random k-coloring when k = Θ(log log n / log log log n), i.e., with many fewer colors than the maximum degree. Previous results hold for a more general class of graphs, but always require more colors than the maximum degree.
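Here is a small numeric sketch of the three criteria compared above, for the Ising model with uniform coupling beta on a given graph: the influence of j on i is bounded by tanh(beta) whenever (i, j) is an edge, so the row-sum, column-sum and largest-eigenvalue conditions reduce to checks on a scaled adjacency matrix. The example graph and beta are arbitrary; the point is that the spectral condition can hold even when a few high-degree rows break the Dobrushin condition.

import math
import random

random.seed(7)
n, p, beta = 60, 0.08, 0.15
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            A[i][j] = A[j][i] = 1.0

influence = [[math.tanh(beta) * A[i][j] for j in range(n)] for i in range(n)]

max_row = max(sum(row) for row in influence)
max_col = max(sum(influence[i][j] for i in range(n)) for j in range(n))

# power iteration for the largest eigenvalue of the (symmetric) influence matrix
x = [random.random() for _ in range(n)]
for _ in range(200):
    y = [sum(influence[i][j] * x[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]
lam = sum(x[i] * sum(influence[i][j] * x[j] for j in range(n)) for i in range(n))

print(f"max row sum   : {max_row:.3f}  (Dobrushin condition wants < 1)")
print(f"max column sum: {max_col:.3f}")
print(f"max eigenvalue: {lam:.3f}  (the weaker spectral condition wants < 1)")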
Abstract of query paper
Cite abstracts
29680
29679
In this work we show that for every @math, such that for all @math where the parameters of the model do not depend on @math. They also provide a rare example where one can prove polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math. Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math. Moreover, our results apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math, where @math is the critical point for decay of correlation for the Ising model on @math.
We consider local Markov chain Monte Carlo algorithms for sampling from the weighted distribution of independent sets with activity λ, where the weight of an independent set I is λ|I|. A recent result has established that Gibbs sampling is rapidly mixing in sampling the distribution for graphs of maximum degree d and λ < λc, while for λ > λc it is NP-hard to approximate the above weighted sum over independent sets to within a factor polynomial in the size of the graph. We study several statistical mechanical models on a general tree. Particular attention is devoted to the classical Heisenberg models, where the state space is the d-dimensional unit sphere and the interactions are proportional to the cosines of the angles between neighboring spins. The phenomenon of interest here is the classification of phase transition (non-uniqueness of the Gibbs state) according to whether it is robust. In many cases, including all of the Heisenberg and Potts models, occurrence of robust phase transition is determined by the geometry (branching number) of the tree in a way that parallels the situation with independent percolation and usual phase transition for the Ising model. The critical values for robust phase transition for the Heisenberg and Potts models are also calculated exactly. In some cases, such as the q > 3 Potts model, robust phase transition and usual phase transition do not coincide, while in other cases, such as the Heisenberg models, we conjecture that robust phase transition and usual phase transition are equivalent. In addition, we show that symmetry breaking is equivalent to the existence of a phase transition, a fact believed but not known for the rotor model on Z^2.
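For comparison with the Ising sketch above, the following is a minimal sketch of the single-site Glauber dynamics for the hard-core model discussed here: pick a vertex, try to make it occupied with probability λ/(1+λ) if no neighbour is occupied, otherwise make it vacant. The cycle graph and the activity value are arbitrary examples.

import random

random.seed(9)
n, lam = 100, 1.0
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # a cycle on n vertices
occupied = [False] * n

for _ in range(100_000):
    v = random.randrange(n)
    if random.random() < lam / (1.0 + lam) and not any(occupied[u] for u in adj[v]):
        occupied[v] = True
    else:
        occupied[v] = False

print("sampled independent set density:", sum(occupied) / n)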
Abstract of query paper
Cite abstracts
29681
29680
In this work we show that for every @math, such that for all @math where the parameters of the model do not depend on @math. They also provide a rare example where one can prove polynomial time mixing of the Gibbs sampler in a situation where the actual mixing time is slower than @math. Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math. Moreover, our results apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math, where @math is the critical point for decay of correlation for the Ising model on @math.
Consider a collection of random variables attached to the vertices of a graph. The reconstruction problem requires estimating one of them given 'far away' observations. Several theoretical results (and simple algorithms) are available when their joint probability distribution is Markov with respect to a tree. In this paper we consider the case of sequences of random graphs that converge locally to trees. In particular, we develop a sufficient condition for the tree and graph reconstruction problems to coincide. We apply such a condition to colorings of random graphs. Further, we characterize the behavior of Ising models on such graphs, both with attractive and random interactions (respectively, 'ferromagnetic' and 'spin glass').
Abstract of query paper
Cite abstracts
29682
29681
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not satisfy it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension-one brane dominated universe. The Fischler–Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
A closed universe containing pressureless dust, or more generally perfect fluid matter with pressure-to-density ratio w in the range (−1/3, 1/3), violates the holographic principle applied according to the Fischler–Susskind proposal. We show, first for a class of two-fluid solutions and then for the general multifluid case, that the closed universe will obey the holographic principle if it also contains matter with w < −1/3, and if the present value of its total density is sufficiently close to the critical density. It is possible that such matter can be realised by some form of 'quintessence', much studied recently. We examine in detail Friedmann–Robertson–Walker models in 2+1 dimensions in order to investigate the cosmic holographic principle suggested by Fischler and Susskind. Our results are rigorously derived, differing from the previous ones found by Wang and Abdalla. We discuss the erroneous assumptions made in that work. The matter content of the models is composed of a perfect fluid, with a γ-law equation of state. We found that closed universes satisfy the holographic principle only for exotic matter with a negative pressure. We also analyze the case of a collapsing flat universe. The holographic bound states that the entropy in a region cannot exceed one quarter of the area (in Planck units) of the bounding surface. A version of the holographic principle that can be applied to cosmological spacetimes has recently been given by Fischler and Susskind. This version can be shown to fail in closed spacetimes and they concluded that the holographic bound may rule out such universes. In this paper I give a modified definition of the holographic bound that holds in a large class of closed universes. Fischler and Susskind also showed that the dominant energy condition follows from the holographic principle applied to cosmological spacetimes with @math. Here I show that the dominant energy condition can be violated by cosmologies satisfying the holographic principle with more general scale factors. We propose that the holographic principle be replaced by the generalized second law of thermodynamics when applied to time-dependent backgrounds. For isotropic open and flat universes with a fixed equation of state, this agrees with the cosmological holographic principle proposed by Fischler and Susskind (hep-th/9806039). However, in more general situations, it does not. A cosmological version of the holographic principle is proposed. Various consequences are discussed including bounds on the equation of state and the requirement that the universe be infinite.
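As a back-of-the-envelope illustration of the Fischler-Susskind check (for the simpler spatially flat case, not the closed-universe analysis of these papers), the sketch below takes a(t) ~ t^(2/(3(1+w))): the entropy inside the particle horizon scales like chi_p(t)^3 while the horizon area scales like (a(t) chi_p(t))^2, so the bound holds at late times only if their ratio does not grow. Normalisations are dropped; only the trend in t matters, and the chosen w values are arbitrary.

def ratio(w, t):
    q = 2.0 / (3.0 * (1.0 + w))          # scale factor a(t) = t**q
    chi_p = t ** (1.0 - q) / (1.0 - q)   # comoving particle horizon (valid for q < 1)
    return chi_p / t ** (2.0 * q)        # S / A up to constant factors

for w in (0.0, 1.0 / 3.0, 0.9, 1.5):
    trend = "grows" if ratio(w, 100.0) > ratio(w, 10.0) else "shrinks or stays bounded"
    print(f"w = {w:4.2f}: S/A {trend} with time")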
Abstract of query paper
Cite abstracts
29683
29682
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not satisfy it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension-one brane dominated universe. The Fischler–Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
Foreword by Professor Sir Fred Hoyle 1. The large-scale structure of the universe 2. General relativity 3. From relativity to cosmology 4. The Friedman models 5. Relics of the Big Bang 6. The very early universe 7. The formation of structures in the universe 8. Alternative cosmologies 9. Local observations of cosmological significance 10. Observations of distant parts of the universe 11. A critical overview.
Abstract of query paper
Cite abstracts
29684
29683
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not satisfy it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension-one brane dominated universe. The Fischler–Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
We present a complete quantum mechanical description of a flat Friedmann-Robertson-Walker universe with equation of state p = ρ. We find a detailed correspondence with our heuristic picture of such a universe as a dense black hole fluid. Features of the geometry are derived from purely quantum input. We present a new version of holographic cosmology, which is compatible with present observations. A primordial p = ρ phase of the universe is followed by a brief matter dominated era and a brief period of inflation, whose termination heats the universe. The flatness and horizon problems are solved by the p = ρ dynamics. The model is characterized by two parameters, which should be calculated in a more fundamental approach to the theory. For a large range in the phenomenologically allowed parameter space, the observed fluctuations in the cosmic microwave background were generated during the p = ρ era, and are exactly scale invariant. The scale invariant spectrum cuts off sharply at both upper and lower ends, and this may have observational consequences. We argue that the amplitude of fluctuations is small but cannot yet calculate it precisely. A simple cosmological model with only six parameters (matter density, Ωm h^2, baryon density, Ωb h^2, Hubble Constant, H0, amplitude of fluctuations We have discovered 16 Type Ia supernovae (SNe Ia) with the Hubble Space Telescope (HST) and have used them to provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration. These objects, discovered during the course of the GOODS ACS Treasury program, include 6 of the 7 highest redshift SNe Ia known, all at z > 1.25, and populate the Hubble diagram in unexplored territory. The luminosity distances to these objects and to 170 previously reported SNe Ia have been determined using empirical relations between light-curve shape and luminosity. A purely kinematic interpretation of the SN Ia sample provides evidence at the greater than 99% confidence level for a transition from deceleration to acceleration or, similarly, strong evidence for a cosmic jerk. Using a simple model of the expansion history, the transition between the two epochs is constrained to be at z = 0.46 ± 0.13. The data are consistent with the cosmic concordance model of ΩM ≈ 0.3, ΩΛ ≈ 0.7 (χ²/dof = 1.06) and are inconsistent with a simple model of evolution or dust as an alternative to dark energy. For a flat universe with a cosmological constant, we measure ΩM = 0.29 ± (equivalently, ΩΛ = 0.71). When combined with external flat-universe constraints, including the cosmic microwave background and large-scale structure, we find w = -1.02 ± (and w < -0.76 at the 95% confidence level) for an assumed static equation of state of dark energy, P = wρc^2. Joint constraints on both the recent equation of state of dark energy, w0, and its time evolution, dw/dz, are a factor of 8 more precise than the first estimates and twice as precise as those without the SNe Ia discovered with HST. Our constraints are consistent with the static nature of and value of w expected for a cosmological constant (i.e., w0 = -1.0, dw/dz = 0) and are inconsistent with very rapid evolution of dark energy. We address consequences of evolving dark energy for the fate of the universe.
Abstract of query paper
Cite abstracts
29685
29684
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not satisfy it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we show graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension-one brane dominated universe. The Fischler–Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
We employ the holographic model of interacting dark energy to obtain the equation of state for the holographic energy density in a non-flat (closed) universe enclosed by the event horizon, measured from the sphere of the horizon and named L. Entropy bounds render quantum corrections to the cosmological constant Λ finite. Under certain assumptions, the natural value of Λ is of order the observed dark energy density ∼10^−10 eV^4, thereby resolving the cosmological constant problem. We note that the dark energy equation of state in these scenarios is w ≡ p/ρ = 0 over cosmological distances, and is strongly disfavored by observational data. Alternatively, Λ in these scenarios might account for the diffuse dark matter component of the cosmological energy density. A model for holographic dark energy is proposed, following the idea that the short distance cut-off is related to the infrared cut-off. We assume that the infrared cut-off relevant to the dark energy is the size of the event horizon. With the input Ω_Λ = 0.73, we predict the equation of state of the dark energy at the present time to be characterized by w = -0.90. The cosmic coincidence problem can be resolved by inflation in our scenario, provided we assume the minimal number of e-foldings. Here we consider a scenario in which dark energy is associated with the apparent area of a surface in the early universe. In order to resemble the cosmological constant at late times, this hypothetical reference scale should maintain an approximately constant physical size during an asymptotically de Sitter expansion. This is found to arise when the particle horizon—anticipated to be significantly greater than the Hubble length—is approaching the antipode of a closed universe. Depending on the constant of proportionality, either the ensuing inflationary period prevents the particle horizon from vanishing, or it may lead to a sequence of 'big rips'.
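The w = -0.90 prediction quoted above follows from a short calculation once Ω_Λ = 0.73 is fixed. The sketch below is a numeric check assuming the standard holographic dark energy equation of state with the future event horizon as infrared cutoff and the model parameter c set to 1; the formula w = -1/3 - (2/3c)√Ω_Λ is the usual result for this class of models and is an assumption here, not something spelled out in the abstract.

```python
from math import sqrt

def holographic_w(omega_lambda: float, c: float = 1.0) -> float:
    """Present-day equation of state for holographic dark energy with the
    future event horizon as infrared cutoff:
        w = -1/3 - (2 / (3 c)) * sqrt(Omega_Lambda)
    """
    return -1.0 / 3.0 - (2.0 / (3.0 * c)) * sqrt(omega_lambda)

if __name__ == "__main__":
    # Input taken from the abstract: Omega_Lambda = 0.73, c = 1 (assumed).
    print(f"w ~ {holographic_w(0.73):.2f}")  # ~ -0.90, matching the quoted prediction
```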
Abstract of query paper
Cite abstracts
29686
29685
Let @math be a product of @math independent, identically distributed random matrices @math , with the properties that @math is bounded in @math , and that @math has a deterministic (constant) invariant vector. Assuming that the probability of @math having only the simple eigenvalue 1 on the unit circle does not vanish, we show that @math is the sum of a fluctuating and a decaying process. The latter converges to zero almost surely, exponentially fast as @math . The fluctuating part converges in Cesaro mean to a limit that is characterized explicitly by the deterministic invariant vector and the spectral data of @math associated to 1. No additional assumptions are made on the matrices @math ; they may have complex entries and not be invertible. We apply our general results to two classes of dynamical systems: inhomogeneous Markov chains with random transition matrices (stochastic matrices), and random repeated interaction quantum systems. In both cases, we prove ergodic theorems for the dynamics, and we obtain the form of the limit states.
We study the behavior at infinity of the tail of the stationary solution of a multidimensional linear auto-regressive process with random coefficients. We exhibit an extended class of multiplicative coefficients satisfying a condition of irreducibility and proximality that yield heavy tail behavior.
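Both abstracts above concern products of random matrices driving a linear recursion with random coefficients. Below is a small simulation sketch of the scalar version of such a process, X_{n+1} = A_n X_n + B_n, with hypothetical toy distributions chosen only to illustrate how multiplicative random coefficients produce heavy-tailed stationary behavior; the cited papers treat the multidimensional case and prove the corresponding limit theorems.

```python
import random

def simulate_tail(n_paths: int = 5000, burn_in: int = 500, threshold: float = 50.0) -> float:
    """Simulate X_{n+1} = A_n * X_n + B_n with random multiplicative coefficients
    and return the empirical probability that |X| exceeds `threshold`.
    Here E[log A] < 0, so a stationary regime exists, yet occasional large
    values of A make the stationary tail far heavier than that of B alone.
    """
    exceed = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(burn_in):
            a = random.lognormvariate(-0.2, 0.8)   # multiplicative random coefficient
            b = random.gauss(0.0, 1.0)             # light-tailed additive noise
            x = a * x + b
        if abs(x) > threshold:
            exceed += 1
    return exceed / n_paths

if __name__ == "__main__":
    # With a constant coefficient a = 0.8 the stationary law would be Gaussian
    # and P(|X| > 50) essentially zero; with random A it is clearly positive.
    print(f"P(|X| > 50) ~ {simulate_tail():.4f}")
```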
Abstract of query paper
Cite abstracts
29687
29686
Users of online dating sites are facing information overload that requires them to manually construct queries and browse huge amounts of matching user profiles. This becomes even more problematic for multimedia profiles. Although matchmaking is frequently cited as a typical application for recommender systems, there is a surprising lack of work published in this area. In this paper we describe a recommender system we implemented and perform a quantitative comparison of two collaborative filtering (CF) and two global algorithms. Results show that collaborative filtering recommenders significantly outperform global algorithms that are currently used by dating sites. A blind experiment with real users also confirmed that users prefer CF based recommendations to global popularity recommendations. Recommender systems show great potential for online dating, where they could improve the value of the service to users and improve monetization of the service.
Eigentaste is a collaborative filtering algorithm that uses universal queries to elicit real-valued user ratings on a common set of items and applies principal component analysis (PCA) to the resulting dense subset of the ratings matrix. PCA facilitates dimensionality reduction for offline clustering of users and rapid computation of recommendations. For a database of n users, standard nearest-neighbor techniques require O(n) processing time to compute recommendations, whereas Eigentaste requires O(1) (constant) time. We compare Eigentaste to alternative algorithms using data from Jester, an online joke recommending system. Jester has collected approximately 2,500,000 ratings from 57,000 users. We use the Normalized Mean Absolute Error (NMAE) measure to compare performance of different algorithms. In the Appendix we use Uniform and Normal distribution models to derive analytic estimates of NMAE when predictions are random. On the Jester dataset, Eigentaste computes recommendations two orders of magnitude faster with no loss of accuracy. Jester is online at: http://eigentaste.berkeley.edu Thesis (M. Eng)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. The Tapestry experimental mail system developed at the Xerox Palo Alto Research Center is predicated on the belief that information filtering can be more effective when humans are involved in the filtering process. Tapestry was designed to support both content-based filtering and collaborative filtering, which entails people collaborating to help each other perform filtering by recording their reactions to documents they read. The reactions are called annotations; they can be accessed by other people’s filters. Tapestry is intended to handle any incoming stream of electronic documents and serves both as a mail filter and repository; its components are the indexer, document store, annotation store, filterer, little box, remailer, appraiser and reader browser. Tapestry’s client/server architecture, its various components, and the Tapestry query language are described. Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. There are three common approaches to solving the recommendation problem: traditional collaborative filtering, cluster models, and search-based methods. Here, we compare these methods with our algorithm, which we call item-to-item collaborative filtering. Unlike traditional collaborative filtering, our algorithm's online computation scales independently of the number of customers and number of items in the product catalog. Our algorithm produces recommendations in real-time, scales to massive data sets, and generates high quality recommendations.
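The item-to-item collaborative filtering summarized above boils down to precomputing item-item similarities offline and composing recommendations online from the items a user has already rated. Below is a minimal sketch of that idea using cosine similarity over a tiny in-memory ratings dictionary; the data and function names are illustrative only, not the production algorithm from the cited papers.

```python
from collections import defaultdict
from math import sqrt

# ratings[user][item] = rating; a tiny illustrative data set
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 1, "d": 5},
    "carol": {"b": 2, "c": 5, "d": 4},
}

def item_vectors(ratings):
    """Invert user->item ratings into item->user rating vectors."""
    vecs = defaultdict(dict)
    for user, items in ratings.items():
        for item, r in items.items():
            vecs[item][user] = r
    return vecs

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user, ratings, top_n=2):
    """Score unseen items by similarity-weighted ratings of the items the user knows."""
    vecs = item_vectors(ratings)
    seen = ratings[user]
    scores = defaultdict(float)
    for new_item in vecs:
        if new_item in seen:
            continue
        for old_item, r in seen.items():
            scores[new_item] += cosine(vecs[new_item], vecs[old_item]) * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))
```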
Abstract of query paper
Cite abstracts
29688
29687
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
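The dynamic mapping scheme described above associates scalar abstract cells with accessed memory locations and must cope with overlapping (byte-level aliased) accesses. The toy sketch below illustrates only that bookkeeping, assuming a single flat byte-addressed block; it is purely illustrative and much coarser than the paper's domain, which also tracks numeric and points-to values per cell.

```python
class CellMap:
    """Map byte ranges (offset, size) of one memory block to abstract cells.

    When a new access overlaps existing cells with a different layout, the
    overlapping cells are discarded (their values forgotten) and a fresh cell
    is created -- a sound, if coarse, way to handle byte-level aliases.
    """

    def __init__(self):
        self.cells = {}  # (offset, size) -> abstract value (here: just a label)

    def access(self, offset: int, size: int) -> str:
        key = (offset, size)
        if key in self.cells:
            return self.cells[key]
        # Drop any cell whose byte range overlaps the new access.
        overlapping = [k for k in self.cells
                       if not (k[0] + k[1] <= offset or offset + size <= k[0])]
        for k in overlapping:
            del self.cells[k]            # forget the values of aliased cells
        self.cells[key] = f"cell@{offset}:{size}"
        return self.cells[key]

m = CellMap()
print(m.access(0, 4))   # e.g. reading a 4-byte int at offset 0
print(m.access(2, 2))   # an overlapping 2-byte access invalidates the int cell
print(m.cells)
```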
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications. This paper proposes an efficient technique for context-sensitive pointer analysis that is applicable to real C programs. For efficiency, we summarize the effects of procedures using partial transfer functions . A partial transfer function (PTF) describes the behavior of a procedure assuming that certain alias relationships hold when it is called. We can reuse a PTF in many calling contexts as long as the aliases among the inputs to the procedure are the same. Our empirical results demonstrate that this technique is successful—a single PTF per procedure is usually sufficient to obtain completely context-sensitive results. Because many C programs use features such as type casts and pointer arithmetic to circumvent the high-level type system, our algorithm is based on a low-level representation of memory locations that safely handles all the features of C. We have implemented our algorithm in the SUIF compiler system and we show that it runs efficiently for a set of C benchmarks. This paper concerns static-analysis algorithms for analyzing x86 executables. The aim of the work is to recover intermediate representations that are similar to those that can be created for a program written in a high-level language. Our goal is to perform this task for programs such as plugins, mobile code, worms, and virus-infected code. For such programs, symbol-table and debugging information is either entirely absent, or cannot be relied upon if present; hence, the technique described in the paper makes no use of symbol-table debugging information. Instead, an analysis is carried out to recover information about the contents of memory locations and how they are manipulated by the executable. This article presents a novel framework for the symbolic bounds analysis of pointers, array indices, and accessed memory regions. Our framework formulates each analysis problem as a system of inequality constraints between symbolic bound polynomials. It then reduces the constraint system to a linear program. The solution to the linear program provides symbolic lower and upper bounds for the values of pointer and array index variables and for the regions of memory that each statement and procedure accesses. This approach eliminates fundamental problems associated with applying standard fixed-point approaches to symbolic analysis problems. Experimental results from our implemented compiler show that the analysis can solve several important problems, including static race detection, automatic parallelization, static detection of array bounds violations, elimination of array bounds checks, and reduction of the number of bits used to store computed values.
Abstract of query paper
Cite abstracts
29689
29688
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.
Abstract of query paper
Cite abstracts
29690
29689
Search engines provide cached copies of indexed content so users will have something to "click on" if the remote resource is temporarily or permanently unavailable. Depending on their proprietary caching strategies, search engines will purge their indexes and caches of resources that exceed a threshold of unavailability. Although search engine caches are provided only as an aid to the interactive user, we are interested in building reliable preservation services from the aggregate of these limited caching services. But first, we must understand the contents of search engine caches. In this paper, we have examined the cached contents of Ask, Google, MSN and Yahoo to profile such things as overlap between index and cache, size, MIME type and "staleness" of the cached resources. We also examined the overlap of the various caches with the holdings of the Internet Archive.
This study measures the frequency with which search engines update their indices. To this end, 38 websites that are updated on a daily basis were analysed within a time-span of six weeks. The analysed search engines were Google, Yahoo and MSN. We find that Google performs best overall with the most pages updated on a daily basis, but only MSN is able to update all pages within a time-span of less than 20 days. Both other engines have outliers that are older. In terms of indexing patterns, we find different approaches at the different engines. While MSN shows clear update patterns, Google shows some outliers and the update process of the Yahoo index seems to be quite chaotic. Implications are that the quality of different search engine indices varies and more than one engine should be used when searching for current content.
Abstract of query paper
Cite abstracts
29691
29690
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy, which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg, it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Noncooperative game theory provides a normative framework for analyzing strategic interactions. However, for the toolbox to be operational, the solutions it defines will have to be computed. In this paper, we provide a single reduction that 1) demonstrates NP-hardness of determining whether Nash equilibria with certain natural properties exist, and 2) demonstrates the NP-hardness of counting Nash equilibria (or connected sets of Nash equilibria). We also show that 3) determining whether a pure-strategy Bayes-Nash equilibrium exists is NP-hard, and that 4) determining whether a pure-strategy Nash equilibrium exists in a stochastic (Markov) game is PSPACE-hard even if the game is invisible (this remains NP-hard if the game is finite). All of our hardness results hold even if there are only two players and the game is symmetric. This paper deals with the complexity of computing Nash and correlated equilibria for a finite game in normal form. We examine the problems of checking the existence of equilibria satisfying a certain condition, such as “Given a game G and a number r, is there a Nash (correlated) equilibrium of G in which all players obtain an expected payoff of at least r?” or “Is there a unique Nash (correlated) equilibrium in G?” etc. We show that such problems are typically “hard” (NP-hard) for Nash equilibria but “easy” (polynomial) for correlated equilibria.
Abstract of query paper
Cite abstracts
29692
29691
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy, which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg, it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
We consider the problem of computing a Nash equilibrium in multiple-player games. It is known that there exist games in which all the equilibria have irrational entries in their probability distributions [19]. This suggests that either we should look for symbolic representations of equilibria or we should focus on computing approximate equilibria. We show that every finite game has an equilibrium such that all the entries in the probability distributions are algebraic numbers and hence can be finitely represented. We also propose an algorithm which computes an approximate equilibrium in the following sense: the strategies output by the algorithm are close with respect to the l∞-norm to those of an exact Nash equilibrium and also the players have only a negligible incentive to deviate to another strategy. The running time of the algorithm is exponential in the number of strategies and polynomial in the digits of accuracy. We obtain similar results for approximating market equilibria in the neoclassical exchange model under certain assumptions.
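The approximation notion used above (strategies close in l∞ and only a negligible incentive to deviate) can be checked directly for any candidate profile. Below is a minimal sketch of such a check for a two-player game in normal form; it is purely illustrative and is not the algorithm of the cited paper.

```python
import numpy as np

def deviation_incentive(A, B, x, y):
    """Largest gain either player can obtain by deviating unilaterally from (x, y).

    A and B are the row and column players' payoff matrices; x and y are mixed
    strategies (probability vectors). The profile is an eps-Nash equilibrium
    exactly when the returned value is <= eps.
    """
    row_payoff = x @ A @ y
    col_payoff = x @ B @ y
    best_row = np.max(A @ y)        # best pure response of the row player to y
    best_col = np.max(x @ B)        # best pure response of the column player to x
    return max(best_row - row_payoff, best_col - col_payoff)

# Matching pennies: the uniform profile is an exact equilibrium, so the gain is 0.
A = np.array([[1, -1], [-1, 1]], dtype=float)
B = -A
x = y = np.array([0.5, 0.5])
print(deviation_incentive(A, B, x, y))  # 0.0
```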
Abstract of query paper
Cite abstracts
29693
29692
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
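The resize library mentioned above includes algorithms for processor selection and mapping onto a two-dimensional processor grid. The helper below is a small illustrative sketch in that spirit, choosing the most nearly square r x c factorization of a processor count and filtering expansion candidates; the function names and the "skip elongated grids" rule are hypothetical, not the ReSHAPE API.

```python
def nearly_square_grid(nprocs: int) -> tuple[int, int]:
    """Return (rows, cols) with rows * cols == nprocs and rows as close to cols
    as possible -- a common layout choice for 2-D block-cyclic distributions."""
    best = (1, nprocs)
    for r in range(1, int(nprocs ** 0.5) + 1):
        if nprocs % r == 0:
            best = (r, nprocs // r)
    return best

def candidate_sizes(current: int, free: int) -> list[int]:
    """Processor counts worth expanding to: sizes above `current`, up to
    current + free, that admit a reasonably square grid (illustrative rule)."""
    out = []
    for p in range(current + 1, current + free + 1):
        r, c = nearly_square_grid(p)
        if c <= 2 * r:          # skip very elongated grids
            out.append(p)
    return out

print(nearly_square_grid(12))    # (3, 4)
print(candidate_sizes(12, 8))    # [15, 16, 18, 20]
```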
We study the problem of on-line job-scheduling on parallel machines with different network topologies. An on-line scheduling algorithm schedules a collection of parallel jobs with known resource requirements but unknown running times on a parallel machine. We give an O(log log N)-competitive algorithm for on-line scheduling on a two-dimensional mesh of N processors and we prove a matching lower bound of Ω(log log N) on the competitive ratio. Furthermore, we show tight constant bounds of 2 for PRAMs and hypercubes, and present a 2.5-competitive algorithm for lines. We also generalize our two-dimensional mesh result to higher dimensions. Surprisingly, our algorithms become less and less greedy as the geometric structure of the network topology becomes more complicated. The proof of our lower bound for the two-dimensional mesh actually shows that no greedy-like algorithm can perform well. We propose and evaluate empirically the performance of a dynamic processor-scheduling policy for multiprogrammed shared-memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. The policy is suitable for implementation in production systems in that: —It interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention. —It deals with thread blocking due to user I/O and page faults. —It ensures fairness in delivering resources to jobs. —Its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems. It provides good performance to very short, sequential (e.g., interactive) requests. We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, and the importance of cooperation between the kernel and the application in reallocating processors between jobs. We have also compared the policies according to other criteria important in real implementations, in particular, fairness and response time to short, sequential requests. We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy.
Abstract of query paper
Cite abstracts
29694
29693
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
We describe Charm++, an object oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects, all of which are more difficult problems in a parallel context. Charm++ provides specific modes for sharing information between parallel objects. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm. Malleable jobs are parallel programs that can change the number of processors on which they are executing at run time in response to an external command. One of the advantages of such jobs is that a job scheduler for malleable jobs can provide improved system utilization and average response time over a scheduler for traditional jobs. In this paper, we present a programming system for creating malleable jobs that is more general than other current malleable systems. In particular, it is not limited to the master-worker paradigm or the Fortran SPMD programming model, but can also support general purpose parallel programs including those written in MPI and Charm++, and has built-in migration and load-balancing, among other features. Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization.
Abstract of query paper
Cite abstracts
29695
29694
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated. This paper presents a new paradigm for parallel job scheduling called integrated scheduling or iScheduling. The iScheduler is an application-aware job scheduler as opposed to a general-purpose system scheduler. It dynamically controls resource allocation among a set of competing applications, but unlike a traditional job scheduler, it can interact directly with an application during execution to optimize resource allocation. An iScheduler may add or remove resources from a running application to improve the performance of other applications. Such fluid resource management can support both improved application and system performance. We propose a framework for building iSchedulers and evaluate the concept on several workload traces obtained both from supercomputer centers and from a set of real parallel jobs. The results indicate that iScheduling can improve both waiting time and overall turnaround time substantially for these workload classes, outperforming standard policies such as backfilling and moldable job scheduling.
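The SA scheduler described above reduces to a simple decision rule: estimate the turn-around time of each allowed request and forward the one with the smallest estimate. Below is a minimal sketch of that rule; the estimation inputs are placeholders, since the real SA derives expected wait times from the current state of the supercomputer rather than taking them as given.

```python
from dataclasses import dataclass

@dataclass
class Request:
    nprocs: int
    est_runtime: float   # predicted execution time on nprocs processors
    est_wait: float      # predicted queue wait for an allocation of nprocs

def pick_request(requests: list[Request]) -> Request:
    """Choose the request with the smallest expected turn-around time
    (queue wait + execution time), as SA does on the user's behalf."""
    return min(requests, key=lambda r: r.est_wait + r.est_runtime)

candidates = [
    Request(nprocs=16, est_runtime=120.0, est_wait=5.0),
    Request(nprocs=64, est_runtime=40.0,  est_wait=200.0),
    Request(nprocs=32, est_runtime=70.0,  est_wait=30.0),
]
print(pick_request(candidates))   # the 32-processor request: 100.0 expected total
```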
Abstract of query paper
Cite abstracts
29696
29695
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to the SRS library, were able to achieve reconfigurability incurring about 15-35% overhead. At least three factors in the existing migration frameworks make them less suitable in Grid systems especially when the goal is to improve the response times for individual applications. These factors are the separate policies for suspension and migration of executing applications employed by these migration frameworks, the use of pre-defined conditions for suspension and migration and the lack of knowledge of the remaining execution time of the applications. In this paper we describe a migration framework for performance oriented Grid systems that implements tightly coupled policies for both suspension and migration of executing applications and takes into account both system load and application characteristics. The main goal of our migration framework is to improve the response times for individual applications. We also present some results that demonstrate the usefulness of our migration framework.
Abstract of query paper
Cite abstracts
29697
29696
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated. This paper presents a new paradigm for parallel job scheduling called integrated scheduling or iScheduling. The iScheduler is an application-aware job scheduler as opposed to a general-purpose system scheduler. It dynamically controls resource allocation among a set of competing applications, but unlike a traditional job scheduler, it can interact directly with an application during execution to optimize resource allocation. An iScheduler may add or remove resources from a running application to improve the performance of other applications. Such fluid resource management can support both improved application and system performance. We propose a framework for building iSchedulers and evaluate the concept on several workload traces obtained both from supercomputer centers and from a set of real parallel jobs. The results indicate that iScheduling can improve both waiting time and overall turnaround time substantially for these workload classes, outperforming standard policies such as backfilling and moldable job scheduling.
Abstract of query paper
Cite abstracts
29698
29697
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
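The gradient-ascent approach above searches policy space directly, adjusting the probability of each routing choice according to the reward signal. The toy sketch below is a generic REINFORCE-style update for a single decision point with a softmax policy, meant only to illustrate gradient-ascent policy search; it is not the exact algorithm or network model of the paper, and the reward distributions are made up.

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(prefs, rewards, lr=0.1):
    """One gradient-ascent update on action preferences.
    `rewards[a]()` returns the (noisy) reward observed when action a is taken;
    here an action is the choice of next hop for a packet."""
    probs = softmax(prefs)
    a = random.choices(range(len(prefs)), weights=probs)[0]
    r = rewards[a]()
    # grad of log pi(a) w.r.t. preference i is (1[i == a] - probs[i])
    new_prefs = [p + lr * r * ((1.0 if i == a else 0.0) - probs[i])
                 for i, p in enumerate(prefs)]
    return new_prefs, a, r

# Two next hops: hop 1 gives higher average reward (e.g. shorter delivery delay).
rewards = [lambda: random.gauss(0.2, 0.05), lambda: random.gauss(1.0, 0.05)]
prefs = [0.0, 0.0]
for _ in range(500):
    prefs, _, _ = reinforce_step(prefs, rewards)
print(softmax(prefs))   # probability mass should concentrate on hop 1
```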
In integrated service communication networks, an important problem is to exercise call admission control and routing so as to optimally use the network resources. This problem is naturally formulated as a dynamic programming problem, which, however, is too complex to be solved exactly. We use methods of reinforcement learning (RL), together with a decomposition approach, to find call admission control and routing policies. The performance of our policy for a network with approximately 10^45 different feature configurations is compared with a commonly used heuristic policy.
Abstract of query paper
Cite abstracts
29699
29698
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
When a user requests a connection to another user or a computer in a communications network, a routing algorithm selects a path for transferring the resulting data stream. If all suitable paths are ...
Abstract of query paper
Cite abstracts
29700
29699
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
This paper describes the Q-routing algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used by each node to keep accurate statistics on which routing decisions lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route efficiently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically. The paper concludes with a discussion of the tradeoff between discovering shortcuts and maintaining stable policies.
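The Q-routing scheme above keeps, at each node x, an estimate Q_x(d, y) of the time to deliver a packet bound for destination d via neighbor y, and updates that estimate from the neighbor's own best estimate whenever a packet is forwarded. Below is a minimal sketch of that update in the standard Boyan-Littman form; the tiny network and the delay values are illustrative only.

```python
# Q[x][d][y]: node x's estimate of delivery time to destination d via neighbor y.
def q_routing_update(Q, x, d, y, queue_delay, transmission_delay, alpha=0.5):
    """After forwarding a packet for d from x to neighbor y, y reports its own
    best remaining estimate; x moves its estimate toward the observed total."""
    remaining = min(Q[y][d].values()) if Q[y][d] else 0.0   # y's best onward estimate
    target = queue_delay + transmission_delay + remaining
    Q[x][d][y] += alpha * (target - Q[x][d][y])
    return Q[x][d][y]

def best_next_hop(Q, x, d):
    """Route greedily through the neighbor with the smallest estimated delivery time."""
    return min(Q[x][d], key=Q[x][d].get)

# Tiny example: node "A" can reach destination "C" via neighbor "B" or "D".
Q = {
    "A": {"C": {"B": 5.0, "D": 8.0}},
    "B": {"C": {"C": 1.0}},
    "D": {"C": {"C": 1.0}},
}
q_routing_update(Q, "A", "C", "B", queue_delay=2.0, transmission_delay=1.0)
print(best_next_hop(Q, "A", "C"), Q["A"]["C"])   # "B", with its estimate moved toward 4.0
```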
Abstract of query paper
Cite abstracts