aid: stringlengths 9–15
mid: stringlengths 7–10
abstract: stringlengths 78–2.56k
related_work: stringlengths 92–1.77k
ref_abstract: dict
cs0411010
2952058472
We propose a new simple logic that can be used to specify local security properties, i.e., security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties and to integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well-studied protocol TMN.
The approach presented in this paper belongs to the spectrum of intensional specifications, and is related to @cite_18 @cite_6 . In @cite_6 , a requirement specification language is proposed. This language is useful for specifying sets of requirements for classes of protocols; the requirements can be mapped onto a particular protocol instance, which can later be verified using their tool, the NRL Protocol Analyzer. This approach has subsequently been used to specify the GDOI secure multicast protocol @cite_10 .
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_6" ], "mid": [ "", "1997596059", "2078142047" ], "abstract": [ "", "Although there is a substantial amount of work on formal requirements for two and three-party key distribution protocols, very little has been done on requirements for group protocols. However, since the latter have security requirements that can differ in important but subtle ways, we believe that a rigorous expression of these requirements can be useful in determining whether a given protocol can satisfy an application's needs. In this paper we make a first step in providing a formal understanding of security requirements for group key distribution by using the NPATRL language, a temporal requirement specification language for use with the NRL Protocol Analyzer. We specify the requirements for GDOI, a protocol being proposed as an IETF standard, which we are formally specifying and verifying in cooperation with the MSec working group.", "In this paper we present a formal language for specifying and reasoning about cryptographic protocol requirements. We give sets of requirements for key distribution protocols and for key agreement protocols in that language. We look at a key agreement protocol due to Aziz and Diffie that might meet those requirements and show how to specify it in the language of the NRL Protocol Analyzer. We also show how to map our formal requirements to the language of the NRL Protocol Analyzer and use the Analyzer to show that the protocol meets those requirements. In other words, we use the Analyzer to assess the validity of the formulae that make up the requirements in models of the protocol. Our analysis reveals an implicit assumption about implementations of the protocol and reveals subtleties in the kinds of requirements one might specify for similar protocols." ] }
In @cite_20 , Cremers, Mauw and de Vink present another logic for specifying local security properties. Similarly to us, the authors of @cite_20 define the message authenticity property by referring to the variables occurring in the protocol role. In addition, @cite_20 defines a new kind of authentication, called synchronization, which is then compared with Lowe's intensional specification. The logic presented in this paper cannot handle the specification of synchronization authentication. In fact, we cannot handle the weaker notion of injective authentication, since we cannot match corresponding events in a trace. However, we believe we can extend our logic to support these properties. Briefly, this could be achieved by decorating the different runs with label identifiers and adding a primitive to reason about events that happened before others in a trace.
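The proposed extension can be sketched concretely. The following toy checker is our own illustration, not the paper's formalism: every name, the trace encoding, and the property definition are assumptions made for the example. It labels each run with an identifier and uses a happened-before primitive over traces to check an injective-agreement-style property.

```python
# Hypothetical sketch of the proposed extension: runs are decorated with
# label identifiers, and a happened-before primitive orders events in a trace.
# Events are encoded as (label, run_id, message) tuples; this encoding is ours.

def happened_before(trace, e1, e2):
    """True if event e1 occurs strictly before event e2 in the trace."""
    try:
        return trace.index(e1) < trace.index(e2)
    except ValueError:
        return False  # one of the events never occurred

def injective_agreement(trace, send_label, recv_label):
    """Every receive event must be preceded by a matching send event with the
    same run identifier, and no run identifier may be accepted twice."""
    used_runs = set()
    for i, (label, run_id, msg) in enumerate(trace):
        if label == recv_label:
            match = (send_label, run_id, msg)
            if match not in trace[:i] or run_id in used_runs:
                return False
            used_runs.add(run_id)
    return True

ok = [("send", 1, "nA"), ("recv", 1, "nA"), ("send", 2, "nB"), ("recv", 2, "nB")]
print(injective_agreement(ok, "send", "recv"))      # True

replay = [("send", 1, "nA"), ("recv", 1, "nA"), ("recv", 1, "nA")]
print(injective_agreement(replay, "send", "recv"))  # False: run 1 is reused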
{ "cite_N": [ "@cite_20" ], "mid": [ "146967524" ], "abstract": [ "In this paper we define a general trace model for security protocols which allows to reason about various formal definitions of authentication. In the model, we define a strong form of authentication which we call synchronization. We present both an injective and a noninjective version. We relate synchronization to a formulation of agreement in our trace model and contribute to the discussion on intensional vs. extensional specifications." ] }
cs0411046
1622839875
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdos-Renyi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known ball-bin randomized load balancing framework.
The authors have previously considered topologically-based load balancing with a simpler model than BON which is amenable to analytical study @cite_17 . In that work each node's resources were proportional to its in-degree, and load was distributed by performing a short random walk and migrating load to the last node of the walk; this method produces Erdős–Rényi (ER) random graphs and exhibits good load-balancing performance. As we demonstrate in the current work, performing more complex functions on the random walk can significantly improve performance.
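As a rough illustration of the scheme described above (a short random walk, with load migrating to the walk's final node), here is a toy simulation. The graph model, walk length, and all parameters are our own choices for the demonstration, not those of @cite_17.

```python
import random

# Toy simulation of random-walk load migration: each job performs a short
# random walk over the overlay and settles on the walk's endpoint.
# Graph construction and all parameters are illustrative assumptions.

def random_walk(adj, start, steps, rng):
    node = start
    for _ in range(steps):
        node = rng.choice(adj[node])
    return node

def simulate(n=200, degree=4, jobs=5000, walk_len=8, seed=1):
    rng = random.Random(seed)
    # ring plus random out-edges, a simple stand-in for the overlay network
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    for i in range(n):
        while len(adj[i]) < degree:
            j = rng.randrange(n)
            if j != i and j not in adj[i]:
                adj[i].append(j)
    load = [0] * n
    for _ in range(jobs):
        origin = rng.randrange(n)
        target = random_walk(adj, origin, walk_len, rng)
        load[target] += 1  # job migrates to the walk's endpoint
    return max(load), min(load), sum(load) / n

print(simulate())  # (max load, min load, average load)
```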
{ "cite_N": [ "@cite_17" ], "mid": [ "2071293938" ], "abstract": [ "The maximum entropy principle from statistical mechanics states that a closed system attains an equilibrium distribution that maximizes its entropy. We first show that for graphs with fixed number of edges one can define a stochastic edge dynamic that can serve as an effective thermalization scheme, and hence, the underlying graphs are expected to attain their maximum-entropy states, which turn out to be Erdos-Renyi (ER) random graphs. We next show that (i) a rate-equation based analysis of node degree distribution does indeed confirm the maximum-entropy principle, and (ii) the edge dynamic can be effectively implemented using short random walks on the underlying graphs, leading to a local algorithm for the generation of ER random graphs. The resulting statistical mechanical system can be adapted to provide a distributed and local (i.e., without any centralized monitoring) mechanism for load balancing, which can have a significant impact in increasing the efficiency and utilization of both the Internet (e.g., efficient web mirroring), and large-scale computing infrastructure (e.g., cluster and grid computing)." ] }
The majority of distributed computing research has focused on central-server methods, DHT architectures, agent-based systems, randomized algorithms, and local diffusive techniques @cite_22 @cite_21 @cite_13 @cite_10 @cite_3 @cite_18 @cite_12 . Some of the most successful systems to date @cite_14 @cite_5 have used a centralized approach. This can be explained by the relatively small scale of the networked systems or by special properties of the workload experienced by these systems. However, since a central server must have @math bandwidth capacity and CPU power, systems that depend on central architectures do not scale @cite_9 @cite_23 . Reliability is also a concern, since a central server is a single point of failure. BON addresses both of these issues by using @math maximum communications scaling and having no single point of failure. Furthermore, since the networks created by the BON algorithm are random graphs, they are highly robust to random failures.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_3", "@cite_23", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "1411713", "2030235801", "2142863519", "2114860831", "2088240190", "1571078753", "1985956989", "2135664862", "", "2117702591", "2029220296" ], "abstract": [ "", "Diffusive schemes have been widely analyzed for parallel and distributed load balancing. It is well known that their convergence rates depend on the eigenvalues of some associated matrices and on the expansion properties of the underlying graphs. In the first part of this paper we make use of these relationships in order to obtain new spectral bounds on the edge and node expansion of graphs. We show that these new bounds are better than the classical bounds for several graph classes. In the second part of the paper, we consider the load balancing problem for indivisible unit size tokens. Since known diffusion schemes do not completely balance the load for such settings, we propose a randomized distributed algorithm based on Markov chains to reduce the load imbalance. We prove that this approach provides the best asymptotic result that can be achieved in l1- or l2-norm concerning the final load situation.", "BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems.", "The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system.
They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics whereas a decentralized architecture is simpler to implement.", "This paper describes the design and implementation of SWORD, a scalable resource discovery service for wide-area distributed systems. SWORD locates a set of machines matching user-specified constraints on both static and dynamic node characteristics, including both single-node and inter-node characteristics. We explore a range of system architectures to determine the appropriate tradeoffs for building a scalable, highly-available, and efficient resource discovery infrastructure. We describe: i) techniques for efficient handling of multi-attribute range queries that describe application resource requirements; ii) an integrated mechanism for scalably measuring and querying inter-node attributes without requiring O(n) time and space; iii) a mechanism for users to encode a restricted form of utility function indicating how the system should filter candidate nodes when more are available than the user needs, and an optimizer that performs this node selection based on per-node and inter-node characteristics; and iv) working prototypes of a variety of architectural alternatives—running the gamut from centralized to fully distributed—along with a detailed performance evaluation. SWORD is currently deployed as a continuously-running service on PlanetLab.
We find that SWORD offers good performance, scalability, and robustness in both an emulated environment and a real-world deployment.", "Diffusion is a well-known algorithm for load-balancing in which tasks move from heavily-loaded processors to lightly-loaded neighbors. This paper presents a rigorous analysis of the performance of the diffusion algorithm on arbitrary networks. It is shown that the running time of the diffusion algorithm is bounded by: O(log s Γ) ≤ Time ≤ O(Ns Γ) and O(log s F) ≤ Time ≤ O(s F^2), where N is the number of nodes in the network, s is the standard deviation of the initial load distribution (which represents how imbalanced the load is initially), and Γ and F are the network's electrical and fluid conductances respectively (which are measures of the network's bandwidth). For the case of the generalized mesh with wrap-around (which includes common networks like the ring, 2D-torus, 3D-torus and hypercube), we derive tighter bounds and conclude that the diffusion algorithm is inefficient for lower dimensional meshes.", "A method for qualitative and quantitative analysis of load sharing algorithms is presented, using a number of well known examples as illustration. Algorithm design choices are considered with respect to the main activities of information dissemination and allocation decision making. It is argued that nodes must be capable of making local decisions, and for this, efficient state-dissemination techniques are necessary. Activities related to remote execution should be bounded and restricted to a small proportion of the activity of the system. The quantitative analysis provides both performance and efficiency measures, including consideration of the load and delay characteristics of the environment. To assess stability, which is also a precondition for scalability, the authors introduce and measure the load-sharing hit-ratio, the ratio of remote execution requests concluded successfully.
Using their analysis method, they are able to suggest improvements to some published algorithms.", "", "We consider the following natural model: customers arrive as a Poisson stream of rate λn, λ < 1, at a collection of n servers. Each customer chooses some constant d servers independently and uniformly at random from the n servers and waits for service at the one with the fewest customers. Customers are served according to the first-in first-out (FIFO) protocol and the service time for a customer is exponentially distributed with mean 1. We call this problem the supermarket model. We wish to know how the system behaves and in particular we are interested in the effect that the parameter d has on the expected time a customer spends in the system in equilibrium. Our approach uses a limiting, deterministic model representing the behavior as n → ∞ to approximate the behavior of finite systems. The analysis of the deterministic model is interesting in its own right. Along with a theoretical justification of this approach, we provide simulations that demonstrate that the method accurately predicts system behavior, even for relatively small systems. Our analysis provides surprising implications. Having d=2 choices leads to exponential improvements in the expected time a customer spends in the system over d=1, whereas having d=3 choices is only a constant factor better than d=2. We discuss the possible implications for system design.", "This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least 2d+1 fewer tokens, where d is the maximum degree of any node in the network.
We show that within @math steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most @math , where @math is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), n is the number of nodes in the network, and @math is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion @math , and for any value @math , there exists an initial distribution of tokens with imbalance @math for which the time to reduce the imbalance to even @math is at least @math . The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most 2d), while the global imbalance remains @math . Furthermore, we show that upon reaching a state with a global imbalance of @math , the time for this algorithm to locally balance the network can be as large as @math . We extend our analysis to a variant of this algorithm for dynamic and asynchronous networks. We also present tight bounds for a randomized algorithm in which each node sends at most one token in each step." ] }
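The ball-and-bin "supermarket" paradigm mentioned in the abstracts above is easy to demonstrate in a few lines. This sketch, whose parameters are our own illustrative choices, compares the maximum server load under purely random placement (d=1) with best-of-d sampling (d=2):

```python
import random

# Ball-and-bin ("supermarket") load balancing: each job is placed on the
# least-loaded of d servers sampled uniformly at random. Parameters are
# illustrative assumptions, not taken from the cited works.

def max_load(n_servers, n_jobs, d, seed=0):
    rng = random.Random(seed)
    load = [0] * n_servers
    for _ in range(n_jobs):
        choices = [rng.randrange(n_servers) for _ in range(d)]
        best = min(choices, key=lambda s: load[s])  # least-loaded sampled server
        load[best] += 1
    return max(load)

n = 1000
one = max_load(n, n, d=1)  # purely random placement
two = max_load(n, n, d=2)  # best of two random choices
print(one, two)            # d=2 typically yields a much smaller maximum load
```

The qualitative outcome mirrors the cited result: moving from one to two choices collapses the maximum load dramatically, while further choices help only by a constant factor.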
BON is designed to be deployed on extremely large ensembles of nodes. This is a major similarity with BOINC @cite_14 , the latest infrastructure for creating public-resource computing projects; the Einstein@home project, which processes gravitational-wave data, and Predictor@home, which studies protein-related disease, are based on BOINC. Such projects are single-purpose and are designed to handle massive, embarrassingly parallel problems with tens or hundreds of thousands of nodes. BON should scale to networks of this size and beyond while providing a dynamic, multi-user environment instead of the special-purpose environment provided by BOINC.
{ "cite_N": [ "@cite_14" ], "mid": [ "2142863519" ], "abstract": [ "BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems." ] }
cs0410066
2949608853
Data intensive applications on clusters often require requests quickly be sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the responsible node for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup; one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregation of the CPU caches of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.
The concept of the memory wall was popularized by Wulf @cite_5 . Many researchers have worked on improving cache efficiency to overcome the memory wall problem. The pioneering work in @cite_11 studied the blocking technique both theoretically and experimentally and described the factors that affect cache performance. However, there is no easy way to apply the blocking technique to the tree-traversal problem or to the index-structure lookup problem to improve cache efficiency.
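For readers unfamiliar with the blocking technique discussed above, a minimal sketch follows: it tiles a matrix transpose so that each cache-sized block is processed while resident, rather than streaming whole rows and columns. In Python the cache effect itself is not measurable, so the example only illustrates the loop structure; the block size is an arbitrary choice of ours.

```python
# Classic loop blocking (tiling): process the matrix one tile at a time so
# the working set of each tile fits in cache. Illustrative sketch only; in a
# compiled language this restructuring is what reduces capacity misses.

def transpose_blocked(a, block=32):
    n = len(a)
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            # work on one block x block tile at a time
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j][i] = a[i][j]
    return out

a = [[i * 4 + j for j in range(4)] for i in range(4)]
print(transpose_blocked(a, block=2))
# → [[0, 4, 8, 12], [1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15]]
```

As the paragraph above notes, this pattern fits regular array traversals well but does not transfer easily to pointer-chasing tree lookups, where the access order is data-dependent.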
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "1983096721", "2014515453" ], "abstract": [ "Static cache analysis characterizes a program's cache behavior by determining in a sound but approximate manner which memory accesses result in cache hits and which result in cache misses. Such information is valuable in optimizing compilers, worst-case execution time analysis, and side-channel attack quantification and mitigation. Cache analysis is usually performed as a combination of 'must' and 'may' abstract interpretations, classifying instructions as either 'always hit', 'always miss', or 'unknown'. Instructions classified as 'unknown' might result in a hit or a miss depending on program inputs or the initial cache state. It is equally possible that they do in fact always hit or always miss, but the cache analysis is too coarse to see it. Our approach to eliminate this uncertainty consists in (i) a novel abstract interpretation able to ascertain that a particular instruction may definitely cause a hit and a miss on different paths, and (ii) an exact analysis, removing all remaining uncertainty, based on model checking, using abstract-interpretation results to prune down the model for scalability. We evaluated our approach on a variety of examples; it notably improves precision upon classical abstract interpretation at reasonable cost.", "B+-Trees have been traditionally optimized for I/O performance with disk pages as tree nodes. Recently, researchers have proposed new types of B+-Trees optimized for CPU cache performance in main memory environments, where the tree node sizes are one or a few cache lines. Unfortunately, due primarily to this large discrepancy in optimal node sizes, existing disk-optimized B+-Trees suffer from poor cache performance while cache-optimized B+-Trees exhibit poor disk performance.
In this paper, we propose fractal prefetching B+-Trees (fpB+-Trees), which embed \"cache-optimized\" trees within \"disk-optimized\" trees, in order to optimize both cache and I/O performance. We design and evaluate two approaches to breaking disk pages into cache-optimized nodes: disk-first and cache-first. These approaches are somewhat biased in favor of maximizing disk and cache performance, respectively, as demonstrated by our results. Both implementations of fpB+-Trees achieve dramatically better cache performance than disk-optimized B+-Trees: a factor of 1.1-1.8 improvement for search, up to a factor of 4.2 improvement for range scans, and up to a 20-fold improvement for updates, all without significant degradation of I/O performance. In addition, fpB+-Trees accelerate I/O performance for range scans by using jump-pointer arrays to prefetch leaf pages, thereby achieving a speed-up of 2.5-5 on IBM's DB2 Universal Database." ] }
In the area of theory and experimental algorithms, @cite_0 proposed an analytical model to predict cache performance. In their model, they assume all nodes in a tree are accessed uniformly. This model is not accurate for the tree lookup problem: because the number of nodes per level grows exponentially from the root toward the leaves, a node's access rate decreases exponentially with its level in the tree. Hankins and Patel @cite_7 proposed a model in which a node's access rate in a B+-tree decreases exponentially with the level at which the node is positioned. However, they only considered compulsory cache misses, not capacity cache misses. They also assume that the tree fits in the cache, so for tree structures that cannot fit in the cache, the model in @cite_7 is not applicable.
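The contrast between the two access models can be made concrete with a back-of-the-envelope calculation. The toy model below is ours, not either cited paper's: a complete binary tree of `height` levels, of which only the top `cached_levels` fit in cache.

```python
# Toy comparison (our own assumptions) of two access-rate models for a
# complete binary tree when only the top `cached_levels` levels are cached.

def miss_fraction_uniform(height, cached_levels):
    # uniform model: every node in the tree is equally likely to be touched,
    # so the miss fraction is the fraction of nodes that are uncached
    total = 2 ** height - 1
    cached = 2 ** cached_levels - 1
    return 1 - cached / total

def miss_fraction_lookup(height, cached_levels):
    # per-lookup model: each root-to-leaf search touches one node per level,
    # so every level is accessed equally often and only the uncached lower
    # levels contribute misses
    return (height - cached_levels) / height

h, c = 20, 10
print(miss_fraction_uniform(h, c))  # ~0.999: uniform model predicts almost all misses
print(miss_fraction_lookup(h, c))   # 0.5: per-lookup model predicts half
```

The large gap between the two predictions illustrates why assuming uniform node access, as in the first model above, badly misestimates the behavior of root-to-leaf lookups.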
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "1965929361", "2091829363" ], "abstract": [ "analyze them. This paper describes a model for studying the cache performance of algorithms in a direct-mapped cache. Using this model, we analyze the cache performance of several commonly occurring memory access patterns: (i) sequential and random memory traversals, (ii) systems of random accesses, and (iii) combinations of each. For each of these, we give exact expressions for the number of cache misses per memory access in our model. We illustrate the application of these analyses by determining the cache performance of two algorithms: the traversal of a binary search tree and the counting of items in a large array. Trace driven cache simulations validate that our analyses accurately predict cache performance. The key application of cache performance analysis is towards the cache conscious design of data structures and algorithms. In our previous work we studied the cache conscious design of priority queues [13] and sorting algorithms [14], and were able to make significant performance improvements over traditional implementations by considering cache effects.", "In main-memory databases, the number of processor cache misses has a critical impact on the performance of the system. Cache-conscious indices are designed to improve performance by reducing the number of processor cache misses that are incurred during a search operation. Conventional wisdom suggests that the index's node size should be equal to the cache line size in order to minimize the number of cache misses and improve performance. As we show in this paper, this design choice ignores additional effects, such as the number of instructions executed and the number of TLB misses, which play a significant role in determining the overall performance.
To capture the impact of node size on the performance of a cache-conscious B+ tree (CSB+-tree), we first develop an analytical model based on the fundamental components of the search process. This model is then validated with an actual implementation, demonstrating that the model is accurate. Both the analytical model and experiments confirm that using node sizes much larger than the cache line size can result in better search performance for the CSB+-tree." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The enhancon system was the first setup in which a supergravity dual of pure @math SYM theory with no hypermultiplets was studied @cite_9 . It was constructed by wrapping BPS D-branes on a K3 manifold and studying the resulting geometry. From the supergravity point of view, the system exhibited a novel singularity-resolution mechanism. Naively, there appeared to be a naked timelike singularity in the space transverse to the branes, dubbed the repulson, because a massive particle would feel a repulsive potential that becomes infinite in magnitude at a finite radius from the naive position of the branes. Probing the background with a wrapped D-brane, however, showed that the @math source D-branes do not, in fact, sit at the origin. Rather, they expand to form a shell of branes, inside of which the geometry does not, after all, become singular.
{ "cite_N": [ "@cite_9" ], "mid": [ "2040482607" ], "abstract": [ "We study brane configurations that give rise to large-N gauge theories with eight supersymmetries and no hypermultiplets. These configurations include a variety of wrapped, fractional, and stretched branes or strings. The corresponding spacetime geometries which we study have a distinct kind of singularity known as a repulson. We find that this singularity is removed by a distinctive mechanism, leaving a smooth geometry with a core having an enhanced gauge symmetry. The spacetime geometry can be related to large-N Seiberg-Witten theory." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
A natural generalisation was to study geometries for which the system gains energy above the BPS bound. An unusual two-branch structure was found @cite_9 @cite_0 . One class of possible solutions had the appearance of a black hole (or black brane) and was dubbed the horizon branch, while the other appeared to have an enhancon-like shell surrounding an inner event horizon and was dubbed the shell branch. Only the shell branch correctly matches onto the BPS solution in the limit of zero energy above extremality but, for sufficiently high extra energy, both solutions were seen to be consistent with the asymptotic charges. The presence of the horizon branch far from extremality was expected: there, where the energy dominates the charge, the system should look like an uncharged black hole. Additionally, for the shell branch, fixing the asymptotic charges did not specify exactly how the extra energy distributed itself between the inner horizon and the shell.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2079936263", "2040482607" ], "abstract": [ "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions.", "We study brane configurations that give rise to large-N gauge theories with eight supersymmetries and no hypermultiplets. These configurations include a variety of wrapped, fractional, and stretched branes or strings. The corresponding spacetime geometries which we study have a distinct kind of singularity known as a repulson. We find that this singularity is removed by a distinctive mechanism, leaving a smooth geometry with a core having an enhanced gauge symmetry. The spacetime geometry can be related to large-N Seiberg-Witten theory." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Dimitriadis and Ross performed a preliminary search @cite_6 for a classical instability that would provide evidence that the two branches are connected. Such an instability, fundamentally different in nature from the Gregory-Laflamme instability, could be interpreted as signalling a phase transition in the dual gauge theory. No such instability was found. Also presented was an entropic argument that, at high mass, the horizon branch should dominate over the shell branch in a canonical ensemble. In later work @cite_7 , a numerical study of perturbations of the non-BPS shell branch was completed, but still no instability was found. An analytic proof of the non-existence of such instabilities could not be found either, owing to the non-linearity of the coupled equations. Furthermore, @cite_7 investigated whether the shell branch might violate a standard gravitational energy condition; indeed, they found that the shell branch violates the weak energy condition (WEC). This matter will be important for us in a later section, and so we review it here.
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2471626564", "1505471498" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on.", "We consider the stability of the two branches of nonextremal enhan c c on solutions. We argue that one would expect a transition between the two branches at some value of the nonextremality, which should manifest itself in some instability. We study small perturbations of these solutions, constructing a sufficiently general ansatz for linearized perturbations of the nonextremal solutions, and show that the linearized equations are consistent. We show that the simplest kind of perturbation does not lead to any instability. We reduce the problem of studying the more general spherically symmetric perturbation to solving a set of three coupled second-order differential equations." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Surprisingly, when the system is near extremality and the asymptotic volume of the K3 is large, the first two terms combine into a dominant, negative, contribution. Thus the shell branch violates the WEC. It was argued @cite_7 that the shell branch should therefore be regarded as unphysical. Accordingly, the horizon branch should be considered the dominant, valid, supergravity solution for non-BPS enhancons, for the range of parameters admitting it. For the region of parameter space in which no horizon branch exists, other solutions, more general than those yet considered, might be valid @cite_7 .
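For reference, the weak energy condition being invoked here is the standard one:

$$ T_{\mu\nu}\, t^{\mu} t^{\nu} \;\geq\; 0 \qquad \text{for every timelike vector } t^{\mu}, $$

i.e. every local observer measures a non-negative energy density. The statement above is that, near extremality and at large asymptotic K3 volume, the effective stress tensor of the shell source fails this inequality.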
{ "cite_N": [ "@cite_7" ], "mid": [ "2471626564" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhan c c ons. There are two branches of solutions: a shell branch'' connected to the extremal solution, and a horizon branch'' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhan c c on." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In subsequent work on non-BPS enhancons, involving two of the current authors, we used simple supergravity techniques to find the most general solutions with the correct symmetries and asymptotic charges of the hot enhancon system @cite_3 . We showed that the only non-BPS solution with a well-behaved event horizon is the horizon branch.
{ "cite_N": [ "@cite_3" ], "mid": [ "1965302333" ], "abstract": [ "We extend the investigation of nonextremal enhan c c ons, finding the most general solutions with the correct symmetry and charges. There are two families of solutions. One of these contains a solution with a regular horizon found previously; this previous example is shown to be the unique solution with a regular horizon. The other family generalizes a previous nonextreme extension of the enhan c c on, producing solutions with shells that satisfy the weak energy condition. We argue that identifying a unique solution with a shell requires input beyond supergravity." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Here, the story is particularly simple. We find that, at some radius greater than @math , the volume of the K3 always shrinks to zero, indicating that somewhere outside this radius, the K3 has reached its stringy volume. Note that the old ( @math ) shell solution @cite_0 falls into this category.
{ "cite_N": [ "@cite_0" ], "mid": [ "2079936263" ], "abstract": [ "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
It is straightforward to find an expression for the radius of the @math -shell solutions. We could also rewrite this in terms of the parameters @math , @math , @math , in order to put the solution exactly in the language of previous studies @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2079936263" ], "abstract": [ "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In a related context, the geometry of fractional D @math -branes was studied @cite_5 . Fractional branes can be described as regular D @math -branes wrapped on a vanishing two-cycle inside the @math orbifold limit of K3. The dual gauge theory is again @math SYM with no hypermultiplets. Attempting to take the decoupling limit once again fails to yield a clean strong/weak coupling duality. This happens in a way directly analogous to the original enhancon case.
{ "cite_N": [ "@cite_5" ], "mid": [ "2019049541" ], "abstract": [ "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e ." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The authors of @cite_5 found supergravity solutions for fractional branes in six dimensions using two different methods. First, they used boundary state technology to produce a consistent truncation of Type II supergravity coupled to fractional brane sources; second, they related their consistent truncation to the heterotic theory via a chain of dualities. The BPS solutions they found exhibit repulson-like behaviour, and an analogous enhancon phenomenon occurs.
{ "cite_N": [ "@cite_5" ], "mid": [ "2019049541" ], "abstract": [ "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e ." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The natural extension of this work was, again, to consider the systems when energy is added to take them above the BPS bound. In @cite_2 , a consistent six-dimensional truncation ansatz for fractional D @math -branes in orbifold backgrounds was provided, for general @math . Solutions corresponding to the geometry of non-BPS fractional branes were found, in analogy to the non-BPS enhancon work @cite_0 . After imposing positivity of the ADM mass, half of the solutions were eliminated. One of the remaining solutions was discarded because it did not have a BPS limit.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2079936263", "2152342374" ], "abstract": [ "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The construction of fractional brane geometries that exhibit the enhancon mechanism is expected to be dual (through T-duality of type IIA on K3) to the original enhancon geometries @cite_9 @cite_5 @cite_2 . However, in view of the work reviewed in the previous subsection, the conclusion that horizons never form in the non-BPS fractional brane geometries is puzzling.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_2" ], "mid": [ "2019049541", "2040482607", "2152342374" ], "abstract": [ "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We study brane configurations that give rise to large-N gauge theories with eight supersymmetries and no hypermultiplets. These configurations include a variety of wrapped, fractional, and stretched branes or strings. The corresponding spacetime geometries which we study have a distinct kind of singularity known as a repulson. We find that this singularity is removed by a distinctive mechanism, leaving a smooth geometry with a core having an enhanced gauge symmetry. The spacetime geometry can be related to large-N Seiberg-Witten theory.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
We will show that this apparent discord is actually an artifact. The hot fractional brane system exhibits exactly the dual behaviour to that of the hot enhancon. In particular, we will show that the solutions of @cite_2 are related by duality to the hot enhancon solutions of @cite_0 . By continuously varying the K3 moduli away from the orbifold point, we can reach solutions in which the shell branch once again violates the WEC. In the following sections we pin down the precise map between the two setups and resurrect the horizon branch on the fractional brane side. We will also exhibit the fractional brane equivalent of the @math -shell solutions.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2079936263", "2152342374" ], "abstract": [ "The enhancon mechanism removes a family of time-like singularities from certain supergravity spacetimes by forming a shell of branes on which the exterior geometry terminates. The problematic interior geometry is replaced by a new spacetime, which in the prototype extremal case is simply flat. We show that this excision process, made inevitable by stringy phenomena such as enhanced gauge symmetry and the vanishing of certain D-branes' tension at the shell, is also consistent at the purely gravitational level. The source introduced at the excision surface between the interior and exterior geometries behaves exactly as a shell of wrapped D6-branes, and in particular, the tension vanishes at precisely the enhancon radius. These observations can be generalised, and we present the case for non-extremal generalisations of the geometry, showing that the procedure allows for the possibility that the interior geometry contains an horizon. Further knowledge of the dynamics of the enhancon shell itself is needed to determine the precise position of the horizon, and to uncover a complete physical interpretation of the solutions.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In order to embed the non-extremal D4 brane solutions of @cite_3 in the six dimensional supergravity, we display a simple two-charge truncation which describes the solutions studied in @cite_3 . These solutions can then be lifted straight across into the larger supergravity theory. In deriving the truncation, it is convenient to switch to heterotic variables using the well-known duality between type IIA on K3 and heterotic strings on @math . This is also convenient for comparing with the fractional brane solutions of @cite_2 , since that paper presents its solutions in the heterotic frame. However, we should stress that we are performing T-dualities between different IIA solutions, and in principle we could have worked in IIA variables throughout.
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1965302333", "2152342374" ], "abstract": [ "We extend the investigation of nonextremal enhan c c ons, finding the most general solutions with the correct symmetry and charges. There are two families of solutions. One of these contains a solution with a regular horizon found previously; this previous example is shown to be the unique solution with a regular horizon. The other family generalizes a previous nonextreme extension of the enhan c c on, producing solutions with shells that satisfy the weak energy condition. We argue that identifying a unique solution with a shell requires input beyond supergravity.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather only their value at our chosen point. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed by evaluating the function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient-descent analysis, which assumes access to the true gradient, while only ever evaluating the function at a single point.
For direct offline optimization, i.e., optimization from an oracle that evaluates the function, one can in theory use the ellipsoid algorithm @cite_6 or more recent random-walk-based approaches @cite_2 . In black-box optimization, practitioners often use simulated annealing @cite_12 or finite-difference and simultaneous perturbation stochastic approximation methods (see, for example, @cite_16 ). In the case that the functions may change dramatically over time, a single-point approximation to the gradient may be necessary. Granichin and Spall propose a different single-point estimate of the gradient @cite_5 @cite_11 .
{ "cite_N": [ "@cite_6", "@cite_2", "@cite_5", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "", "2106318612", "", "2064076655", "2024060531", "2012117977" ], "abstract": [ "", "Minimizing a convex function over a convex set in n-dimensional space is a basic, general problem with many interesting special cases. Here, we present a simple new algorithm for convex optimization based on sampling by a random walk. It extends naturally to minimizing quasi-convex functions and to other generalizations.", "", "This comprehensive book offers 504 main pages divided into 17 chapters. In addition, five very useful and clearly written appendices are provided, covering multivariate analysis, basic tests in statistics, probability theory and convergence, random number generators and Markov processes. Some of the topics covered in the book include: stochastic approximation in nonlinear search and optimization; evolutionary computations; reinforcement learning via temporal differences; mathematical model selection; and computer-simulation-based optimizations. Over 250 exercises are provided in the book, though only a small number of them have solutions included in the volume. A separate solution manual is available, as is a very informative webpage. The book may serve as either a reference for researchers and practitioners in many fields or as an excellent graduate level textbook.", "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. 
This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.", "The simultaneous perturbation stochastic approximation (SPSA) algorithm has proven very effective for difficult multivariate optimization problems where it is not possible to obtain direct gradient information. As discussed to date, SPSA is based on a highly efficient gradient approximation requiring only two measurements of the loss function independent of the number of parameters being estimated. This note presents a form of SPSA that requires only one function measurement (for any dimension). Theory is presented that identifies the class of problems for which this one-measurement form will be asymptotically superior to the standard two-measurement form." ] }
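The single-point gradient approximation discussed above can be sketched in a few lines. This is a minimal illustration in the spirit of the one-measurement estimators of @cite_5 @cite_11 (and of the estimator in the abstract), not any paper's exact construction; the quadratic test function, the smoothing radius `delta`, and the sample count are all assumptions chosen for the demo:

```python
import math
import random

def one_point_gradient(f, x, delta, rng):
    """Single-point gradient estimate: sample u uniformly on the unit sphere
    and return (d/delta) * f(x + delta*u) * u.  In expectation this is the
    gradient of a delta-smoothed version of f (exact for quadratics)."""
    d = len(x)
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in u))
    u = [c / norm for c in u]
    fy = f([xi + delta * ui for xi, ui in zip(x, u)])
    return [(d / delta) * fy * ui for ui in u]

# Average many one-point estimates for f(v) = v.v at x = (1, 2);
# the true gradient there is (2, 4).
rng = random.Random(0)
f = lambda v: sum(c * c for c in v)
x = [1.0, 2.0]
n = 50000
est = [0.0, 0.0]
for _ in range(n):
    g = one_point_gradient(f, x, 0.25, rng)
    est = [a + b / n for a, b in zip(est, g)]
print(est)
```

The key point of the estimator is that a single function evaluation at a randomly perturbed point gives an unbiased (for quadratics) but high-variance gradient estimate; averaging here is only to make the bias/variance behavior visible.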
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather only their value at our chosen point. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed by evaluating the function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient-descent analysis, which assumes access to the true gradient, while only ever evaluating the function at a single point.
In addition to the appeal of an online model of convex optimization, Zinkevich's gradient descent analysis can be applied to several other online problems for which gradient descent and other special-purpose algorithms have been carefully analyzed, such as Universal Portfolios @cite_0 @cite_18 @cite_19 , online linear regression @cite_13 , and online shortest paths @cite_3 (one convexifies to get an online shortest flow problem).
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_0", "@cite_19", "@cite_13" ], "mid": [ "1964964840", "1527666879", "", "2076798318", "2069317438" ], "abstract": [ "We present an on-line investment algorithm that achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algorithm is very simple to implement and requires only constant storage and computing time per stock in each trading period. We tested the performance of our algorithm on real stock data from the New York Stock Exchange accumulated during a 22-year period. On these data, our algorithm clearly outperforms the best single stock as well as Cover's universal portfolio selection algorithm. We also present results for the situation in which the investor has access to additional \"side information.\" Copyright Blackwell Publishers Inc 1998.", "Kernels are typically applied to linear algorithms whose weight vector is a linear combination of the feature vectors of the examples. On-line versions of these algorithms are sometimes called \"additive updates\" because they add a multiple of the last feature vector to the current weight vector.In this paper we have found a way to use special convolution kernels to efficiently implement \"multiplicative\" updates. The kernels are defined by a directed graph. Each edge contributes an input. The inputs along a path form a product feature and all such products build the feature vector associated with the inputs.We also have a set of probabilities on the edges so that the outflow from each vertex is one. We then discuss multiplicative updates on these graphs where the prediction is essentially a kernel computation and the update contributes a factor to each edge. After adding the factors to the edges, the total outflow out of each vertex is not one any more. 
However some clever algorithms re-normalize the weights on the paths so that the total outflow out of each vertex is one again. Finally, we show that if the digraph is built from a regular expressions, then this can be used for speeding up the kernel and re-normalization computations.We reformulate a large number of multiplicative update algorithms using path kernels and characterize the applicability of our method. The examples include efficient algorithms for learning disjunctions and a recent algorithm that predicts as well as the best pruning of a series parallel digraphs.", "", "A constant rebalanced portfolio is an investment strategy that keeps the same distribution of wealth among a set of stocks from day to day. There has been much work on Cover's Universal algorithm, which is competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991, , 1998, Blum and Kalai, 1999, Foster and Vohra, 1999, Vovk, 1998, Cover and Ordentlich, 1996a, Cover, 1996c). While this algorithm has good performance guarantees, all known implementations are exponential in the number of stocks, restricting the number of stocks used in experiments (, 1998, Cover and Ordentlich, 1996a, Ordentlich and Cover, 1996b, Cover, 1996c, Blum and Kalai, 1999). We present an efficient implementation of the Universal algorithm that is based on non-uniform random walks that are rapidly mixing (Applegate and Kannan, 1991, Lovasz and Simonovits, 1992, Frieze and Kannan, 1999). This same implementation also works for non-financial applications of the Universal algorithm, such as data compression (Cover, 1996c) and language modeling (, 1999).", "We consider two algorithm for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+ -). They both maintain a weight vector using simple updates. 
For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+ -) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+ -) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+ -) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data." ] }
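Zinkevich's full-information online gradient descent, to which the applications above reduce, fits in a few lines. The sketch below is an illustration, not the exact algorithm from any of the cited papers: the feasible interval [0, 1], the squared-loss cost sequence, and the step size 1/sqrt(t) are assumptions chosen so that the regret can be checked numerically:

```python
import math

def ogd_regret(z_seq):
    """Online gradient descent on f_t(x) = (x - z_t)^2 over the interval
    [0, 1], with step size 1/sqrt(t); returns regret against the best
    fixed point in hindsight (the mean of the targets)."""
    x = 0.0
    total_alg = 0.0
    for t, z in enumerate(z_seq, start=1):
        total_alg += (x - z) ** 2                        # pay f_t(x_t)
        grad = 2.0 * (x - z)                             # full-information gradient
        x = min(1.0, max(0.0, x - grad / math.sqrt(t)))  # step, then project
    zbar = sum(z_seq) / len(z_seq)
    total_best = sum((zbar - z) ** 2 for z in z_seq)
    return total_alg - total_best

T = 10000
z_seq = [t % 2 for t in range(T)]   # targets alternate 0, 1; best fixed point is 0.5
regret = ogd_regret(z_seq)
print(regret, regret / T)
```

Even though no single point is good for every round, the cumulative cost stays within a vanishing per-round amount of the best fixed point in hindsight, which is exactly the regret guarantee the bandit extension must preserve with only point evaluations.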
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather only their value at our chosen point. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed by evaluating the function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient-descent analysis, which assumes access to the true gradient, while only ever evaluating the function at a single point.
A similar line of research has developed for the problem of online linear optimization @cite_1 @cite_10 @cite_9 . Here, one wants to solve the related but incomparable problem of optimizing a sequence of linear functions over a possibly non-convex feasible set, modeling problems such as online shortest paths and online binary search trees (which are difficult to convexify). Kalai and Vempala @cite_1 show that, for such linear optimization problems in general, if the offline optimization problem can be solved efficiently, then regret can be bounded by @math by an efficient online algorithm as well, in the full-information model. Awerbuch and Kleinberg @cite_10 generalize this to the bandit setting against an oblivious adversary (like ours). Blum and McMahan @cite_9 give a simpler algorithm that applies to adaptive adversaries, which may choose their functions @math depending on the previously chosen points.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1" ], "mid": [ "2014482607", "2116067849", "80526489" ], "abstract": [ "Minimal delay routing is a fundamental task in networks. Since delays depend on the (potentially unpredictable) traffic distribution, online delay optimization can be quite challenging. While uncertainty about the current network delays may make the current routing choices sub-optimal, the algorithm can nevertheless try to learn the traffic patterns and keep adapting its choice of routing paths so as to perform nearly as well as the best static path. This online shortest path problem is a special case of online linear optimization, a problem in which an online algorithm must choose, in each round, a strategy from some compact set S ⊆ Rd so as to try to minimize a linear cost function which is only revealed at the end of the round. Kalai and Vempala[4] gave an algorithm for such problems in the transparent feedback model, where the entire cost function is revealed at the end of the round. Here we present an algorithm for online linear optimization in the more challenging opaque feedback model, in which only the cost of the chosen strategy is revealed at the end of the round. In the special case of shortest paths, opaque feedback corresponds to the notion that in each round the algorithm learns only the end-to-end cost of the chosen path, not the cost of every edge in the network.We also present a second algorithm for online shortest paths, which solves the shortest-path problem using a chain of online decision oracles, one at each node of the graph. This has several advantages over the online linear optimization approach. First, it is effective against an adaptive adversary, whereas our linear optimization algorithm assumes an oblivious adversary. 
Second, even in the case of an oblivious adversary, the second algorithm performs better than the first, as measured by their additive regret.", "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T sup -1 3 ), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of \"experts\" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.", "" ] }
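The full-information result of Kalai and Vempala mentioned above rests on the follow-the-perturbed-leader idea, which can be sketched over a toy finite decision set. This is an illustrative sketch, not their exact construction: the Uniform[0, 1/epsilon] perturbation, the three-action set, and the random cost stream are assumptions for the demo:

```python
import random

def fpl(decisions, cost_rounds, epsilon, rng):
    """Follow the perturbed leader: each round, play the decision whose
    cumulative past cost plus a fresh Uniform[0, 1/epsilon] perturbation
    is smallest; return the algorithm's total cost and the cumulative
    cost of every fixed decision."""
    cum = {d: 0.0 for d in decisions}
    total = 0.0
    for costs in cost_rounds:
        choice = min(decisions,
                     key=lambda d: cum[d] + rng.uniform(0.0, 1.0 / epsilon))
        total += costs[choice]
        for d in decisions:
            cum[d] += costs[d]
    return total, cum

rng = random.Random(1)
decisions = ["a", "b", "c"]
T = 5000
# decision "b" is slightly better on average than "a" and "c"
rounds = [{"a": rng.random(), "b": 0.9 * rng.random(), "c": rng.random()}
          for _ in range(T)]
total, cum = fpl(decisions, rounds, epsilon=0.05, rng=rng)
best = min(cum.values())
print(total - best)
```

Note that this sketch sees the full cost vector each round (the full-information model); the bandit variants of @cite_10 @cite_9 must additionally estimate the unseen costs.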
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather only their value at our chosen point. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed by evaluating the function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient-descent analysis, which assumes access to the true gradient, while only ever evaluating the function at a single point.
A few comparisons with the online linear optimization problem are interesting to make. First of all, for the bandit versions of the linear problems, there was a distinction between exploration phases and exploitation phases. During exploration phases, one action was chosen from a barycentric spanner @cite_10 , a basis of @math actions, for the sole purpose of estimating the linear objective function. In contrast, our algorithm does a little bit of exploration each time. Secondly, Blum and McMahan @cite_9 were able to compete against an adaptive adversary, using a careful martingale analysis. It is not clear whether that can be done in our setting.
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "2116067849", "2014482607" ], "abstract": [ "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T sup -1 3 ), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of \"experts\" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.", "Minimal delay routing is a fundamental task in networks. Since delays depend on the (potentially unpredictable) traffic distribution, online delay optimization can be quite challenging. While uncertainty about the current network delays may make the current routing choices sub-optimal, the algorithm can nevertheless try to learn the traffic patterns and keep adapting its choice of routing paths so as to perform nearly as well as the best static path. 
This online shortest path problem is a special case of online linear optimization, a problem in which an online algorithm must choose, in each round, a strategy from some compact set S ⊆ Rd so as to try to minimize a linear cost function which is only revealed at the end of the round. Kalai and Vempala[4] gave an algorithm for such problems in the transparent feedback model, where the entire cost function is revealed at the end of the round. Here we present an algorithm for online linear optimization in the more challenging opaque feedback model, in which only the cost of the chosen strategy is revealed at the end of the round. In the special case of shortest paths, opaque feedback corresponds to the notion that in each round the algorithm learns only the end-to-end cost of the chosen path, not the cost of every edge in the network.We also present a second algorithm for online shortest paths, which solves the shortest-path problem using a chain of online decision oracles, one at each node of the graph. This has several advantages over the online linear optimization approach. First, it is effective against an adaptive adversary, whereas our linear optimization algorithm assumes an oblivious adversary. Second, even in the case of an oblivious adversary, the second algorithm performs better than the first, as measured by their additive regret." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Regular model checking @cite_17 @cite_28 uses regular languages to represent parameterized systems and computes the closure of the regular relations to construct the reachable state space. In general, the method is not guaranteed to be complete and requires various acceleration techniques (sometimes guided by the user) to ensure termination. Moreover, approaches based on regular languages are not well suited to representing data in the system. Several examples that we consider in this work cannot be modeled in this framework; the out-of-order processor, which involves data operations, and Peterson's mutual exclusion algorithm are two such examples. Even though the Bakery algorithm can be verified in this framework, it requires considerable user ingenuity to encode the protocol in a regular language.
{ "cite_N": [ "@cite_28", "@cite_17" ], "mid": [ "1861590051", "1926085771" ], "abstract": [ "We present regular model checking, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology. States are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings. Major problems in the verification of parameterized and infinite-state systems are to compute the set of states that are reachable from some set of initial states, and to compute the transitive closure of the transition relation. We present two complementary techniques for these problems. One is a direct automata-theoretic construction, and the other is based on widening. Both techniques are incomplete in general, but we give sufficient conditions under which they work. We also present a method for verifying ω-regular properties of parameterized systems, by computation of the transitive closure of a transition relation.", "The paper shows that, by an appropriate choice of a rich assertional language, it is possible to extend the utility of symbolic model checking beyond the realm of BDD-represented finite-state systems into the domain of infinite-state systems, leading to a powerful technique for uniform verification of unbounded (parameterized) process networks." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Several researchers have investigated restrictions on the system description that make the parameterized verification problem decidable. Notable among them is the early work by German and Sistla @cite_0 on verifying single-indexed properties for synchronously communicating systems. For restricted systems, finite "cut-off" based approaches @cite_16 @cite_12 @cite_27 reduce the problem to verifying networks of some fixed finite size. Such bounds have been established for verifying restricted classes of ring networks and cache coherence protocols. Emerson and Kahlon @cite_27 verified the version of German's cache coherence protocol with single-entry channels by manually reducing it to a snoopy protocol, for which a finite cut-off exists. However, the reduction exploits details of the protocol's operation and thus requires user ingenuity; it cannot easily be extended to verify other unbounded systems, including the Bakery algorithm or out-of-order processors.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_16", "@cite_12" ], "mid": [ "2036526834", "", "2051054731", "1589760516" ], "abstract": [ "Methods are given for automatically verifying temporal properties of concurrent systems containing an arbitrary number of finite-state processes that communicate using CCS actions. TWo models of systems are considered. Systems in the first model consist of a unique control process and an arbitrary number of user processes with identical definitions. For this model, a decision procedure to check whether all the executions of a process satisfy a given specification is presented. This algorithm runs in time double exponential in the sizes of the control and the user process definitions. It is also proven that it is decidable whether all the fair executions of a process satisfy a given specification. The second model is a special case of the first. In this model, all the processes have identical definitions. For this model, an efficient decision procedure is presented that checks if every execution of a process satisfies a given temporal logic specification. This algorithm runs in time polynomial in the size of the process definition. It is shown how to verify certain global properties such as mutual exclusion and absence of deadlocks. Finally, it is shown how these decision procedures can be used to reason about certain systems with a communication network.", "", "The ring is a useful means of structuring concurrent processes. Processes communicate by passing a token in a fixed direction; the process that possesses the token is allowed to make certain moves. Usually, correctness properties are expected to hold irrespective of the size of the ring. We show that the problem of checking many useful correctness properties for rings of all sizes can be reduced to checking them on a ring of small size. The results do not depend on the processes being finite state. 
We illustrate our results on examples.", "Systems with an arbitrary number of homogeneous processes occur in many applications. The Parametrized Model Checking Problem (PMCP) is to determine whether a temporal property is true for every size instance of the system. Unfortunately, it is undecidable in general. We are able to establish, nonetheless, decidability of the PMCP in quite a broad framework. We consider asynchronous systems comprised of an arbitrary number n of homogeneous copies of a generic process template. The process template is represented as a synchronization skeleton while correctness properties are expressed using Indexed CTL*nX. We reduce model checking for systems of arbitrary size n to model checking for systems of size (up to) a small cutoff size c. This establishes decidability of PMCP as it is only necessary to model check a finite number of relatively small systems. Efficient decidability can be obtained in some cases. The results generalize to systems comprised of multiple heterogeneous classes of processes, where each class is instantiated by many homogeneous copies of the class template (e.g., m readers and n writers)." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Flanagan and Qadeer @cite_2 use indexed predicates to synthesize loop invariants for sequential software programs that involve unbounded arrays. They also provide heuristics to extract some of the predicates automatically from the program text. These heuristics are specific to loops in sequential software and are not suited to the more general unbounded systems that we handle in this paper. In this work, we explore formal properties of this formulation and apply it to verifying distributed systems. In recent work @cite_22 , we provide a syntactic heuristic, based on the weakest precondition transformer @cite_25 , for discovering most of the predicates for many of the systems that we consider in this paper.
{ "cite_N": [ "@cite_25", "@cite_22", "@cite_2" ], "mid": [ "2066210260", "2151643528", "" ], "abstract": [ "So-called “guarded commands” are introduced as a building block for alternative and repetitive constructs that allow nondeterministic program components for which at least the activity evoked, but possibly even the final state, is not necessarily uniquely determined by the initial state. For the formal derivation of programs expressed in terms of these constructs, a calculus will be be shown.", "Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.", "" ] }
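The core loop of predicate abstraction, for which the heuristics above supply predicates, can be sketched concretely: abstract states are truth assignments to the chosen predicates, and reachability is a fixpoint over those abstract states. In the sketch below the decision-procedure queries that a real tool would issue are replaced by brute-force enumeration over a bounded concrete domain; the toy transition system `x := x + 2` and the two predicates are assumptions chosen purely for illustration:

```python
# Predicate-abstraction sketch: abstract an integer transition system with
# respect to two predicates, then compute the reachable abstract states.
preds = [lambda x: x >= 0, lambda x: x % 2 == 0]

def alpha(x):
    """Abstraction function: map a concrete state to predicate truth values."""
    return tuple(p(x) for p in preds)

def step(x):
    """Concrete transition relation: x := x + 2."""
    return x + 2

# Over-approximate the abstract transition relation by enumerating concrete
# witnesses from a bounded domain (a real tool would query a decision
# procedure instead of enumerating).
domain = range(-50, 50)
trans = {(alpha(x), alpha(step(x))) for x in domain}

reach = {alpha(0)}          # concrete initial state: x = 0
changed = True
while changed:
    changed = False
    for a, b in trans:
        if a in reach and b not in reach:
            reach.add(b)
            changed = True

# Every reachable abstract state satisfies "x >= 0 and x is even".
print(sorted(reach))
```

The fixpoint converges on a single abstract state, i.e., the conjunction of the two predicates is an inductive invariant of this toy system; the schemes discussed above add universally quantified index variables on top of exactly this kind of boolean abstraction.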
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , when the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts NSW00 . In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
A second related model can be found in @cite_15 and @cite_42 , where edges between nodes @math and @math are present with probability equal to @math for some 'expected degree vector' @math . Chung and Lu @cite_15 show that when @math is proportional to @math the average distance between pairs of nodes is @math when @math , and @math when @math . The difference between this model and ours is that the nodes are not exchangeable in @cite_15 , but the observed phenomena are similar. This result can be heuristically understood as follows. Firstly, the actual degree vector in @cite_15 should be close to the expected degree vector. Secondly, for the expected degree vector, we can compute that the number of nodes for which the degree is less than or equal to @math equals @math . Thus, one expects that the number of nodes with degree at most @math decreases as @math , just as in our model. In @cite_42 , Chung and Lu study the sizes of the connected components in the above model. The advantage of this model is that the edges are independently present, which makes the resulting graph closer to a traditional random graph.
{ "cite_N": [ "@cite_15", "@cite_42" ], "mid": [ "2027377866", "2112976607" ], "abstract": [ "Abstract Random graph theory is used to examine the “small-world phenomenon”; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n log d, where d is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1 kβ for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n log d. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having nc log log n vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.", "We consider a family of random graphs with a given expected degree sequence. Each edge is chosen independently with probability proportional to the product of the expected degrees of its endpoints. We examine the distribution of the sizes volumes of the connected components which turns out depending primarily on the average degree d and the second-order average degree d . Here d denotes the weighted average of squares of the expected degrees. For example, we prove that the giant component exists if the expected average degree d is at least 1, and there is no giant component if the expected second-order average degree d is at most 1. 
Examples are given to illustrate that both bounds are best possible." ] }
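The 'expected degree' model of Chung and Lu discussed in the related-work paragraph above is simple enough to implement directly. In the sketch below, an edge between nodes i and j is present independently with probability min(1, w_i * w_j / sum of all weights); the power-law weight sequence, the constant c, and the exponent tau are illustrative assumptions, not values taken from the papers.

```python
import random

def chung_lu_graph(weights, seed=0):
    """Sample a Chung-Lu random graph: edge {i, j} is present
    independently with probability min(1, w_i * w_j / sum(w))."""
    rng = random.Random(seed)
    n = len(weights)
    total = float(sum(weights))
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, weights[i] * weights[j] / total)
            if rng.random() < p:
                edges.add((i, j))
    return edges

# Illustrative power-law expected degrees: w_i = c * (i+1)^(-1/(tau-1)).
tau, c, n = 2.5, 10.0, 200
weights = [c * (i + 1) ** (-1.0 / (tau - 1)) for i in range(n)]
edges = chung_lu_graph(weights)
avg_degree = 2 * len(edges) / n
print(f"{len(edges)} edges, average degree {avg_degree:.2f}")
```

Because the edges are independent, sampling is a single pass over node pairs, which is the property the related-work paragraph highlights as making this model closer to a traditional random graph.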
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , where the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts [NSW00]. In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
The reason why we study the random graphs at a given time instant is that we are interested in the topology of the random graph. In @cite_36 , and inspired by the observed power law degree sequence in @cite_12 , the configuration model with i.i.d. degrees is proposed as a model for the AS-graph in the Internet, and it is argued on a qualitative basis that this simple model serves as a better model for the Internet topology than currently used topology generators. Our results can be seen as a step towards the quantitative understanding of whether the hopcount in the Internet is well described by the average graph distance in the configuration model.
{ "cite_N": [ "@cite_36", "@cite_12" ], "mid": [ "2163252320", "1976969221" ], "abstract": [ "Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions.Contrary to much of the current literature on network topology generators, this paper starts with the assumption that it is more important for network generators to accurately model the large-scale structure of the Internet (such as its hierarchical structure) than to faithfully imitate its local properties (such as the degree distribution). The purpose of this paper is to determine, using various topology metrics, which network generators better represent this large-scale structure. We find, much to our surprise, that network generators based on the degree distribution more accurately capture the large-scale structure of measured topologies. We then seek an explanation for this result by examining the nature of hierarchy in the Internet more closely; we find that degree-based generators produce a form of hierarchy that closely resembles the loosely hierarchical nature of the Internet.", "Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45 growth of its size during that period. 
We show that our power-laws fit the real data very well resulting in correlation coefficients of 96 or higher.Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes." ] }
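The configuration model with i.i.d. heavy-tailed degrees, discussed in the abstract and related-work paragraph above, can be sketched via the standard half-edge ("stub") pairing construction. The inverse-transform degree sampler with tau = 3.5 and the padding of an odd stub total are illustrative assumptions for this sketch.

```python
import random

def configuration_model(degrees, seed=0):
    """Configuration model: node i gets degrees[i] half-edges ('stubs'),
    and stubs are paired uniformly at random. Self-loops and multiple
    edges are possible, as in the standard construction."""
    rng = random.Random(seed)
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:  # assumption: pad with one extra stub if total is odd
        stubs.append(rng.randrange(len(degrees)))
    rng.shuffle(stubs)
    return [(stubs[k], stubs[k + 1]) for k in range(0, len(stubs), 2)]

# i.i.d. heavy-tailed degrees: P(D >= x) ~ x^-(tau - 1) with tau = 3.5,
# so the degrees have finite variance, matching the abstract's setting.
tau, n = 3.5, 1000
rng = random.Random(1)
degrees = [max(1, int((1.0 - rng.random()) ** (-1.0 / (tau - 1))))
           for _ in range(n)]
edges = configuration_model(degrees)
print(len(edges), "edges among", n, "nodes")
```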
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports, and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops that are indiscriminate across flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.
In recent years, these potential scalability concerns have been addressed by implementing a very small number of independent service guarantees. Under the Differentiated Services framework @cite_12 , flows are aggregated into @math classes, and service guarantees are offered per class. The downside is that the realized QoS per flow has a lower level of assurance (higher probability of violating the desired service level) than the QoS per aggregate @cite_9 , @cite_1 . Moreover, recently proposed VPN and VLAN services @cite_11 , @cite_17 require per-VPN or per-VLAN QoS guarantees. All the above are arguments in favor of implementing a number of independent service guarantees per port much larger than six.
{ "cite_N": [ "@cite_11", "@cite_9", "@cite_1", "@cite_12", "@cite_17" ], "mid": [ "", "2109130057", "2126499691", "1977867261", "1556897819" ], "abstract": [ "", "The Differentiated Service (Diff-Serv) architecture [1] advocates a model based on different “granularity” at network edges and within the network. In particular, core routers are only required to act on a few aggregates that are meant to offer a pre-defined set of service levels. The use of aggregation raises a number of questions for end-to-end services, in particular when crossing domain boundaries where policing actions may be applied. This paper focuses on the impact of such policing actions in the context of individual and the bulk services built on top of the Expedited Forwarding (EF) [7] per-hop-behavior (PHB). The findings of this investigation confirm and quantify the expected need for reshaping at network boundaries, and identify a number of somewhat unexpected behaviors. Recommendations are also made for when reshaping is not available.", "This paper explores, primarily by means of analysis, the differences that can exist between individual and aggregate loss guarantees in an environment where guarantees are only provided at an aggregate level. The focus is on understanding which traffic parameters are responsible for inducing possible deviations and to what extent. In addition, we seek to evaluate the level of additional resources, e.g., bandwidth or buffer, required to ensure that all individual loss measures remain below their desired target. The paper's contributions are in developing analytical models that enable the evaluation of individual loss probabilities in settings where only aggregate losses are controlled, and in identifying traffic parameters that play a dominant role in causing differences between individual and aggregate losses. 
The latter allows the construction of guidelines identifying what kind of traffic can be safely multiplexed into a common service class.", "This document defines an architecture for implementing scalable service differentiation in the Internet. This architecture achieves scalability by aggregating traffic classification state which is conveyed by means of IP-layer packet marking using the DS field [DSFIELD]. Packets are classified and marked to receive a particular per-hop forwarding behavior on nodes along their path. Sophisticated classification, marking, policing, and shaping operations need only be implemented at network boundaries or hosts. Network resources are allocated to traffic streams by service provisioning policies which govern how traffic is marked and conditioned upon entry to a differentiated services-capable network, and how that traffic is forwarded within that network. A wide variety of services can be implemented on top of these building blocks.", "This document provides requirements for Layer 3 Virtual Private Networks (L3VPNs). It identifies requirements applicable to a number of individual approaches that a Service Provider may use to provision a Virtual Private Network (VPN) service. This document expresses a service provider perspective, based upon past experience with IP-based service offerings and the ever-evolving needs of the customers of such services. Toward this end, it first defines terminology and states general requirements. Detailed requirements are expressed from a customer perspective as well as that of a service provider. This memo provides information for the Internet community." ] }
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports, and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops that are indiscriminate across flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.
More recent proposals @cite_16 decrease the time interval between two runs of the matching algorithm, but at the cost of increased burstiness and of additional scheduling algorithms needed to mitigate unbounded delays. Moreover, the service presented in @cite_16 is of type Premium 1-to-1; it cannot provide Assured N-to-1 service.
{ "cite_N": [ "@cite_16" ], "mid": [ "89807891" ], "abstract": [ "Input-Output buffered crossbars are popular building blocks for scalable high-speed switching because they require minimum speed-up of memory bandwidth. Scaling the design of crossbar switches to large capacities is limited by technology issues such as the reconfiguration of high-speed fabrics or power consumption. In addition, these crossbar architechtures typically schedule and transfer in terms of fixed size envelopes. Thus when they are used in the context of IP networks where packets are of variable size, the incoming packets need to be fragmented into fixed size envelopes. This fragmentation can lead to, possibly large [1], loss of bandwidth and even instability. This paper proposes a new method for switching variable sized packets over a crossbar switch that i) allows maximum utilization of switch bandwidth and avoids the fragmentation effect, and ii) allows designers to use much larger envelopes for transfering data over the fabric and thus minimizes the reconfiguration frequency of the fabric. Reducing the scheduling frequency makes implementation of complex schedulers practical and enables us to build ultra-fast switches incorporating optical technology that can provide bandwidth and delay guarantees." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists in sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks over a wide range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at the steps toward more efficient mapping strategies.
Work by @cite_21 has shown that power-law-like distributions can be obtained for subgraphs of Erdős-Rényi random graphs when the subgraph is the result of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process.
{ "cite_N": [ "@cite_21" ], "mid": [ "2107648668" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists in sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks over a wide range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at the steps toward more efficient mapping strategies.
In Ref. @cite_11 , Petermann and De Los Rios have studied a traceroute-like procedure on various examples of scale-free graphs, showing that, in the case of a single source, power-law distributions with underestimated exponents are obtained. Analytical estimates of the measured exponents as a function of the true ones were also derived. Finally, in a recent preprint that appeared during the completion of our work, Guillaume and Latapy @cite_28 report on shortest-path explorations of synthetic graphs, comparing properties of the resulting sampled graphs with those of the original networks. The exploration is analyzed using level plots of the proportion of discovered nodes and edges in the graph as a function of the number of sources and targets, which also give hints for the optimal placement of sources and targets. All these pieces of work make clear the relevance of determining to what extent the topological properties observed in sampled graphs are representative of those of the real networks.
{ "cite_N": [ "@cite_28", "@cite_11" ], "mid": [ "1479935453", "2949920252" ], "abstract": [ "Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conduced to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, we show that the exploration process induces some properties on the maps. We validate our analysis using real-world data and experiments and we discuss its implications.", "The increased availability of data on real networks has favoured an explosion of activity in the elaboration of models able to reproduce both qualitatively and quantitatively the measured properties. What has been less explored is the reliability of the data, and whether the measurement technique biases them. Here we show that tree-like explorations (similar in principle to traceroute) can indeed change the measured exponents of a scale-free network." ] }
cs0405044
2950057546
Most previous work on the recently developed language-modeling approach to information retrieval focuses on document-specific characteristics, and therefore does not take into account the structure of the surrounding corpus. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in precision and recall, and our new interpolation algorithm posts statistically significant improvements for both metrics over all three corpora tested.
Document clustering has a long history in information retrieval @cite_11 @cite_12 ; in particular, approximating topics via clusters is a recurring theme @cite_18 . Arguably the work most related to ours, by dint of employing both clustering and language modeling in the context of ad hoc retrieval (see, e.g., @cite_16 , @cite_3 , and @cite_5 for applications of clustering in related areas), is that on latent-variable models, e.g., @cite_1 @cite_13 @cite_15 @cite_10 , of which the classic aspect model is one instantiation. Such work takes a strictly probabilistic approach to the problems we have discussed with standard language modeling, as opposed to our algorithmic viewpoint. Also, a focus in the latent-variable work has been on sophisticated cluster induction, whereas we find that a very simple clustering scheme works rather well in practice. Interestingly, Hofmann @cite_13 linearly interpolated his probabilistic model's score, which is based on (soft) clusters, with the usual cosine metric; this is quite close in spirit to what our algorithm does.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_1", "@cite_3", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "1990388042", "1880262756", "1519270649", "2112874453", "2074449313", "1602444393", "2121227244", "2134731454", "", "" ], "abstract": [ "", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "", "Standard statistical language models use n-grams to capture local dependencies, or use dynamic modeling techniques to track dependencies within an article. In this paper, we investigate a new statistical language model that captures topic-related dependencies of words within and across sentences. First, we develop a topic-dependent, sentence-level mixture language model which takes advantage of the topic constraints in a sentence or article. Second, we introduce topic-dependent dynamic adaptation techniques in the framework of the mixture model, using n-gram caches and content word unigram caches. Experiments with the static (or unadapted) mixture model on the North American Business (NAB) task show a 21 reduction in perplexity and a 3-4 improvement in recognition accuracy over a general n-gram model, giving a larger gain than that obtained with supervised dynamic cache modeling. 
Further experiments on the Switchboard corpus also showed a small improvement in performance with the sentence-level mixture model. Cache modeling techniques introduced in the mixture framework contributed a further 14 reduction in perplexity and a small improvement in recognition accuracy on the NAB task for both supervised and unsupervised adaptation.", "We present Scatter Gather, a cluster-based document browsing method, as an alternative to ranked titles for the organization and viewing of retrieval results. We systematically evaluate Scatter Gather in this context and find significant improvements over similarity search ranking alone. This result provides evidence validating the cluster hypothesis which states that relevant documents tend to be more similar to each other than to non-relevant documents. We describe a system employing Scatter Gather and demonstrate that users are able to use this system close to its full potential.", "We explore the use of Optimal Mixture Models to represent topics. We analyze two broad classes of mixture models: set-based and weighted. We provide an original proof that estimation of set-based models is NP-hard, and therefore not feasible. We argue that weighted models are superior to set-based models, and the solution can be estimated by a simple gradient descent technique. We demonstrate that Optimal Mixture Models can be successfully applied to the task of document retrieval. Our experiments show that weighted mixtures outperform a simple language modeling baseline. We also observe that weighted mixtures are more robust than other approaches of estimating topical models.", "We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. 
We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.", "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.", "", "" ] }
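The interpolation idea described above (combining a document-based language model with information drawn from the document's cluster) can be sketched as a linear mixture of unigram models. The toy corpus, the hard clustering, and the weight lam = 0.7 below are illustrative assumptions; the algorithms in the paper are more refined.

```python
from collections import Counter

def lm(texts):
    """Unigram maximum-likelihood language model over a list of token lists."""
    counts = Counter(t for doc in texts for t in doc)
    total = sum(counts.values())
    return lambda term: counts[term] / total if total else 0.0

def interpolated_score(query, doc, cluster_docs, lam=0.7):
    """Rank score: product over query terms of
    lam * p(term | doc) + (1 - lam) * p(term | cluster)."""
    p_doc = lm([doc])
    p_clu = lm(cluster_docs)
    score = 1.0
    for t in query:
        score *= lam * p_doc(t) + (1 - lam) * p_clu(t)
    return score

docs = [["language", "model", "retrieval"],
        ["cluster", "based", "retrieval"],
        ["cooking", "pasta", "recipes"]]
# Hypothetical hard clustering: the first two documents share a topic.
clusters = {0: [docs[0], docs[1]], 1: [docs[0], docs[1]], 2: [docs[2]]}
query = ["cluster", "retrieval"]
scores = [interpolated_score(query, d, clusters[i]) for i, d in enumerate(docs)]
best = max(range(len(docs)), key=lambda i: scores[i])
print("best doc:", best, "scores:", scores)
```

Note that document 0 receives a nonzero score even though it lacks the term "cluster", because its cluster supplies probability mass for that term; this is the smoothing effect that motivates cluster-based interpolation.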
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional network-level denial of service attacks and application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The protocol described here is derived from earlier work @cite_21 in which we covered the background of the LOCKSS system. That protocol used redundancy, rate limitation, effort balancing, bimodal behavior (polls must be won or lost by a landslide) and friend bias (soliciting some percentage of votes from peers on the friends list) to prevent powerful adversaries from modifying the content without detection, or discrediting the intrusion detection system with false alarms. To mitigate its vulnerability to attrition, in this work we reinforce these defenses using admission control, desynchronization, and redundancy, and restructure votes to support a block-based repair mechanism that penalizes free-riding. In this section we list work that describes the nature and types of denial of service attacks, as well as related work that applies defenses similar to ours.
{ "cite_N": [ "@cite_21" ], "mid": [ "2144552569" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in \"opinion polls.\" Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional network-level denial of service attacks and application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Our attrition adversary draws on a wide range of work in detecting @cite_60 , measuring @cite_34 , and combating @cite_8 @cite_38 @cite_27 @cite_49 network-level DDoS attacks capable of stopping traffic to and from our peers. This work observes that current attacks are not simultaneously of high intensity, long duration, and high coverage (many peers) @cite_34 .
{ "cite_N": [ "@cite_38", "@cite_8", "@cite_60", "@cite_27", "@cite_49", "@cite_34" ], "mid": [ "2154178154", "2120065915", "2159160833", "1967949770", "", "" ], "abstract": [ "The current Internet infrastructure has very few built-in protection mechanisms, and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability to both denial of service (DoS) attacks and flash crowds in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow, nor to a general increase in traffic, but to a well-defined subset of the traffic --- an aggregate. This paper proposes mechanisms for detecting and controlling such high bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms.", "This paper describes Active Internet Traffic Filtering (AITF), a mechanism for blocking highly distributed denial-of-service (DDoS) attacks. These attacks are an acute contemporary problem, with few practical solutions available today; we describe in this paper the reasons why no effective DDoS filtering mechanism has been deployed yet. We show that the current Internet's routers have sufficient filtering resources to thwart such attacks, with the condition that attack traffic be blocked close to its sources; AITF leverages this observation. 
Our results demonstrate that AITF can block a million-flow attack within seconds, while it requires only tens of thousands of wire-speed filters per participating router -- an amount easily accommodated by today's routers. AITF can be deployed incrementally and yields benefits even to the very first adopters.", "Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often a manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, and novel techniques such as transient ramp-up behavior and spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework we monitored access links of a regional ISP detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet.", "This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or spoofed'', source addresses. 
In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed post-mortem'' -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.", "", "" ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Related to first-hand reputation, @cite_7 use game-theoretic analysis of peer behavior to show that a reciprocative admission-control strategy can motivate cooperation among selfish peers.
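The reciprocative idea can be illustrated with a minimal sketch: a peer serves a requester with probability equal to the requester's observed generosity (service given divided by service consumed). The class, its method names, and the neutral prior for strangers are illustrative assumptions, not details from @cite_7 .

```python
import random

class ReciprocativePeer:
    """Minimal sketch of a reciprocative decision function: cooperate with a
    requester with probability equal to its observed generosity, i.e. the
    ratio of service it has provided to service it has consumed."""

    def __init__(self):
        self.given = {}     # service each remote peer has provided to others
        self.consumed = {}  # service each remote peer has consumed

    def record(self, peer_id, gave, took):
        self.given[peer_id] = self.given.get(peer_id, 0) + gave
        self.consumed[peer_id] = self.consumed.get(peer_id, 0) + took

    def generosity(self, peer_id):
        g = self.given.get(peer_id, 0)
        c = self.consumed.get(peer_id, 0)
        if c == 0:
            return 1.0 if g > 0 else 0.5  # unknown stranger: neutral prior
        return min(1.0, g / c)

    def admit(self, peer_id):
        # Serve the request with probability equal to the peer's generosity.
        return random.random() < self.generosity(peer_id)
```

Under this rule, free-riders see their service probability decay toward zero, while contributors are served almost always.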
{ "cite_N": [ "@cite_7" ], "mid": [ "2100642498" ], "abstract": [ "Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, a symmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD),and propose the Reciprocative decision function as the basis of a family of incentives techniques. These techniques are fullydistributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control has been used to improve the usability of overloaded services. For example, @cite_1 propose admission control strategies that help protect long-running Web service sessions (i.e., related sequences of requests) from abrupt termination. Preserving the responsiveness of Web services in the face of demand spikes is critical, whereas LOCKSS peers need only manage their resources to make progress at the necessary rate in the long term. They can treat demand spikes as hostile behavior. In a P2P context, @cite_14 use admission control (and rate limiting) to mitigate the effects of a query flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2158656420", "2160436229" ], "abstract": [ "We describe a simple but effective traffic model that can be used to understand the effects of denial-of-service (DoS) attacks based on query floods in Gnutella networks. We run simulations based on the model to analyze how different choices of network topology and application level load balancing policies can minimize the effect of these types of DoS attacks. In addition, we also study how damage caused by query floods is distributed throughout the network, and how application-level policies can localize the damage.", "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. 
These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Golle and Mironov @cite_2 provide compliance enforcement in the context of distributed computation using a receipt technique similar to ours. Random auditing using challenges and hashing has been proposed @cite_42 @cite_37 as a means of enforcing trading requirements in some distributed storage systems.
{ "cite_N": [ "@cite_37", "@cite_42", "@cite_2" ], "mid": [ "1585819637", "2148042433", "1506068270" ], "abstract": [ "Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications.We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems.", "Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems.Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. 
Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure.", "Computationally expensive tasks that can be parallelized are most efficiently completed by distributing the computation among a large number of processors. The growth of the Internet has made it possible to invite the participation of just about any computer in such distributed computations. This introduces the potential for cheating by untrusted participants. In a commercial setting where participants get paid for their contribution, there is incentive for dishonest participants to claim credit for work they did not do. In this paper, we propose security schemes that defend against this threat with very little overhead. Our weaker scheme discourages cheating by ensuring that it does not pay off, while our stronger schemes let participants prove that they have done most of the work they were assigned with high probability." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
In DHTs, waves of synchronized routing updates triggered by joins or departures cause instability during periods of high churn. Bamboo's desynchronization defense @cite_28 , which uses lazy updates, is effective against this.
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The previous version of the LOCKSS protocol used rate limiting, inherent intrusion detection through bimodal system behavior, and churning of friends into the reference list to prevent poll samples from being influenced by nominated peers. These techniques are effective in defending against adversaries attempting to modify content without being detected or trying to trigger intrusion detection alarms to discredit the system @cite_21 . The previous version of the protocol, however, did not tolerate attrition attacks well: an attrition adversary with about 50 nodes of computational power was able to bring a system of 1000 peers to a crawl. Further leveraging the rate-limitation defense to provide admission control, compliance enforcement, and desynchronization of poll invitations raises the computational power an adversary must use to equal that used by the defenders.
{ "cite_N": [ "@cite_21" ], "mid": [ "2144552569" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in \"opinion polls.\" Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Rate limits on peers joining a DHT have been suggested @cite_47 @cite_37 as a defense against attempts to control parts of the hash space, for example to control the placement of certain data objects or to misroute requests. Limiting both joins and stores to empirically determined safe rates will also be needed to thwart the attrition adversary. At least for file sharing, studies @cite_24 have suggested that users' behavior may not be sensitive to latency, so the increased storage latency that rate limits create is probably unimportant.
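A standard way to enforce such empirically determined safe rates is a token bucket; the sketch below shows one applied to join or store admissions. The `rate` and `burst` values are placeholders: as the text notes, a real deployment would tune them from measurement.

```python
import time

class TokenBucket:
    """Sketch of a rate limiter for join/store requests. Tokens accrue at a
    fixed rate up to a burst capacity; each admitted request spends one."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate      # tokens replenished per second
        self.burst = burst    # maximum bucket capacity
        self.tokens = burst   # start full
        self.clock = clock    # injectable clock eases testing
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # admit the join/store request
        return False      # over the safe rate: defer or reject
```

Requests beyond the safe rate are simply refused, so an attrition adversary flooding joins or stores gains nothing beyond the configured budget.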
{ "cite_N": [ "@cite_24", "@cite_47", "@cite_37" ], "mid": [ "1972699782", "", "1585819637" ], "abstract": [ "In the span of only a few years, the Internet has experienced an astronomical increase in the use of specialized content delivery systems, such as content delivery networks and peer-to-peer file sharing systems. Therefore, an understanding of content delivery on the lnternet now requires a detailed understanding of how these systems are used in practice.This paper examines content delivery from the point of view of four content delivery systems: HTTP web traffic, the Akamai content delivery network, and Kazaa and Gnutella peer-to-peer file sharing traffic. We collected a trace of all incoming and outgoing network traffic at the University of Washington, a large university with over 60,000 students, faculty, and staff. From this trace, we isolated and characterized traffic belonging to each of these four delivery classes. Our results (1) quantify, the rapidly increasing importance of new content delivery systems, particularly peer-to-peer networks, (2) characterize the behavior of these systems from the perspectives of clients, objects, and servers, and (3) derive implications for caching in these systems.", "", "Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. 
We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications.We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control appears frequently as a defense against overloading, for example in the context of Web services: @cite_1 propose admission control strategies that help protect long-running sessions (i.e., related sequences of requests) from abrupt termination. However, several of the pertinent assumptions that hold true in a Web environment are inapplicable to LOCKSS: request rejection costs much less than an accepted request, and explicit rejection rarely stems the tide of further requests when a denial-of-service attack is under way. @cite_14 use admission control as well as rate limiting to mitigate the effects of a query flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
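The session-protecting idea from @cite_1 can be sketched as follows: near capacity, new sessions are rejected, but requests belonging to already-admitted sessions are still served, so long sessions are not cut off mid-way. The class, its capacity model, and the string return values are illustrative assumptions, not the paper's mechanism.

```python
class SessionAdmission:
    """Sketch of session-based admission control: protect in-progress
    sessions by rejecting only brand-new sessions under overload."""

    def __init__(self, capacity):
        self.capacity = capacity  # max concurrently admitted sessions
        self.active = set()

    def handle(self, session_id):
        if session_id in self.active:
            return "served"        # in-progress sessions always proceed
        if len(self.active) < self.capacity:
            self.active.add(session_id)
            return "admitted"      # room for a new session
        return "rejected"          # overload: shed new sessions only

    def finish(self, session_id):
        self.active.discard(session_id)
```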
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2158656420", "2160436229" ], "abstract": [ "We describe a simple but effective traffic model that can be used to understand the effects of denial-of-service (DoS) attacks based on query floods in Gnutella networks. We run simulations based on the model to analyze how different choices of network topology and application level load balancing policies can minimize the effect of these types of DoS attacks. In addition, we also study how damage caused by query floods is distributed throughout the network, and how application-level policies can localize the damage.", "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. 
These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Some researchers have proposed storing useless content in exchange for having one's own content stored, as a way to enforce symmetric storage relationships. Compliance is enforced by asking the peer storing the file of interest to hash some portion of the file as proof that it is still storing the file @cite_42 @cite_37 .
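The challenge-response audit at the heart of these schemes can be sketched as a nonce-keyed hash: only a peer that still holds the data can produce the digest, and the nonce prevents precomputing or caching answers. This simplified version assumes the auditor holds a full replica (as LOCKSS peers do); the cited schemes hash random portions of the file instead.

```python
import hashlib
import os

def make_challenge() -> bytes:
    # Fresh random nonce so the prover cannot reuse an old answer.
    return os.urandom(16)

def prove_storage(data: bytes, nonce: bytes) -> str:
    # Prover hashes the nonce together with the stored content.
    return hashlib.sha256(nonce + data).hexdigest()

def audit(expected_data: bytes, nonce: bytes, response: str) -> bool:
    # Auditor recomputes the digest from its own copy and compares.
    return prove_storage(expected_data, nonce) == response
```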
{ "cite_N": [ "@cite_37", "@cite_42" ], "mid": [ "1585819637", "2148042433" ], "abstract": [ "Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications.We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems.", "Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems.Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. 
Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Waves of synchronized routing updates caused by joins or departures cause instability during periods of high churn @cite_28 . Breaking the synchrony through lazy updates (e.g., in Bamboo @cite_28 ) can absorb the brunt of a churn attack.
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
As churn (the rate at which the peer population changes) increases, both the latency and the probability of failure of queries to a DHT increase @cite_28 . An attrition attack might consist of adversary peers joining and leaving fast enough to destabilize the routing infrastructure.
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations." ] }
cs0405070
2949521043
We propose a model for the World Wide Web graph that couples the topological growth with the traffic's dynamical evolution. The model is based on a simple traffic-driven dynamics and generates weighted directed graphs exhibiting the statistical properties observed in the Web. In particular, the model yields a non-trivial time evolution of vertices and heavy-tail distributions for the topological and traffic properties. The generated graphs exhibit a complex architecture with a hierarchy of cohesiveness levels similar to those observed in the analysis of real data.
A very interesting class of models that captures the main features of WWW growth has been introduced by @cite_11 in order to provide a mechanism that does not assume knowledge of the degrees of existing vertices. Each newly introduced vertex @math selects at random an already existing vertex @math ; for each out-neighbour @math of @math , @math connects to @math with a certain probability @math ; with probability @math it connects instead to another randomly chosen node. This model describes the growth process of the WWW as a copy mechanism in which newly arriving web pages tend to reproduce the hyperlinks of similar web pages, i.e., the first page to which they connect. Interestingly, this model effectively recovers a preferential attachment mechanism without explicitly introducing it.
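The copy mechanism described above can be sketched as a short simulation: each new vertex picks a random prototype and copies each of its out-links with probability `alpha`, otherwise linking to a uniformly random earlier node. The parameter name `alpha` and the tiny seed graph are illustrative assumptions.

```python
import random

def copy_model(n, alpha, seed=None):
    """Sketch of the copy-model growth process: new vertex v copies each
    out-link of a random prototype with probability alpha, and otherwise
    rewires that link to a uniformly random existing vertex."""
    rng = random.Random(seed)
    out = {0: [1], 1: [0]}  # tiny seed graph to bootstrap growth
    for v in range(2, n):
        proto = rng.randrange(v)          # random existing prototype
        links = []
        for u in (out[proto] or [proto]):  # fallback guards an empty list
            if rng.random() < alpha:
                links.append(u)            # copy the prototype's link
            else:
                links.append(rng.randrange(v))  # rewire uniformly at random
        out[v] = links
    return out
```

Copying links makes high in-degree pages more likely to gain further links, which is how the model recovers preferential attachment implicitly.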
{ "cite_N": [ "@cite_11" ], "mid": [ "2115579680" ], "abstract": [ "The Web may be viewed as a directed graph each of whose vertices is a static HTML Web page, and each of whose edges corresponds to a hyperlink from one Web page to another. We propose and analyze random graph models inspired by a series of empirical observations on the Web. Our graph models differ from the traditional G sub n,p models in two ways: 1. Independently chosen edges do not result in the statistics (degree distributions, clique multitudes) observed on the Web. Thus, edges in our model are statistically dependent on each other. 2. Our model introduces new vertices in the graph as time evolves. This captures the fact that the Web is changing with time. Our results are two fold: we show that graphs generated using our model exhibit the statistics observed on the Web graph, and additionally, that natural graph models proposed earlier do not exhibit them. This remains true even when these earlier models are generalized to account for the arrival of vertices over time. In particular, the sparse random graphs in our models exhibit properties that do not arise in far denser random graphs generated by Erdos-Renyi models." ] }
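The copying mechanism described above is simple enough to sketch directly. The following is a minimal illustrative implementation; the function name, the seed graph, and the value of the copy probability are our own assumptions, not details of @cite_11 :

```python
import random

def grow_copy_model(n_vertices, alpha, n_init=3, seed=0):
    """Grow a directed graph by the copying mechanism: each new vertex i
    picks a random existing 'prototype' j and, for each out-neighbour l
    of j, links to l with probability alpha; otherwise it links to a
    uniformly chosen existing node instead."""
    rng = random.Random(seed)
    # tiny seed graph so early prototypes already have out-links
    out = {v: [(v + 1) % n_init, (v + 2) % n_init] for v in range(n_init)}
    for i in range(n_init, n_vertices):
        j = rng.randrange(i)              # random existing prototype
        links = []
        for l in out[j]:
            if rng.random() < alpha:      # copy the prototype's hyperlink
                links.append(l)
            else:                         # ...or rewire to a random node
                links.append(rng.randrange(i))
        out[i] = links
    return out

graph = grow_copy_model(1000, alpha=0.8)
in_degree = {}
for links in graph.values():
    for l in links:
        in_degree[l] = in_degree.get(l, 0) + 1
```

Because a fraction alpha of each new vertex's links copies the targets of an existing vertex, nodes of high in-degree are copied more often, which is how the model recovers preferential attachment without invoking vertex degrees explicitly.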
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agent's future state depends only on its present state and not on any of its past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
With the exceptions noted below, there has been very little prior work on mathematical analysis of multi-agent systems. The closest in spirit to our paper is the work by Huberman, Hogg and coworkers on computational ecologies @cite_5 @cite_73 . These authors mathematically studied collective behavior in a system of agents, each choosing between two alternative strategies. They derived a rate equation for the average number of agents using each strategy from the underlying probability distributions. Our approach is consistent with theirs --- in fact, we can easily write down the same rate equations from the macroscopic state diagram of the system, without having to derive them from the underlying probability distributions. Computational ecologies can, therefore, be considered an application of the methodology described in this paper. Yet another application of the approach presented here is the author's work on coalition formation in electronic marketplaces @cite_71 .
{ "cite_N": [ "@cite_5", "@cite_73", "@cite_71" ], "mid": [ "13086009", "1973321347", "2127034968" ], "abstract": [ "", "Abstract We investigate the effect of predictions upon a model of coevolutionary systems which was originally inspired by computational ecosystems. The model incorporates many of the features of distributed resource allocation in systems comprised of many individual agents, including asynchrony, resource contention, and decision-making based upon incomplete knowledge and delayed information. Previous analyses of a similar model of non-predictive agents have demonstrated that periodic or chaotic oscillations in resource allocation can occur under certain conditions, and that these oscillations can affect the performance of the system adversely. In this work, we show that the system performance can be improved if the agents do an adequate job of predicting the current state of the system. We explore two plausible methods for prediction - technical analysis and system analysis. Technical analysts are responsive to the behavior of the system, but suffer from an inability to take their own behavior into account. System analysts perform extremely well when they have very accurate information about the other agents in the system, but can perform very poorly when their information is even slightly inaccurate. By combining the strengths of both methods, we obtain a successful hybrid of the two prediction methods which adapts its model of other agents in response to the observed behavior of the system.", "Coalition formation is a desirable behavior in a multiagent system, when a group of agents can perform a task more efficiently than any single agent can. Computational and communications complexity of traditional approaches to coalition formation, e.g., through negotiation, make them impractical for large systems. 
We propose an alternative, physics-motivated mechanism for coalition formation that treats agents as randomly moving, locally interacting entities. A new coalition may form when two agents encounter one another and it may grow when a single agent encounters it. Such agent-level behavior leads to a macroscopic model that describes how the number and distribution of coalitions change with time. We increase the generality and complexity of the model by letting the agents leave coalitions with some probability. The model is expressed mathematically as a series of differential equations. These equations have steady state solutions that describe the equilibrium distribution of coalitions. Within a context of a specific multi-agent application, we analyze and discuss the connection between the global system utility the parameters of the model." ] }
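The rate-equation approach mentioned above can be illustrated with a minimal numerical sketch. The particular preference function rho and all parameter values below are illustrative assumptions, not the equations of @cite_5 or @cite_73 :

```python
def simulate_rate_equation(rho, f0=0.1, alpha=1.0, dt=0.01, steps=5000):
    """Euler-integrate df/dt = alpha * (rho(f) - f), a generic form of
    a rate equation for the fraction f of agents using strategy 1,
    where rho(f) is the probability that strategy 1 is preferred given
    the current state of the system."""
    f = f0
    for _ in range(steps):
        f += dt * alpha * (rho(f) - f)
    return f

# Illustrative preference rule: strategy 1 becomes less attractive as
# it gets crowded (resource contention), giving an interior fixed
# point at f* = 2/3, the solution of rho(f) = f.
rho = lambda f: 1.0 - 0.5 * f
f_star = simulate_rate_equation(rho)
```

The fixed point of the dynamics is the self-consistent allocation rho(f) = f, which is the kind of steady-state statement one can read off a macroscopic state diagram without deriving it from the underlying probability distributions.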
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agent's future state depends only on its present state and not on any of its past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
In the robotics domain, Sugawara and coworkers @cite_62 @cite_26 developed simple state-based analytical models of cooperative foraging in groups of communicating and non-communicating robots and studied them quantitatively. Although these models are similar to ours, they are overly simplified and fail to take crucial interactions among robots into account. In separate papers, we have analyzed collaborative @cite_46 and foraging @cite_29 behavior in groups of robots. The focus of that work is on realistic models and the comparison of the models' predictions to experimental and simulation results. For example, in @cite_46 , we considered the same model of collaborative stick-pulling presented here, but studied it under the same conditions as the experiments. In @cite_29 , we found that we had to include avoiding-while-searching and wall-avoiding states in the model in order to obtain good quantitative agreement between the model and the results of sensor-based simulations. The focus of this paper, on the other hand, is to show that there is a principled way to construct a macroscopic model of the collective dynamics of a MAS and, more importantly, a practical "recipe" for creating such a model from the details of the microscopic controller.
{ "cite_N": [ "@cite_29", "@cite_46", "@cite_62", "@cite_26" ], "mid": [ "1578969637", "2137153348", "2053796391", "2090703876" ], "abstract": [ "In multi-robot applications, such as foraging or collection tasks, interference, which results from competition for space between spatially extended robots, can significantly affect the performance of the group. We present a mathematical model of foraging in a homogeneous multi-robot system, with the goal of understanding quantitatively the effects of interference. We examine two foraging scenarios: a simplified collection task where the robots only collect objects, and a foraging task, where they find objects and deliver them to some pre-specified “home” location. In the first case we find that the overall group performance improves as the system size growss however, interference causes this improvement to be sublinear, and as a result, each robot's individual performance decreases as the group size increases. We also examine the full foraging task where robots collect objects and deliver them home. We find an optimal group size that maximizes group performance. For larger group sizes, the group performance declines. However, again due to the effects of interference, the individual robot's performance is a monotonically decreasing function of the group size. We validate both models by comparing their predictions to results of sensor-based simulations in a multi-robot system and find good agreement between theory and simulations data.", "In this article, we present a macroscopic analytical model of collaboration in a group of reactive robots. The model consists of a series of coupled differential equations that describe the dynamics of group behavior. After presenting the general model, we analyze in detail a case study of collaboration, the stick-pulling experiment, studied experimentally and in simulation by (Autonomous Robots, 11, 149-171). 
The robots' task is to pull sticks out of their holes, and it can be successfully achieved only through the collaboration of two robots. There is no explicit communication or coordination between the robots. Unlike microscopic simulations (sensor-based or using a probabilistic numerical model), in which computational time scales with the robot group size, the macroscopic model is computationally efficient, because its solutions are independent of robot group size. Analysis reproduces several qualitative conclusions of : namely, the different dynamical regimes for different values of the ratio of robots to sticks, the existence of optimal control parameters that maximize system performance as a function of group size, and the transition from superlinear to sublinear performance as the number of robots is increased.", "We study the efficiency of cooperative behavior in a society of interacting agents. After reviewing the problem and defining the concept of swarm intelligence, we examine the collective behavior of many-body active clusters through the task of gathering pucks in a field. We used a simple robot with a drive system and the simplest means of interaction; a light and some sensors. The effectiveness of group behavior was studied for various (homogeneous, localized) puck distributions in a real experiment, simulations, and analysis. To evaluate the efficiency of group behavior, we examined the scaling relation between the task completion time and the number of robots, and the relation between the interaction duration and the efficiency of the group. We also found that a critical density for efficiency when the interaction distance is finite. These results show that cooperation between elements using a simple interaction strongly enhances the performance of the group compared to independent individuals.", "We have researched the efficiency of cooperative behavior of interacting multirobots. 
In this paper, we generalize the definition of swarm intelligence and examine the emergence of the generalized swarm intelligence (here we call it “swarm function”) through the task of gathering pucks in a field by interacting simple robots. This robot has a drive system and the simplest means of interaction. The effectiveness of group behavior was studied for various (homogeneous, localized) puck distributions. To evaluate the efficiency of group behavior, we proposed a scaling relation between the task completion time and the number of robots, and examined the relation between the interaction duration and the efficiency of the group. We also proposed a simplified state transition diagram of the group and analysed their characteristics using it." ] }
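As a toy illustration of what such a macroscopic state-based model looks like, the following integrates a minimal two-state (searching/avoiding) model. The states, rates, and parameter values are our own illustrative assumptions and are far simpler than the models of @cite_46 and @cite_29 :

```python
def macroscopic_two_state(N=10.0, alpha=0.2, tau=2.0, dt=0.01, steps=10000):
    """Euler-integrate a minimal macroscopic model with two states:
    searching robots enter the avoiding state at rate alpha
    (interference with other robots), and avoiding robots resume
    searching after a mean duration tau:
        dNs/dt = -alpha * Ns + (N - Ns) / tau
    """
    Ns = N  # all robots start out searching
    for _ in range(steps):
        Ns += dt * (-alpha * Ns + (N - Ns) / tau)
    return Ns

Ns_star = macroscopic_two_state()
# setting dNs/dt = 0 gives the steady state Ns* = N / (1 + alpha * tau)
```

Even this minimal model captures the qualitative effect of interference: increasing either the encounter rate alpha or the avoidance duration tau lowers the steady-state number of searching robots, and hence the effective search effort of the group.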
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone due to several reasons: 1) externally obtained components are usually unspecified or partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is to, with respect to some requirement (expressed in CTL or LTL) about the system, use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Recently, Bertolino et al. @cite_17 recognized the importance of testing a software component in its deployment environment. They developed a framework that supports functional testing of a software component with respect to the customer's specification, and that also provides a simple way to package the developer's test suites with a component so that the customer can re-execute them. Yet their approach requires the customer to have a complete specification of the component to be incorporated into a system, which is not always possible. McCamant and Ernst @cite_19 considered the issue of predicting the safety of a dynamic component upgrade, which is part of the problem we consider. Their approach is completely different, however, since they generate an abstract operational expectation about the new component by observing the system's run-time behavior with the old component.
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2121376435", "2100161032" ], "abstract": [ "We present a new, automatic technique to assess whether replacing a component of a software system by a purportedly compatible component may change the behavior of the system. The technique operates before integrating the new component into the system or running system tests, permitting quicker and cheaper identification of problems. It takes into account the system's use of the component, because a particular component upgrade may be desirable in one context but undesirable in another. No formal specifications are required, permitting detection of problems due either to errors in the component or to errors in the system. Both external and internal behaviors can be compared, enabling detection of problems that are not immediately reflected in the output.The technique generates an operational abstraction for the old component in the context of the system and generates an operational abstraction for the new component in the context of its test suite; an operational abstraction is a set of program properties that generalizes over observed run-time behavior. If automated logical comparison indicates that the new component does not make all the guarantees that the old one did, then the upgrade may affect system behavior and should not be performed without further scrutiny. In case studies, the technique identified several incompatibilities among software components.", "Component-based development is the emerging paradigm in software production, though several challenges still slow down its full taking up. In particular, the \"component trust problem\" refers to how adequate guarantees and documentation about a component' s behaviour can be transferred from the component developer to its potential users. 
The capability to test a component when deployed within the target application environment can help establish the compliance of a candidate component to the customer's expectations and certainly contributes to \"increase trust\". To this purpose, we propose the CDT framework for Component Deployment Testing. CDT provides the customer with both a technique to early specify a deployment test suite and an environment for running and reusing the specified tests on any component implementation. The framework can also be used to deliver the component developer's test suite and to later re-execute it. The central feature of CDT is the complete decoupling between the specification of the tests and the component implementation." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone due to several reasons: 1) externally obtained components are usually unspecified or partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is to, with respect to some requirement (expressed in CTL or LTL) about the system, use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the formal verification area, there is a long history of research on the verification of systems with modular structure (called modular verification @cite_9 ). A key idea @cite_31 @cite_11 in modular verification is the assume-guarantee paradigm: a module should guarantee the desired behavior once the environment with which the module interacts has the assumed behavior. There have been a variety of implementations of this idea (see, e.g., @cite_0 ). However, the assume-guarantee idea does not immediately fit our problem setup, since it requires that users have clear assumptions about a module's environment.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_31", "@cite_11" ], "mid": [ "1596365597", "2130025446", "2086070079", "1488659932" ], "abstract": [ "R. Alur1, T.A. Henzinger2, F.Y.C. Mang2, S. Qadeer2, S.K. Rajamani2, and S. Tasiran2 1 Computer & Information Science Department, University of Pennsylvania, Philadelphia, PA 19104. Computing Science Research Center, Bell Laboratories, Murray Hill, NJ 07974. alur@cis.upenn.edu 2 Electrical Engineering & Computer Sciences Department, University of California, Berkeley, CA 94720. ftah,fmang,shaz,sriramr,serdarg@eecs.berkeley.edu", "The role of Temporal Logic as a feasible approach to the specification and verification of concurrent systems is now widely accepted. A companion paper in this volume ([HP]) defines more precisely the area of applicability of Temporal Logic as that of reactive systems.", "", "Assume-guarantee reasoning has long been advertised as an important method for decomposing proof obligations in system verification. Refinement mappings (homomorphisms) have long been advertised as an important method for solving the language-inclusion problem in practice. When confronted with large verification problems, we therefore attempted to make use of both techniques. We soon found that rather than offering instant solutions, the success of assume-guarantee reasoning depends critically on the construction of suitable abstraction modules, and the success of refinement checking depends critically on the construction of suitable witness modules. Moreover, as abstractions need to be witnessed, and witnesses abstracted, the process must be iterated. We present here the main lessons we learned from our experiments, in limn of a systematic and structured discipline for the compositional verification of reactive modules. An infrastructure to support this discipline, and automate parts of the verification, has been implemented in the tool Mocha." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone due to several reasons: 1) externally obtained components are usually unspecified or partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is to, with respect to some requirement (expressed in CTL or LTL) about the system, use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the past decade, there has also been some research on combining model-checking and testing techniques for system verification, which belongs to a broader class of techniques called specification-based testing. Most of this work, however, only exploits a model-checker's ability to generate counter-examples from a system's specification in order to produce test cases against an implementation @cite_32 @cite_18 @cite_35 @cite_25 @cite_30 @cite_39 @cite_4 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25" ], "mid": [ "1896160926", "1506339322", "2115309705", "1558093988", "", "2101361140", "2055647675" ], "abstract": [ "We apply a model checker to the problem of test generation using a new application of mutation analysis. We define syntactic operators, each of which produces a slight variation on a given model. The operators define a form of mutation analysis at the level of the model checker specification. A model checker generates countersamples which distinguish the variations from the original specification. The countersamples can easily be turned into complete test cases, that is, with inputs and expected results. We define two classes of operators: those that produce test cases from which a correct implementation must differ, and those that produce test cases with which it must agree. There are substantial advantages to combining a model checker with mutation analysis. First, test case generation is automatic; each countersample is a complete test case. Second, in sharp contrast to program-based mutation analysis, equivalent mutant identification is also automatic. We apply our method to an example specification and evaluate the resulting test sets with coverage metrics on a Java implementation.", "We study the use of model checking techniques for the generation of test sequences. Given a formal model of the system to be tested, one can formulate test purposes. A model checker then derives test sequences that fulfill these test purposes. The method is demonstrated by applying it to a specification of an Intelligent Network with two features.", "SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. 
The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.", "", "", "Testing has a vital support role in the software engineering process, but developing tests often takes significant resources. A formal specification is a repository of knowledge about a system, and a recent method uses such specifications to automatically generate complete test suites via mutation analysis. We define an extensive set of mutation operators for use with this method. We report the results of our theoretical and experimental investigation of the relationships between the classes of faults detected by the various operators. Finally, we recommend sets of mutation operators which yield good test coverage at a reduced cost compared to using all proposed operators.", "Recently, many formal methods, such as the SCR (Software Cost Reduction) requirements method, have been proposed for improving the quality of software specifications. Although improved specifications are valuable, the ultimate objective of software development is to produce software that satisfies its requirements. To evaluate the correctness of a software implementation, one can apply black-box testing to determine whether the implementation, given a sequence of system inputs, produces the correct system outputs. This paper describes a specification-based method for constructing a suite of test sequences , where a test sequence is a sequence of inputs and outputs for testing a software implementation. The test sequences are derived from a tabular SCR requirements specification containing diverse data types, i.e., integer, boolean, and enumerated types. From the functions defined in the SCR specification, the method forms a collection of predicates called branches , which “cover” all possible software behaviors described by the specification. 
Based on these predicates, the method then derives a suite of test sequences by using a model checker's ability to construct counterexamples. The paper presents the results of applying our method to four specifications, including a sizable component of a contractor specification of a real system." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone due to several reasons: 1) externally obtained components are usually unspecified or partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is to, with respect to some requirement (expressed in CTL or LTL) about the system, use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Callahan et al. @cite_32 used the model-checker SPIN @cite_18 to check a program's execution traces generated during white-box testing and to generate new test cases from the counter-examples found by SPIN; in @cite_35 , SPIN was also used to generate test cases from counter-examples found during model-checking of system specifications. Gargantini and Heitmeyer @cite_25 used SMV both to generate test cases from operational SCR specifications and as a test oracle. In @cite_30 @cite_39 , Ammann et al. also exploited the counter-example generation of the model-checker SMV @cite_12 , but their approach mutates both specifications and properties so that a large set of test cases can be generated. (A detailed introduction to the use of model-checkers in testing can be found in @cite_4 .)
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25", "@cite_12" ], "mid": [ "1896160926", "1506339322", "2115309705", "1558093988", "", "2101361140", "2055647675", "" ], "abstract": [ "We apply a model checker to the problem of test generation using a new application of mutation analysis. We define syntactic operators, each of which produces a slight variation on a given model. The operators define a form of mutation analysis at the level of the model checker specification. A model checker generates countersamples which distinguish the variations from the original specification. The countersamples can easily be turned into complete test cases, that is, with inputs and expected results. We define two classes of operators: those that produce test cases from which a correct implementation must differ, and those that produce test cases with which it must agree. There are substantial advantages to combining a model checker with mutation analysis. First, test case generation is automatic; each countersample is a complete test case. Second, in sharp contrast to program-based mutation analysis, equivalent mutant identification is also automatic. We apply our method to an example specification and evaluate the resulting test sets with coverage metrics on a Java implementation.", "We study the use of model checking techniques for the generation of test sequences. Given a formal model of the system to be tested, one can formulate test purposes. A model checker then derives test sequences that fulfill these test purposes. The method is demonstrated by applying it to a specification of an Intelligent Network with two features.", "SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. 
The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.", "", "", "Testing has a vital support role in the software engineering process, but developing tests often takes significant resources. A formal specification is a repository of knowledge about a system, and a recent method uses such specifications to automatically generate complete test suites via mutation analysis. We define an extensive set of mutation operators for use with this method. We report the results of our theoretical and experimental investigation of the relationships between the classes of faults detected by the various operators. Finally, we recommend sets of mutation operators which yield good test coverage at a reduced cost compared to using all proposed operators.", "Recently, many formal methods, such as the SCR (Software Cost Reduction) requirements method, have been proposed for improving the quality of software specifications. Although improved specifications are valuable, the ultimate objective of software development is to produce software that satisfies its requirements. To evaluate the correctness of a software implementation, one can apply black-box testing to determine whether the implementation, given a sequence of system inputs, produces the correct system outputs. This paper describes a specification-based method for constructing a suite of test sequences , where a test sequence is a sequence of inputs and outputs for testing a software implementation. The test sequences are derived from a tabular SCR requirements specification containing diverse data types, i.e., integer, boolean, and enumerated types. From the functions defined in the SCR specification, the method forms a collection of predicates called branches , which “cover” all possible software behaviors described by the specification. 
Based on these predicates, the method then derives a suite of test sequences by using a model checker's ability to construct counterexamples. The paper presents the results of applying our method to four specifications, including a sizable component of a contractor specification of a real system.", "" ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Peled et al. @cite_27 @cite_29 @cite_5 studied the issue of checking a black-box against a temporal property (called black-box checking). Their focus, however, is on how to efficiently establish an abstract model of the black-box through black-box testing, and their approach requires a clearly defined property (an LTL formula) about the black-box, which is not always available in component-based systems. Kupferman and Vardi @cite_28 investigated module checking by considering the problem of checking an open finite-state system under all possible environments. Module checking differs from the problem in (*) mentioned at the beginning of the paper in the sense that a component, understood as an environment in @cite_28 , is a specific one. Fisler et al. @cite_24 @cite_6 proposed the idea of deducing a model-checking condition for extension features from the base feature, which they adopted to study model-checking of feature-oriented software designs. Their approach relies entirely on model-checking techniques; their algorithms admit false negatives and do not handle LTL formulas.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_6", "@cite_24", "@cite_27", "@cite_5" ], "mid": [ "", "1493173186", "2141399917", "1968542268", "1482663303", "1539019418" ], "abstract": [ "", "The AMC (for adaptive model checking) system allows one to perform model checking directly on a system, even when its internal structure is unknown or invisible. It also allows one to perform model checking using an inaccurate model, incrementally improving the model each time that a false negative (i.e., not an actual) counterexample is found.", "Feature-oriented software designs capture many interesting notions of cross-cutting, and offer a powerful method for building product-line architectures. Each cross-cutting feature is an independent module that fundamentally yields an open system from a verification perspective. We describe desiderata for verifying such modules through model checking and find that existing work on the verification of open systems fails to address most of the concerns that arise from feature-oriented systems. We therefore provide a new methodology for verifying such systems. To validate this new methodology, we have implemented it and applied it to a suite of modules that exhibit feature interaction problems. Our model checker was able to automatically locate ten problems previously found through a laborious simulation-based effort.", "Most existing modular model checking techniques betray their hardware roots: they assume that modules compose in parallel. In contrast, collaboration-based software designs, which have proven very successful in several domains, are sequential in the simplest case. Most interesting collaboration-based designs are really quasi-sequential compositions of parallel compositions. These designs demand and inspire new verification techniques. This paper presents algorithms that exploit the software's modular decomposition to verify collaboration-based designs. 
Our technique can verify most properties locally in the collaborations; we also characterize when a global state space construction is unavoidable. We have validated our proposal by testing it on several designs.", "Two main approaches are used for increasing the quality of systems: in model checking, one checks properties of a known design of a system; in testing, one usually checks whether a given implementation, whose internal structure is often unknown, conforms with an abstract design. We are interested in the combination of these techniques. Namely, we would like to be able to test whether an implementation with unknown structure satisfies some given properties. We propose and formalize this problem of black box checking and suggest several algorithms. Since the input to black box checking is not given initially, as is the case in the classical model of computation, but is learned through experiments, we propose a computational model based on games with incomplete information. We use this model to analyze the complexity of the problem. We also address the more practical question of finding an approach that can detect errors in the implementation before completing an exhaustive search.", "Model checking is a technique for automatically checking properties of models of systems. We present here several combinations of model checking with testing techniques. This allows checking systems when no model is given, when the model is inaccurate, or when only a part of its description is given." ] }
cs0402003
2952176458
The notion of preference is becoming more and more ubiquitous in present-day information systems. Preferences are primarily used to filter and personalize the information reaching the users of such systems. In database systems, preferences are usually captured as preference relations that are used to build preference queries. In our approach, preference queries are relational algebra or SQL queries that contain occurrences of the winnow operator ("find the most preferred tuples in a given relation"). We present here a number of semantic optimization techniques applicable to preference queries. The techniques make use of integrity constraints, and make it possible to remove redundant occurrences of the winnow operator and to apply a more efficient algorithm for the computation of winnow. We also study the propagation of integrity constraints in the result of the winnow. We have identified necessary and sufficient conditions for the applicability of our techniques, and formulated those conditions as constraint satisfiability problems.
The basic reference for semantic query optimization is @cite_11 . The most common techniques are: join elimination and introduction, predicate elimination and introduction, and detecting an empty answer set. @cite_15 discusses the implementation of predicate introduction and join elimination in an industrial query optimizer. Semantic query optimization techniques for relational queries are studied in @cite_8 in the context of denial and referential constraints, and in @cite_14 in the context of constraint tuple-generating dependencies (a generalization of CGDs and classical relational dependencies). FDs are used for reasoning about sort orders in @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "1501976648", "2108930340", "2147722789", "", "2006519067" ], "abstract": [ "We investigate the optimization of extended relational queries used in systems holding, for example, spatial, multimedia or constraint data. For such queries we must account for the built-in relations specific to the kind of data, and application dependent relationships between different relations. We show that the constraint database perspective and the use of constrained tuple-generating dependencies provides a general framework in which to address semantic query optimization for these queries. We establish some sufficient conditions for query transformations involving the introduction of relations, extending work in the literature for conventional databases. We introduce semantic query partition (SQP) as a useful technique for optimizing queries with expensive operations, and investigate the problem of generating subqueries, which is central to the use of SQP.", "The authors address the issue of reasoning with two classes of commonly used semantic integrity constraints in database and knowledge-base systems: implication constraints and referential constraints. They first consider a central problem in this respect, the IRC-refuting problem, which is to decide whether a conjunctive query always produces an empty relation on (finite) database instances satisfying a given set of implication and referential constraints. Since the general problem is undecidable, they only consider acyclic referential constraints. Under this assumption, they prove that the IRC-refuting problem is decidable, and give a novel necessary and sufficient condition for it. 
Under the same assumption, they also study several other problems encountered in semantic query optimization, such as the semantics-based query containment problem, redundant join problem, and redundant selection-condition problem, and show that they are polynomially equivalent or reducible to the IRC-refuting problem. Moreover, they give results on reducing the complexity for some special cases of the IRC-refuting problem.", "Decision support applications are growing in popularity as more business data is kept on-line. Such applications typically include complex SQL queries that can test a query optimizer's ability to produce an efficient access plan. Many access plan strategies exploit the physical ordering of data provided by indexes or sorting. Sorting is an expensive operation, however. Therefore, it is imperative that sorting is optimized in some way or avoided all together. Toward that goal, this paper describes novel optimization techniques for pushing down sorts in joins, minimizing the number of sorting columns, and detecting when sorting can be avoided because of predicates, keys, or indexes. A set of fundamental operations is described that provide the foundation for implementing such techniques. The operations exploit data properties that arise from predicate application, uniqueness, and functional dependencies. These operations and techniques have been implemented in IBM's DB2 CS.", "", "The purpose of semantic query optimization is to use semantic knowledge (e.g., integrity constraints) for transforming a query into a form that may be answered more efficiently than the original version. In several previous papers we described and proved the correctness of a method for semantic query optimization in deductive databases couched in first-order logic. This paper consolidates the major results of these papers emphasizing the techniques and their applicability for optimizing relational queries. 
Additionally, we show how this method subsumes and generalizes earlier work on semantic query optimization. We also indicate how semantic query optimization techniques can be extended to databases that support recursion and integrity constraints that contain disjunction, negation, and recursion." ] }
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
This paper draws on results from two areas: termination (checking) analysis and backwards analysis. It shows how to combine components implementing these so as to obtain an analyser for termination inference. Termination checking for logic programs has been studied extensively (see for example the survey @cite_18 ). Backwards reasoning for imperative programs dates back to the early days of static analysis and has been applied extensively in functional programming. Applications of backwards analysis in the context of logic programming are few. For details concerning other applications of backwards analysis, see @cite_14 . The only other work on termination inference that we are aware of is that of Mesnard and coauthors. The implementation of Mesnard's cTI analyser is described in @cite_15 and its formal justification is given in @cite_23 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_23", "@cite_15" ], "mid": [ "2009286786", "", "1999783590", "1524883003" ], "abstract": [ "Abstract We survey termination analysis techniques for Logic Programs. We give an extensive introduction to the topic. We recall several motivations for the work, and point out the intuitions behind a number of LP-specific issues that turn up, such as: the study of different classes of programs and LP languages, of different classes of queries and of different selection rules, the difference between existential and universal termination, and the treatment of backward unification and local variables. Then, we turn to more technical aspects: the structure of the termination proofs, the selection of well-founded orderings, norms and level mappings, the inference of interargument relations, and special treatments proposed for dealing with mutual recursion. For each of these, we briefly sketch the main approaches presented in the literature, using a fixed example as a file rouge. We conclude with some comments on loop detection and cycle unification and state some open problems.", "", "The Constraint Logic Programming (CLP) Scheme merges logic programming with constraint solving over predefined domains. In this article, we study proof methods for universal left termination of constraint logic programs. We provide a sound and complete characterization of left termination for ideal CLP languages which generalizes acceptability of logic programs. The characterization is then refined to the notion of partial acceptability, which is well suited for automatic modular inference. We describe a theoretical framework for automation of the approach, which is implemented. For nonideal CLP languages and without any assumption on their incomplete constraint solvers, even the most basic sound termination criterion from logic programming does not lift. 
We focus on a specific system, namely CLP(R), by proposing some additional conditions that make (partial) acceptability sound.", "We present the implementation of cTI, a system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, requiringu ser annotations. With termination inference such annotations are no longer necessary. Instead, all provably terminatingclasses to all related predicates are inferred at once. The architecture of cTI is described1 and some optimizations are discussed. Runningti mes for classical examples from the termination literature in LP and for some middle-sized logic programs are given." ] }
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
Both systems compute the greatest fixed point of a system of recursive equations. In our case, the implementation is based on a simple meta-interpreter written in Prolog; in cTI, the implementation is based on a @math -calculus interpreter. In our case, this system of equations is set up as an instance of backwards analysis, thereby providing a clear motivation and justification @cite_23 .
{ "cite_N": [ "@cite_23" ], "mid": [ "1999783590" ], "abstract": [ "The Constraint Logic Programming (CLP) Scheme merges logic programming with constraint solving over predefined domains. In this article, we study proof methods for universal left termination of constraint logic programs. We provide a sound and complete characterization of left termination for ideal CLP languages which generalizes acceptability of logic programs. The characterization is then refined to the notion of partial acceptability, which is well suited for automatic modular inference. We describe a theoretical framework for automation of the approach, which is implemented. For nonideal CLP languages and without any assumption on their incomplete constraint solvers, even the most basic sound termination criterion from logic programming does not lift. We focus on a specific system, namely CLP(R), by proposing some additional conditions that make (partial) acceptability sound." ] }
math0312490
2166075559
As a sequel to our proof of the analog of Serre's conjecture for function fields in Part I of this work, we study in this paper the deformation rings of @math -dimensional mod @math representations @math of the arithmetic fundamental group @math where @math is a geometrically irreducible, smooth curve over a finite field @math of characteristic @math ( @math ). We are able to show in many cases that the resulting rings are finite flat over @math . The proof principally uses a lifting result of the authors in Part I of this two-part work, Taylor-Wiles systems and the result of Lafforgue. This implies a conjecture of A.J. de Jong for representations with coefficients in power series rings over finite fields of characteristic @math , that have this mod @math representation as their reduction.
The key qualitative difference between the mentioned works and ours is that we can prove automorphy of residual representations like @math in the theorem, while in the other works this remains, at the moment, an important assumption that seems extremely difficult to verify in their number field case; further, we are mainly interested in establishing algebraic properties of deformation rings, while in the number field case these are established en route to proving modularity of @math -adic representations (which is known in our context by @cite_22 !). Thus our uses of the methods pioneered by Wiles can be deemed, to a certain extent, to be warped!
{ "cite_N": [ "@cite_22" ], "mid": [ "2121723322" ], "abstract": [ "On demontre la correspondance de Langlands pour GL r sur les corps de fonctions. La preuve generalise celle de Drinfeld en rang 2 : elle consiste a realiser la correspondance en rang r dans la cohomologie l-adique des varietes modulaires de chtoucas de Drinfeld de rang r." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attacks on an argument's premise) and rebuts (attacks on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
In @cite_20 , an argumentation semantics for extended logic programs, similar to Prakken and Sartor's, is proposed; it is influenced by WFSX, and distinguishes between sceptical and credulous conclusions of an argument. It also provides a proof theory based on dialogue trees, similar to Prakken and Sartor's.
{ "cite_N": [ "@cite_20" ], "mid": [ "2140044962" ], "abstract": [ "The ability to view extended logic programs as argumentation systems opens the way for the use of this language in formalizing communication among reasoning computing agents in a distributed framework. In this paper we define an argumentative and cooperative multi-agent framework, introducing credulous and sceptical conclusions. We also present an algorithm for inference and show how the agents can have more credulous or sceptical conclusions." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attacks on an argument's premise) and rebuts (attacks on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
Defeasible Logic Programming @cite_44 @cite_25 @cite_30 is a formalism very similar to Prakken and Sartor's, based on the first-order logic argumentation framework of @cite_1 . It includes logic programming with two kinds of negation, a distinction between strict and defeasible rules, and various criteria for comparing arguments. Its semantics is given operationally, by proof procedures based on dialectical trees @cite_44 @cite_25 . In @cite_19 , the semantics of Defeasible Logic Programming is related to the well-founded semantics, albeit only for the restricted language corresponding to normal logic programs @cite_41 .
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_1", "@cite_44", "@cite_19", "@cite_25" ], "mid": [ "190056634", "1968513265", "2156092566", "2159569510", "2170232725", "" ], "abstract": [ "We present here a knowledge representation language, where defeasible and non-defeasible rules can be expressed. The language has two different negations: classical negation, which is represented by the symbol “∼” used for representing contradictory knowledge; and negation as failure, represented by the symbol “not” used for representing incomplete information. Defeasible reasoning is done using a argumentation formalism. Thus, systems for acting in a dynamic domain, that properly handle contradictory and or incomplete information can be developed with this language. An argument is used as a defeasible reason for supporting conclusions. A conclusion q will be considered justified only when the argument that supports it becomes a justification. Building a justification involves the construction of a nondefeated argument A for q. In order to establish that A is a non-defeated argument, the system looks for counterarguments that could be defeaters for A. Since defeaters are arguments, there may exist defeaters for the defeaters, and so on, thus requiring a complete dialectical analysis. The system also detects, avoids, circular argumentation. The language was implemented using an abstract machine defined and developed as an extension of the Warren Abstract Machine (wam).", "A general logic program (abbreviated to \"program\" hereafter) is a set of roles that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) slttmg above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the \"meaning of the program, \" or Its \"declarative semantics. \" Ideally, queries directed to the program would be answered in accordance with this model. 
Recent research indicates that some programs do not have a \"satisfactory\" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is m fact a total model. it is called the well-founded model. It n shown that the class of programs possessing a total well-founded model properly includes previously studied classes of \"stratified\" and \"locally stratified\" programs, The method in this paper is also compared with other proposals in the literature, including Clark's \"program completion, \" Fitting's and Kunen's 3-vahred interpretations of it, and the \"stable models\" of Gelfond and Lifschitz.", "In this dissertation I present a formal approach to defeasible reasoning. This mathematical approach is based on the notion of specificity introduced by Poole and the general theory of warrant as presented by Pollock. General background information on the subject of Nonmonotonic Reasoning is presented and some of the shortcomings of existing systems are analyzed. We believe that the approach presented here represents a definite improvement over past systems. The main contribution of this thesis is a formally precise, elegant, clean, well-defined system which exhibits a correct behavior when applied to the benchmark examples in the literature. Model-theoretic semantical issues have been addressed. The investigation on the theoretical issues has aided the study of how this kind of reasoner can be realized on a computer. An interpreter of a restricted language, an extension of Horn clauses with defeasible rules, has been implemented. Finally, the implementation details are discussed.", "The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. 
DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query @math will succeed when there is an argument @math for @math that is warranted, i.e. the argument @math that supports @math is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP allows to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agent's knowledge and for providing an argumentation based reasoning mechanism to agents.", "This paper relates the Defeasible Logic Programming (DeLP) framework and its semantics SEMDeLP to classical logic programming frameworks. In DeLP, we distinguish between two different sorts of rules: strict and defeasible rules. Negative literals (∼A) in these rules are considered to represent classical negation. In contrast to this, in normal logic programming (NLP), there is only one kind of rules, but the meaning of negative literals (not A) is different: they represent a kind of negation as failure, and thereby introduce defeasibility. Various semantics have been defined for NLP, notably the well-founded semantics (WFS) (van , Proceedings of the Seventh Symposium on Principles of Database Systems, 1988, pp. 221-230; J. ACM 38 (3) (1991) 620) and the stable semantics Stable (Gelfond and Lifschitz, Fifth Conference on Logic Programming, MIT Press, Cambridge, MA, 1988, pp. 1070-1080; Proceedings of the Seventh International Conference on Logical Programming, Jerusalem, MIT Press, Cambridge, MA, 1991, pp. 
579-597). In this paper we consider the transformation properties for NLP introduced by Brass and Dix (J. Logic Programming 38(3) (1999) 167) and suitably adjusted for the DeLP framework. We show which transformation properties are satisfied, thereby identifying aspects in which NLP and DeLP differ. We contend that the transformation rules presented in this paper can help to gain a better understanding of the relationship of DeLP semantics with respect to more traditional logic programming approaches. As a byproduct, we obtain the result that DeLP is a proper extension of NLP.", "" ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attacks on an argument's premise) and rebuts (attacks on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
A number of authors @cite_23 @cite_18 @cite_15 @cite_27 @cite_37 @cite_45 @cite_14 @cite_20 work on argumentation for negotiating agents. Of these, the approaches of @cite_37 @cite_45 @cite_14 are based on logic programming. The advantage of the logic programming approach for arguing agents is the availability of goal-directed, top-down proof procedures. This is vital when implementing systems that need to react in real time and therefore cannot afford to compute all justified arguments, as would be required if a bottom-up argumentation semantics were used.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_14", "@cite_27", "@cite_45", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "1509700916", "", "", "2111427732", "2077144412", "2095587681", "1556421367", "2140044962" ], "abstract": [ "The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world view, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate to establish acceptable ways to solve problems, offer concessions, and (hopefully) come to a mutually acceptable agreement--in other words to negotiate. This paper presents a well-grounded framework for describing the reasoning process of negotiating agents. This framework is based upon a system of argumentation which may be used both at the level of an agent's internal reasoning and at the level of negotiation between agents. An originating agent puts forward an initial proposal. The recipient agents evaluate the proposal by constructing arguments for and against it. If the proposal is unacceptable, the recipient constructs an argument against the initial proposal or in favour of a new alternative. This process continues until a proposal or counter-proposal is acceptable to all the parties involved or until the negotiation breaks down without an agreement. 
This paper presents a formal model covering the essence of the negotiation process which can be specialised to describe specific strategies and tactics, an integrated framework for assessing proposals and for generating appropriate counter-proposals, and an intuitively appealing way of conducting reasoning and negotiation in the presence of imprecise and missing information.", "", "", "The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents).", "Dialogue represents a powerful means to solve problems using agents that have an explicit knowledge representation, and exhibit a goal-oriented behaviour. In recent years, computational logic gave a relevant contribution to the development of Multi-Agent Systems, showing that a logic-based formalism can be effectively used to model and implement the agent knowledge, reasoning, and interactions, and can be used to generate dialogues among agents and to prove properties such as termination and success. 
In this paper, we discuss the meaning of termination in agent dialogue, and identify a trade-off between ensuring dialogue termination, and therefore robustness in the agent system, and achieving completeness in problem solving. Then, building on an existing negotiation framework, where dialogues are obtained as a product of the combination of the reasoning activity of two agents on a logic program, we define a syntactic transformation of existing agent programs, with the purpose to ensure termination in the negotiation process. We show how such transformations can make existing agent systems more robust against possible situations of non-terminating dialogues, while reducing the class of reachable solutions in a specific application domain, that of resource reallocation.", "In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore different methods to negotiate and argue in a noncooperative environment where no centralized mechanism for coordination exists. 
The development of negotiating agents in the framework of the Automated Negotiation Agent is illustrated with an example where the agents plan, act, and resolve conflicts via negotiation in a Blocks World environment.", "Many autonomous agents operate in domains in which the cooperation of their fellow agents cannot be guaranteed. In such domains negotiation is essential to persuade others of the value of co-operation. This paper describes a general framework for negotiation in which agents exchange proposals backed by arguments which summarise the reasons why the proposals should be accepted. The argumentation is persuasive because the exchanges are able to alter the mental state of the agents involved. The framework is inspired by our work in the domain of business process management and is explained using examples from that domain.", "The ability to view extended logic programs as argumentation systems opens the way for the use of this language in formalizing communication among reasoning computing agents in a distributed framework. In this paper we define an argumentative and cooperative multi-agent framework, introducing credulous and sceptical conclusions. We also present an algorithm for inference and show how the agents can have more credulous or sceptical conclusions." ] }
cs0310016
1673079227
By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by "going backwards in time," vastly simplifying the process of debugging. An implementation of this idea, the "Omniscient Debugger," is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of "going backwards in time," the GUI which presents a global view of the program state and has a formal notion of navigation "through time," and the integration with an event analyzer.
HERCULE @cite_9 is a tool which can record and replay distributed events, in particular, window events and appearance. It does for windows much of what ODB does for programs, and provides much of the functionality that the ODB lacks.
{ "cite_N": [ "@cite_9" ], "mid": [ "1566707746" ], "abstract": [ "This paper presents HERCULE, an approach to non-invasively tracking end-user application activity in a distributed, component-based system. Such tracking can support the visualisation of user and application activity, system auditing, monitoring of system performance and the provision of feedback. A framework is provided that allows the insertion of proxies, dynamically and transparently, into a component-based system. Proxies are inserted in between the user and the graphical user-interface and between the client application and the rest of the distributed, component-based system. The paper describes: how the code for the proxies is generated by mining component documentation; how they are inserted without affecting pre-existing code; and how information produced by the proxies can be used to model application activity. The viability of this approach is demonstrated by means of a prototype implementation." ] }
cs0310020
2119104528
A simple mathematical definition of the 4-port model for pure Prolog is given. The model combines the intuition of ports with a compact representation of execution state. Forward and backward derivation steps are possible. The model satisfies a modularity claim, making it suitable for formal reasoning.
In contrast to the few specifications of the Byrd box, there are many more general models of pure (or even full) Prolog execution. Due to space limitations we mention here only some models directly relevant to our work; for a more comprehensive discussion see @cite_1 . Comparable to our work are the stack-based approaches. Stärk gives in @cite_3 , as a side issue, a simple operational semantics of pure logic programming. A state of execution is a stack of frame stacks, where each frame consists of a goal (ancestor) and an environment. In comparison, our state of execution consists of exactly one environment and one ancestor stack. The seminal paper of Jones and Mycroft @cite_10 was the first to present a stack-based model of execution, applicable to pure Prolog with cut added. It uses a sequence of frames. In these stack-based approaches (including our previous attempt @cite_1 ), there is no modularity: it is not possible to abstract the execution of a subgoal.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_3" ], "mid": [ "158887749", "2058402932", "2013417779" ], "abstract": [ "", "Abstract The coincidence between the model-theoretic and the procedural semantics of SLD-resolution does not carry over to a Prolog system that also implements non-logical features like cut and whose depth-first search strategy is incomplete. The purpose of this paper is to present the key concepts of a new, simple operational semantics of Standard Prolog in the form of rewriting rules. We use a novel linear representation of the Prolog tree traversal. A derivation is represented at the level of unification and backtracking. The rewriting system presented here can easily be implemented in a rewriting logic language, giving an executable specification of Prolog.", "This article contains the theoretical foundations of LPTP, a logic program theorem prover that has been implemented in Prolog by the author. LPTP is an interactive theorem prover in which one can prove correctness properties of pure Prolog programs that contain negation and built-in predicates like is2 and calln + 1. The largest example program that has been verified using LPTP is 635 lines long including its specification. The full formal correctness proof is 13 128 lines long (133 pages). The formal theory underlying LPTP is the inductive extension of pure Prolog programs. This is a first-order theory that contains induction principles corresponding to the definition of the predicates in the program plus appropriate axioms for built-in predicates. The inductive extension allows to express modes and types of predicates. These can then be used to prove termination and correctness properties of programs. The main result of this article is that the inductive extension is an adequate axiomatization of the operational semantics of pure Prolog with built-in predicates." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program, continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
In Program Slicing @cite_15 @cite_4 , statements that cannot influence the value of a variable at a given program point are eliminated by considering the dependencies between the statements. Backward reasoning from output values, as performed by our approach, is not possible with slicing. Similar ideas were successfully utilized in an MBD tool analyzing VHDL programs @cite_29 @cite_8 .
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_4", "@cite_8" ], "mid": [ "1575308494", "1986146602", "", "2080412071" ], "abstract": [ "A program slice consists of the parts of a program that (potentially) affect the values computed at some point of interest. Such a point of interest is referred to as a slicing criterion, and is typically specified by a location in the program in combination with a subset of the program’s variables. The task of computing program slices is called program slicing. The original definition of a program slice was presented by Weiser in 1979. Since then, various slightly different notions of program slices have been proposed, as well as a number of methods to compute them. An important distinction is that between a static and a dynamic slice. Static slices are computed without making assumptions regarding a program’s input, whereas the computation of dynamic slices relies on a specific test case. This survey presents an overview of program slicing, including the various general approaches used to compute slices, as well as the specific techniques used to address a variety of language features such as procedures, unstructured control flow, composite data types and pointers, and concurrency. Static and dynamic slicing methods for each of these features are compared and classified in terms of their accuracy and efficiency. Moreover, the possibilities for combining solutions for different features are investigated. Recent work on the use of compiler-optimization and symbolic execution techniques for obtaining more accurate slices is discussed. The paper concludes with an overview of the applications of program slicing, which include debugging, program integration, dataflow testing, and software maintenance.", "Abstract The state of the art in hardware design is the use of hardware description languages such as VHDL. The designs are tested by simulating them and comparing their output to that prescribed by the specification. 
A significant part of the design effort is spent on detecting unacceptable deviations from this specification and subsequently localizing the sources of such faults. In this paper, we describe an approach to employ model-based diagnosis for fault detection and localization in very large VHDL programs, by automatically generating the diagnosis model from the VHDL code and using observations about the program behavior to derive possible fault locations from the model. In order to achieve sufficient performance for practical applicability, we have developed a representation that provides a highly abstracted view of programs and faults, but is sufficiently detailed to yield substantial reductions in the fault localization costs when compared to the current manpower-intensive approach. The implementation in conjunction with the knowledge representation is designed with openness in mind in order to facilitate use of the highly optimized simulation tools available.", "", "Program slicing is a general, widely-used, and accepted technique applicable to different software engineering tasks including debugging, whereas model-based diagnosis is an AI technique originally developed for finding faults in physical systems. During the last years it has been shown that model-based diagnosis can be used for software debugging. In this paper we discuss the relationship between debugging using a dependency-based model and program slicing. As a result we obtain that slices of a program in a fault situation are equivalent to conflicts in model-based debugging." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program, continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
@cite_41 @cite_22 use probability measurements to guide diagnosis. The program debugging process is divided into two steps. In the first one, program parts that may cause a discrepancy are computed by tracing the incorrect output back to the inputs and collecting the involved statements. In a second step, a belief network is used to identify the most probable statements causing the fault. Although this approach was successful in debugging a very large program, it requires statistics relating the statement types and fault symptoms, which makes it unsuitable for debugging general programs.
{ "cite_N": [ "@cite_41", "@cite_22" ], "mid": [ "1596732274", "1968814228" ], "abstract": [ "We describe the integration of logical and uncertain reasoning methods to identify the likely source and location of software problems. To date, software engineers have had few tools for identifying the sources of error in complex software packages. We describe a method for diagnosing software problems through combining logical and uncertain-reasoning analyses. Our preliminary results suggest that such methods can be of value in directing the attention of software engineers to paths of an algorithm that have the highest likelihood of harboring a programming error.", "Software errors abound in the world of computing. Sophisticated computer programs rank high on the list of the most complex systems ever created by humankind. The complexity of a program or a set of interacting programs makes it extremely difficult to perform offline verification of run-time behavior. Thus, the creation and maintenance of program code is often linked to a process of incremental refinement and ongoing detection and correction of errors. To be sure, the detection and repair of program errors is an inescapable part of the process of software development. However, run-time software errors may be discovered in fielded applications days, months, or even years after the software was last modified—especially in applications composed of a plethora of separate programs created and updated by different people at different times. In such complex applications, software errors are revealed through the run-time interaction of hundreds of distinct processes competing for limited memory and CPU resources. Software developers and support engineers responsible for correcting software problems face difficult challenges in tracking down the source of run-time errors in complex applications. 
The information made available to engineers about the nature of a failure often leaves open a wide range of possibilities that must be sifted through carefully in searching for an underlying error." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program, continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
Jackson @cite_35 introduces a framework to detect faults in programs that manifest through changed dependencies between the input and the output variables of a program. The approach detects differences between the dependencies computed for a program and the dependencies specified by the user. It is able to detect certain kinds of structural faults but no test case information is exploited. Whereas Jackson focuses on bug detection, the model-based approach is also capable of locating faults. Further, the information obtained from present and absent dependencies can aid the debugger to focus on certain regions and types of faults, and thus find possible causes more quickly.
{ "cite_N": [ "@cite_35" ], "mid": [ "2073241739" ], "abstract": [ "Aspect is a static analysis technique for detecting bugs in imperative programs, consisting of an annotation language and a checking tool. Like a type declaration, an Aspect annotation of a procedure is a kind of declarative, partial specification that can be checked efficiently in a modular fashion. But instead of constraining the types of arguments and results, Aspect specifications assert dependences that should hold between inputs and outputs. The checker uses a simple dependence analysis to check code against annotations and can find bugs automatically that are not detectable by other static means, especially errors of omission, which are common, but resistant to type checking. This article explains the basic scheme and shows how it is elaborated to handle data abstraction and aliasing." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program, continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
@cite_16 apply similar ideas to knowledge base maintenance, exploiting hierarchical information to speed up the diagnostic process and to reduce the number of diagnoses.
{ "cite_N": [ "@cite_16" ], "mid": [ "199096196" ], "abstract": [ "Debugging, validation, and maintenance of configurator knowledge bases are important tasks for the successful deployment of product configuration systems, due to frequent changes (e.g., new component types, new regulations) in the configurable products. Model based diagnosis techniques have shown to be a promising approach to support the test engineer in identifying faulty parts in declarative knowledge bases. Given positive (existing configurations) and negative test cases, explanations for the unexpected behavior of the configuration systems can be calculated using a consistency based approach. For the case of large and complex knowledge bases, we show how the usage of hierarchical abstractions can reduce the computation times for the explanations and in addition gives the possibility to iteratively and interactively refine diagnoses from abstract to more detailed levels. Starting from a logical definition of configuration and diagnosis of knowledge bases, we show how a basic diagnostic algorithm can be extended to support hierarchical abstractions in the configuration domain. Finally, experimental results from a prototypical implementation using an industrial constraint based configurator library are presented." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program, continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
Abstract interpretation for analyzing programs was first introduced by @cite_0 , and later extended by @cite_25 @cite_23 to include assertions for abstract debugging. Their approach aims at analyzing every possible execution of a program, which makes it suitable for detecting errors even when no test cases are available. A common problem of these approaches is choosing appropriate abstractions that yield useful results, which hinders their automatic application to many programs. @cite_27 introduces a relaxed form of representation for abstract interpretation, which allows for more complex domains while building the structure of the approximation dynamically. Our framework is strongly inspired by this work, but provides more insight into how to choose approximation operators for debugging, in particular when test information is available. These questions are not addressed in @cite_27 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_25", "@cite_23" ], "mid": [ "2043100293", "1984342415", "2165069483", "149979760" ], "abstract": [ "A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).", "The essential part of abstract interpretation is to build a machine-representable abstract domain expressing interesting properties about the possible states reached by a program at runtime. Many techniques have been developed which assume that one knows in advance the class of properties that are of interest. There are cases however when there are no a priori indications about the 'best' abstract properties to use. We introduce a new framework that enables non-unique representations of abstract program properties to be used, and expose a method, called dynamic partitioning, that allows the dynamic determination of interesting abstract domains using data structures built over simpler domains. Finally, we show how dynamic partitioning can be used to compute non-trivial approximations of functions over infinite domains and give an application to the computation of minimal function graphs.", "Abstract interpretation is a formal method that enables the static determination (i.e. at compile-time) of the dynamic properties (i.e. at run-time) of programs. We present an abstract interpretation-based method, called abstract debugging, which enables the static and formal debugging of programs, prior to their execution, by finding the origin of potential bugs as well as necessary conditions for these bugs not to occur at run-time. We show how invariant assertions and intermittent assertions, such as termination, can be used to formally debug programs. Finally, we show how abstract debugging can be effectively and efficiently applied to higher-order imperative programs with exceptions and jumps to non-local labels, and present the Syntox system that enables the abstract debugging of the Pascal language by the determination of the range of the scalar variables of programs.", "" ] }
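The rule-of-signs example from the @cite_0 abstract can be made concrete with a tiny sign-domain interpreter. This is a hedged sketch of the idea only; the function names are ours, not from the cited work:

```python
# A minimal sketch of abstract interpretation over the sign domain,
# illustrating the "rule of signs": -1515 * 17 abstracts to
# (-) * (+) = (-), provably negative, while -1515 + 17 abstracts
# only to (+/-), showing the method's inherent imprecision.
POS, NEG, TOP = "+", "-", "+/-"   # abstract values; TOP = sign unknown

def alpha(n):
    """Abstraction function: map a nonzero integer to its sign."""
    return POS if n > 0 else NEG

def abs_mul(a, b):
    """Abstract multiplication follows the rule of signs exactly."""
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition loses precision when the signs differ."""
    if a == b and a != TOP:
        return a
    return TOP

product_sign = abs_mul(alpha(-1515), alpha(17))   # (-): sign is proven
sum_sign = abs_add(alpha(-1515), alpha(17))       # (+/-): no information
```

The imprecision of `abs_add` is exactly the "choosing appropriate abstractions" problem discussed above: a richer domain (e.g. intervals) would recover the lost information at higher cost.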
cs0309030
1519115557
ABSTRACTThis paper introduces an automatic debuggingframework that relies on model–based reasoning techniquesto locate faults in programs. In particular, model–based diagnosis, together with an abstract interpretationbased conflict detection mechanism is used to derive diagnoses, which correspond to possible faults in pro-grams. Design information and partial specifications are applied to guide a model revision process, whichallows for automatic detection and correction of structural faults. KEYWORDS : Model–based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the faultby means of changing the program, continues to be a crucial and challenging task in software de-velopment. Many papers have been published so far in the domain of detecting faults in software,e.g., testing or formal verification [CDH + 00], and locating them, e.g., program slicing [Wei84] andautomatic program debugging [Llo87]. More recently model–based diagnosis [Rei87] has been usedfor locating faults in software [CFD93, MSWW02a].This paper extends previous research in several directions: Firstly, a parameterized debuggingframework is introduced, which integrates dynamic and static properties, as well as design infor-mation of programs. The framework is based on results derived in the field of abstract interpreta-tion [CC77], and can therefore be parameterized with different lattices and context selection strate-gies.Secondly, the one–to–one correspondence between model components and program statementsis replaced by a hierarchy of components, which provides means for more efficient reasoning proce-dures, as well as more flexibility when focusing on interesting parts of a program.This work is organized as follows. In Section 2, we give an introduction to model-based debug-ging. 
Section 3 describes mapping from source code to model components and the (approximate)computation of program effects in the style of [CC77] and [Bou93]. The next section discusses themodeling of programs and the reasoning framework. In Section 5, we provide an example whichputs together the different models and demonstrates the debugging capabilities of our approach.
Recently, model checking approaches have been extended to attempt fault localization in counterexample traces. @cite_40 extended a model checking algorithm so that it can pinpoint the transitions in a trace that are responsible for faulty behavior. @cite_39 presents another approach, which explores the neighborhood of counterexamples to determine causes of faulty behavior. These techniques mostly consider deviations in control flow and do not take data dependencies into account. Also, deriving the abstract model from the concrete program is usually non-trivial and difficult to automate.
{ "cite_N": [ "@cite_40", "@cite_39" ], "mid": [ "2158870716", "1511405608" ], "abstract": [ "There is significant room for improving users' experiences with model checking tools. An error trace produced by a model checker can be lengthy and is indicative of a symptom of an error. As a result, users can spend considerable time examining an error trace in order to understand the cause of the error. Moreover, even state-of-the-art model checkers provide an experience akin to that provided by parsers before syntactic error recovery was invented: they report a single error trace per run. The user has to fix the error and run the model checker again to find more error traces. We present an algorithm that exploits the existence of correct traces in order to localize the error cause in an error trace, report a single error trace per error cause, and generate multiple error traces having independent causes. We have implemented this algorithm in the context of slam, a software model checker that automatically verifies temporal safety properties of C programs, and report on our experience using it to find and localize errors in device drivers. The algorithm typically narrows the location of a cause down to a few lines, even in traces consisting of hundreds of statements.", "One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and nonfailing runs, and information on the necessary changes in scheduling and environmental actions needed to cause successful runs to fail." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
Boothe @cite_18 built a C debugger with reverse-execution capability using a step counter, which counts the number of executed steps, together with re-execution of the debuggee from the beginning. The same capability could be implemented with our timestamp counter and re-execution. The difference stems from the purpose of each project. Boothe implemented reverse versions of existing debugger commands such as ``backward step'' and ``backward finish''. Since we aim at more abstract control of program execution than raw debugger commands, counting every executed step is too expensive for our purpose.
{ "cite_N": [ "@cite_18" ], "mid": [ "1969550081" ], "abstract": [ "This paper discusses our research into algorithms for creating anefficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. We expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. The efficiency of our methods arises from our use of event countersthat are embedded into the program being debugged. These counters areused to precisely identify the desired target event on the fly as thetarget program executes. This is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. For reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. Our counter based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. Two other important components of this debugger are its I O logging and checkpointing. We log and later replay the results of system callsto ensure deterministic re-execution, and we use checkpointing to bound theamount of re-execution used for reverse movements. Short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back." ] }
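The cost difference between the two counters can be sketched as follows. This is a hedged toy model with hypothetical names, not code from either system: a Boothe-style step counter ticks at every statement, while a timestamp counter ticks only at control-flow events, so far fewer increments are needed to identify the same point in the trace.

```python
# Hedged sketch: a per-statement step counter (Boothe-style) versus a
# timestamp counter that ticks only at control-flow events such as
# backward branches. Combined with deterministic re-execution, either
# identifies a point in the trace; the timestamp is much cheaper.
class Counters:
    def __init__(self):
        self.steps = 0        # incremented at every executed statement
        self.timestamp = 0    # incremented only at control-flow events

    def statement(self):
        self.steps += 1

    def control_flow_event(self):
        self.timestamp += 1

c = Counters()
for _ in range(1000):         # a loop whose body has 10 statements
    c.control_flow_event()    # one tick per backward branch
    for _ in range(10):
        c.statement()
```

After this run the step counter has advanced 10,000 times but the timestamp only 1,000 times, which is why statement-level counting is too expensive for our purpose.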
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_17 , Moher @cite_8 and @cite_15 save the complete memory history of a process to achieve fully random access to program states. Their systems have to deal with a large ``log''. Our system, however, saves only a pair of a line number and a timestamp value to obtain the same capability, by assuming that the debuggee is deterministic.
{ "cite_N": [ "@cite_8", "@cite_15", "@cite_17" ], "mid": [ "2153838924", "2060071172", "2082498963" ], "abstract": [ "The author introduces PROVIDE, a source-level process visualization and debugging environment currently under development at the University of Illinois at Chicago. PROVIDE is a modern coding and debugging environment that is designed to allow the user to configure interaction at a desired level of abstraction. It emphasizes the use of interactive computer graphics for the illustration of program execution, with special attention to the requirements of program debugging. The major features of PROVIDE are presented, especially the concepts of deferred-binding program animation, which allows users to interactively change the depiction of program execution during the debugging task, and process history consistency maintenance, which guarantees a consistent (automatically updated) record of program execution in the face of changes to program instructions and run-time data values. The current PROVIDE prototype is implemented on Macintosh workstations networked to a VAX 11 780 running 4.2 BSD Unix.", "Demonic memory is a form of reconstructive memory for process histories. As a process executes, its states are regularly checkpointed, generating a history of the process at low time resolution. Following the initial generation, any prior state of the process can be reconstructed by starting from a checkpointed state and re-executing the process up through the desired state, thereby exploiting the redundancy between the states of a process and the description of that process (i.e., a computer program). The reconstruction of states is automatic and transparent. The history of a process may be examined as though it were a large two-dimensional array, or address space-time, with a normal address space as one axis and steps of process time as the other. An attempt to examine a state that is not physically stored triggers a \"demon\" which reconstructs that memory state before access is allowed. Regeneration requires an exact description of the original execution of the process. If the original process execution depends on non-deterministic events (e.g., user input), these events are recorded in an exception list, and are replayed at the proper points during re-execution. While more efficient than explicitly storing all state changes, such a checkpointing system is still prohibitively expensive for many applications; each copy (or snapshot) of the system's state may be very large, and many snapshots may be required. Demonic memory saves both space and time by using a virtual copy mechanism. (Virtual copies share unchanging data with the objects that they are copies of, only storing differences from a prototype or original [MiBK86].) In demonic memory, the snapshot at each checkpoint is a virtual copy of the preceding checkpoint's snapshot. Hence it is called a virtual snapshot. In order to make the virtual snapshot mechanism efficient, state information is initially saved in relatively large units of space and time, on the order of pages and seconds, with single-word single-step regeneration undertaken only as needed. This permits the costs of indexing and lookup operations to be amortized over many locations.", "Typical debugging tools are insufficiently powerful to find the most difficult types of program misbehaviors. We have implemented a prototype of a new debugging system, IGOR, which provides a great deal more useful information and offers new abilities that are quite promising. The system runs fast enough to be quite useful while providing many features that are usually available only in an interpreted environment. We describe here some improved facilities (reverse execution, selective searching of execution history, substitution of data and executable parts of the programs) that are needed for serious debugging and are not found in traditional single-thread debugging tools. With a little help from the operating system, we provide these capabilities at reasonable cost without modifying the executable code and running fairly close to full speed. The prototype runs under the DUNE distributed operating system. The current system only supports debugging of single-thread programs. The paper describes planned extensions to make use of extra processors to speed the system and for applying the technique to multi-thread and time dependent executions." ] }
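The contrast with full memory logging can be illustrated with a toy interpreter. This is a hedged sketch under the stated determinism assumption: a position recorded as a (timestamp, line) pair suffices, because replaying the program up to that position reconstructs the very state that a memory-history system would have logged.

```python
# Hedged sketch: with deterministic re-execution, a position in the
# trace is just a (timestamp, line) pair; no memory log is needed,
# since replaying the program up to the position rebuilds the state.
def run(program, stop_at=None):
    """Interpret a list of (var, value) assignments; optionally stop
    just before executing the assignment at the given position."""
    timestamp = 0
    state = {}
    for line_no, (var, value) in enumerate(program):
        timestamp += 1
        if stop_at == (timestamp, line_no):
            return state          # state reconstructed purely by replay
        state[var] = value
    return state

prog = [("x", 1), ("y", 2), ("x", 3)]   # line 2 overwrites x
final = run(prog)
# Re-execute and stop just before the last write to x:
before_overwrite = run(prog, stop_at=(3, 2))
```

Stopping just before the overwriting assignment recovers the earlier value of `x`, which is exactly the last-write debugging scenario described in the abstract above.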
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
Ducassé @cite_26 allows the programmer to control execution not by source-statement orientation but by event orientation, where events are assignments, function calls, loops, and so on. Users write Prolog-like forms to designate breakpoints with complex conditions. This mechanism is complementary to our system and well suited as a front end for it, designating the positions to which we would move the control point.
{ "cite_N": [ "@cite_26" ], "mid": [ "1985444680" ], "abstract": [ "Presents Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics, whereas the source lines used by most debuggers do not have any. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data, whereas debuggers usually search according to either control flow or data. As opposed to fully \"relational\" debuggers which use plain database querying mechanisms, the Coca trace querying mechanism does not require any storage. The analysis is done on-the-fly, synchronously with the traced execution. Coca is therefore more powerful than \"source-line\" debuggers and more efficient than relational debuggers." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_1 @cite_10 developed an event-based instrumentation tool, CCI, which inserts instrumentation code into C source code. The converted code is platform independent. The execution slowdown, however, is 2.09 times for laplace.c and 5.85 times for life.c @cite_10 . To implement our position system, only events about control flow need to be generated.
{ "cite_N": [ "@cite_10", "@cite_1" ], "mid": [ "2243384985", "2096537660" ], "abstract": [ "The Alamo monitor architecture reduces the difficulty of writing dynamic analysis tools such as special-purpose profilers, bug-detectors, and visualizations.", "Automatic software instrumentation is usually done at the machine level or is targeted at specific program behavior for use with a particular monitoring application. The paper describes CCI, an automatic software instrumentation tool for ANSI C designed to serve a broad range of program execution monitors. CCI supports high level instrumentation for both application-specific behavior as well as standard libraries and data types. The event generation mechanism is defined by the execution monitor which uses CCI, providing flexibility for different monitors' execution models. Code explosion and the runtime cost of instrumentation are reduced by declarative configuration facilities that allow the monitor to select specific events to be instrumented. Higher level events can be defined by combining lower level events with information obtained from semantic analysis of the instrumented program." ] }
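The point that a position system needs only control-flow events can be illustrated with Python's standard tracing hook. This is a hedged stand-in: CCI itself instruments C source, and `sys.settrace` merely plays the role of the generated instrumentation here.

```python
import sys

# Hedged sketch: event-based monitoring restricted to control-flow
# events (function calls). Returning None from the global tracer
# disables per-line tracing, so no statement-level events are
# generated -- analogous to selecting only control-flow events in CCI.
events = []

def tracer(frame, event, arg):
    if event == "call":
        events.append(frame.f_code.co_name)
    return None                   # suppress line-by-line events

def helper():
    return 21

def main():
    return helper() + helper()

sys.settrace(tracer)
result = main()
sys.settrace(None)
```

Generating only call/branch events rather than per-statement events is what keeps the instrumentation overhead low enough for a position system.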
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_3 built EEL, a library for constructing tools that analyze and modify an executable (compiled) program. Using EEL, we could insert the timestamp-maintenance code at the executable-code level. That solution, however, is platform dependent, so we chose the intermediate-code level and modified GCC.
{ "cite_N": [ "@cite_3" ], "mid": [ "2040183246" ], "abstract": [ "EEL (Executable Editing Library) is a library for building tools to analyze and modify an executable (compiled) program. The systems and languages communities have built many tools for error detection, fault isolation, architecture translation, performance measurement, simulation, and optimization using this approach of modifying executables. Currently, however, tools of this sort are difficult and time-consuming to write and are usually closely tied to a particular machine and operating system. EEL supports a machine- and system-independent editing model that enables tool builders to modify an executable without being aware of the details of the underlying architecture or operating system or being concerned with the consequences of deleting instructions or adding foreign code." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The problem of debugging memory corruption in production was explicitly identified by Patil and Fischer @cite_2 , who describe using idle processors to absorb their technique's substantial performance impact. Unfortunately, this is not practical in a general-purpose system: idle processors cannot be relied upon to be available for extraneous processing. Indeed, in performance-critical systems any performance impact is often unacceptable.
{ "cite_N": [ "@cite_2" ], "mid": [ "2056452385" ], "abstract": [ "Efficient Run-time Monitoring Using Shadow Processing Harish Patil* and Charles Fischer University of Wisconsin —Madison** Abstract General purpose multiprocessors are becoming increasingly common. We propose using pairs of processors, one running an ordinary application program and the other monitoring the application’s execution. We call the processor doing the monitoring a “shadow processor,” as it “shadows” the main processor’s execution. We have developed a prototype shadow processing system which supports full-size programs written in C. Our system instruments an executable user program in C to obtain a “main process” and a “shadow process.” The main process performs computations from the original program, occasionally communicating a few key values to the shadow process. The shadow process follows the main process, checking pointer and array accesses and detecting memory leaks. The overhead to the main process is very low — almost always less than 10 . Further, since the shadow process avoids repeating some of the computations from the input program, it runs much faster than a single process performing both the computation and monitoring. Sometimes the shadow process can even run ahead of the main process catching errors before they actually occur. Our system has found a number of errors (15 so far) in widelyused Unix utilities and SPEC92 benchmarks. It also detected many subtle memory leaks in various test cases. We believe our approach shows great potential in improving the quality and reliability of application programs at a very modest cost." ] }
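One way to picture the identification step described in the abstract is as type propagation over the dump's pointer graph. This is a hedged sketch: the data layout and names below are ours, and the paper's actual heuristics for C are considerably more involved.

```python
# Hedged sketch: propagate known types from root objects (e.g. named
# symbols in the dump) through typed pointer members across the
# object graph; objects not reachable from a root stay untyped.
def identify_types(objects, roots, pointer_members):
    """objects: addr -> {member name: target addr or None}
    roots: addr -> type name known a priori
    pointer_members: (type, member) -> pointed-to type"""
    types = dict(roots)
    worklist = list(roots)
    while worklist:
        addr = worklist.pop()
        t = types[addr]
        for member, target in objects[addr].items():
            pointed = pointer_members.get((t, member))
            if pointed and target is not None and target not in types:
                types[target] = pointed
                worklist.append(target)
    return types

# A two-node linked list rooted at a known node_t object:
objects = {0x10: {"next": 0x20}, 0x20: {"next": None}}
result = identify_types(objects, {0x10: "node_t"},
                        {("node_t", "next"): "node_t"})
```

The interesting cases in C (unions, `void *`, uninstrumented allocations) are precisely where such propagation stalls, which is what motivates the paper's heuristics.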
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one Cbased system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
Some memory allocators have addressed debugging problems in production by allowing their behavior to be dynamically changed to provide greater debugging support @cite_13 . This allows optimal allocators to be deployed into production, while still allowing their debugging features to be enabled later should problems arise. A common way for these allocators to detect buffer overruns is to optionally place red zones around allocated memory. However, this provides immediate identification of the errant code only if stores to the red zone induce a synchronous fault. Such faults are typically achieved by co-opting the virtual memory system in some way --- either by surrounding a buffer with unmapped regions, or by performing a check on each access. The first has enormous cost in terms of space, and the second in terms of time --- neither can be acceptably enabled at all times. Thus, these approaches are still only useful for reproducible memory corruption problems.
{ "cite_N": [ "@cite_13" ], "mid": [ "1746694335" ], "abstract": [ "This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and fast. The allocator's object caches respond dynamically to global memory pressure, and employ an object-coloring scheme that improves the system's overall cache utilization and bus balance. The allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system." ] }
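The red-zone mechanism, and its limitation that corruption is detected only later (e.g. at free time) rather than synchronously at the errant store, can be sketched as follows. This is a hedged toy model; production allocators such as the one cited implement it in C over raw memory.

```python
# Hedged sketch: a debugging allocator brackets each allocation with
# "red zones" and checks them when the buffer is freed. Without
# virtual-memory tricks, the overrun is noticed only at the check,
# not at the moment of the errant store.
REDZONE = b"\xde\xad\xbe\xef"

def allocate(size):
    """Return a buffer with red zones; the caller's region is
    buf[4:4+size]."""
    return bytearray(REDZONE + b"\x00" * size + REDZONE)

def check_on_free(buf, size):
    """Detect overruns at free time by verifying both red zones."""
    ok_front = bytes(buf[:4]) == REDZONE
    ok_back = bytes(buf[4 + size:]) == REDZONE
    return ok_front and ok_back

buf = allocate(8)
buf[4:12] = b"A" * 8            # in-bounds write: red zones intact
intact = check_on_free(buf, 8)
buf[12] = 0x41                  # one-byte overrun into the red zone
corrupt = not check_on_free(buf, 8)
```

Because the check runs only at free time, the stack trace of the corrupting store is long gone by the time the corruption is reported, which is why such allocators help only with reproducible problems.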
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
If memory corruption cannot be acceptably prevented in production code, then the focus must shift to debugging the corruption postmortem. While the notion of postmortem debugging has existed since the earliest days of debugging @cite_5 , there seems to have been very little work on postmortem debugging of memory corruption per se; such work as exists has focused on race condition detection in parallel and distributed programs. The lack of work on postmortem debugging is surprising given its clear advantages for debugging production systems --- advantages that were clearly elucidated by McGregor and Malone in @cite_10 :
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2020874122", "2044423461" ], "abstract": [ "This paper describes methods developed at the Cambridge University Mathematical Laboratory for the speedy diagnosis of mistakes in programmes for an automatic high-speed digital computer. The aim of these methods is to avoid undue wastage of machine time, and a principal feature is the provision of several standard routines which may be used in conjunction with faulty programmes to check the operation of the latter. Two of these routines are considered in detail, and the others are briefly described.", "Program development can be greatly speeded by a dump analysis program which makes the state of a program more visible to the programmer. A single comprehensive analysis presenting as much of the relevant material in as concise a manner as possible has proved superior in use to the alternative of interactive analysis one item-at-a-time. The methods adopted in the STAB utility to achieve comprehensive and concise output are described. The system and compiler modifications necessary to support this type of system are discussed." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The only nod to postmortem debugging of memory corruption seems to come from memory allocators such as the slab allocator @cite_13 used by the Solaris kernel. This allocator can optionally log information with each allocation and deallocation; in the event of failure, these logs can be used to determine the subsystem allocating the overrun buffer. While this mechanism has proved to be enormously useful in debugging memory corruption problems in the Solaris kernel, it is still far too space- and time-intensive to be enabled at all times in production environments.
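The logging scheme described above can be sketched roughly as follows; the record format and ring-buffer capacity are invented for illustration and are not the slab allocator's actual layout.

```python
from collections import deque

# Illustrative audit log in the spirit of the slab allocator's optional
# allocation/deallocation logging; all names here are assumptions.

class AuditLog:
    def __init__(self, capacity=1024):
        self._log = deque(maxlen=capacity)  # oldest records fall off

    def record(self, event, address, subsystem):
        self._log.append((event, address, subsystem))

    def last_owner(self, address):
        # Walk backwards: the most recent record for an address tells us
        # which subsystem last touched the buffer found corrupted.
        for event, addr, subsystem in reversed(self._log):
            if addr == address:
                return subsystem, event
        return None

log = AuditLog()
log.record("alloc", 0x1000, "vfs")
log.record("free", 0x1000, "vfs")
log.record("alloc", 0x1000, "net")  # buffer reused by another subsystem
```

If the buffer at 0x1000 is later found overrun, `last_owner(0x1000)` attributes it to the `net` subsystem's allocation, mirroring how such logs point postmortem analysis at the subsystem owning the overrun buffer.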
{ "cite_N": [ "@cite_13" ], "mid": [ "1746694335" ], "abstract": [ "This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and fast. The allocator's object caches respond dynamically to global memory pressure, and employ an object-coloring scheme that improves the system's overall cache utilization and bus balance. The allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system." ] }
cs0309055
1495757250
In this paper, we propose a mathematical framework for automated bug localization. This framework can be briefly summarized as follows. A program execution can be represented as a rooted acyclic directed graph. We define an execution snapshot by a cut-set on the graph. A program state can be regarded as a conjunction of labels on edges in a cut-set. Then we argue that a debugging task is a pruning process of the execution graph by using cut-sets. A pruning algorithm, i.e., a debugging task, is also presented.
* Shapiro's algorithmic debugging was invented for Prolog programs @cite_2 . Fig. shows our interpretation of his work. From our viewpoint, it uses a proof tree as an execution graph. (Note: this interpretation differs from a normal proof tree. Our interpretation is based on a line graph (a line graph can be obtained by interchanging the vertices and edges of the original graph) of a normal proof tree.) He used only one edge as a cut-set, since removal of any edge divides a tree into two disconnected subtrees. A state is also simple because only one label, i.e., one unified clause, is enough. In this work, step of the pruning process is fully automated and a programmer carries out step by answering ``yes'' or ``no'' to tell the system the correctness of the label on the edge. GADT @cite_6 and Lichtenstein's system @cite_0 can be interpreted in the same manner because they are straightforward extensions of Shapiro's work.
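A minimal sketch of the yes/no pruning loop, under an invented tree encoding: the oracle answers whether a node's subcomputation is correct, and a node that is wrong while all of its children are correct localizes the bug.

```python
# Sketch of Shapiro-style pruning over a proof tree. The tuple encoding
# (label, children) and the example tree are invented for illustration.

def find_bug(node, oracle):
    """Return the label of a buggy node, or None if the tree is correct.

    oracle(label) plays the programmer's role, answering "yes" (True)
    or "no" (False) to: is this subcomputation's result correct?
    """
    label, children = node
    if oracle(label):
        return None  # correct subcomputation: prune this subtree
    for child in children:
        culprit = find_bug(child, oracle)
        if culprit is not None:
            return culprit
    return label  # wrong result but all children correct: bug is here

# Toy proof tree: the clause producing "b" is faulty, which also makes
# its ancestor "root" compute a wrong result.
tree = ("root", [("a", []), ("b", [("c", []), ("d", [])])])
correct = {"a", "c", "d"}
```

`find_bug(tree, lambda l: l in correct)` prunes the correct subtree under `a` and localizes the fault to `b`, which is the dialogue Shapiro's system automates.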
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_2" ], "mid": [ "2044672898", "2134080718", "1514468887" ], "abstract": [ "Algorithmic Debugging is a theory of debugging that uses queries on the compositional semantics of a program in order to localize bugs. It uses the following principle: if a computation of a program's component gives an incorrect result, while all the subcomputations it invokes compute correct results, then the code of this component is erroneous. Algorithmic Debugging is applied, in this work, to reactive systems, in particular to programs written in Flat Concurrent Prolog (FCP). Debugging reactive systems is known to be more difficult than the debugging of functional systems. A functional system is fully described by the relation between its initial input and final output; this context-freedom is used in debugging. A reactive system continuously responds to external inputs, thus its debugging cannot make use of context-free input output relations. Given a compositional semantic model for a concurrent programming language, we demonstrate how one can directly apply the ideas of Algorithmic Debugging to obtain a theory of program debugging for the considered language. The conflict between the context-freedom of input output relations and the reactive nature of concurrent systems is resolved by using semantic objects which record the reactive nature of the system's components. In functional algorithmic debugging the queries relate to input output relations; in concurrent algorithmic debugging the queries refer to semantic objects called processes which capture the reactive nature of FCP computations. A diagnosis algorithm for incorrect FCP programs is proposed. The algorithm gets an erroneous computation and using queries isolates an erroneous clause or an incomplete procedure. 
An FCP implementation of the diagnosis algorithm demonstrates the usefulness as well as the difficulties of Algorithmic Debugging of FCP programs.", "This paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. In this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. The long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. The method is generally applicable to procedural langua ges and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro, was however limited to small Prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. Another drawback of the original method is the large number of interactions with the user during bug localization. To our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. The method can avoid irrelevant questions to the programmer by categorizing input parameters and then match these against test cases in the test database. Additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. We believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as Pascal. These improvements together makes it more feasible to debug larger programs. However, additional improvements are needed to make it handle pointer-related side-effects and concurrent Pascal programs. 
A prototype generalized algorithmic debugger for a Pascal subset without pointer side-effects and a test case generator for application programs in Pascal, C, dBase, and LOTUS have been implemented.", "The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize." ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
A first proposal on how to exploit implicit parallelism in tabling systems was Freire's @cite_53 . In this model, each tabled subgoal is computed independently in a single computational thread, a generator thread. Each generator thread is associated with a unique tabled subgoal and is responsible for fully exploiting its search tree in order to obtain the complete set of answers. A generator thread that depends on other tabled subgoals asynchronously consumes answers as the corresponding generator threads make them available. Within this model, parallelism results from having several generator threads running concurrently. Parallelism arising from non-tabled subgoals or from execution alternatives to tabled subgoals is not exploited. Moreover, we expect that scheduling and load balancing would be even harder than for traditional parallel systems.
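The generator-thread model can be caricatured with ordinary threads and a shared answer queue; the subgoals and answer sets below are invented, and a real tabling engine would of course interleave production and consumption rather than join first.

```python
import queue
import threading

# Caricature of Freire's model: one generator thread per tabled subgoal,
# publishing its answers for dependent subgoals to consume. Subgoals and
# answers here are made up for illustration.

def generator(subgoal, answers, out):
    # Fully exploit this subgoal's (here, pre-baked) search tree.
    for a in answers:
        out.put((subgoal, a))

answers_q = queue.Queue()
threads = [
    threading.Thread(target=generator, args=("p(X)", [1, 2], answers_q)),
    threading.Thread(target=generator, args=("q(X)", [3], answers_q)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue into the answer table once both generators complete.
table = {}
while not answers_q.empty():
    subgoal, a = answers_q.get()
    table.setdefault(subgoal, set()).add(a)
```

Each subgoal's answers end up in a per-subgoal table entry regardless of how the two threads interleave, which is the property a consumer of tabled answers relies on.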
{ "cite_N": [ "@cite_53" ], "mid": [ "1608806855" ], "abstract": [ "This paper addresses general issues involved in parallelizing tabled evaluations by introducing a model of shared-memory parallelism which we call table-parallelism, and by comparing it to traditional models of parallelizing SLD. A basic architecture for supporting table-parallelism in the framework of the SLG-WAM[14] is also presented, along with an algorithm for detecting termination of subcomputations." ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
There have been other proposals for concurrent tabling, but in a distributed memory context. Hu @cite_46 was the first to formulate a method for distributed tabled evaluation, termed Multi-Processor SLG (SLGMP). This method matches subgoals with processors in a similar way to Freire's approach. Each processor gets a single subgoal and is responsible for fully exploiting its search tree to obtain the complete set of answers. One of the main contributions of SLGMP is its controlled scheme of propagation of subgoal dependencies in order to safely perform distributed completion. A prototype implementation of SLGMP was developed, but as far as we know no results have been reported.
{ "cite_N": [ "@cite_46" ], "mid": [ "2495949634" ], "abstract": [ "SLG resolution, a type of tabled resolution and a technique of logic programming (LP), has polynomial data complexity for ground Datalog queries with negation, making it suitable for deductive database (DDB). It evaluates non-stratified negation according to the three-valued Well-Founded Semantics, making it a suitable starting point for non-monotonic reasoning (NMR). Furthermore, SLG has an efficient partial implementation in the SLG-WAM which, in the XSB logic programming system, has proven an order of magnitude faster than current DDR systems for in-memory queries. Building on SLG resolution, we formulate a method for distributed tabled resolution termed Multi-Processor SLG (SLGMP). Since SLG is modeled as a forest of trees, it then becomes natural to think of these trees as executing at various places over a distributed network in SLGMP. Incremental completion, which is necessary for efficient sequential evaluation, can be modeled through the use of a subgoal dependency graph (SDG), or its approximation. However the subgoal dependency graph is a global property of a forest; in a distributed environment each processor should maintain as small a view of the SDG as possible. The formulation of what and when dependency information must be maintained and propagated in order for distributed completion to be performed safely is the central contribution of SLGMP. Specifically, subgoals in SLGMP are properly numbered such that most of the dependencies among subgoals are represented by the subgoal numbers. Dependency information that is not represented by subgoal numbers is maintained explicitly at each processor and propagated by each processor. SLGMP resolution aims at efficiently evaluating normal logic programs in a distributed environment. SLGMP operations are explicitly defined and soundness and completeness is proven for SLGMP with respect to SLG for programs which terminate for SLG evaluation. 
The resulting framework can serve as a basis for query processing and non-monotonic reasoning within a distributed environment. We also implemented Distributed XSB, a prototype implementation of SLGMP. Distributed XSB, as a distributed tabled evaluation model, is really a distributed problem solving system, where the data to solve the problem is distributed and each participating process cooperates with other participants (perhaps including itself), by sending and receiving data. Distributed XSB proposes a distributed data computing model, where there may be cyclic dependencies among participating processes and the dependencies can be both negative and positive." ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI; the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire OpenPGP public keys. Major keyserver managers from several countries have built a globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations: it does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it lacks any cluster technology. Finally, we introduce the project on the next-generation OpenPGP public keyserver, called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
The ``web of trust'' used in PGP is discussed in several lines of research, including peer-to-peer authentication @cite_9 , trust computation @cite_7 @cite_14 , and privacy-enhancing technology @cite_6 . However, there are few descriptions of PGP keyservers. This may be because the PGP keyserver mechanism is simple: it is not a CA but just a pool of public keys. From the users' viewpoint, a PGP keyserver holds a large number of OpenPGP public keys, which provide interesting material for social analysis of the network community. For example, the OpenPGP keyserver developer Jonathan McDowell also developed an ``Experimental PGP key path finder'' @cite_20 that searches for and displays the chain of certification between users.
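A key path finder of the kind mentioned is essentially a shortest-path search over the signature graph held by the keyserver. A hedged sketch, with a made-up web of trust:

```python
from collections import deque

# Sketch of a PGP key path finder: the keyserver's pool of keys induces
# a directed graph whose edges are signatures, and a certification chain
# is a shortest path. The tiny web of trust below is invented.

signs = {              # signer -> keys that signer has certified
    "alice": ["bob"],
    "bob": ["carol", "dave"],
    "carol": ["eve"],
    "dave": [],
    "eve": [],
}

def key_path(src, dst):
    parent, frontier = {src: None}, deque([src])
    while frontier:
        key = frontier.popleft()
        if key == dst:                 # rebuild the certification chain
            path = []
            while key is not None:
                path.append(key)
                key = parent[key]
            return path[::-1]
        for nxt in signs.get(key, []):
            if nxt not in parent:
                parent[nxt] = key
                frontier.append(nxt)
    return None  # no chain of certification exists
```

Since signatures are directed, a chain from `alice` to `eve` may exist even when the reverse does not, which matches how certification actually flows in a web of trust.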
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_6", "@cite_20" ], "mid": [ "1850823044", "1585665690", "1586036900", "1494178662", "" ], "abstract": [ "Most currently deployed public key infrastructures (PKIs) are hierarchically oriented and rely on a centralized design. Hierarchical PKIs may be appropriate solutions for many usage-scenarios, but there exists the viable alternative of the 'Web of Trust'. In a web of trust, each user of the system can choose for himself whom he elects to trust, and whom not. After contrasting the properties of web-of-trust based PKIs to those of hierarchical PKIs, an introduction to webs of trust and to quantitative trust calculations is given. The paper concludes with the presentation of an efficient, sub-exponential algorithm that allows heuristic computations of trust paths in a web of trust.", "CRYPTOGRAPHIC PROTOCOLS. Protocol Building Blocks. Basic Protocols. Intermediate Protocols. Advanced Protocols. Esoteric Protocols. CRYPTOGRAPHIC TECHNIQUES. Key Length. Key Management. Algorithm Types and Modes. Using Algorithms. CRYPTOGRAPHIC ALGORITHMS. Data Encryption Standard (DES). Other Block Ciphers. Other Stream Ciphers and Real Random-Sequence Generators. Public-Key Algorithms. Special Algorithms for Protocols. THE REAL WORLD. Example Implementations. Politics. SOURCE CODE.source Code. References.", "From the Publisher: If gou've ever made a secure purchase with your credit card online, you have seen cryptography, or \"crypto,\" in action. From Steven Levy -- the author who made \"hackers\" a household word -- comes this account of a revolution that will affect every citizen in the twenty-first century. Crypto tells the inside story of how a group of \"crypto rebels\" -- nerds and visionaries turned freedom fighters -- teamed up with corporate interests to beat Big Brother and ensure our privacy on the Internet. 
From Whitfield Diffie's discovery of public key encryption in the mid-1970s to the Phil Zimmerman-led \"cypherpunks'\" recent fight to distribute encryption power freely to the masses, Levy's history of one of the most controversial and important topics of the digital age reads like the best futuristic fiction.", "From the Publisher: Use of the Internet is expanding beyond anyone's expectations. As corporations, government offices, and ordinary citizens begin to rely on the information highway to conduct business, they are realizing how important it is to protect their communications -- both to keep them a secret from prying eyes and to ensure that they are not altered during transmission. Encryption, which until recently was an esoteric field of interest only to spies, the military, and a few academics, provides a mechanism for doing this. PGP, which stands for Pretty Good Privacy, is a free and widely available encryption program that lets you protect files and electronic mail. Written by Phil Zimmermann and released in 1991, PGP works on virtually every platform and has become very popular both in the U.S. and abroad. Because it uses state-of-the-art public key cryptography, PGP can be used to authenticate messages, as well as keep them secret. With PGP, you can digitally \"sign\" a message when you send it. By checking the digital signature at the other end, the recipient can be sure that the message was not changed during transmission and that the message actually came from you. PGP offers a popular alternative to U.S. government initiatives like the Clipper Chip because, unlike Clipper, it does not allow the government or any other outside agency access to your secret keys. PGP: Pretty Good Privacy by Simson Garfinkel is both a readable technical user's guide and a fascinating behind-the-scenes look at cryptography and privacy. Part I, \"PGP Overview,\" introduces PGP and the cryptography that underlies it. 
Part II, \"Cryptography History and Policy,\" describes the history of PGP -- its personalities, legal battles, and other intrigues; it also provides background on the battles over public key cryptography patents and the U.S. government export restrictions, and other aspects of the ongoing public debates about privacy and free speech. Part III, \"Using PGP,\" describes how to use PGP: protecting files and email, creating and using keys, signing messages, certifying and distributing keys, and using key servers. Part IV, \"Appendices,\" describes how to obtain PGP from Internet sites, how to install it on PCs, UNIX systems, and the Macintosh, and other background information. The book also contains a glossary, a bibliography, and a handy reference card that summarizes all of the PGP commands, environment variables, and configuration variables.", "" ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI; the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire OpenPGP public keys. Major keyserver managers from several countries have built a globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations: it does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it lacks any cluster technology. Finally, we introduce the project on the next-generation OpenPGP public keyserver, called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
OpenPGP PKI itself can be described as a superset of PKI @cite_27 ; however, combining the OpenPGP PKI with other authentication systems is challenging work in both the theoretical and operational fields. Formal study of the trust relationships of PKI started in the late 1990s @cite_7 @cite_14 , and in December 2002 the GnuPG development version started to support trust calculation with GnuPG's trust signatures.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_7" ], "mid": [ "", "1850823044", "1585665690" ], "abstract": [ "", "Most currently deployed public key infrastructures (PKIs) are hierarchically oriented and rely on a centralized design. Hierarchical PKIs may be appropriate solutions for many usage-scenarios, but there exists the viable alternative of the 'Web of Trust'. In a web of trust, each user of the system can choose for himself whom he elects to trust, and whom not. After contrasting the properties of web-of-trust based PKIs to those of hierarchical PKIs, an introduction to webs of trust and to quantitative trust calculations is given. The paper concludes with the presentation of an efficient, sub-exponential algorithm that allows heuristic computations of trust paths in a web of trust.", "CRYPTOGRAPHIC PROTOCOLS. Protocol Building Blocks. Basic Protocols. Intermediate Protocols. Advanced Protocols. Esoteric Protocols. CRYPTOGRAPHIC TECHNIQUES. Key Length. Key Management. Algorithm Types and Modes. Using Algorithms. CRYPTOGRAPHIC ALGORITHMS. Data Encryption Standard (DES). Other Block Ciphers. Other Stream Ciphers and Real Random-Sequence Generators. Public-Key Algorithms. Special Algorithms for Protocols. THE REAL WORLD. Example Implementations. Politics. SOURCE CODE.source Code. References." ] }
cs0308044
2950013766
A new method of hierarchical clustering of graph vertexes is suggested. In the method, the graph partition is determined with an equivalence relation satisfying a recursive definition stating that vertexes are equivalent if the vertexes they point to (or vertexes pointing to them) are equivalent. Iterative application of the partitioning yields a hierarchical clustering of graph vertexes. The method is applied to the citation graph of hep-th. The outcome is a two-level classification scheme for the subject field presented in hep-th, and indexing of the papers from hep-th in this scheme. A number of tests show that the classification obtained is adequate.
In this subsection, we demonstrate that the above equivalence relation @math is a natural development of the recursive algorithms PageRank @cite_5 , HITS @cite_12 , and SimRank @cite_3 , which have lately become quite popular among network miners.
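For concreteness, here is plain power-iteration PageRank over a toy graph; HITS and SimRank iterate analogous recursive definitions, and the paper's equivalence relation plays a similarly recursive role over vertex classes. The graph, damping factor, and iteration count are illustrative choices, not taken from the paper.

```python
# Plain power-iteration PageRank over a toy citation graph. Every node
# below has at least one outlink, so the total rank mass is conserved.

def pagerank(links, d=0.85, iters=50):
    nodes = sorted(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each in-neighbor m contributes an equal share of its rank.
            incoming = sum(rank[m] / len(links[m])
                           for m in nodes if n in links[m])
            new[n] = (1 - d) / len(nodes) + d * incoming
        rank = new
    return rank

links = {"a": {"b"}, "b": {"c"}, "c": {"a", "b"}}  # x -> pages x links to
ranks = pagerank(links)
```

Node `b` is pointed to by both `a` and `c`, while `a` receives only half of `c`'s rank, so the fixed point ranks `b` above `a`.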
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_12" ], "mid": [ "1854214752", "", "2138621811" ], "abstract": [ "The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.", "", "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of context on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authorative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristrics for link-based analysis." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DBXplorer @cite_3 was developed by Microsoft Research and, like BANKS and Mragyati, it uses join trees to compute an SQL statement to access the data. The algorithm to compute these differs, as does the implementation, which was developed for Microsoft's IIS and SQL Server, the others being implemented in Java. DbSurfer does not require access to the database to discover the trails, only to display the data when a user clicks on a link in the trail.
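The join-discovery problem these systems share can be sketched as path search over the schema's foreign-key graph; note that this uses the schema alone, so no database query is needed until the data is displayed. The toy schema below is an assumption for illustration.

```python
from collections import deque

# Sketch of join discovery as path search over the foreign-key graph.
# The schema is invented; edges represent foreign-key dependencies.

fk_edges = {                  # table -> tables reachable via a foreign key
    "author": {"author_paper"},
    "author_paper": {"author", "paper"},
    "paper": {"author_paper", "venue"},
    "venue": {"paper"},
}

def join_trail(start, goal):
    parent, frontier = {start: None}, deque([start])
    while frontier:
        table = frontier.popleft()
        if table == goal:          # rebuild the trail of joins
            trail = []
            while table is not None:
                trail.append(table)
                table = parent[table]
            return trail[::-1]
        for nxt in fk_edges[table]:
            if nxt not in parent:
                parent[nxt] = table
                frontier.append(nxt)
    return None
```

A trail such as `author -> author_paper -> paper -> venue` maps directly onto a chain of equi-joins over primary and foreign keys.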
{ "cite_N": [ "@cite_3" ], "mid": [ "2121350579" ], "abstract": [ "Internet search engines have popularized the keyword-based search paradigm. While traditional database management systems offer powerful query languages, they do not allow keyword-based search. In this paper, we discuss DBXplorer, a system that enables keyword-based searches in relational databases. DBXplorer has been implemented using a commercial relational database and Web server and allows users to interact via a browser front-end. We outline the challenges and discuss the implementation of our system, including results of extensive experimental evaluation." ] }
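The trail-discovery idea these systems share — finding a path of joins through the graph of foreign-key dependencies while consulting only the schema, never the data — can be sketched as a breadth-first search over the schema graph. The DBLP-like schema below is hypothetical, invented for illustration; it is not taken from any of the cited papers.

```python
from collections import deque

def find_trail(fk_graph, start, goal):
    """Breadth-first search over a foreign-key graph.

    fk_graph maps each table to the tables it is joinable with; the
    returned trail is a shortest sequence of joins linking the table
    matching one keyword to the table matching another. Only the
    schema graph is consulted -- no data access is required.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        trail = queue.popleft()
        if trail[-1] == goal:
            return trail
        for nxt in fk_graph.get(trail[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(trail + [nxt])
    return None  # no join path connects the two tables

# Hypothetical DBLP-like schema: author <-> writes <-> paper <-> cites
schema = {
    "author": ["writes"],
    "writes": ["author", "paper"],
    "paper":  ["writes", "cites"],
    "cites":  ["paper"],
}

print(find_trail(schema, "author", "paper"))  # ['author', 'writes', 'paper']
```

Because the search runs over the schema alone, the (potentially expensive) primary-key queries against the database can be deferred until a user actually follows a link in the trail.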
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DISCOVER is the latest offering and shares many similarities with Mragyati, BANKS and DbXplorer, but uses a greedy algorithm to discover the join trees @cite_17 . It also takes greater advantage of the database's internal keyword search facilities by using Oracle's Context cartridge for the text indexing.
{ "cite_N": [ "@cite_17" ], "mid": [ "2098388305" ], "abstract": [ "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
The authors of @cite_40 have also introduced a system for keyword search. Their system works by finding results for queries of the form @math near @math (e.g. find movie near travolta cage): two sets of entries are found, and the contents of the first set are returned ranked by their proximity to members of the second set. In comparison with DbSurfer, there is no support for navigation of the database (manual or assisted), nor any display of the context of the results.
{ "cite_N": [ "@cite_40" ], "mid": [ "1671881141" ], "abstract": [ "An information retrieval (IR) engine can rank documents based on textual proximity of keywords within each document. In this paper we apply this notion to search across an entire database for objects that are \"near\" other relevant objects. Proximity search enables simple \"focusing\" queries based on general relationships among objects, helpful for interactive query sessions. We view the database as a graph, with data in vertices (objects) and relationships indicated by edges. Proximity is defined based on shortest paths between objects. We have implemented a prototype search engine that uses this model to enable keyword searches over databases, and we have found it very effective for quickly finding relevant information. Computing the distance between objects in a graph stored on disk can be very expensive. Hence, we show how to build compact indexes that allow us to quickly find the distance between objects at search time. Experiments show that our algorithms are efficient and scale well." ] }
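The "find X near Y" query model can be sketched as a multi-source shortest-path computation: each candidate in the find-set is ranked by its graph distance to the nearest member of the near-set. The toy movie graph below is invented for illustration, and distances here come from a breadth-first search at query time, whereas the cited system builds compact precomputed distance indexes.

```python
from collections import deque

def distances_from(graph, sources):
    """Multi-source BFS: distance from every reachable node to the
    nearest source node (unreachable nodes are simply absent)."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

def proximity_search(graph, find_set, near_set):
    """Rank the 'find' objects by proximity to the nearest 'near' object."""
    dist = distances_from(graph, near_set)
    ranked = [(dist.get(obj, float("inf")), obj) for obj in find_set]
    return [obj for _, obj in sorted(ranked)]

# Toy database graph: movies linked to the actors appearing in them.
g = {
    "Face/Off":   ["Travolta", "Cage"],
    "Grease":     ["Travolta"],
    "Casablanca": ["Bogart"],
    "Travolta":   ["Face/Off", "Grease"],
    "Cage":       ["Face/Off"],
    "Bogart":     ["Casablanca"],
}
movies = ["Face/Off", "Grease", "Casablanca"]
print(proximity_search(g, movies, ["Travolta", "Cage"]))
# ['Face/Off', 'Grease', 'Casablanca']
```

Movies adjacent to a named actor rank first; objects with no path to any near-set member sort to the end.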
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
The join discovery problem is related to the problem tackled by the universal relation model @cite_18 @cite_11 . The idea underlying the universal relation model is to allow querying the database solely through its attributes, without explicitly specifying the join paths. The expressive querying power of such a system is essentially that of a union of conjunctive queries (see @cite_19 ). DbSurfer takes this approach further by allowing the user to specify values (keywords) without stating their related attributes, and by providing relevance-based filtering.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "2146899290", "2162621793", "" ], "abstract": [ "The representative instance is proposed as a representation of the data stored in a database whose relations are not the projections of a universal instance. Database schemes are characterized for which local consistency implies global consistency. (Local consistency means that each relation satisfies its own functional dependencies; global consistency means that the representative instance satisfies all the functional dependencies.) A method of efficiently computing projections of the representative instance is given, provided that local consistency implies global consistency. Throughout, it is assumed that a cover of the functional dependencies is embodied in the database scheme in the form of keys.", "This book goes into the details of database conception and use; it tells you everything on relational databases, from theory to the algorithms actually used.", "" ] }
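The universal-relation idea of querying through attributes alone can be sketched as a lookup that maps each queried attribute to the relations containing it; a union of conjunctive queries over the resulting join-path combinations could then be generated without the user ever naming a table. The schema below is hypothetical, invented for illustration.

```python
def resolve_attributes(schema, attrs):
    """Universal-relation style lookup: for each queried attribute,
    list the relations that contain it, so join paths can be inferred
    without the user explicitly specifying tables or joins."""
    return {a: [rel for rel, cols in schema.items() if a in cols]
            for a in attrs}

# Hypothetical schema (relation -> attribute list)
schema = {
    "author": ["name", "affiliation"],
    "paper":  ["title", "year"],
    "writes": ["name", "title"],
}

print(resolve_attributes(schema, ["name", "title"]))
# {'name': ['author', 'writes'], 'title': ['paper', 'writes']}
```

A query on {name, title} would then expand to the conjunctive queries joining each pairing of the candidate relations, with the shared relation `writes` yielding the natural join path.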
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Ideas stemming from linear logic have been used previously by Abramsky in the study of classical reversible computation @cite_44 .
{ "cite_N": [ "@cite_44" ], "mid": [ "2015392385" ], "abstract": [ "Reversibility is a key issue in the interface between computation and physics, and of growing importance as miniaturization progresses towards its physical limits. Most foundational work on reversible computing to date has focussed on simulations of low-level machine models. By contrast, we develop a more structural approach. We show how high-level functional programs can be mapped compositionally (i.e. in a syntax-directed fashion) into a simple kind of automata which are immediately seen to be reversible. The size of the automaton is linear in the size of the functional term. In mathematical terms, we are building a concrete model of functional computation. This construction stems directly from ideas arising in Geometry of Interaction and Linear Logic--but can be understood without any knowledge of these topics. In fact, it serves as an excellent introduction to them. At the same time, an interesting logical delineation between reversible and irreversible forms of computation emerges from our analysis." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
One of the earlier attempts at formulating a language for quantum computation was Greg Baker's Qgol @cite_23 . Its implementation (which remained incomplete) used so-called uniqueness types (similar but not identical to our linear variables) for quantum objects @cite_36 . The language is not universal for quantum computation.
{ "cite_N": [ "@cite_36", "@cite_23" ], "mid": [ "1555216982", "1484366641" ], "abstract": [ "In this paper we describe a Curry-like type system for graphs and extend it with uniqueness information to indicate that certain objects are only ‘locally accessible’. The correctness of type assignment guarantees that no external access on such an object will take place in the future. We prove that types are preserved under reduction (for both type systems) for a large class of rewrite systems. Adding uniqueness information provides a solution to two problems in implementations of functional languages: efficient space behaviour and interfacing with non-functional operations.", "From the Publisher: In teaching the methods of functional programming--in particular, how to program in Standard ML, a functional language recently developed at Edinburgh University, the author shows how to use such concepts as lists, trees, higher-order functions and infinite data structures." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Another imperative language, based on C++, is the Q language developed by Bettelli, Calarco and Serafini @cite_7 . As in the case of QCL, no formal calculus is provided. A simulator is also available.
{ "cite_N": [ "@cite_7" ], "mid": [ "2121369607" ], "abstract": [ "It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives. The underlying scheme involves a run-time environment which calculates the byte-code for the quantum operations and pipes it to a quantum device controller or to a simulator. This language can compactly express existing quantum algorithms and reduce them to sequences of elementary operations; it also easily lends itself to automatic, hardware independent, circuit simplification. A publicly available preliminary implementation of the proposed ideas has been realised using the language." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A more theoretical approach is taken by Selinger in his description of the functional language QPL @cite_26 . This language has both a graphical and a textual representation. A formal semantics is provided.
{ "cite_N": [ "@cite_26" ], "mid": [ "1999626800" ], "abstract": [ "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and has an interesting denotational semantics in terms of complete partial orders of superoperators." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
The imperative language qGCL, developed by Sanders and Zuliani @cite_63 , is based on Dijkstra's guarded command language. It has a formal semantics and proof system.
{ "cite_N": [ "@cite_63" ], "mid": [ "2123328018" ], "abstract": [ "The rapid progress of computer technology has been accompanied by a corresponding evolution of software development, from hardwired components and binary machine code to high level programming languages, which allowed to master the increasing hardware complexity and fully exploit its potential." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A previous attempt to construct a lambda calculus for quantum computation is described by Maymin in @cite_27 . However, his calculus appears to be strictly stronger than the quantum Turing machine @cite_19 . It seems to go beyond quantum mechanics in that it does not appear to have a unitary and reversible operational model, instead relying on a more general class of transformations. It is an open question whether the calculus is physically realizable.
{ "cite_N": [ "@cite_19", "@cite_27" ], "mid": [ "1668464107", "1676498955" ], "abstract": [ "We show that the lambda-q calculus can efficiently simulate quantum Turing machines by showing how the lambda-q calculus can efficiently simulate a class of quantum cellular automata that are equivalent to quantum Turing machines. We conclude by noting that the lambda-q calculus may be strictly stronger than quantum computers because NP-complete problems such as satisfiability are efficiently solvable in the lambda-q calculus but there is a widespread doubt that they are efficiently solvable by quantum computers.", "This paper introduces a formal metalanguage called the lambda-q calculus for the specification of quantum programming languages. This metalanguage is an extension of the lambda calculus, which provides a formal setting for the specification of classical programming languages. As an intermediary step, we introduce a formal metalanguage called the lambda-p calculus for the specification of programming languages that allow true random number generation. We demonstrate how selected randomized algorithms can be programmed directly in the lambda-p calculus. We also demonstrate how satisfiability can be solved in the lambda-q calculus." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A seminar by Wehr @cite_41 suggests that linear logic may be useful in constructing a calculus for quantum computation within the mathematical framework of Chu spaces. However, the author stops short of developing such a calculus.
{ "cite_N": [ "@cite_41" ], "mid": [ "2006290558" ], "abstract": [ "Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of @math can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a quantum Turing machine (QTM) in time @math . We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a QTM in time @math . The former bound is tight since recent work of Grover [in Proc. @math th Annual ACM Symposium Theory Comput. , 1996] shows how to accept the class @math relative to any oracle on a quantum computer in time @math ." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Abramsky and Coecke describe a realization of a model of multiplicative linear logic via the quantum processes of entangling and de-entangling by means of typed projectors. They briefly discuss how these processes can be represented as terms of an affine lambda calculus @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "1958788572" ], "abstract": [ "Abstract Within the Geometry of Interaction (GoI) paradigm, we present a setting that enables qualitative differences between classical and quantum processes to be explored. The key construction is the physical interpretation realization of the traced monoidal categories of finite dimensional vector spaces with tensor product as monoidal structure and of finite sets and relations with Cartesian product as monoidal structure, both of them providing a so-called wave-style GoI. The developments in this paper reveal that envisioning state update due to quantum measurement as a process provides a powerful tool for developing high-level approaches to quantum information processing." ] }
cs0306044
2952425141
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of , which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.
In addition, there is a long history of interest in the optimality of distributed algorithms under particular conditions, such as a given pattern of failures @cite_23 @cite_26 @cite_11 @cite_35 @cite_27 @cite_33 or a given pattern of message delivery @cite_1 @cite_42 @cite_3 . In a sense, work on optimality envisions a fundamentally different role for the adversary, in which it tries to produce bad performance for both the candidate and champion algorithms; in contrast, the adversary used in competitive analysis usually cooperates with the champion.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_33", "@cite_42", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_11" ], "mid": [ "2092431596", "2091004544", "2078564399", "1963734157", "1600854295", "", "2054383044", "2058777482", "" ], "abstract": [ "This work applies the theory of knowledge in distributed systems to the design of efficient fault-tolerant protocols. We define a large class of problems requiring coordinated, simultaneous action in synchronous systems, and give a method of transforming specifications of such problems into protocols that are optimal in all runs: these protocols are guaranteed to perform the simultaneous actions as soon as any other protocol could possibly perform them, given the input to the system and faulty processor behavior. This transformation is performed in two steps. In the first step we extract, directly from the problem specification, a high-level protocol programmed using explicit tests for common knowledge. In the second step we carefully analyze when facts become common knowledge, thereby providing a method of efficiently implementing these protocols in many variants of the omissions failure model. In the generalized omissions model, however, our analysis shows that testing for common knowledge is NP-hard. Given the close correspondence between common knowledge and simultaneous actions, we are able to show that no optimal protocol for any such problem can be computationally efficient in this model. The analysis in this paper exposes many subtle differences between the failure models, including the precise point at which this gap in complexity occurs.", "By analyzing the states of knowledge that the processors attain in an unreliable system of a simple type, we capture some of the basic underlying structure of such systems. In particular, we study what facts become common knowledge at various points in the execution of protocols in an unreliable system. This characterizes the simultaneous actions that can be carried out in such systems. For example, we obtain a complete characterization of the number of rounds required to reach simultaneous Byzantine agreement, given the pattern in which failures occur. From this we derive a new protocol for this problem that is optimal in all runs, rather than just always matching the worst-case lower bound. In some cases this protocol attains simultaneous Byzantine agreement in as few as two rounds. We also present a nontrivial simultaneous agreement problem called bivalent agreement for which there is a protocol that always halts in two rounds. Our analysis applies to simultaneous actions in general, and not just to Byzantine agreement. The lower bound proofs presented here generalize and simplify the previously known proofs.", "There is a very close relationship between common knowledge and simultaneity in synchronous distributed systems. The analysis of several well-known problems in terms of common knowledge has led to round-optimal protocols for these problems, including Reliable Broadcast, Distributed Consensus, and the Distributed Firing Squad problem. These problems require that the correct processors coordinate their actions in some way but place no restrictions on the behaviour of the faulty processors. In systems with benign processor failures, however, it is reasonable to require that the actions of a faulty processor be consistent with those of the correct processors, assuming it performs any action at all. We consider problems requiring consistent, simultaneous coordination. We then analyze these problems in terms of common knowledge in several failure models. The analysis of these stronger problems requires a stronger definition of common knowledge, and we study the relationship between these two definitions. In many cases, the two definitions are actually equivalent, and simple modifications of previous solutions yield round-optimal solutions to these problems. When the definitions differ, however, we show that such problems cannot be solved, even in failure-free executions.", "We present a simple algorithm for maintaining a replicated distributed dictionary which achieves high availability of data, rapid processing of atomic actions, efficient utilization of storage, and tolerance to node or network failures including lost or duplicated messages. It does not require transaction logs, synchronized clocks, or other complicated mechanisms for its operation. It achieves consistency constraints which are considerably weaker than serial consistency but nonetheless are adequate for many dictionary applications such as electronic appointment calendars and mail systems. The degree of consistency achieved depends on the particular history of operation of the system in a way that is intuitive and easily understood. The algorithm implements a \"best effort\" approximation to full serial consistency, relative to whatever internode communication has successfully taken place, so the semantics are fully specified even under partial failure of the system.", "", "", "Abstract A distributed computing system consists of a set of individual processors that communicate through some medium. Coordinating the actions of such processors is essential in distributed computing. Researchers have long endeavored to find efficient solutions to a variety of coordination problems. Recently, processor knowledge has been used to characterize such solutions and to derive more efficient ones. Most of this work has concentrated on the relationship between common knowledge and simultaneous coordination. This paper considers non-simultaneous coordination problems. The results of this paper add to our understanding of the relationship between knowledge and the different requirements of coordination problems. This paper considers the ideas of optimal and optimum solutions to a coordination problem and precisely characterizes the problems for which optimum solutions exist. This characterization is based on combinations of eventual common knowledge and continual common knowledge . The paper then considers more general problems, for which optimal, but no optimum, solutions exist. It defines a new form of knowledge, called extended knowledge , which combines eventual and continual knowledge, and shows how extended knowledge can be used to both characterize and construct optimal protocols for coordination.", "Two different kinds of Byzantine Agreement for distributed systems with processor faults are defined and compared. The first is required when coordinated actions may be performed by each participant at different times. This kind is called Simultaneous Byzantine Agreement (SBA). This paper deals with the number of rounds of message exchange required to reach Byzantine Agreement of either kind (BA). If an algorithm allows its participants to reach Byzantine agreement in every execution in which at most t participants are faulty, then the algorithm is said to tolerate t faults. It is well known that any BA algorithm that tolerates t faults (with t n - 1 where n denotes the total number of processors) must run at least t + 1 rounds in some execution. However, it might be supposed that in executions where the number f of actual faults is small compared to t , the number of rounds could be correspondingly small. A corollary of our first result states that (when t n - 1) any algorithm for SBA must run t + 1 rounds in some execution where there are no faults. For EBA (with t n - 1), a lower bound of min( t + 1, f + 2) rounds is proved. Finally, an algorithm for EBA is presented that achieves the lower bound, provided that t is on the order of the square root of the total number of processors.", "" ] }
cs0306048
1675778287
Dataset storage, exchange, and access play a critical role in scientific applications. For such purposes netCDF serves as a portable and efficient file format and programming interface, which is popular in numerous scientific application domains. However, the original interface does not provide an efficient mechanism for parallel data storage and access. In this work, we present a new parallel interface for writing and reading netCDF datasets. This interface is derived with minimum changes from the serial netCDF interface but defines semantics for parallel access and is tailored for high performance. The underlying parallel I/O is achieved through MPI-IO, allowing for dramatic performance gains through the use of collective I/O optimizations. We compare the implementation strategies with HDF5 and analyze both. Our tests indicate programming convenience and significant I/O performance improvement with this parallel netCDF interface.
MPI-IO is a parallel I/O interface specified in the MPI-2 standard. It is implemented and used on a wide range of platforms. The most popular implementation, ROMIO @cite_13 is implemented portably on top of an abstract I/O device layer @cite_1 @cite_22 that enables portability to new underlying I/O systems. One of the most important features in ROMIO is collective I/O operations, which adopt a two-phase I/O strategy @cite_11 @cite_20 @cite_18 @cite_2 and improve the parallel I/O performance by significantly reducing the number of I/O requests that would otherwise result in many small, noncontiguous I/O requests. However, MPI-IO reads and writes data in a raw format without providing any functionality to effectively manage the associated metadata. Nor does it guarantee data portability, thereby making it inconvenient for scientists to organize, transfer, and share their application data.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_1", "@cite_2", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2082416889", "2104486653", "2111925167", "2174300520", "2567936601", "2108155100", "2083200599" ], "abstract": [ "A number of applications on parallel computers deal with very large data sets that cannot fit in main memory. In such applications, data must be stored in files on disks and fetched into memory during program execution. Parallel programs with large out-of-core arrays stored in files must read/write smaller sections of the arrays from/to files. In this article, we describe a method for accessing sections of out-of-core arrays efficiently. Our method, the extended two-phase method, uses collective I/O: Processors cooperate to combine several I/O requests into fewer larger granularity requests, to reorder requests so that the file is accessed in proper sequence, and to eliminate simultaneous I/O requests for the same data. In addition, the I/O workload is divided among processors dynamically, depending on the access requests. We present performance results obtained from two real out-of-core parallel applications - matrix multiplication and a Laplace's equation solver - and several synthetic access patterns, all on the Intel Touchstone Delta. These results indicate that the extended two-phase method significantly outperformed a direct (noncollective) method for accessing out-of-core array sections.", "We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, lseek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. 
We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, and file preallocation. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.", "We propose a strategy for implementing parallel I/O interfaces portably and efficiently. We have defined an abstract device interface for parallel I/O, called ADIO. Any parallel I/O API can be implemented on multiple file systems by implementing the API portably on top of ADIO, and implementing only ADIO on different file systems. This approach simplifies the task of implementing an API and yet exploits the specific high performance features of individual file systems. We have used ADIO to implement the Intel PFS interface and subsets of MPI-IO and IBM PIOFS interfaces on PFS, PIOFS, Unix, and NFS file systems. 
Our performance studies indicate that the overhead of using ADIO as an implementation strategy is very low.", "The I/O access patterns of parallel programs often consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many small, distinct I/O requests, however, the I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access a noncontiguous data set with a single I/O function call. This feature provides MPI-IO implementations an opportunity to optimize data access. We describe how our MPI-IO implementation, ROMIO, delivers high performance in the presence of noncontiguous requests. We explain in detail the two key optimizations ROMIO performs: data sieving for noncontiguous requests from one process and collective I/O for noncontiguous requests from multiple processes. We describe how one can implement these optimizations portably on multiple machines and file systems, control their memory requirements, and also achieve high performance. We demonstrate the performance and portability with performance results for three applications--an astrophysics-application template (DIST3D), the NAS BTIO benchmark, and an unstructured code (UNSTRUC)--on five different parallel machines: HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, and SGI Origin2000.", "ROMIO is a high-performance, portable implementation of MPI-IO (the I/O chapter in MPI-2). This document describes how to install and use ROMIO version 1.0.0 on the following machines: IBM SP; Intel Paragon; HP Convex Exemplar; SGI Origin 2000, Challenge, and Power Challenge; and networks of workstations (Sun4, Solaris, IBM, DEC, SGI, HP, FreeBSD, and Linux).", "We are developing a compiler and runtime support system called PASSION: Parallel and Scalable Software for Input-Output. PASSION provides software support for I/O intensive out-of-core loosely synchronous problems. 
This paper gives an overview of the PASSION Runtime Library and describes two of the optimizations incorporated in it, namely data prefetching and data sieving. Performance improvements provided by these optimizations on the Intel Touchstone Delta are discussed together with an out-of-core median filtering application.", "As scientists expand their models to describe physical phenomena of increasingly large extent, I/O becomes crucial and a system with limited I/O capacity can severely constrain the performance of the entire program. We provide experimental results, performed on an Intel Touchstone Delta and nCUBE 2 I/O system, to show that the performance of existing parallel I/O systems can vary by several orders of magnitude as a function of the data access pattern of the parallel program. We then propose a two-phase access strategy, to be implemented in a runtime system, in which the data distribution on computational nodes is decoupled from storage distribution. Our experimental results show that performance improvements of several orders of magnitude over direct access based data distribution methods can be obtained, and that performance for most data access patterns can be improved to within a factor of 2 of the best performance. Further, the cost of redistribution is a very small fraction of the overall access cost." ] }
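The two-phase collective I/O strategy cited above can be sketched in a toy simulation. The function name, aggregator partitioning, and cost model below are illustrative assumptions, not ROMIO's API: in phase one, the small noncontiguous requests of all processes are exchanged so that each aggregator owns a contiguous file domain; in phase two each aggregator issues a single large contiguous access.

```python
# Toy simulation of two-phase collective I/O (illustrative; not the ROMIO API).
# Each process holds small, noncontiguous (offset, length) byte-range requests.
def two_phase_reads(requests_per_proc, naggs, file_size):
    # Phase 1: partition the file into contiguous domains, one per aggregator,
    # and route every request to the aggregator whose domain it starts in.
    domain = file_size // naggs
    buckets = [[] for _ in range(naggs)]
    for reqs in requests_per_proc:
        for off, length in reqs:
            a = min(off // domain, naggs - 1)
            buckets[a].append((off, length))
    # Phase 2: each aggregator coalesces its bucket into one contiguous access
    # spanning min offset .. max end, i.e. a single large I/O request.
    accesses = []
    for bucket in buckets:
        if bucket:
            start = min(off for off, _ in bucket)
            end = max(off + ln for off, ln in bucket)
            accesses.append((start, end - start))
    return accesses

# Example: 4 processes, each issuing 4 small interleaved 8-byte requests,
# collapse into 2 large contiguous accesses instead of 16 small ones.
reqs = [[(p * 8 + 32 * i, 8) for i in range(4)] for p in range(4)]
merged = two_phase_reads(reqs, naggs=2, file_size=128)
```

The point of the simulation is the count: sixteen interleaved requests become two contiguous ones, which is exactly the reduction in the number of I/O requests that the related-work paragraph attributes to collective I/O.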
cs0305010
1668544585
Numerous systems for dissemination, retrieval, and archiving of documents have been developed in the past. Those systems often focus on one of these aspects and are hard to extend and combine. Typically, the transmission protocols, query and filtering languages are fixed as well as the interfaces to other systems. We rather envisage the seamless establishment of networks among the providers, repositories and consumers of information, supporting information retrieval and dissemination while being highly interoperable and extensible. We propose a framework with a single event-based mechanism that unifies document storage, retrieval, and dissemination. This framework offers complete openness with respect to document and metadata formats, transmission protocols, and filtering mechanisms. It specifies a high-level building kit, by which arbitrary processors for document streams can be incorporated to support the retrieval, transformation, aggregation and disaggregation of documents. Using the same kit, interfaces for different transmission protocols can be added easily to enable the communication with various information sources and information consumers.
@cite_12 is a push-model publish/subscribe system for alerting within a wide-area network. It offers scalability by distributing filters over servers within the network and saves bandwidth by filtering close to the event sources and bundling similar subscriptions. Siena is modular and offers sophisticated filtering mechanisms including dynamic configuration and distribution. It lacks openness, document stream transformation, and scheduling.
{ "cite_N": [ "@cite_12" ], "mid": [ "2131975004" ], "abstract": [ "The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service’s clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service’s interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used" ] }
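The content-based filtering at the core of such event notification services can be sketched as follows. The class and method names are invented for illustration and are not Siena's actual interface: subscribers register predicates over notification attributes, and the broker delivers each published notification only to the subscriptions it matches.

```python
# Minimal content-based publish/subscribe broker (illustrative names; not
# Siena's interface). Subscriptions are predicates over the attributes of a
# notification; publish() routes each event to every matching subscriber.
class Broker:
    def __init__(self):
        self.subs = []  # list of (filter predicate, callback)

    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))

    def publish(self, notification):
        delivered = 0
        for predicate, callback in self.subs:
            if predicate(notification):
                callback(notification)
                delivered += 1
        return delivered

# Example: alert only on high-priced stock events.
broker = Broker()
received = []
broker.subscribe(
    lambda n: n.get("type") == "stock" and n.get("price", 0) > 100,
    received.append,
)
broker.publish({"type": "stock", "symbol": "X", "price": 120})  # matches
broker.publish({"type": "stock", "symbol": "Y", "price": 80})   # filtered out
```

In a wide-area deployment of the kind Siena targets, the same predicate evaluation would run on servers near the event sources, so non-matching notifications never consume wide-area bandwidth.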
math0304100
2115080784
The Shub-Smale Tau Conjecture is a hypothesis relating the number of integral roots of a polynomial f in one variable and the Straight-Line Program (SLP) complexity of f. A consequence of the truth of this conjecture is that, for the Blum-Shub-Smale model over the complex numbers, P differs from NP. We prove two weak versions of the Tau Conjecture and in so doing show that the Tau Conjecture follows from an even more plausible hypothesis. Our results follow from a new p-adic analogue of earlier work relating real algebraic geometry to additive complexity. For instance, we can show that a nonzero univariate polynomial of additive complexity s can have no more than 15 + s^3(s+1)(7.5)^s s! = O(e^{s log s}) roots in the 2-adic rational numbers Q_2, thus dramatically improving an earlier result of the author. This immediately implies the same bound on the number of ordinary rational roots, whereas the best previous upper bound via earlier techniques from real algebraic geometry was a quantity in Omega((22.6)^{s^2}). This paper presents another step in the author's program of establishing an algorithmic arithmetic version of fewnomial theory.
That the @math -conjecture is still open is a testament to the fact that we know far less about the complexity measures @math and @math than we should. For example, there is still no more elegant method known to compute @math for a fixed polynomial than brute force enumeration. Also, the computability of additive complexity is still an open question, although a more efficient variant (allowing radicals as well) can be computed in triply exponential time @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "2032345729" ], "abstract": [ "We design an algorithm for computing the generalized (algebraic circuits with root extracting; cf. Pippenger [J. Comput. System Sci., 22 (1981), pp. 454--470], Ja'Ja' [Proc. 22nd IEEE FOCS, 1981, pp. 95--100], Grigoriev, Singer, and Yao [ SIAM J. Comput., 24 (1995), pp. 242--246]) additive complexity of any rational function. It is the first computability result of this sort on the additive complexity of algebraic circuits." ] }
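The explicit bound quoted in the abstract, 15 + s^3(s+1)(7.5)^s s! on the number of roots in Q_2 of a polynomial of additive complexity s, is easy to evaluate for small s. The helper name below is ours, not the paper's:

```python
import math

# Evaluate the paper's stated upper bound on the number of 2-adic rational
# roots of a univariate polynomial of additive complexity s:
#   15 + s^3 (s+1) (7.5)^s s!
def root_bound(s):
    return 15 + s**3 * (s + 1) * 7.5**s * math.factorial(s)

# The bound grows factorially in s (roughly e^(s log s)) but is modest for
# small additive complexity:
small = [root_bound(s) for s in range(1, 4)]  # s = 1, 2, 3
```

For s = 1 the bound is 30, already far below the Omega((22.6)^{s^2})-type quantities the abstract cites for the earlier real-geometric techniques.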