Dataset columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of a gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Wolpert, Tumer and Frank @cite_1 construct a formalism for the so-called Collective Intelligence (coin) neural net applied to Internet traffic routing. The approach involves automatically initializing and updating the local utility functions of individual rl agents (nodes) from the global utility and observed local dynamics. Their simulation outperforms a Full Knowledge Shortest Path Algorithm on a sample network of seven nodes. Coin networks employ a method similar in spirit to the research presented here. They rely on a distributed rl algorithm that converges on local optima without endowing each agent node with explicit knowledge of network topology. However, coin differs from our approach in requiring the introduction of preliminary structure into the network by dividing it into semi-autonomous neighborhoods that share a local utility function and encourage cooperation. In contrast, all the nodes in our network update their algorithms directly from the global reward.
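To make the contrast concrete, here is a minimal, hedged sketch (not the algorithm of either cited work) of per-node softmax routing policies trained by gradient ascent on a single global reward that every node sees directly; the toy ring topology, reward definition, and learning rate are illustrative assumptions.

```python
# Hedged sketch: per-node softmax routing policies updated by REINFORCE-style
# gradient ascent on one shared global reward (illustrative, not the paper's algorithm).
import math
import random

class Node:
    def __init__(self, neighbors):
        self.neighbors = list(neighbors)
        self.theta = {n: 0.0 for n in self.neighbors}   # policy parameters

    def policy(self):
        z = sum(math.exp(v) for v in self.theta.values())
        return {n: math.exp(v) / z for n, v in self.theta.items()}

    def choose(self):
        p, r, acc = self.policy(), random.random(), 0.0
        for n, pn in p.items():
            acc += pn
            if r <= acc:
                return n
        return self.neighbors[-1]

    def update(self, taken, global_reward, lr=0.1):
        # Increase the log-probability of the action taken, scaled by the global reward.
        p = self.policy()
        for n in self.neighbors:
            grad = (1.0 if n == taken else 0.0) - p[n]
            self.theta[n] += lr * global_reward * grad

# Toy 4-node ring; route packets from node 0 to node 2 and reward short routes.
topology = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
nodes = {i: Node(nbrs) for i, nbrs in topology.items()}

for episode in range(2000):
    cur, hops, actions = 0, 0, []
    while cur != 2 and hops < 10:
        nxt = nodes[cur].choose()
        actions.append((cur, nxt))
        cur, hops = nxt, hops + 1
    global_reward = 1.0 / hops if cur == 2 else -1.0    # the same reward is shared by every node
    for node_id, action in actions:
        nodes[node_id].update(action, global_reward)

print({i: nodes[i].policy() for i in nodes})
```

In runs of this sketch the policies tend to concentrate on the shorter route toward the destination, mirroring the "browsing a set of policies" view described above.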
{ "cite_N": [ "@cite_1" ], "mid": [ "2126306851" ], "abstract": [ "A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms." ] }
cs0703138
2949715023
Applying reinforcement learning to communication often involves optimizing performance with respect to multiple criteria. For a recent discussion of this challenging issue see Shelton @cite_11 . In the context of wireless communication it was addressed by Brown @cite_17 , who considers the problem of finding a power management policy that simultaneously maximizes the revenue earned by providing communication while minimizing battery usage. The problem is defined as a stochastic shortest path with discounted infinite horizon, where the discount factor varies to model power loss. This approach resulted in significant ( @math 100 @math 6$ computers.
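A purely illustrative sketch of the "discount factor varies to model power loss" idea follows; the revenue model, success probability, and discount shape are assumptions for the example, not Brown's formulation.

```python
# Hedged sketch: discounted revenue where the per-step discount factor shrinks with
# transmit power, a toy stand-in for a variable discount modeling battery drain.
def discount(power, base=0.99, drain=0.05):
    # Assumed form: higher power -> faster battery drain -> smaller effective discount.
    return base * (1.0 - drain * power)

def discounted_revenue(powers, revenue_per_success=1.0,
                       success_prob=lambda p: 1 - 0.5 ** p):
    total, carry = 0.0, 1.0
    for p in powers:
        total += carry * revenue_per_success * success_prob(p)
        carry *= discount(p)          # variable discount applied step by step
    return total

# Compare an aggressive and a conservative power schedule over 20 slots.
print(discounted_revenue([2.0] * 20))   # high power: more revenue now, faster decay
print(discounted_revenue([0.5] * 20))   # low power: less per slot, longer horizon
```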
{ "cite_N": [ "@cite_17", "@cite_11" ], "mid": [ "2157758004", "2033976720" ], "abstract": [ "This paper examines the application of reinforcement learning to a wireless communication problem. The problem requires that channel utility be maximized while simultaneously minimizing battery usage. We present a solution to this multi-criteria problem that is able to significantly reduce power consumption. The solution uses a variable discount factor to capture the effects of battery usage.", "This thesis considers three complications that arise from applying reinforcement learning to a real-world application. In the process of using reinforcement learning to build an adaptive electronic market-maker, we find the sparsity of data, the partial observability of the domain, and the multiple objectives of the agent to cause serious problems for existing reinforcement learning algorithms. We employ importance sampling (likelihood ratios) to achieve good performance in partially observable Markov decision processes with few data. Our importance sampling estimator requires no knowledge about the environment and places few restrictions on the method of collecting data. It can be used efficiently with reactive controllers, finite-state controllers, or policies with function approximation. We present theoretical analyses of the estimator and incorporate it into a reinforcement learning algorithm. Additionally, this method provides a complete return surface which can be used to balance multiple objectives dynamically. We demonstrate the need for multiple goals in a variety of applications and natural solutions based on our sampling method. The thesis concludes with example results from employing our algorithm to the domain of automated electronic market-making. Thesis Supervisor: Tomaso Poggio Title: Professor of Brain and Cognitive Science" ] }
cs0703138
2949715023
Subramanian, Druschel and Chen @cite_12 adopt an approach from ant colonies that is very similar in spirit. The individual hosts in their network keep routing tables with the associated costs of sending a packet to other hosts (such as which routers it has to traverse and how expensive they are). These tables are periodically updated by "ants": messages whose function is to assess the cost of traversing links between hosts. The ants are directed probabilistically along available paths. They inform the hosts along the way of the costs associated with their travel. The hosts use this information to alter their routing tables according to an update rule. There are two types of ants. Regular ants use the routing tables of the hosts to alter the probability of being directed along a certain path. After a number of trials, all regular ants on the same mission start using the same routes. Their function is to allow the host tables to converge on the correct cost figure when the network is stable. Uniform ants take any path with equal probability. They are the ones that continue exploring the network and ensure successful adaptation to changes in link status or link cost.
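A hedged sketch of this mechanism is given below, with regular ants following the hosts' probabilistic tables and uniform ants exploring uniformly; the update rule, constants, and toy topology are assumptions for illustration rather than the cited algorithm.

```python
# Hedged sketch of ant-based routing-table updates (illustrative constants and
# update rule, not the exact algorithm of the cited work).
import random

links = {(0, 1): 1.0, (1, 2): 1.0, (0, 3): 5.0, (3, 2): 1.0}   # undirected link costs
def cost(u, v):
    return links.get((u, v)) or links.get((v, u))

neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

# table[node][dest][next_hop] = probability of forwarding via next_hop
table = {u: {d: {n: 1.0 / len(neighbors[u]) for n in neighbors[u]}
             for d in neighbors if d != u} for u in neighbors}
est = {u: {d: float("inf") for d in neighbors if d != u} for u in neighbors}  # cost estimates

def send_ant(src, dest, uniform, beta=0.3, max_hops=12):
    """Walk one ant from src toward dest, updating the tables of visited hosts."""
    cur, travelled = src, 0.0
    for _ in range(max_hops):
        if cur == dest:
            return
        if uniform:
            nxt = random.choice(neighbors[cur])                     # uniform ant keeps exploring
        else:
            probs = table[cur][dest]
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]  # regular ant
        travelled += cost(cur, nxt)
        if nxt != src:
            # the host just reached learns an estimate of the cost back to src ...
            old = est[nxt][src]
            est[nxt][src] = travelled if old == float("inf") else (1 - beta) * old + beta * travelled
            # ... and reinforces the reverse next hop in proportion to how cheap the walk was
            back = table[nxt][src]
            back[cur] += 1.0 / (1.0 + travelled)
            norm = sum(back.values())
            for k in back:
                back[k] /= norm
        cur = nxt

for _ in range(5000):
    s, d = random.sample(sorted(neighbors), 2)
    send_ant(s, d, uniform=(random.random() < 0.2))    # a small fraction of uniform ants

print("node 0 -> dest 2:", table[0][2], "estimated cost:", est[0][2])
```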
{ "cite_N": [ "@cite_12" ], "mid": [ "2171299282" ], "abstract": [ "We investigate two new distributed routing algorithms for data networks based on simple biological \"ants\" that explore the network and rapidly learn good routes, using a novel variation of reinforcement learning. These two algorithms are fully adaptive to topology changes and changes in link costs in the network, and have space and computational overheads that are competitive with traditional packet routing algorithms: although they can generate more routing traffic when the rate of failures in a network is low, they perform much better under higher failure rates. Both algorithms are more resilient than traditional algorithms, in the sense that random corruption of routing state has limited impact on the computation of paths. We present convergence theorems for both of our algorithms drawing on the theory of non-stationary and stationary discrete-time Markov chains over the reals. We present an extensive empirical evaluation of our algorithms on a simulator that is widely used in the computer networks community for validating and testing protocols. We present comparative results on data delivery performance, aggregate routing traffic (algorithm overhead), as well as the degree of resilience for our new algorithms and two traditional routing algorithms in current use. We also show that the performance of our algorithms scale well with increase in network size-using a realistic topology." ] }
cs0703156
2950341706
In case-based reasoning, the adaptation of a source case in order to solve the target problem is at the same time crucial and difficult to implement. The reason for this difficulty is that, in general, adaptation strongly depends on domain-dependent knowledge. This fact motivates research on adaptation knowledge acquisition (AKA). This paper presents an approach to AKA based on the principles and techniques of knowledge discovery from databases and data-mining. It is implemented in CABAMAKA, a system that explores the variations within the case base to elicit adaptation knowledge. This system has been successfully tested in an application of case-based reasoning to decision support in the domain of breast cancer treatment.
In @cite_0 , the idea of @cite_12 is reused to extend the approach of @cite_4 : some learning algorithms (in particular, C4.5) are applied to the adaptation cases of @math to induce general adaptation knowledge.
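A small illustrative sketch of this style of adaptation knowledge acquisition follows; C4.5 itself is not used here (scikit-learn's CART DecisionTreeClassifier stands in), and the toy case base and difference encoding are invented for the example.

```python
# Hedged sketch: learning general adaptation rules from pairwise case variations.
# A CART decision tree stands in for C4.5; the toy "adaptation cases" are invented.
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier, export_text

# toy case base: (problem features, solution label)
cases = [
    ({"age": 35, "tumor_mm": 12}, "treatment_A"),
    ({"age": 62, "tumor_mm": 30}, "treatment_B"),
    ({"age": 41, "tumor_mm": 15}, "treatment_A"),
    ({"age": 70, "tumor_mm": 34}, "treatment_B"),
]

# adaptation cases: problem-difference vectors labelled with the solution repair
X, y = [], []
for (p1, s1), (p2, s2) in combinations(cases, 2):
    delta = [p2["age"] - p1["age"], p2["tumor_mm"] - p1["tumor_mm"]]
    X.append(delta)
    y.append("keep" if s1 == s2 else f"{s1}->{s2}")

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(clf, feature_names=["d_age", "d_tumor_mm"]))
```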
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_12" ], "mid": [ "1526119037", "", "1498118703" ], "abstract": [ "Design is a complex open-ended task and it is unreasonable to expect a case-base to contain representatives of all possible designs. Therefore, adaptation is a desirable capability for case-based design systems, but acquiring adaptation knowledge can involve significant effort. In this paper adaptation knowledge is induced separately for different criteria associated with the retrieved solution, using knowledge sources implicit in the case-base. This provides a committee of learners and their combined advice is better able to satisfy design constraints and compatibility requirements compared to a single learner. The main emphasis of the paper is to evaluate the impact of specific-to-general and general-to-specific learning on adaptation knowledge acquired by committee members. For this purpose we conduct experiments on a real tablet formulation problem which is tackled as a decomposable design task. Evaluation results suggest that adaptation achieves significant gains compared to a retrieve-only CBR system, but shows that both learning biases can be beneficial for different decomposed sub-tasks.", "", "A major challenge for case-based reasoning (CBR) is to overcome the knowledge-engineering problems incurred by developing adaptation knowledge. This paper describes an approach to automating the acquisition of adaptation knowledge overcoming many of the associated knowledge-engineering costs. This approach makes use of inductive techniques, which learn adaptation knowledge from case comparison. We also show how this adaptation knowledge can be usefully applied. The method has been tested in a property-evaluation CBR system and the technique is illustrated by examples taken from this domain. In addition, we examine how any available domain knowledge might be exploited in such an adaptation-rule learning-system." ] }
cs0702151
1670047909
A streaming model is one where data items arrive over long period of time, either one item at a time or in bursts. Typical tasks include computing various statistics over a sliding window of some fixed time-horizon. What makes the streaming model interesting is that as the time progresses, old items expire and new ones arrive. One of the simplest and central tasks in this model is sampling. That is, the task of maintaining up to @math uniformly distributed items from a current time-window as old items expire and new ones arrive. We call sampling algorithms succinct if they use provably optimal (up to constant factors) worst-case memory to maintain @math items (either with or without replacement). We stress that in many applications structures that have expected succinct representation as the time progresses are not sufficient, as small probability events eventually happen with probability 1. Thus, in this paper we ask the following question: are Succinct Sampling on Streams (or @math -algorithms)possible, and if so for what models? Perhaps somewhat surprisingly, we show that @math -algorithms are possible for all variants of the problem mentioned above, i.e. both with and without replacement and both for one-at-a-time and bursty arrival models. Finally, we use @math algorithms to solve various problems in sliding windows model, including frequency moments, counting triangles, entropy and density estimations. For these problems we present solutions with provable worst-case memory guarantees.
Datar, Gionis, Indyk and Motwani @cite_19 pioneered the research in this area, presenting exponential histograms, effective and simple solutions for a wide class of functions over sliding windows. In particular, they gave a memory-optimal algorithm for count, sum, average, @math and other functions. Gibbons and Tirthapura @cite_46 improved the results for sum and count, providing memory- and time-optimal algorithms. Feigenbaum, Kannan and Zhang @cite_33 addressed the problem of computing the diameter. Lee and Ting in @cite_48 gave a memory-optimal solution for the relaxed version of the count problem. Chi, Wang, Yu and Muntz @cite_2 addressed the problem of frequent itemsets. Algorithms for frequency counts and quantiles were proposed by Arasu and Manku @cite_18 . A further improvement for counts was reported by Lee and Ting @cite_41 . Babcock, Datar, Motwani and O'Callaghan @cite_23 provided an effective solution to the variance and @math -medians problems. Algorithms for rarity and similarity were proposed by Datar and Muthukrishnan @cite_39 . Golab, DeHaan, Demaine, Lopez-Ortiz and Munro @cite_43 provided an effective algorithm for finding frequent elements. Detailed surveys of recent results can be found in @cite_29 @cite_24 .
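For concreteness, here is a hedged sketch of the exponential-histogram idea for basic counting (the number of 1's in the last N items within a 1+ε factor); the bucket-merging rule and constants below are simplifications, not the exact data structure of @cite_19.

```python
# Hedged sketch of an exponential histogram for "count the 1's in the last N items"
# (simplified bucket bookkeeping; constants differ slightly from the cited paper).
import math
import random
from collections import deque

class ExpHistogram:
    def __init__(self, window, eps):
        self.window = window
        self.k = math.ceil(1.0 / eps)       # keep at most k+1 buckets of each size
        self.buckets = deque()              # (timestamp_of_latest_1, size), newest on the left
        self.time = 0

    def add(self, bit):
        self.time += 1
        # expire buckets that have fallen out of the window
        while self.buckets and self.buckets[-1][0] <= self.time - self.window:
            self.buckets.pop()
        if bit != 1:
            return
        self.buckets.appendleft((self.time, 1))
        size = 1
        while True:                          # cascade merges of the two oldest equal-size buckets
            same = [i for i, b in enumerate(self.buckets) if b[1] == size]
            if len(same) <= self.k + 1:
                break
            i, j = same[-1], same[-2]        # the two oldest buckets of this size
            ts = self.buckets[j][0]          # keep the more recent timestamp
            del self.buckets[i]
            self.buckets[j] = (ts, 2 * size)
            size *= 2

    def estimate(self):
        if not self.buckets:
            return 0
        total = sum(b[1] for b in self.buckets)
        return total - self.buckets[-1][1] // 2   # the oldest bucket is only partly inside

# quick comparison against exact counting over the same window
eh, recent = ExpHistogram(window=1000, eps=0.1), deque(maxlen=1000)
for _ in range(20000):
    b = random.randint(0, 1)
    eh.add(b)
    recent.append(b)
print(eh.estimate(), sum(recent))
```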
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_41", "@cite_48", "@cite_29", "@cite_39", "@cite_24", "@cite_19", "@cite_43", "@cite_23", "@cite_2", "@cite_46" ], "mid": [ "2152637787", "2007025103", "", "", "1965972569", "2600017706", "", "2004154913", "2886756691", "2124507579", "1569347009", "1990465412" ], "abstract": [ "We consider the problem of maintaining e-approximate counts and quantiles over a stream sliding window using limited space. We consider two types of sliding windows depending on whether the number of elements N in the window is fixed (fixed-size sliding window) or variable (variable-size sliding window). In a fixed-size sliding window, both the ends of the window slide synchronously over the stream. In a variable-size sliding window, an adversary slides the window ends independently, and therefore has the ability to vary the number of elements N in the window.We present various deterministic and randomized algorithms for approximate counts and quantiles. All of our algorithms require O(1 e polylog(1 e, N)) space. For quantiles, this space requirement is an improvement over the previous best bound of O(1 e2 polylog(1 e, N)). We believe that no previous work on space-efficient approximate counts over sliding windows exists.", "We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of @math points or a sliding window of size @math , any exact algorithm for diameter requires @math bits of space. We present a simple @math -approximation algorithm for computing the diameter in the streaming model. Our main result is an @math -approximation algorithm that maintains the diameter in two dimensions in the sliding-window model using @math bits of space, where @math is the maximum, over all windows, of the ratio of the diameter to the minimum non-zero distance between any two points in the window.", "", "", "1 Introduction 2 Map 3 The Data Stream Phenomenon 4 Data Streaming: Formal Aspects 5 Foundations: Basic Mathematical Ideas 6 Foundations: Basic Algorithmic Techniques 7 Foundations: Summary 8 Streaming Systems 9 New Directions 10 Historic Notes 11 Concluding Remarks Acknowledgements References.", "In the windowed data stream model, we observe items coming in over time. At any time t, we consider the window of the last N observations a t -(N-1), a t-(N-2),..., a t , each a i E 1,...,u ; we are required to support queries about the data in the window. A crucial restriction is that we are only allowed o(N) (often polylogarithmic in N) storage space, so not all items within the window can be archived. We study two basic problems in the windowed data stream model. The first is the estimation of the rarity of items in the window. Our second problem is one of estimating similarity between two data stream windows using the Jacard's coefficient. The problems of estimating rarity and similarity have many applications in mining massive data sets. We present novel, simple algorithms for estimating rarity and similarity on windowed data streams, accurate up to factor 1 ± e using space only logarithmic in the window size.", "", "We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. 
We show that using O(1 e log2N) bits of memory, we can estimate the number of 1's to within a factor of 1 + e. We also give a matching lower bound of Ω(1 e log2 N) memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers. We provide matching upper and lower bounds for this more general problem as well. We apply our techniques to obtain efficient algorithms for the L p norms (for p e [1, 2]) of vectors under the sliding window model. Using the algorithm for the basic counting problem, one can adapt many other techniques to work for the sliding window model, with a multiplicative overhead of O(1 elog N) in memory and a 1 + e factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.", "", "The sliding window model is useful for discounting stale data in data stream applications. In this model, data elements arrive continually and only the most recent N elements are used when answering queries. We present a novel technique for solving two important and related problems in the sliding window model---maintaining variance and maintaining a k--median clustering. Our solution to the problem of maintaining variance provides a continually updated estimate of the variance of the last N values in a data stream with relative error of at most e using O(1 e 2 log N) memory. We present a constant-factor approximation algorithm which maintains an approximate k--median solution for the last N data points using O(k τ4 N2τ log2 N) memory, where τ < 1 2 is a parameter which trades off the space bound with the approximation factor of O(2O(1 τ)).", "This paper considers the problem of mining closed frequent itemsets over a sliding window using limited memory space. We design a synopsis data structure to monitor transactions in the sliding window so that we can output the current closed frequent itemsets at any time. Due to time and memory constraints, the synopsis data structure cannot monitor all possible itemsets. However, monitoring only frequent itemsets make it impossible to detect new itemsets when they become frequent. In this paper, we introduce a compact data structure, the closed enumeration tree (CET), to maintain a dynamically selected set of item-sets over a sliding-window. The selected itemsets consist of a boundary between closed frequent itemsets and the rest of the itemsets. Concept drifts in a data stream are reflected by boundary movements in the CET. In other words, a status change of any itemset (e.g., from non-frequent to frequent) must occur through the boundary. Because the boundary is relatively stable, the cost of mining closed frequent item-sets over a sliding window is dramatically reduced to that of mining transactions that can possibly cause boundary movements in the CET. Our experiments show that our algorithm performs much better than previous approaches.", "This paper presents algorithms for estimating aggregate functions over a \"sliding window\" of the N most recent data items in one or more streams. Our results include: For a single stream, we present the first e-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first e for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. 
In contrast, we show that an deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires O(N) space. We present the first randomized (e,s)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (e,s)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. < olOur results are obtained using a novel family of synopsis data structures." ] }
cs0702030
1616730662
This paper addresses the following question, which is of interest in the design and deployment of a multiuser decentralized network. Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W/N should the band be partitioned into to maximize the number of simultaneous transmissions in the network? In an interference-limited ad-hoc network, dividing the available spectrum results in two competing effects: on the positive side, it reduces the number of users on each band and therefore decreases the interference level which leads to an increased SINR, while on the negative side the SINR requirement for each transmission is increased because the same information rate must be achieved over a smaller bandwidth. Exploring this tradeoff between bandwidth and SINR and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, we analytically derive the optimal SINR threshold (which directly corresponds to the optimal spectral efficiency) on this tradeoff curve and show that it is a function of only the path loss exponent. Furthermore, the optimal SINR point lies between the low-SINR (power-limited) and high-SINR (bandwidth-limited) regimes. In order to operate at this optimal point, the number of frequency bands (i.e., the reuse factor) should be increased until the threshold SINR, which is an increasing function of the reuse factor, is equal to the optimal value.
The transmission capacity framework introduced in @cite_3 is used to quantify the throughput of such a network, since this metric captures notions of spatial density, data rate, and outage probability, and is more amenable to analysis than the more popular transport capacity @cite_5 . Using tools from stochastic geometry @cite_6 , the distribution of interference from other concurrent transmissions at a reference receiving node (the randomness in the interference is due only to the random positions of the interfering nodes and fading) is characterized as a function of the spatial density of transmitters, the path-loss exponent, and possibly the fading distribution. The distribution of SINR at the receiving node can then be computed, and an outage occurs whenever the SINR falls below some threshold @math . The outage probability is clearly an increasing function of the density of transmissions, and the transmission capacity is defined to be the maximum density of successful transmissions such that the outage probability is no larger than some prescribed constant @math .
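The quantities defined here can be illustrated with a small Monte Carlo sketch: interferers are dropped as a Poisson point process, the SINR at a reference receiver is computed under path loss (with optional Rayleigh fading), and the empirical outage probability is tabulated against density so the largest density meeting an outage target can be read off. All constants below are illustrative assumptions.

```python
# Hedged Monte Carlo sketch of outage probability in a Poisson field of interferers
# (all constants are illustrative assumptions, not values from the cited works).
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(density, beta, alpha=4.0, r=1.0, noise=1e-3,
                region=20.0, trials=2000, rayleigh=True):
    """Estimate P(SINR < beta) at a reference receiver whose transmitter is at distance r."""
    fails = 0
    area = region * region
    for _ in range(trials):
        n = rng.poisson(density * area)
        xy = rng.uniform(-region / 2, region / 2, size=(n, 2))   # interferers around the receiver
        d = np.hypot(xy[:, 0], xy[:, 1])
        d = d[d > 1e-3]                                          # ignore co-located points
        h_i = rng.exponential(size=d.shape) if rayleigh else 1.0
        interference = np.sum(h_i * d ** (-alpha))
        h_0 = rng.exponential() if rayleigh else 1.0
        sinr = h_0 * r ** (-alpha) / (noise + interference)
        fails += sinr < beta
    return fails / trials

# read off the largest density whose outage stays below epsilon
epsilon, beta = 0.1, 3.0
for lam in [0.005, 0.01, 0.02, 0.04, 0.08]:
    p = outage_prob(lam, beta)
    print(f"density={lam:.3f}  outage={p:.3f}  {'OK' if p <= epsilon else 'exceeds epsilon'}")
```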
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_3" ], "mid": [ "2137775453", "2118166339", "2095796369" ], "abstract": [ "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "Mathematical Foundation. Point Processes I--The Poisson Point Process. Random Closed Sets I--The Boolean Model. Point Processes II--General Theory. Point Processes III--Construction of Models. Random Closed Sets II--The General Case. Random Measures. Random Processes of Geometrical Objects. Fibre and Surface Processes. Random Tessellations. Stereology. References. Indexes.", "In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M sup 1-2 spl alpha , where M is the spreading factor and spl alpha >2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes." ] }
cs0702030
1616730662
The problem studied in this work is essentially the optimization of frequency reuse in uncoordinated spatial (ad hoc) networks, which is a well-studied problem in the context of cellular networks (see for example @cite_1 and references therein). In both settings the tradeoff is between the bandwidth utilized per cell transmission, which is inversely proportional to the frequency reuse factor, and the achieved SINR per transmission. A key difference is that in cellular networks, regular frequency reuse patterns can be planned and implemented, whereas in an ad hoc network this is impossible and so the best that can be hoped for is uncoordinated frequency reuse. Another crucial difference is in terms of analytical tractability. Although there has been a tremendous amount of work on optimization of frequency reuse for cellular networks, these efforts do not, to the best of our knowledge, lend themselves to clean analytical results. In contrast, in this work we are able to derive very simple analytical results in the random network setting that very cleanly show the dependence of the optimal reuse factor on system parameters such as path loss exponent and rate.
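A hedged numerical sketch of the tradeoff being optimized: splitting W Hz into N bands raises the per-band SINR requirement to 2^{NR/W} - 1, while the density each band supports is assumed to scale as beta^{-2/alpha} (a scaling consistent with the transmission-capacity analysis); sweeping N then exposes an interior optimum. The scaling constant and parameter values are assumptions.

```python
# Hedged sketch: sweep the number of sub-bands N and compare the total supportable
# density under an assumed beta^(-2/alpha) per-band scaling (constants are assumptions).
import math

def density_vs_N(W=10e6, R=2e6, alpha=4.0, N_max=20):
    rows = []
    for N in range(1, N_max + 1):
        beta = 2 ** (N * R / W) - 1           # SINR needed for R bps over W/N Hz
        per_band = beta ** (-2.0 / alpha)     # assumed per-band density scaling
        rows.append((N, beta, N * per_band))  # N bands operate in parallel
    return rows

for N, beta, total in density_vs_N():
    print(f"N={N:2d}  beta={beta:8.3f}  relative density={total:6.3f}")
```

The interior maximum appears because the required SINR grows exponentially in N while the per-band density penalty is only polynomial in beta, which is the same tension the analysis above resolves in closed form.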
{ "cite_N": [ "@cite_1" ], "mid": [ "2145417574" ], "abstract": [ "From the Publisher: The indispensable guide to wireless communications—now fully revised and updated! Wireless Communications: Principles and Practice, Second Edition is the definitive modern text for wireless communications technology and system design. Building on his classic first edition, Theodore S. Rappaport covers the fundamental issues impacting all wireless networks and reviews virtually every important new wireless standard and technological development, offering especially comprehensive coverage of the 3G systems and wireless local area networks (WLANs) that will transform communications in the coming years. Rappaport illustrates each key concept with practical examples, thoroughly explained and solved step by step. Coverage includes: An overview of key wireless technologies: voice, data, cordless, paging, fixed and mobile broadband wireless systems, and beyond Wireless system design fundamentals: channel assignment, handoffs, trunking efficiency, interference, frequency reuse, capacity planning, large-scale fading, and more Path loss, small-scale fading, multipath, reflection, diffraction, scattering, shadowing, spatial-temporal channel modeling, and microcell indoor propagation Modulation, equalization, diversity, channel coding, and speech coding New wireless LAN technologies: IEEE 802.11a b, HIPERLAN, BRAN, and other alternatives New 3G air interface standards, including W-CDMA, cdma2000, GPRS, UMTS, and EDGE Bluetooth wearable computers, fixed wireless and Local Multipoint Distribution Service (LMDS), and other advanced technologies Updated glossary of abbreviations and acronyms, and a thorolist of references Dozens of new examples and end-of-chapter problems Whether you're a communications network professional, manager, researcher, or student, Wireless Communications: Principles and Practice, Second Edition gives you an in-depth understanding of the state of the art in wireless technology—today's and tomorrow's." ] }
cs0702078
2953373266
We present a local algorithm for finding dense subgraphs of bipartite graphs, according to the definition of density proposed by Kannan and Vinay. Our algorithm takes as input a bipartite graph with a specified starting vertex, and attempts to find a dense subgraph near that vertex. We prove that for any subgraph S with k vertices and density theta, there are a significant number of starting vertices within S for which our algorithm produces a subgraph S' with density theta/O(log n) on at most O(D k^2) vertices, where D is the maximum degree. The running time of the algorithm is O(D k^2), independent of the number of vertices in the graph.
The closely related densest @math -subgraph problem is to identify the subgraph with the largest number of edges among all subgraphs of exactly @math vertices. This problem is considerably more difficult, and there is a large gap between the best approximation algorithms and hardness results known for the problem (see @cite_2 @cite_5 ).
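To make the objective concrete (and only for toy inputs; this is brute force, not one of the approximation algorithms discussed), the following sketch enumerates k-subsets and counts induced edges; the graph and k are arbitrary assumptions.

```python
# Hedged sketch: the densest-k-subgraph objective by brute force on a toy graph
# (only to make the objective concrete; useless beyond very small inputs).
from itertools import combinations

edges = {(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5), (2, 4)}
vertices = {v for e in edges for v in e}

def induced_edges(S):
    return sum(1 for u, v in edges if u in S and v in S)

k = 3
best = max(combinations(sorted(vertices), k), key=induced_edges)
print(best, induced_edges(best))
```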
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2036836182", "2010787744" ], "abstract": [ "This paper considers the problem of computing the dense k -vertex subgraph of a given graph, namely, the subgraph with the most edges. An approximation algorithm is developed for the problem, with approximation ratio O(n δ ) , for some δ < 1 3 .", "Given an n-vertex graph G and a parameter k, we are to find a k-vertex subgraph with the maximum number of edges. This problem is NP-hard. We show that the problem remains NP-hard even when the maximum degree in G is three. When G contains a k-clique, we give an algorithm that for any e sub sub<) e). We study the applicability of semidefinite programming for approximating the dense k-subgraph problem. Our main result in this respect is negative, showing that for k @ n1 3, semidefinite programs fail to distinguish between graphs that contain k-cliques and graphs in which the densest k-vertex subgraph has average degree below logn." ] }
cs0702113
2950657197
We describe a new sampling-based method to determine cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph. In the model of distributed computing in a graph G=(V, E) with O(log V)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain simple O(Diam)-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal --- i.e. a Omega(Diam)-time lower bound holds on every graph. We obtain a O(Diam+Delta log V)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM our approach yields a simple algorithm with optimal time complexity O(log V) for finding cut pairs and 3-edge-connected components.
Randomized algorithms appear in other literature related to the cut and cycle spaces. For example, @cite_7 computes the genus of an embedded graph @math while "observing" part of it. They use random perturbation and balancing steps to compute a on @math and the dual graph of @math . Their computational model is quite different from the one here; e.g., they allow a face to modify the values of all its incident edges in a single time step.
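A hedged sketch of the cycle-space-sampling idea from the present paper's abstract (not of the cited genus-computation work): each non-tree edge receives a random bit, each tree edge is labelled with the XOR of the bits of the non-tree edges whose fundamental cycle crosses it, and an edge whose label is 0 in every independent sample is declared a cut edge. The toy graph and number of samples are assumptions.

```python
# Hedged sketch of cut-edge (bridge) detection by sampling the cycle space.
# Monte Carlo: a non-bridge gets label 1 with probability 1/2 in each sample.
import random
from collections import defaultdict

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# build a DFS spanning tree rooted at 0 (parents appear in `order` before children)
parent, order, stack = {0: None}, [], [0]
while stack:
    u = stack.pop()
    order.append(u)
    for w in adj[u]:
        if w not in parent:
            parent[w] = u
            stack.append(w)
tree = {(min(u, v), max(u, v)) for v, u in parent.items() if u is not None}
nontree = [e for e in edges if (min(e), max(e)) not in tree]

def sample_labels():
    bits = {e: random.getrandbits(1) for e in nontree}
    pot = defaultdict(int)                 # per-vertex XOR of incident non-tree bits
    for (u, v), b in bits.items():
        pot[u] ^= b
        pot[v] ^= b
    sub = dict(pot)                        # fold children into parents: subtree XORs
    for v in reversed(order):
        if parent[v] is not None:
            sub[parent[v]] = sub.get(parent[v], 0) ^ sub.get(v, 0)
    labels = {(min(u, v), max(u, v)): bits[(u, v)] for u, v in nontree}
    for v, p in parent.items():
        if p is not None:
            labels[(min(p, v), max(p, v))] = sub.get(v, 0)   # tree-edge label
    return labels

never_one = {(min(u, v), max(u, v)) for u, v in edges}
for _ in range(32):                        # ~32 samples: failure probability about m * 2**-32
    for e, bit in sample_labels().items():
        if bit:
            never_one.discard(e)
print("cut edges:", never_one)             # expect {(2, 3)} for this toy graph
```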
{ "cite_N": [ "@cite_7" ], "mid": [ "2001374475" ], "abstract": [ "Harmonic and analytic functions have natural discrete analogues. Harmonic functions can be defined on every graph, while analytic functions (or, more precisely, holomorphic forms) can be defined on graphs embedded in orientable surfaces. Many important properties of the \"true\" harmonic and analytic functions can be carried over to the discrete setting. We prove that a nonzero analytic function can vanish only on a very small connected piece. As an application, we describe a simple local random process on embedded graphs, which have the property that observing them in a small neighborhood of a node through a polynomial time, we can infer the genus of the surface." ] }
cs0701037
2951856549
DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes/semaphores, TCP/IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I/O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package.
DejaVu @cite_2 (whose development overlapped that of DMTCP) also provides transparent user-level checkpointing of distributed processes based on sockets. However, DejaVu appears to be much slower than DMTCP. For example, in the Chombo benchmark, Ruscio et al. report executing ten checkpoints per hour with 45 checkpoints in 2 seconds, with essentially zero overhead between checkpoints. Nevertheless, DejaVu is also able to checkpoint InfiniBand connections by using a customized version of MVAPICH. DejaVu takes a more invasive approach than DMTCP, by logging all communication and by using page protection to detect modification of memory pages between checkpoints. This accounts for additional overhead during normal program execution that is not present in DMTCP. Since DejaVu was not publicly available at the time of this writing, a direct timing comparison on a common benchmark was not possible.
{ "cite_N": [ "@cite_2" ], "mid": [ "2131053137" ], "abstract": [ "This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing." ] }
cs0701037
2951856549
The remaining work on distributed transparent checkpointing can be divided into two categories. User-level MPI libraries for checkpointing @cite_10 @cite_6 @cite_5 @cite_30 @cite_15 @cite_0 @cite_9 @cite_27 @cite_12 work for distributed processes, but only if they communicate exclusively through MPI (Message Passing Interface); they are typically restricted to a particular dialect of MPI. Kernel-level (system-level) checkpointing @cite_7 @cite_19 @cite_34 @cite_25 @cite_20 @cite_4 @cite_8 requires modification of the kernel and imposes requirements on matching the package version to the kernel version.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_20", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_34", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2155204206", "2162351670", "2116115793", "2118926411", "86720035", "", "2155662278", "2105039796", "2045879521", "21025594", "2069891156", "2171453084", "2115367411", "2095487435", "60276110", "2114035455" ], "abstract": [ "As high performance clusters continue to grow in size, the mean time between failures shrinks. Thus, the issues of fault tolerance and reliability are becoming one of the challenging factors for application scalability. The traditional disk-based method of dealing with faults is to checkpoint the state of the entire application periodically to reliable storage and restart from the recent checkpoint. The recovery of the application from faults involves (often manually) restarting applications on all processors and having it read the data from disks on all processors. The restart can therefore take minutes after it has been initiated. Such a strategy requires that the failed processor can be replaced so that the number of processors at checkpoint-time and recovery-time are the same. We present FTC-Charms ++, a fault-tolerant runtime based on a scheme for fast and scalable in-memory checkpoint and restart. At restart, when there is no extra processor, the program can continue to run on the remaining processors while minimizing the performance penalty due to losing processors. The method is useful for applications whose memory footprint is small at the checkpoint state, while a variation of this scheme - in-disk checkpoint restart can be applied to applications with large memory footprint. The scheme does not require any individual component to be fault-free. We have implemented this scheme for Charms++ and AMPI (an adaptive version of MPl). This work describes the scheme and shows performance data on a cluster using 128 processors.", "We develop an availability solution, called SafetyNet, that uses a unified, lightweight checkpoint recovery mechanism to support multiple long-latency fault detection schemes. At an abstract level, SafetyNet logically maintains multiple, globally consistent checkpoints of the state of a shared memory multiprocessor (i.e., processors, memory, and coherence permissions), and it recovers to a pre-fault checkpoint of the system and re-executes if a fault is detected. SafetyNet efficiently coordinates checkpoints across the system in logical time and uses \"logically atomic\" coherence transactions to free checkpoints of transient coherence state. SafetyNet minimizes performance overhead by pipelining checkpoint validation with subsequent parallel execution.We illustrate SafetyNet avoiding system crashes due to either dropped coherence messages or the loss of an interconnection network switch (and its buffered messages). Using full-system simulation of a 16-way multiprocessor running commercial workloads, we find that SafetyNet (a) adds statistically insignificant runtime overhead in the common-case of fault-free execution, and (b) avoids a crash when tolerated faults occur.", "This article describes the motivation, design and implementation of Berkeley Lab Checkpoint Restart (BLCR), a system-level checkpoint restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. 
Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.", "Paper presents design, implementation, features and applications of kernel based process checkpointing system CHPOX (checkpointing for Linux). Comparison of CHPOX to other Linux checkpointing systems is given. Conclusions about CHPOX advantages, shortcomings and future work directions are made.", "", "", "To be able to fully exploit ever larger computing platforms, modern HPC applications and system software must be able to tolerate inevitable faults. Historically, MPI implementations that incorporated fault tolerance capabilities have been limited by lack of modularity, scalability and usability. This paper presents the design and implementation of an infrastructure to support checkpoint restart fault tolerance in the Open MPI project. We identify the general capabilities required for distributed checkpoint restart and realize these capabilities as extensible frameworks within Open MPI's modular component architecture. Our design features an abstract interface for providing and accessing fault tolerance services without sacrificing performance, robustness, or flexibility. Although our implementation includes support for some initial checkpoint restart mechanisms, the framework is meant to be extensible and to encourage experimentation of alternative techniques within a production quality MPI implementation.", "Global Computing platforms, large scale clusters and future TeraGRID systems gather thousands of nodes for computing parallel scientific applications. At this scale, node failures or disconnections are frequent events. This Volatility reduces the MTBF of the whole system in the range of hours or minutes. We present MPICH-V, an automatic Volatility tolerant MPI environment based on uncoordinated checkpoint roll-back and distributed message logging. MPICH-V architecture relies on Channel Memories, Checkpoint servers and theoretically proven protocols to execute existing or new, SPMD and Master-Worker MPI applications on volatile nodes. To evaluate its capabilities, we run MPICH-V within a framework for which the number of nodes, Channels Memories and Checkpoint Servers can be completely configured as well as the node Volatility. We present a detailed performance evaluation of every component of MPICH-V and its global performance for non-trivial parallel applications. 
Experimental results demonstrate good scalability and high tolerance to node volatility.", "As high performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. To address these issues, we present the design and implementation of a system for providing coordinated checkpointing and rollback recovery for MPI-based parallel applications. Our approach integrates the Berkeley Lab BLCR kernel-level process checkpoint system with the LAM implementation of MPI through a defined checkpoint restart interface. Checkpointing is transparent to the application, allowing the system to be used for cluster maintenance and scheduling reasons as well as for fault tolerance. Experimental results show negligible communication performance impact due to the incorporation of the checkpoint support capabilities into LAM MPI.", "The Los Alamos Message Passing Interface (LA-MPI) is an end-to-end network-failure-tolerant message-passing system designed for terascale clusters. LAMPI is a standard-compliant implementation of MPI designed to tolerate network-related failures including I O bus errors, network card errors, and wire-transmission errors. This paper details the distinguishing features of LA-MPI, including support for concurrent use of multiple types of network interface, and reliable message transmission utilizing multiple network paths and routes between a given source and destination. In addition, performance measurements on production-grade platforms are presented.", "As high-performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. We integrated one user-level checkpointing and rollback recovery (CRR) library to LAM MPI, a high performance implementation of the Message Passing Interface (MPI), to improve its availability. Compared with the current CRR implementation of LAM MPI, our work supports file checkpointing and own higher portability, which can run on more platforms including IA32 and IA64 Linux. In addition, the test shows that less than 15 performance overhead is introduced by the CRR mechanism of our implementation.", "Checkpointing of parallel applications can be used as the core technology to provide process migration. Both checkpointing and migration, are an important issue for parallel applications on networks of workstations. The CoCheck environment which we present in this paper introduces a new approach to provide checkpointing and migration for parallel applications. CoCheck sits on top of the message passing library and achieves consistency at a level above the message passing system. It uses an existing single process checkpointer which is available for a wide range of systems. Hence, CoCheck can be easily adapted to both, different message passing systems and new machines.", "We present a new distributed checkpoint-restart mechanism, Cruz, that works without requiring application, library, or base kernel modifications. This mechanism provides comprehensive support for checkpointing and restoring application state, both at user level and within the OS. Our implementation builds on Zap, a process migration mechanism, implemented as a Linux kernel module, which operates by interposing a thin layer between applications and the OS. 
In particular, we enable support for networked applications by adding migratable IP and MAC addresses, and checkpoint-restart of socket buffer state, socket options, and TCP state. We leverage this capability to devise a novel method for coordinated checkpoint-restart that is simpler than prior approaches. For instance, it eliminates the need to flush communication channels by exploiting the packet re-transmission behavior of TCP and existing OS support for packet filtering. Our experiments show that the overhead of coordinating checkpoint-restart is negligible, demonstrating the scalability of this approach.", "A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault tolerant programming environments should be used to guarantee the safe execution of critical applications. Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and nonblocking. However they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks.", "", "The running times of many computational science applications, such as protein-folding using ab initio methods, are much longer than the mean-time-to-failure of high-performance computing platforms. To run to completion, therefore, these applications must tolerate hardware failures.In this paper, we focus on the stopping failure model in which a faulty process hangs and stops responding to the rest of the system. We argue that tolerating such faults is best done by an approach called application-level coordinated non-blocking checkpointing, and that existing fault-tolerance protocols in the literature are not suitable for implementing this approach.We then present a suitable protocol, which is implemented by a co-ordination layer that sits between the application program and the MPI library. We show how this protocol can be used with a precompiler that instruments C MPI programs to save application and MPI library state. An advantage of our approach is that it is independent of the MPI implementation. We present experimental results that argue that the overhead of using our system can be small." ] }
cs0701037
2951856549
A crossover between these two categories is the kernel-level checkpointer BLCR @cite_7 @cite_19 . BLCR is particularly notable because of its widespread usage. BLCR itself can only checkpoint processes on a single machine. However, some MPI libraries (including some versions of OpenMPI, LAM MPI, MVAPICH2, and MPICH-V) are able to integrate with BLCR to provide distributed checkpointing.
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "2045879521", "2116115793" ], "abstract": [ "As high performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. To address these issues, we present the design and implementation of a system for providing coordinated checkpointing and rollback recovery for MPI-based parallel applications. Our approach integrates the Berkeley Lab BLCR kernel-level process checkpoint system with the LAM implementation of MPI through a defined checkpoint restart interface. Checkpointing is transparent to the application, allowing the system to be used for cluster maintenance and scheduling reasons as well as for fault tolerance. Experimental results show negligible communication performance impact due to the incorporation of the checkpoint support capabilities into LAM MPI.", "This article describes the motivation, design and implementation of Berkeley Lab Checkpoint Restart (BLCR), a system-level checkpoint restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd." ] }
cs0701037
2951856549
DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package.
Much MPI-specific work has been based on coordinated checkpointing and the use of hooks into communication by the MPI library @cite_10 @cite_6 . In contrast, our goal is to support more general distributed scientific software.
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2095487435", "2155662278" ], "abstract": [ "A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault tolerant programming environments should be used to guarantee the safe execution of critical applications. Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and nonblocking. However they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks.", "To be able to fully exploit ever larger computing platforms, modern HPC applications and system software must be able to tolerate inevitable faults. Historically, MPI implementations that incorporated fault tolerance capabilities have been limited by lack of modularity, scalability and usability. This paper presents the design and implementation of an infrastructure to support checkpoint restart fault tolerance in the Open MPI project. We identify the general capabilities required for distributed checkpoint restart and realize these capabilities as extensible frameworks within Open MPI's modular component architecture. Our design features an abstract interface for providing and accessing fault tolerance services without sacrificing performance, robustness, or flexibility. Although our implementation includes support for some initial checkpoint restart mechanisms, the framework is meant to be extensible and to encourage experimentation of alternative techniques within a production quality MPI implementation." ] }
cs0701037
2951856549
DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package.
In addition to distributed checkpointing, many packages exist which perform single-process checkpointing @cite_11 @cite_16 @cite_35 @cite_3 @cite_14 @cite_28 @cite_26 @cite_13 @cite_22 @cite_17 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_22", "@cite_28", "@cite_17", "@cite_3", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "", "2010439775", "1874306839", "2165022815", "2048894106", "1510894298", "2138165844", "1520339130", "1579345337", "1537929875" ], "abstract": [ "", "We have developed and implemented a checkpointing and restart algorithm for parallel programs running on commercial uniprocessors and shared-memory multiprocessors. The algorithm runs concurrently with the target program, interrupts the target program for small, fixed amounts of time and is transparent to the checkpointed program and its compiler. The algorithm achieves its efficiency through a novel use of address translation hardware that allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed.", "The goal of Winckp is to transparently checkpoint and recover applications on Windows NT. The definition of transparency is no modifications to applications at all, period. There is no need to get source code, or object code. It does not involve compilation, linking or generation of a different executable. We employ window message logging replaying to recreate states that are otherwise difficult to recover by checkpointing alone. In the paper, we describe the design and implementation of Winckp, and present the challenges and limitations. The software is available for download from http: www.bell-labs.com projects swift.", "Given the scale of massively parallel systems, occurrence of faults is no longer an exception but a regular event. Periodic checkpointing is becoming increasingly important in these systems. However, huge memory footprints of parallel applications place severe limitations on scalability of normal checkpointing techniques. Incremental checkpointing is a well researched technique that addresses scalability concerns, but most of the implementations require paging support from hardware and the underlying operating system, which may not be always available. In this paper, we propose a software based adaptive incremental checkpoint technique which uses a secure hash function to uniquely identify changed blocks in memory. Our algorithm is the first self-optimizing algorithm that dynamically computes the optimal block boundaries, based on the history of changed blocks. This provides better opportunities for minimizing checkpoint file size. Since the hash is computed in software, we do not need any system support for this. We have implemented and tested this mechanism on the BlueGene L system. Our results on several well-known benchmarks are encouraging, both in terms of reduction in average checkpoint file size and adaptivity towards application's memory access patterns.", "Presents the results of an implementation of several algorithms for checkpointing and restarting parallel programs on shared-memory multiprocessors. The algorithms are compared according to the metrics of overall checkpointing time, overhead imposed by the checkpointer on the target program, and amount of time during which the checkpointer interrupts the target program. The best algorithm measured achieves its efficiency through a variation of copy-on-write, which allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed. >", "", "This paper describes the design and implementation of a system that uses virtual machine technology [1] to provide fast, transparent application migration. 
This is the first system that can migrate unmodified applications on unmodified mainstream Intel x86-based operating system, including Microsoft Windows, Linux, Novell NetWare and others. Neither the application nor any clients communicating with the application can tell that the application has been migrated. Experimental measurements show that for a variety of workloads, application downtime caused by migration is less than a second.", "Multiple threads running in a single, shared address space is a simple model for writing parallel programs for symmetric multiprocessor (SMP) machines and for overlapping I O and computation in programs run on either SMP or single processor machines. Often a long running program’s user would like the program to save its state periodically in a checkpoint from which it can recover in case of a failure. This paper introduces the first system to provide checkpointing support for multithreaded programs that use LinuxThreads, the POSIX based threads library for Linux. The checkpointing library is simple to use, automatically takes checkpoint, is flexible, and efficient. Virtually all of the overhead of the checkpointing system comes from saving the checkpoint to disk. The checkpointing library added no measurable overhead to tested application programs when they took no checkpoints. Checkpoint file size is approximately the same size as the checkpointed process’s address space. On the current implementation WATER-SPATIAL from the SPLASH2 benchmark suite saved a 2.8 MB checkpoint in about 0.18 seconds for local disk or about 21.55 seconds for an NFS mounted disk. The overhead of saving state to disk can be minimized through various techniques including varying the checkpoint interval and excluding regions of the address space from checkpoints.", "Clusters of industry-standard multiprocessors are emerging as a competitive alternative for large-scale parallel computing. However, these systems have several disadvantages over large-scale multiprocessors, including complex thread scheduling and increased susceptibility to failure. This paper describes the design and implementation of two user-level mechanisms in the Brazos parallel programming environment that address these issues on clusters of multiprocessors running Windows NT: thread migration and checkpointing. These mechanisms offer several benefits: (1) The ability to tolerate the failure of multiple computing nodes with minimal runtime overhead and short recovery time. (2) The ability to add and remove computing nodes while applications continue to run, simplifying scheduled maintenance operations and facilitating load balancing. (3) The ability to tolerate power failures by performing a checkpoint before shutdown or by migrating computation threads to other stable nodes. Brazos is a distributed system that supports both shared memory and message passing parallel programming paradigms on networks of Intel x86-based multiprocessors running Windows NT. The performance of thread migration in Brazos is an order of magnitude faster than previously reported Windows NT implementations, and is competitive with implementations on other operating systems. The checkpoint facility exhibits low runtime overhead and fast recovery time.", "Checkpointing is a simple technique for rollback recovery: the state of an executing program is periodically saved to a disk file from which it can be recovered after a failure. 
While recent research has developed a collection of powerful techniques for minimizing the overhead of writing checkpoint files, checkpointing remains unavailable to most application developers. In this paper we describe libckpt, a portable checkpointing tool for Unix that implements all applicable performance optimizations which are reported in the literature. While libckpt can be used in a mode which is almost totally transparent to the programmer, it also supports the incorporation of user directives into the creation of checkpoints. This user-directed checkpointing is an innovation which is unique to our work." ] }
cs0701001
2950643428
Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms.
The concept of STDMA for multihop wireless ad hoc networks was formalized in @cite_7 . Centralized algorithms @cite_12 @cite_9 as well as distributed algorithms @cite_11 @cite_3 have been proposed for generating reuse schedules. The problem of determining an optimal minimum-length STDMA schedule for a general multihop ad hoc network is NP-complete for both link and broadcast scheduling @cite_14 . In fact, this is closely related to the problem of determining the minimum number of colors to color all the edges (or vertices) of a graph under certain adjacency constraints. However, most wireless ad hoc networks can be modeled by planar or close-to-planar graphs and thus near-optimal edge coloring algorithms can be developed for these restricted classes of graphs.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_3", "@cite_12", "@cite_11" ], "mid": [ "2103182676", "2063078918", "2115871876", "1829981230", "2153086189", "2161157451" ], "abstract": [ "Algorithms for transmission scheduling in multihop broadcast radio networks are presented. Both link scheduling and broadcast scheduling are considered. In each instance, scheduling algorithms are given that improve upon existing algorithms both theoretically and experimentally. It is shown that tree networks can be scheduled optimally and that arbitrary networks can be scheduled so that the schedule is bounded by a length that is proportional to a function of the network thickness times the optimum. Previous algorithms could guarantee only that the schedules were bounded by a length no worse than the maximum node degree times optimum. Since the thickness is typically several orders of magnitude less than the maximum node degree, the algorithms presented represent a considerable theoretical improvement. Experimentally, a realistic model of a radio network is given and the performance of the new algorithms is studied. These results show that, for both types of scheduling, the new algorithms (experimentally) perform consistently better than earlier methods. >", "In this paper we define a broadcast channel access protocol called spatial TDMA, which is designed specifically to operate in a multihop packet radio environment where the location of the nodes of the network is assumed to be fixed. The defined protocol assigns transmission rights to nodes in the network in a local TDMA fashion and is collisionfree. Methods for determining slot allocations are developed, and an approximate solution is given for determining the assignment of capacities for the links of the network that minimizes the average delay of messages in the system.", "A parallel algorithm based on an artificial neural network model for broadcast scheduling problems in packet radio networks is presented. The algorithm requires n*m processing elements for an n-mode-m-slot radio network problem. The algorithm is verified by simulating 13 different networks. >", "This paper proposes a solution to providing a collision free channel allocation in a multihop mobile radio network. An efficient solution to this problem provides spatial reuse of the bandwidth whenever possible. A robust solution maintains the collision free property of the allocation under any combination of topological changes. The node organization algorithm presented in this paper provides a completely distributed, maximally localized execution of collision free channel allocation. It allows for parallel channel allocation in stationary and mobile networks with provable spatial reuse properties. A simpler version of the algorithm provides also a highly localized distributed coloring algorithm of dynamic graphs.", "Two polynomial-time algorithms are given for scheduling conversations in a spread spectrum radio network. The constraint on conversations is that each station can converse with only one other station at a time. The first algorithm is strongly polynomial and finds a schedule of minimum length that allows each pair of neighboring stations to converse directly for a prescribed length of time. The second algorithm is designed for the situation in which messages must be relayed multiple hops. 
The algorithm produces, in polynomial time, a routing vector and compatible link schedule that jointly meet a prespecified end-to-end demand, so that the schedule has the smallest possible length. >", "We present a distributed algorithm for obtaining a fair time slot allocation for link activation in a multihop radio network. We introduce the concept of maximal fairness in which the termination of a fair allocation algorithm is related to maximal reuse of the channel under a given fairness metric. The fairness metric can be freely interpreted as the expected link traffic load demands, link priorities, etc. Since respective demands for time slot allocation will not necessarily be equal, we define fairness in terms of the closeness of allocation to respective link demands while preserving the collision free property. The algorithm can be used in conjunction with existing link activation algorithms to provide a fairer and fuller utilization of the channel." ] }
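To make the graph-coloring view of STDMA link scheduling above concrete, here is a small Python sketch that greedily assigns time slots to links so that no two links sharing a node get the same slot. It is a generic greedy coloring of the link conflict graph under simplifying assumptions (primary conflicts only, no interference or SINR check), not the algorithm of any of the cited papers; the example topology is made up.

```python
# Greedy STDMA link scheduling sketch: two links conflict if they share a node
# (a node cannot take part in two transmissions in the same slot).  Each link
# gets the smallest slot not used by any conflicting link -- i.e. a greedy
# coloring of the link conflict graph.  Interference beyond shared nodes is
# deliberately ignored here, which is exactly the weakness that SINR-aware
# scheduling addresses.
def greedy_link_schedule(links):
    """links: list of (tx, rx) node pairs; returns {link: slot}."""
    schedule = {}
    for link in links:
        tx, rx = link
        busy = {slot for other, slot in schedule.items()
                if tx in other or rx in other}     # slots of conflicting links
        slot = 0
        while slot in busy:
            slot += 1
        schedule[link] = slot
    return schedule

# Hypothetical 5-node example: a small chain plus one cross link.
links = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
print(greedy_link_schedule(links))
# {(0, 1): 0, (1, 2): 1, (2, 3): 0, (3, 4): 1, (0, 4): 2}
```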
cs0701001
2950643428
Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms.
A significant work in STDMA link scheduling is reported in @cite_14 , in which the authors show that tree networks can be scheduled optimally, oriented graphs can be scheduled near-optimally, and arbitrary networks can be scheduled such that the schedule is bounded by a length proportional to the graph thickness (the thickness of a graph is the minimum number of planar graphs into which the given graph can be partitioned) times the optimum number of colors.
{ "cite_N": [ "@cite_14" ], "mid": [ "2103182676" ], "abstract": [ "Algorithms for transmission scheduling in multihop broadcast radio networks are presented. Both link scheduling and broadcast scheduling are considered. In each instance, scheduling algorithms are given that improve upon existing algorithms both theoretically and experimentally. It is shown that tree networks can be scheduled optimally and that arbitrary networks can be scheduled so that the schedule is bounded by a length that is proportional to a function of the network thickness times the optimum. Previous algorithms could guarantee only that the schedules were bounded by a length no worse than the maximum node degree times optimum. Since the thickness is typically several orders of magnitude less than the maximum node degree, the algorithms presented represent a considerable theoretical improvement. Experimentally, a realistic model of a radio network is given and the performance of the new algorithms is studied. These results show that, for both types of scheduling, the new algorithms (experimentally) perform consistently better than earlier methods. >" ] }
cs0701001
2950643428
Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms.
A probabilistic analysis of the throughput performance of graph-based scheduling algorithms under the physical interference model is derived in @cite_0 . The authors determine the optimal number of simultaneous transmissions by maximizing a lower bound on the physical throughput and subsequently propose a truncated graph-based scheduling algorithm that provides probabilistic guarantees for network throughput.
{ "cite_N": [ "@cite_0" ], "mid": [ "2154570806" ], "abstract": [ "Many published algorithms used for scheduling transmissions in packet radio networks are based on finding maximal independent sets in an underlying graph. Such algorithms are developed under the assumptions of variations of the protocol interference model, which does not take the aggregated effect of interference into consideration. We provide a probabilistic analysis for the throughput performance of such graph based scheduling algorithms under the physical interference model. We show that in many scenarios a significant portion of transmissions scheduled based on the protocol interference model result in unacceptable signal-to-interference and noise ratio (SINR) at intended receivers. Our analytical as well as simulation results indicate that, counter intuitively, maximization of the cardinality of independent sets does not necessarily increase the throughput of a network. We introduce the truncated graph based scheduling algorithm (TGSA) that provides probabilistic guarantees for the throughput performance of the network." ] }
cs0701001
2950643428
Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms.
In @cite_20 , the authors consider wireless mesh networks with half duplex and full duplex orthogonal channels, wherein each node can transmit to at most one node and/or receive from at most @math nodes ( @math ) during any time slot. They investigate the joint problem of routing flows and scheduling link transmissions to analyze the achievability of a given rate vector between multiple source-destination pairs. The scheduling problem is solved as an edge-coloring problem on a multigraph, and the necessary conditions from the scheduling problem lead to constraints on the routing problem, which is then formulated as a linear optimization problem. Correspondingly, the authors present a greedy coloring algorithm to obtain a 2-approximate solution to the chromatic index problem and describe a polynomial time approximation algorithm to obtain an @math -optimal solution of the routing problem using the primal-dual approach. Finally, they evaluate the performance of their algorithms via simulations.
{ "cite_N": [ "@cite_20" ], "mid": [ "2106117595" ], "abstract": [ "This paper considers the problem of determining the achievable rates in multi-hop wireless mesh networks with orthogonal channels. We classify wireless networks with orthogonal channels into two types, half duplex and full duplex, and consider the problem of jointly routing the flows and scheduling transmissions to achieve a given rate vector. We develop tight necessary and sufficient conditions for the achievability of the rate vector. We develop efficient and easy to implement Fully Polynomial Time Approximation Schemes for solving the routing problem. The scheduling problem is a solved as a graph edge-coloring problem. We show that this approach guarantees that the solution obtained is within 50 of the optimal solution in the worst case (within 67 of the optimal solution in a common special case) and, in practice, is close to 90 of the optimal solution on the average. The approach that we use is quite flexible and can be extended to handle more sophisticated interference conditions, and routing with diversity requirements." ] }
cs0701082
2949509156
In this paper we introduce a class of constraint logic programs such that their termination can be proved by using affine level mappings. We show that membership to this class is decidable in polynomial time.
Recently, decidability of classes of imperative programs has been studied in @cite_19 @cite_10 @cite_1 . Tiwari considers real-valued programs with no nested loops and no branching inside a loop @cite_1 . Such programs correspond to one-binary-rule CLP( @math ). The author provides decidability results for subclasses of these programs. Our approach does not restrict nesting of loops and it allows internal branching. While in general termination of such programs is undecidable @cite_1 , we identified a subclass of programs with a decidable termination property. Termination of the following CLP( @math ) program and its imperative equivalent can be shown by our method but not by the one proposed in @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_10" ], "mid": [ "1575647584", "2136333450", "1530375435" ], "abstract": [ "We show that termination of a class of linear loop programs is decidable. Linear loop programs are discrete-time linear systems with a loop condition governing termination, that is, a while loop with linear assignments. We relate the termination of such a simple loop, on all initial values, to the eigenvectors corresponding to only the positive real eigenvalues of the matrix defining the loop assignments. This characterization of termination is reminiscent of the famous stability theorems in control theory that characterize stability in terms of eigenvalues.", "In order to verify semialgebraic programs, we automatize the Floyd Naur Hoare proof method. The main task is to automatically infer valid invariants and rank functions. First we express the program semantics in polynomial form. Then the unknown rank function and invariants are abstracted in parametric form. The implication in the Floyd Naur Hoare verification conditions is handled by abstraction into numerical constraints by Lagrangian relaxation. The remaining universal quantification is handled by semidefinite programming relaxation. Finally the parameters are computed using semidefinite programming solvers. This new approach exploits the recent progress in the numerical resolution of linear or bilinear matrix inequalities by semidefinite programming using efficient polynomial primal dual interior point methods generalizing those well-known in linear programming to convex optimization. The framework is applied to invariance and termination proof of sequential, nondeterministic, concurrent, and fair parallel imperative polynomial programs and can easily be extended to other safety and liveness properties.", "We present an automated method for proving the termination of an unnested program loop by synthesizing linear ranking functions. The method is complete. Namely, if a linear ranking function exists then it will be discovered by our method. The method relies on the fact that we can obtain the linear ranking functions of the program loop as the solutions of a system of linear inequalities that we derive from the program loop. The method is used as a subroutine in a method for proving termination and other liveness properties of more general programs via transition invariants; see [PR03]." ] }
cs0701082
2949509156
In this paper we introduce a class of constraint logic programs such that their termination can be proved by using affine level mappings. We show that membership to this class is decidable in polynomial time.
Similarly to @cite_1 , Podelski and Rybalchenko have considered programs with no nested loops and no branching inside a loop. However, they focus on integer programs and provide a polynomial-time decidability technique for a subclass of such programs. In the case of general programs, their technique can be applied to provide a sufficient condition for liveness.
{ "cite_N": [ "@cite_1" ], "mid": [ "1575647584" ], "abstract": [ "We show that termination of a class of linear loop programs is decidable. Linear loop programs are discrete-time linear systems with a loop condition governing termination, that is, a while loop with linear assignments. We relate the termination of such a simple loop, on all initial values, to the eigenvectors corresponding to only the positive real eigenvalues of the matrix defining the loop assignments. This characterization of termination is reminiscent of the famous stability theorems in control theory that characterize stability in terms of eigenvalues." ] }
cs0701094
2949115009
It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors.
Among all these solutions, we have chosen to focus on the multipoint relay protocol (MPR) described in @cite_4 for several reasons:
{ "cite_N": [ "@cite_4" ], "mid": [ "2156603709" ], "abstract": [ "We discuss the mechanism of multipoint relays (MPRs) to efficiently flood broadcast messages in mobile wireless networks. Multipoint relaying is a technique to reduce the number of redundant re-transmissions while diffusing a broadcast message in the network. We discuss the principle and the functioning of MPRs, and propose a heuristic to select these MPRs in a mobile wireless environment. We also analyze the complexity of this heuristic and prove that the computation of a multipoint relay set with minimal size is NP-complete. Finally, we present some simulation results to show the efficiency of multipoint relays." ] }
cs0701094
2949115009
It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors.
It is efficient under the unit disk graph model. It is used in the well-known standardized routing protocol OLSR @cite_10 . It can be used for other purposes (e.g., computing connected dominating sets @cite_6 ).
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2117526746", "1543191305" ], "abstract": [ "In this paper we propose and discuss an optimized link state routing protocol, named OLSR, for mobile wireless networks. The protocol is based on the link state algorithm and it is proactive (or table-driven) in nature. It employs periodic exchange of messages to maintain topology information of the network at each node. OLSR is an optimization over a pure link state protocol as it compacts the size of information sent in the messages, and furthermore, reduces the number of retransmissions to flood these messages in an entire network. For this purpose, the protocol uses the multipoint relaying technique to efficiently and economically flood its control messages. It provides optimal routes in terms of number of hops, which are immediately available when needed. The proposed protocol is best suitable for large and dense ad hoc networks.", "Multipoint relays offer an optimized way of flooding packets in a radio network. However, this technique requires the last hop knowledge: to decide wether or not a flooding packet is retransmitted, a node needs to know from which node the packet was received. When considering broadcasting at IP level, this information may be difficult to obtain. We thus propose a scheme for computing an optimized connected dominating set from multipoint relays. This set allows to efficiently broadcast packets without the last hop information with performances close to multipoint relay flooding." ] }
cs0701094
2949115009
It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors.
Obviously, the tricky part of this protocol lies in the selection of the set of relays @math within the @math -hop neighbors of a node @math : the smaller this set is, the smaller the number of retransmissions and the more efficient the broadcast. Unfortunately, finding a smallest such set is an NP-complete problem, so a greedy heuristic is proposed instead, which can be found in @cite_12 . Considering a node @math , it can be described as follows: first, select as relays the 1-hop neighbors that are the only ones able to reach some 2-hop neighbor; then, while some 2-hop neighbors remain uncovered, repeatedly select the 1-hop neighbor that covers the largest number of still-uncovered 2-hop neighbors.
{ "cite_N": [ "@cite_12" ], "mid": [ "2163322487" ], "abstract": [ "It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown." ] }
cs0701094
2949115009
It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors.
Being the broadcast protocol used in OLSR, MPR has been the subject of various studies since its publication. For example, in @cite_3 , the authors analyze how relays are selected and conclude that almost $75
{ "cite_N": [ "@cite_3" ], "mid": [ "2194282251" ], "abstract": [ "OLSR is a recent routing protocol for multi-hop wireless ad-hoc networks standardized by the IETF. It uses the concept of Multi-Point Relays (MPR) to minimize the overhead of routing messages and limit the harmful effects of broadcast in such networks. In this report, we are interested in the performance evaluation of Multi-Point Relays selection. We analyze the mean number of selected MP= R in the network and their spatial distribution." ] }
cs0701190
1614193546
The distribution of files using decentralized, peer-to-peer (P2P) systems, has significant advantages over centralized approaches. It is however more difficult to settle on the best approach for file sharing. Most file sharing systems are based on query string searches, leading to a relatively simple but inefficient broadcast or to an efficient but relatively complicated index in a structured environment. In this paper we use a browsable peer-to-peer file index consisting of files which serve as directory nodes, interconnecting to form a directory network. We implemented the system based on BitTorrent and Kademlia. The directory network inherits all of the advantages of decentralization and provides browsable, efficient searching. To avoid conflict between users in the P2P system while also imposing no additional restrictions, we allow multiple versions of each directory node to simultaneously exist -- using popularity as the basis for default browsing behavior. Users can freely add files and directory nodes to the network. We show, using a simulation of user behavior and file quality, that the popularity based system consistently leads users to a high quality directory network; above the average quality of user updates. Q
@cite_8 is a P2P file sharing system that provides a global namespace and automatic availability management. It allows any user to modify any portion of the namespace by modifying, adding, and deleting files and directories. Wayfinder's global namespace is constructed by the system automatically merging the local namespaces of individual nodes. @cite_24 is a serverless distributed file system. Farsite logically functions as a centralized file server, but its physical realization is dispersed among a network of untrusted workstations. @cite_4 is a global persistent data store designed to scale to billions of users. It provides a consistent, highly available, and durable storage utility atop an infrastructure comprised of untrusted servers. @cite_27 is a global distributed Internet file system that also focuses on scalability. @cite_6 is a distributed file system that focuses on allowing multiple concurrent writers to files.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_24", "@cite_27" ], "mid": [ "2104210894", "1488732870", "", "2121133177", "2150676586" ], "abstract": [ "OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.", "Social networks offering unprecedented content sharing are rapidly developing over the Internet. Unfortunately, it is often difficult to both locate and manage content in these networks, particularly when they are implemented on current peer-to-peer technologies. In this paper, we describe Wayfinder, a peer-to-peer file system that targets the needs of medium-sized content sharing communities. Wayfinder seeks to advance the state-of-the-art by providing three synergistic abstractions: a global namespace that is uniformly accessible across connected and disconnected operation, content-based queries that can be persistently embedded into the global namespace, and automatic availability management. Interestingly, Wayfinder achieves much of its functionality through the use of a peer-to-peer indexed data storage system called PlanetP: essentially, Wayfinder constructs the global namespace, locates specific files, and performs content searches by posing appropriate queries to PlanetP. We describe this query-based design and present preliminary performance measurements of a prototype implementation.", "", "Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.", "The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. 
Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail." ] }
cs0612043
2950828832
We analyze the ability of peer to peer networks to deliver a complete file among the peers. Early on we motivate a broad generalization of network behavior organizing it into one of two successive phases. According to this view the network has two main states: first centralized - few sources (roots) hold the complete file, and next distributed - peers hold some parts (chunks) of the file such that the entire network has the whole file, but no individual has it. In the distributed state we study two scenarios, first, when the peers are patient'', i.e, do not leave the system until they obtain the complete file; second, peers are impatient'' and almost always leave the network before obtaining the complete file.
A lot of work has been devoted to the area of file sharing in P2P networks. Many experimental papers provide practical strategies and preliminary results concerning the behavior of this kind of network. In @cite_10 , for instance, the authors essentially describe properties like liveness and downloading rate by means of extended experiments and simulations under several assumptions.
{ "cite_N": [ "@cite_10" ], "mid": [ "1853723677" ], "abstract": [ "Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, lager server farms or mirroring are used, both of which are expensive. An inexpensive alternative are peer-to-peer based replication systems, where users who retrieve the file, act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected on a five months long period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to the classical server-based content distribution." ] }
cs0612072
2949346966
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
Together, our results represent a new theoretical study of stochastic versions of budget optimization problems in search-related advertising. The budget optimization problem was studied recently @cite_4 in the fixed model, when @math 's are known. On one hand, our study is more general, with the emphasis on the uncertainty in modeling @math 's and the stochastic models we have formulated. We do not know of prior work in this area that formulates and uses our stochastic models. On the other hand, our study is less general as it does not consider the interaction between keywords that occurs when a user's search query matches two or more keywords, which is studied in @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "2119914577" ], "abstract": [ "Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work onthe auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywordsworks well. More precisely, this strategy gets at least a 1-1 e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem." ] }
cs0612072
2949346966
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
Recently, @cite_9 considered an online knapsack problem with the assumption of small element sizes, and @cite_2 considered an online knapsack problem with a random order of element arrival, both motivated by bidding in advertising auctions. The difference with our work is that these authors consider the problem in the online algorithms framework, and analyze the competitive ratios of the obtained algorithms. In contrast, our algorithms make decisions offline, and we analyze the obtained approximation ratios for the expected value of the objective. Also, our algorithms base their decisions on the probability distributions of the clicks, whereas the authors of @cite_2 and @cite_9 do not assume any advance knowledge of these distributions. The two approaches are in some sense complementary: online algorithms have the disadvantage that in practice it may not be possible to make new decisions about bidding every time that a query arrives, and stochastic optimization has the disadvantage of requiring the knowledge of the probability distributions.
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2155551092", "2164792208" ], "abstract": [ "We consider the budget-constrained bidding optimization problem for sponsored search auctions, and model it as an online (multiple-choice) knapsack problem. We design both deterministic and randomized algorithms for the online (multiple-choice) knapsack problems achieving a provably optimal competitive ratio. This translates back to fully automatic bidding strategies maximizing either profit or revenue for the budget-constrained advertiser. Our bidding strategy for revenue maximization is oblivious (i.e., without knowledge) of other bidders' prices and or click-through-rates for those positions. We evaluate our bidding algorithms using both synthetic data and real bidding data gathered manually, and also discuss a sniping heuristic that strictly improves bidding performance. With sniping and parameter tuning enabled, our bidding algorithms can achieve a performance ratio above 90 against the optimum by the omniscient bidder.", "We consider situations in which a decision-maker with a fixed budget faces a sequence of options, each with a cost and a value, and must select a subset of them online so as to maximize the total value. Such situations arise in many contexts, e.g., hiring workers, scheduling jobs, and bidding in sponsored search auctions. This problem, often called the online knapsack problem, is known to be inapproximable. Therefore, we make the enabling assumption that elements arrive in a randomorder. Hence our problem can be thought of as a weighted version of the classical secretary problem, which we call the knapsack secretary problem. Using the random-order assumption, we design a constant-competitive algorithm for arbitrary weights and values, as well as a e-competitive algorithm for the special case when all weights are equal (i.e., the multiple-choice secretary problem). In contrast to previous work on online knapsack problems, we do not assume any knowledge regarding the distribution of weights and values beyond the fact that the order is random." ] }
cs0612072
2949346966
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
There has been a lot of other work on search-related auctions in the presence of budgets, but it has primarily focused on the game-theoretic aspects @cite_7 @cite_11 , strategy-proof mechanisms @cite_16 @cite_0 , and revenue maximization @cite_3 @cite_13 .
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_0", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "1975392791", "", "2564127794", "", "2015324401", "2127347613" ], "abstract": [ "We investigate the \"generalized second price\" auction (GSP), a new mechanism which is used by search engines to sell online advertising that most Internet users encounter daily. GSP is tailored to its unique environment, and neither the mechanism nor the environment have previously been studied in the mechanism design literature. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. In particular, unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP in a dynamic environment, we describe the generalized English auction that corresponds to the GSP and show that it has a unique equilibrium. This is an ex post equilibrium that results in the same payoffs to all players as the dominant strategy equilibrium of VCG.", "", "We consider the problem of online keyword advertising auctions among multiple bidders with limited budgets, and propose a bidding heuristic to optimize the utility for bidders by equalizing the return-on-investment for each bidder across all keywords. We show that natural auction mechanisms combined with this heuristic can experience chaotic cycling (as is the case with many current advertisement auction systems), and therefore propose a modified class of mechanisms with small random perturbations. This perturbation is reminiscent of the small time-dependent perturbations employed in the dynamical systems literature to convert many types of chaos into attracting motions. We show that our perturbed mechanism provably converges in the case of first-price auctions and experimentally converges in the case of second-price auctions. Moreover, we show that our bidder-optimal system does not decrease the revenue of the auctioneer in the sense that it converges to the unique market equilibrium in the case of first-price auctions. In the case of second-price auctions, we conjecture that it converges to the non-unique “supplyaware” market equilibrium. We also observe that our perturbed auction scheme is useful in a broader context: In general, it can allow bidders to “share” a particular item, leading to stable allocations and pricing for the bidders, and improved revenue for the auctioneer.", "", "We study the problem of optimally allocating online advertisement space to budget-constrained advertisers. This problem was defined and studied from the perspective of worst-case online competitive analysis by Our objective is to find an algorithm that takes advantage of the given estimates of the frequencies of keywords to compute a near optimal solution when the estimates are accurate, while at the same time maintaining a good worst-case competitive ratio in case the estimates are totally incorrect. This is motivated by real-world situations where search engines have stochastic information that provide reasonably accurate estimates of the frequency of search queries except in certain highly unpredictable yet economically valuable spikes in the search pattern. Our approach is a black-box approach: we assume we have access to an oracle that uses the given estimates to recommend an advertiser everytime a query arrives. 
We use this oracle to design an algorithm that provides two performance guarantees: the performance guarantee in the case that the oracle gives an accurate estimate, and its worst-case performance guarantee. Our algorithm can be fine tuned by adjusting a parameter α, giving a tradeoff curve between the two performance measures with the best competitive ratio for the worst-case scenario at one end of the curve and the optimal solution for the scenario where estimates are accurate at the other en. Finally, we demonstrate the applicability of our framework by applying it to two classical online problems, namely the lost cow and the ski rental problems.", "We present a truthful auction for pricing advertising slots on a web-page assuming that advertisements for different merchants must be ranked in decreasing order of their (weighted) bids. This captures both the Overture model where bidders are ranked in order of the submitted bids, and the Google model where bidders are ranked in order of the expected revenue (or utility) that their advertisement generates. Assuming separable click-through rates, we prove revenue-equivalence between our auction and the non-truthful next-price auctions currently in use." ] }
cs0612086
1645234836
We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.
The only semantics supported by Deno or VVWV is to enforce Lamport's happens-before relation @cite_11 ; all actions are assumed to be mutually non-commuting. Happens-before captures potential causality; however, an event may happen-before another even if they are not truly dependent. This paper further generalizes VVWV by considering semantic constraints.
{ "cite_N": [ "@cite_11" ], "mid": [ "1973501242" ], "abstract": [ "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become." ] }
cs0612086
1645234836
We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.
Bayou @cite_7 supports arbitrary application semantics. User-supplied code controls whether an action is committed or aborted. However, the system imposes an arbitrary total execution order. Bayou centralises the commitment decision at a single primary replica.
{ "cite_N": [ "@cite_7" ], "mid": [ "2117260615" ], "abstract": [ "Bayou is a replicated, weakly consistent storage system designed for a mobile computing environment that includes portable machines with less than ideal network connectivity. To maximize availability, users can read and write any accessible replica. Bayou’s design has focused on supporting application-specific mechanisms to detect and resolve the update conflicts that naturally arise in such a system, ensuring that replicas move towards eventual consistency, and defining a protocol by which the resolution of update conflicts stabilizes. It includes novel methods for conflict detection, called dependency checks, and per -write conflict resolution based on client-provid ed mer ge procedures. To guarantee eventual consistency, Bayou servers must be able to rollback the effects of previously executed writes and redo them according to a global serialization order . Furthermore, Bayou permits clients to observe the results of all writes received by a server , including tentative writes whose conflicts have not been ultimately resolved. This paper presents the motivation for and design of these mechanisms and describes the experiences gained with an initial implementation of the system." ] }
cs0612086
1645234836
We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.
IceCube @cite_14 introduced the idea of reifying semantics with constraints. The IceCube algorithm computes optimal proposals, minimizing the number of dead actions. As in Bayou, commitment in IceCube is centralised at a primary. Compared to this article, IceCube supports a richer constraint vocabulary, which is useful for applications but harder to reason about formally.
{ "cite_N": [ "@cite_14" ], "mid": [ "2154193287" ], "abstract": [ "We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static and dynamic reconciliation constraints between all pairs of actions, proposes schedules that satisfy the static constraints, and validates them against the dynamic constraints. Preliminary experience indicates that strong static constraints successfully contain the potential combinatorial explosion of the simulation stage. With weaker static constraints, the system still finds good solutions in a reasonable time." ] }
cs0612086
1645234836
We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.
Generalized Paxos @cite_10 and Generic Broadcast @cite_4 take commutativity relations into account and compute a partial order. They do not consider any other semantic relations. Both Generalized Paxos @cite_10 and our algorithm make progress when a majority is not reached, although through different means. Generalized Paxos starts a new election instance, whereas our algorithm waits for a plurality decision.
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "2106670435", "202775458" ], "abstract": [ "Theoretician’s Abstract Consensus has been regarded as the fundamental problem that must be solved to implement a fault-tolerant distributed system. However, only a weaker problem than traditional consensus need be solved. We generalize the consensus problem to include both traditional consensus and this weaker version. A straightforward generalization of the Paxos consensus algorithm implements general consensus. The generalizations of consensus and of the Paxos algorithm require a mathematical detour de force into a type of object called a command-structure set.", "A sub for use in an acoustic telemetry system provides a sound path of low impedance to facilitate the transmission of acoustic signals through a string of pipe positioned in a wellbore. The sub section of pipe includes a tubular receiving member positioned in the bore of a pipe section and spaced from the interior walls thereof. Ports and flow channels are also provided to permit the passage of well fluids through the pipe section when an acoustic instrument is positioned in the tubular receiving member. A lateral shoulder is formed in the interior wall of the receiving member and is arranged to matingly receive a flat portion on the instrument. The abutment of the flat portion of the instrument and the shoulder provides a positive sound path for transmission of longitudinal sound waves from the instrument to the receiver member. The receiver member in turn is connected to the pipe section by longitudinal portions essentially concentric with the axis of the pipe section so as to provide a direct low impedance sound path for the efficient direct transmission of longitudinal sound waves." ] }
cs0611102
1647784286
We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform.
A proposal for a user interface that prevents Trojan horses from tampering with application output is made in @cite_17 . Kernelizing the graphics server and delegating window manager tasks to the application level is a prototypical solution in @cite_27 . However, it is not compatible with the Windows platform used on the vast majority of existing client computers.
{ "cite_N": [ "@cite_27", "@cite_17" ], "mid": [ "2107252100", "1927589643" ], "abstract": [ "Malware such as Trojan horses and spyware remain to be persistent security threats that exploit the overly complex graphical user interfaces of today's commodity operating systems. In this paper, we present the design and implementation of Nitpicker - an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility to legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach", "If signaling channels can only be driven by a trusted path, they cannot be exploited by trojan horses in untrusted software. To this end, the SMITE secure computer system provides a general-purpose trusted path, based on a screen editor, which would act as the users' normal interface to the system. The feasibility of the approach relies on the use of a sympathetic computer architecture, which supports a fine grain of protection. The authors describe the trusted path and the user interface of the SMITE system. They discuss the formal specification of the display functions. They examine the use of SMITE for high-assurance applications. >" ] }
cs0611102
1647784286
We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform.
In the Microsoft Windows operating system, applications typically receive information about user actions via messages. Since these can be sent by malicious programs as well, they are a convenient attack vector. It is a vulnerability by design -- Windows treats all processes that run on the same desktop equally. If one needs an undisturbed interface, a separate desktop attached to the interactive window station should be assigned. That approach is pursued by @cite_5 . However, managing separate desktops can be cumbersome for software developers. So most of today's software that interacts with a local user runs in a single desktop shared by benign and malign programs.
{ "cite_N": [ "@cite_5" ], "mid": [ "24868454" ], "abstract": [ "With the advent of networks that span administrative domains, increasing mobility, and even global-area networks, we find ourselves more and more often in situations where we do not know the potential parties accessing our system. Yet, we choose to collaborate with them: For example, we frequently browse unknown Web sites, or invite unknown parties to access our servers. I call a scenario in which parties choose to collaborate that do not necessarily trust each other, or even know each other, an ad-hoc collaboration . This dissertation investigates how we can protect our sensitive resources in the presence of ad-hoc collaboration. In particular, I study three ad-hoc collaboration scenarios and propose novel access control schemes for each of them. In my first system I propose and implement an access control mechanism for distributed Java applications that can span administrative domains. It uses an access control logic to allow servers to reason about the access privileges of unknown clients. My second system presents a simple security model for the personal computer, in which the user's workstation is divided into multiple desktops. Each desktop is sealed off from the others, confining the possibly dangerous results of ad-hoc collaboration. My last system investigates ad-hoc collaboration with hand-held computers. I present a framework that allows developers to write “split applications”: Part of the application runs on a trusted, but computationally limited, small computer, and part of the application runs on an untrusted, but more powerful PC." ] }
cs0611102
1647784286
We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform.
This problem is encountered by local security applications such as electronic signature software @cite_10 , virus scanners, personal firewalls, etc. In @cite_28 a dilemma is pointed out when notifying users about security events: users are notified about the presence of a possibly malicious program that could immediately hide that very notification. Some improvements to dialog-based security are shown in @cite_8 . Application output should be defended against hiding, and actions should be delayed so that users can interfere when a program is controlled by simulated input or scripting. DirectX can be used to achieve undisturbed output instead of the co-operative Windows GDI @cite_20 , @cite_21 , @cite_7 . Modifying the web browser to convey meta-information to the user about which window can be trusted is advocated by @cite_6 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_6", "@cite_10", "@cite_20" ], "mid": [ "2118761104", "", "2101587116", "2101685556", "2132324287", "2099438928", "1593097332" ], "abstract": [ "Client computers are often a weak link in a technical network infrastructure. Increasing the security of client systems and applications against malicious software attacks increases the security of the network as a whole. Our work solves integrity and authenticity of input, confidentiality, integrity and authenticity of output. We present components to integrate a trusted path into an application to directly communicate with a user at a personal computer. This allows security sensitive parts of applications to continue operating while being attacked with malicious software in an event-driven system. Our approach uses widely employed COTS software – DirectX – and can be varied in design and implementation, hence making it more difficult to defeat with generic attack tools.", "", "Corruption or disclosure of sensitive user documents can be among the most lasting and costly effects of malicious software attacks. Many malicious programs specifically target files that are likely to contain important user data. Researchers have approached this problem by developing techniques for restricting access to resources on an application-by-application basis. These so-called \"sandbox environments,\" though effective, are cumbersome and difficult to use. In this paper, we present a prototype Windows NT 2000 tool that addresses malicious software threats to user data by extending the existing set of file-access permissions. Management and configuration options make the tool unobtrusive and easy to use. We have conducted preliminary experiments to assess the usability of the tool and to evaluate the effects of improvements we have made. Our work has produced an intuitive data-centric method of protecting valuable documents that provides an additional layer of defense beyond existing antivirus solutions.", "Technology aimed at making life easier for game developers is an issue of controversy among security experts. Objections arise out of concerns of stability of a game-friendly platform. However, this kind of programming interfaces can be used to promote security as well. We use Microsoft's DirectX platform to access input and output devices directly. Thereby we enable applications to distinguish between user actions and simulated behaviour by malicious code. With modest effort for a developer we are able to ensure authenticity and integrity of mouse and keyboard input and the display's integrity.", "Computer security protocols usually terminate in a computer; however, the human-based services which they support usually terminate in a human. The gap between the human and the computer creates potential for security problems. We examine this gap, as it is manifested in secure Web servers. demonstrated the potential, in 1996, for malicious servers to impersonate honest servers. In this paper, we show how malicious servers can still do this---and can also forge the existence of an SSL session and the contents of the alleged server certificate. We then consider how to systematically defend against Web spoofing, by creating a trusted path from the browser to the human user. 
We present potential designs, propose a new one, prototype it in open-source Mozilla, and demonstrate its effectiveness via user studies.", "Electronic signatures are introduced by more and more countries as legally binding means for signing electronic documents with the primary hope of boosting e-commerce and e-government. Given that the underlying cryptographic methods are sufficiently strong, attacks by Trojan horse programs on electronic signatures are becoming increasingly popular. Most of the current systems either employ costly or inflexible – yet still inadequate – defence mechanisms or simply ignore the threat. A signatory has to trust the manufacturer of the software that it will work in the intended way. In the past, Trojan horse programs have shown to be of growing concern for end-user computers. Software for electronic signatures must provide protection against Trojan horses attacking the legally relevant signing process. In a survey of commercial of the shelf signature software programs we found severe vulnerabilities that can easily be exploited by an attacker. In this work we propose a secure electronic paper as a countermeasure. It is a collection of preventive and restorative methods that provides, in parallel to traditional signatures on paper, a high degree of protection of the system against untrustworthy programs. We focus our attention on Microsoft Windows NT and Windows 98, two operating systems most likely to be found on the customers' computers. The resulting system is an assembly of a small number of inexpensive building blocks that offers reliable protection against Trojan horse programs attempting to forge electronic signatures.", "A method for protecting the video memory on a computer system from being illicitly copied. The invention decrypts a previously encrypted image and displays it on the video screen. During the time the image is displayed, the invention protects it from being copied by other running applications. This is accomplished in multithreaded operating systems by first issuing a multithreaded locking primitive to the video memory resource, and then inserting a pending video hardware request that will take precedence over any subsequent video memory access requests. The pending request serves the purpose of destroying the contents of video memory. The pending request is passive in that it does not execute unless a malicious program has removed the video memory lock." ] }
math-ph0611049
1611194078
Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur.
As mentioned in the previous section, simulations of flux lines in type-II superconductors using the PIMC method have been done, generating the Abrikosov lattice ( @cite_13 , @cite_0 ). However, the superconductor model has periodic boundary conditions in the xy-plane, is a different problem altogether, and is not applicable to trapped fluids. No Monte Carlo studies of the model of @cite_18 have been done to date and dynamical simulations have been confined to a handful of vortices. @cite_5 added a white noise term to the KMD Hamiltonian, Equation , to study vortex reconnection in comparison to direct Navier-Stokes, but he confined his simulations to two vortices. Direct Navier-Stokes simulations of a large number of vortices are beyond our computational capacities.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_18", "@cite_13" ], "mid": [ "2148252125", "2010371781", "", "2023644284" ], "abstract": [ "Using path integral Monte Carlo we simulate a 3D system of up to 1000 magnetic flux lines by mapping it onto interacting bosons in 2 1 1D. With increasing temperatures we find first order melting from an ordered solid to an entangled liquid signaled by a finite entropy jump and sharp discontinuities of the defect density and the structure factor SG. For a particular density of strong columnar pins the crystal is transformed into a Bose glass phase with patches of crystalline order disrupted by the trapped vortices at the pinning sites but with no overall positional or orientational order. This glassy phase melts into a defected entangled liquid through a continuous transition.", "We show that the stochastic differential equation (SDE) model for the merger of two identical two-dimensional vortices proposed by Agullo and Verga [“Exact two vortices solution of Navier–Stokes equation,” Phys. Rev. Lett. 78, 2361 (1997)] is a special case of a more general class of SDE models for N interacting vortex filaments. These toy models include vorticity diffusion via a white noise forcing of the inviscid equations, and thus extend inviscid models to include core dynamics and topology change (e.g., merger in two dimensions and vortex reconnection in three dimensions). We demonstrate that although the N=2 two-dimensional model is qualitatively and quantitatively incorrect, it can be dramatically improved by accounting for self-advection. We then extend the two-dimensional SDE model to three dimensions using the semi-inviscid asymptotic approximation of [“Simplified equations for the interactions of nearly parallel vortex filaments,” J. Fluid Mech. 288, 201 (1995)] for nearly parallel...", "", "We present an extensive numerical study of vortex matter using the mapping to two-dimensional bosons and path-integral Monte Carlo simulations. We find a ital first-order vortex lattice melting transition into an ital entangled vortex liquid. The jumps in entropy and density are consistent with experimental results on YBa sub 2 Cu sub 3 O sub 7 minus delta . The liquid is denser than the lattice and has a correlation length l sub z approx 1.7 var_epsilon a sub 0 in the direction parallel to the field. In the language of bosons we find a sharp quantum phase transition from a Wigner crystal to a superfluid, even in the case of logarithmic interaction. We also measure the excitation spectrum of the Bose system and find the roton minimum to be insensitive to the range of the interaction. copyright ital 1998 ital The American Physical Society" ] }
math-ph0611049
1611194078
Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur.
@cite_12 has done some excellent simulations of vortex tangles in He-4 with rotation, boundary walls, and vortex reconnections to study disorder in rotating superfluid turbulence. Because vortex tangles are extremely curved, they applied the full Biot-Savart law to calculate the motion of the filaments in time. Their study did not include any sort of comparison to 2-D models because for most of the simulation vortices were far too tangled. The inclusion of rigid boundary walls, although correct for the study of He-4, also makes the results only tangentially applicable to the KMD system we use.
{ "cite_N": [ "@cite_12" ], "mid": [ "2090218426" ], "abstract": [ "Almost all studies of vortex states in helium II have been concerned with either ordered vortex arrays or disordered vortex tangles. This work numerically studies what happens in the presence of both rotation (which induces order) and thermal counterflow (which induces disorder). We find a new statistically steady state in which the vortex tangle is polarized along the rotational axis. Our results are used to interpret an instability that was discovered experimentally by [Phys. Rev. Lett. 50, 190 (1983)] and the vortex state beyond the instability that has been unexplained until now." ] }
math-ph0611049
1611194078
Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur.
Other related work on the statistical mechanics of turbulence in 3-D vortex lines can be found in @cite_4 and @cite_3 in addition to @cite_17 .
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_17" ], "mid": [ "1535313248", "1983052591", "" ], "abstract": [ "We introduce a statistical ensemble for a single vortex filament of a three dimensional incompressible fluid. The core of the vortex is modeled by a quite generic stochastic process. We prove the existence of the partition function for both positive and a limited range of negative temperatures.", "A system of equations determining average velocity of ideal incompressible fluid is derived from the assumption that fluid motion is ergodic. Two flows are considered: one vortex line in a bounded cylindrical domain and a flow of almost circular vortex lines. In the first case the averaged equations have the form of an eigenvalue problem similar to that for Schrődinger’s equation.", "" ] }
cs0611003
2141738320
Time synchronization is an important aspect of sensor network operation. However, it is well known that synchro- nization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1 ¯ N where ¯ N is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance.
The traditional synchronization techniques described in @cite_11 @cite_13 @cite_1 @cite_4 @cite_10 all operate fundamentally on the idea of communicating timing information from one set of nodes to the next. One other approach to synchronization that has recently received much attention is to apply mathematical models of natural phenomena to engineered networks. A model for the emergence of synchrony in pulse-coupled oscillators was developed in @cite_14 for a fully-connected group of identical oscillators. In @cite_5 , this convergence-to-synchrony result was extended to networks that were not fully connected.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_1", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2122690764", "2154953441", "2159219540", "", "2165135460", "2094636498", "2143229717" ], "abstract": [ "Time synchronization is important for any distributed system. In particular, wireless sensor networks make extensive use of synchronized time in many contexts (e.g. for data fusion, TDMA schedules, synchronized sleep periods, etc.). Existing time synchronization methods were not designed with wireless sensors in mind, and need to be extended or redesigned. Our solution centers around the development of a deterministic time synchronization method relevant for wireless sensor networks. The proposed solution features minimal complexity in network bandwidth, storage and processing and can achieve good accuracy. Highly relevant for sensor networks, it also provides tight, deterministic bounds on both the offsets and clock drifts. A method to synchronize the entire network in preparation for data fusion is presented. A real implementation of a wireless ad-hoc network is used to evaluate the performance of the proposed approach.", "A simple model for synchronous firing of biological oscillators based on Peskin's model of the cardiac pacemaker (Mathematical aspects of heart physiology, Courant Institute of Mathematical Sciences, New York University, New York, 1975, pp. 268-278) is studied. The model consists of a population of identical integrate-and-fire oscillators. The coupling between oscillators is pulsatile: when a given oscillator fires, it pulls the others up by a fixed amount, or brings them to the firing threshold, whichever is less. The main result is that for almost all initial conditions, the population evolves to a state in which all the oscillators are firing synchronously. The relationship between the model and real communities of biological oscillators is discussed; examples include populations of synchronously flashing fireflies, crickets that chirp in unison, electrically synchronous pacemaker cells, and groups of women whose menstrual cycles become mutually synchronized.", "This paper presents lightweight tree-based synchronization (LTS) methods for sensor networks. First, a single-hop, pair-wise synchronization scheme is analyzed. This scheme requires the exchange of only three messages and has Gaussian error properties. The single-hop approach is extended to a centralized multi-hop synchronization method. Multi-hop synchronization consists of pair-wise synchronizations performed along the edges of a spanning tree. Multi-hop synchronization requires only n-1 pair-wise synchronizations for a network of n nodes. In addition, we show that the communication complexity and accuracy of multi-hop synchronization is a function of the construction and depth of the spanning tree; several spanning-tree construction algorithms are described. Further, the required refresh rate of multi-hop synchronization is shown as a function of clock drift and the accuracy of single-hop synchronization. Finally, a distributed multi-hop synchronization is presented where nodes keep track of their own clock drift and their synchronization accuracy. In this scheme, nodes initialize their own resynchronization as needed.", "", "A class of synchronization protocols for dense, large-scale sensor networks is presented. The protocols build on the recent work of Hong, Cheow, and Scaglione [5, 6] in which the synchronization update rules are modeled by a system of pulse-coupled oscillators. 
In the present work, we define a class of models that converge to a synchronized state based on the local communication topology of the sensor network only, thereby lifting the all-to-all communication requirement implicit in [5, 6]. Under some rather mild assumptions of the connectivity of the network over time, these protocols still converge to a synchronized state when the communication topology is time varying.", "Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms.", "Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85 ± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68 ± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions." ] }
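The pulse-coupled oscillator model described in the cited abstract can be sketched compactly: each oscillator carries a phase, its state is a concave function of phase, and a firing bumps every other state by a coupling eps, capped at the threshold, which triggers chained firings. The particular state function, parameter values, and the merging of simultaneous pulses below are simplifying assumptions rather than the exact analysed model.

```python
import math
import random

def simulate_pulse_coupled(n=8, eps=0.03, b=3.0, cycles=200, seed=2):
    """Toy all-to-all pulse-coupled oscillators in the spirit of the
    Mirollo-Strogatz model. Phase runs in [0, 1]; the state is x = f(phase)
    with f concave, f(0) = 0, f(1) = 1. When an oscillator reaches phase 1 it
    fires: it resets to 0 and every other oscillator's state is raised by eps,
    capped at 1 (which makes the capped ones fire as well)."""

    def f(p):   # a concave state function (one possible choice)
        return math.log(1.0 + (math.e ** b - 1.0) * p) / b

    def g(x):   # inverse of f, maps state back to phase
        return (math.e ** (b * x) - 1.0) / (math.e ** b - 1.0)

    rng = random.Random(seed)
    phase = [rng.random() for _ in range(n)]
    for _ in range(cycles):
        dt = 1.0 - max(phase)                     # advance to the next firing
        phase = [p + dt for p in phase]
        fired = {i for i, p in enumerate(phase) if p >= 1.0 - 1e-12}
        while True:                               # pulse plus chained absorptions
            newly = set()
            for i, p in enumerate(phase):
                if i in fired:
                    continue
                x = min(1.0, f(p) + eps)
                phase[i] = g(x)
                if x >= 1.0:
                    newly.add(i)
            if not newly:
                break
            fired |= newly
        for i in fired:
            phase[i] = 0.0
    return sorted(phase)      # phases collapse into clusters as groups absorb

print(simulate_pulse_coupled())
```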
cs0611003
2141738320
Time synchronization is an important aspect of sensor network operation. However, it is well known that synchro- nization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1 ¯ N where ¯ N is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance.
The convergence result is clearly desirable for synchronization in networks, and in @cite_0 theoretical and simulation results suggested that such a technique could be adapted to communication and sensor networks. Experimental validation for the ideas of @cite_14 was obtained in @cite_18 , where the authors implemented the Reachback Firefly Algorithm (RFA) on TinyOS-based motes.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_14" ], "mid": [ "2098897082", "2030381790", "2154953441" ], "abstract": [ "Synchronization is considered a particularly difficult task in wireless sensor networks due to its decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking of neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz, 1990, while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate; while for smaller N, the synchronization time decreases smoothly with respect to N. Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity.", "Synchronicity is a useful abstraction in many sensor network applications. Communication scheduling, coordinated duty cycling, and time synchronization can make use of a synchronicity primitive that achieves a tight alignment of individual nodes' firing phases. In this paper we present the Reachback Firefly Algorithm (RFA), a decentralized synchronicity algorithm implemented on TinyOS-based motes. Our algorithm is based on a mathematical model that describes how fireflies and neurons spontaneously synchronize. Previous work has assumed idealized nodes and not considered realistic effects of sensor network communication, such as message delays and loss. Our algorithm accounts for these effects by allowing nodes to use delayed information from the past to adjust the future firing phase. We present an evaluation of RFA that proceeds on three fronts. First, we prove the convergence of our algorithm in simple cases and predict the effect of parameter choices. Second, we leverage the TinyOS simulator to investigate the effects of varying parameter choice and network topology. Finally, we present results obtained on an indoor sensor network testbed demonstrating that our algorithm can synchronize sensor network devices to within 100 μsec on a real multi-hop topology with links of varying quality.", "A simple model for synchronous firing of biological oscillators based on Peskin's model of the cardiac pacemaker (Mathematical aspects of heart physiology, Courant Institute of Mathematical Sciences, New York University, New York, 1975, pp. 268-278) is studied. The model consists of a population of identical integrate-and-fire oscillators. The coupling between oscillators is pulsatile: when a given oscillator fires, it pulls the others up by a fixed amount, or brings them to the firing threshold, whichever is less. 
The main result is that for almost all initial conditions, the population evolves to a state in which all the oscillators are firing synchronously. The relationship between the model and real communities of biological oscillators is discussed; examples include populations of synchronously flashing fireflies, crickets that chirp in unison, electrically synchronous pacemaker cells, and groups of women whose menstrual cycles become mutually synchronized." ] }
cs0611003
2141738320
Time synchronization is an important aspect of sensor network operation. However, it is well known that synchro- nization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1 ¯ N where ¯ N is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance.
The problem with these emergent synchronization results is that the fundamental theory assumes all nodes have nearly the same firing period. Results from @cite_0 and @cite_18 show that the convergence results may hold when nodes have approximately the same firing period, but the authors of @cite_18 explain that clock skew will degrade synchronization performance. Since we are not aware of any results that provide an extension to deal with networks of nodes with arbitrary firing periods, our work focuses on synchronization algorithms that explicitly estimate clock skew.
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "2098897082", "2030381790" ], "abstract": [ "Synchronization is considered a particularly difficult task in wireless sensor networks due to its decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking of neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz, 1990, while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate; while for smaller N, the synchronization time decreases smoothly with respect to N. Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity.", "Synchronicity is a useful abstraction in many sensor network applications. Communication scheduling, coordinated duty cycling, and time synchronization can make use of a synchronicity primitive that achieves a tight alignment of individual nodes' firing phases. In this paper we present the Reachback Firefly Algorithm (RFA), a decentralized synchronicity algorithm implemented on TinyOS-based motes. Our algorithm is based on a mathematical model that describes how fireflies and neurons spontaneously synchronize. Previous work has assumed idealized nodes and not considered realistic effects of sensor network communication, such as message delays and loss. Our algorithm accounts for these effects by allowing nodes to use delayed information from the past to adjust the future firing phase. We present an evaluation of RFA that proceeds on three fronts. First, we prove the convergence of our algorithm in simple cases and predict the effect of parameter choices. Second, we leverage the TinyOS simulator to investigate the effects of varying parameter choice and network topology. Finally, we present results obtained on an indoor sensor network testbed demonstrating that our algorithm can synchronize sensor network devices to within 100 μsec on a real multi-hop topology with links of varying quality." ] }
cs0611052
2953215000
For a large number of random constraint satisfaction problems, such as random k-SAT and random graph and hypergraph coloring, there are very good estimates of the largest constraint density for which solutions exist. Yet, all known polynomial-time algorithms for these problems fail to find solutions even at much lower densities. To understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added. In particular, we prove that much before solutions disappear, they organize into an exponential number of clusters, each of which is relatively small and far apart from all other clusters. Moreover, inside each cluster most variables are frozen, i.e., take only one value. The existence of such frozen variables gives a satisfying intuitive explanation for the failure of the polynomial-time algorithms analyzed so far. At the same time, our results establish rigorously one of the two main hypotheses underlying Survey Propagation, a heuristic introduced by physicists in recent years that appears to perform extraordinarily well on random constraint satisfaction problems.
Finally, the authors prove that the property "has a pair of satisfying assignments at distance @math " has a sharp threshold, thus boosting their constant probability result for having a pair of satisfying assignments at a given distance to a high probability one. To the best of our understanding, these three are the only results established in @cite_0 . Combined, they imply that for every @math , there is @math and constants @math , such that in @math : W.h.p. every pair of satisfying assignments has distance either less than @math or more than @math . For every @math , there is a pair of truth assignments that have distance @math . We note that even if the maximizer in the second moment computation was determined rigorously and coincided with the heuristic guess of @cite_0 , the strongest statement that can be inferred from the above two assertions in terms of establishing "clustering" is: for every @math , there is @math , such that @math has at least two clusters.
{ "cite_N": [ "@cite_0" ], "mid": [ "2115885909" ], "abstract": [ "We investigate geometrical properties of the random K-satisfiability problem using the notion of x-satisfiability: a formula is x-satisfiable is there exist two SAT-assignments differing in Nx variables. We show the existence of a sharp threshold for this property as a function of the clause density. For large enough K, we prove that there exists a region of clause density, below the satisfiability threshold, where the landscape of Hamming distances between SAT-assignments experiences a gap: pairs of SAT-assignments exist at small x, and around x=12, but they do not exist at intermediate values of x. This result is consistent with the clustering scenario which is at the heart of the recent heuristic analysis of satisfiability using statistical physics analysis (the cavity method), and its algorithmic counterpart (the survey propagation algorithm). Our method uses elementary probabilistic arguments (first and second moment methods), and might be useful in other problems of computational and physical interest where similar phenomena appear." ] }
cs0611135
2952342381
Support Vector Machines (SVMs) are well-established Machine Learning (ML) algorithms. They rely on the fact that i) linear learning can be formalized as a well-posed optimization problem; ii) non-linear learning can be brought into linear learning thanks to the kernel trick and the mapping of the initial search space onto a high dimensional feature space. The kernel is designed by the ML expert and it governs the efficiency of the SVM approach. In this paper, a new approach for the automatic design of kernels by Genetic Programming, called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a well-founded fitness function inspired from the margin criterion, and a co-evolution framework ensuring the computational scalability of the approach. Empirical validation on standard ML benchmark demonstrates that EKM is competitive using state-of-the-art SVMs with tuned hyper-parameters.
The most relevant work to EKM is the Genetic Kernel Support Vector Machine (GK-SVM) @cite_1 . GK-SVM similarly uses GP within an SVM-based approach, with two main differences compared to EKM. On one hand, GK-SVM focuses on feature construction, using GP to optimize the mapping @math (instead of the kernel). On the other hand, the fitness function used in GK-SVM suffers from a quadratic complexity in the number of training examples. Accordingly, all datasets but one considered in the experiments are small (less than 200 examples). On a larger dataset, the authors acknowledge that their approach does not improve on a standard SVM with well chosen parameters. Another related work similarly uses GP for feature construction, in order to classify time series @cite_16 . The set of features (GP trees) is further evolved using a GA, where the fitness function is based on the accuracy of an SVM classifier. Most other works related to evolutionary optimization within SVMs (see @cite_15 ) actually focus on parametric optimization, e.g. achieving feature selection or tuning some parameters.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_1" ], "mid": [ "1986490585", "", "1978917315" ], "abstract": [ "The problem of model selection for support vector machines (SVMs) is considered. We propose an evolutionary approach to determine multiple SVM hyperparameters: The covariance matrix adaptation evolution strategy (CMA-ES) is used to determine the kernel from a parameterized kernel space and to control the regularization. Our method is applicable to optimize non-differentiable kernel functions and arbitrary model selection criteria. We demonstrate on benchmark datasets that the CMA-ES improves the results achieved by grid search already when applied to few hyperparameters. Further, we show that the CMA-ES is able to handle much more kernel parameters compared to grid-search and that tuning of the scaling and the rotation of Gaussian kernels can lead to better results in comparison to standard Gaussian kernels with a single bandwidth parameter. In particular, more flexibility of the kernel can reduce the number of support vectors.", "", "The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optimum settings for a particular problem. This paper proposes a classification technique, which we call the Genetic Kernel SVM (GK SVM), that uses Genetic Programming to evolve a kernel for a SVM classifier. Results of initial experiments with the proposed technique are presented. These results are compared with those of a standard SVM classifier using the Polynomial, RBF and Sigmoid kernel with various parameter settings" ] }
hep-th0610185
1993186195
We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral.
In @cite_21 @cite_30 Erdem, and in @cite_25 't Hooft and Nobbenhuis, discuss a novel kind of symmetry transformation consisting of a rotation of real positional coordinates to the imaginary axis, with the aim of ruling out a cosmological constant. The rotated representation that these authors use for their non-relativistic particle toy model is identical to the one we discuss in section for the indefinite inner product theory. These authors are particularly interested in the relationship between the real and imaginary coordinate representations. Since we study this relationship in detail in the present article, it is conceivable that our mathematical framework may have further applications in this direction.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_25" ], "mid": [ "", "2006504824", "2008434812" ], "abstract": [ "", "We introduce a symmetry principle that forbids a bulk cosmological constant in six and ten dimensions. Then the symmetry is extended in six dimensions so that it insures absence of 4-dimensional cosmological constant induced by the six-dimensional curvature scalar, at least, for a class of metrics. A small cosmological constant may be induced in this scheme by breaking of the symmetry by a small amount.", "In this paper we study a new symmetry argument that results in a vacuum state with strictly vanishing vacuum energy. This argument exploits the well-known feature that de Sitter and Anti- de Sitter space are related by analytic continuation. When we drop boundary and hermiticity conditions on quantum fields, we get as many negative as positive energy states, which are related by transformations to complex space. The paper does not directly solve the cosmological constant problem, but explores a new direction that appears worthwhile." ] }
hep-th0610185
1993186195
We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral.
Also related to the cosmological constant problem is the paper @cite_11 , which introduces phantom fields to cancel the ordinary matter contribution to the vacuum energy. Again, our non-perturbative approach to phantom fields may be useful in the study of these models.
{ "cite_N": [ "@cite_11" ], "mid": [ "1983032885" ], "abstract": [ "We study a symmetry, schematically Energy → — Energy, which suppresses matter contributions to the cosmological constant. The requisite negative energy fluctuations are identified with a ghost'' copy of the Standard Model. Gravity explicitly, but weakly, violates the symmetry, and naturalness requires General Relativity to break down at short distances with testable consequences. If this breakdown is accompanied by gravitational Lorentz-violation, the decay of flat spacetime by ghost production is acceptably slow. We show that inflation works in our scenario and can lead to the initial conditions required for standard Big Bang cosmology." ] }
hep-th0610185
1993186195
We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral.
In scattering theory, so-called Siegert or Gamow states may be used to represent resonances. These are states with complex momentum @cite_5 , which may be given precise mathematical meaning in the framework of section of the current article, where such states are defined as distributions on test function spaces of Gel'fand-Shilov type. Since we are mainly interested in field theory applications where interactions are polynomial, we do not treat sufficiently general potentials for our results to be directly applicable to many traditional non-relativistic scattering problems, but perhaps the mathematical machinery can be generalized.
{ "cite_N": [ "@cite_5" ], "mid": [ "2030838706" ], "abstract": [ "Abstract The orthogonality and completeness properties of the resonant states as defined by Humblet and Rosenfeld are investigated using a simple regularization method first suggested by Zel'dovich. It is found that at leat for finite-range potentials, the set of bound states and any finite number of proper resonant states can be completed by a set of continuum states. This makes it possible to expand the scattering and reaction amplitudes in such a way that their resonance behabiour is exhibited, and the dependence of the corresponding partial widths on the interaction becomes more explicit." ] }
hep-th0610185
1993186195
We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral.
The author of @cite_12 describes a treatment of these states in the context of rigged Hilbert spaces. His construction should be very closely related to the one of the current article.
{ "cite_N": [ "@cite_12" ], "mid": [ "1967199649" ], "abstract": [ "Using the complex coordinate method, it is possible to compute the physical properties of atomic resonances above the ionization threshold. We show how to define and compute the electronic densities associated with these atomic resonances. The method is general and is illustrated for atomic Rydberg states in static and time-dependent external fields." ] }
math0610051
2950957448
We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation and general hyperbolic equations. The problem is to evaluate numerically a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory at high frequencies. Because an FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations, and as low as @math in storage space. It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: 1) a diffeomorphism which is handled by means of a nonuniform FFT and 2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is that the separation rank of the residual kernel is . Several numerical examples demonstrate the efficiency and accuracy of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.
In the case where @math , the operator is said to be pseudodifferential. In this simpler setting, it is known that separated variables expansions of the symbol @math are good strategies for reducing complexity. For instance, Bao and Symes @cite_8 propose a numerical method based on a Fourier series expansion of the symbol in the angular variable arg @math , and a polyhomogeneous expansion in @math , which is a particularly effective example of separation of variables.
{ "cite_N": [ "@cite_8" ], "mid": [ "2141044221" ], "abstract": [ "A simple algorithm is described for computing general pseudo-differential operator actions. Our approach is based on the asymptotic expansion of the symbol together with the fast Fourier transform (FFT). The idea is motivated by the characterization of the pseudo-differential operator algebra. We show that the algorithm is efficient through analyzing its complexity. Some numerical experiments are also presented." ] }
math0610051
2950957448
We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation and general hyperbolic equations. The problem is to evaluate numerically a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory at high frequencies. Because an FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations, and as low as @math in storage space. It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: 1) a diffeomorphism which is handled by means of a nonuniform FFT and 2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is that the separation rank of the residual kernel is . Several numerical examples demonstrate the efficiency and accuracy of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.
We would also like to acknowledge the line of research related to Filon-type quadratures for oscillatory integrals @cite_14 . When the integrand is of the form @math with @math smooth and @math large, it is not always necessary to sample the integrand at the Nyquist rate. For instance, integration of a polynomial interpolant of @math (Filon quadrature) provides an accurate approximation to @math using fewer and fewer evaluations of the function @math as @math . While these ideas are important, they are not directly applicable in the case of FIOs. The reasons are threefold. First, we make no notable assumption on the support of the function to which the operator is applied, meaning that the oscillations of @math may be on the same scale as those of the exponential @math . Second, the phase does not in general have a simple formula that would lend itself to precomputations. And third, Filon-type quadratures do not address the problem of simplifying computations of such oscillatory integrals at once (i.e. computing a family of integrals indexed by @math in the case of FIOs).
{ "cite_N": [ "@cite_14" ], "mid": [ "2061749829" ], "abstract": [ "Highly-oscillatory integrals are allegedly difficult to calculate. The main assertion of this paper is that that impression is incorrect. As long as appropriate quadrature methods are used, their accuracy increases when oscillation becomes faster and suitable choice of quadrature points renders this welcome phenomenon more pronounced. We focus our analysis on Filon-type quadrature and analyse its behaviour in a range of frequency regimes for integrals of the form ∫ h 0 f(x)e iωx w(x)dx, where h > 0 is small and |ω| large. Our analysis is applied to modified Magnus methods for highly-oscillatory ordinary differential equations." ] }
cs0610046
2951738093
The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.
@cite_7 presented the filter algorithm requiring @math comparisons per element in the worst case, with average-case performance of slightly more than 3 comparisons per element over independent and identically distributed (i.i.d.) noise data. @cite_6 presented a better alternative: the filter algorithm was shown to average 3 comparisons per element for i.i.d. input signals, and @cite_4 presented an asynchronous implementation.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_7" ], "mid": [ "", "2148168055", "2752885492" ], "abstract": [ "", "We present a novel algorithm for calculating the running maximum or minimum value of a 1-D sequence over a sliding data window. The new algorithm stores a pruned ordered list of data elements that have the potential to become maxima or minima across the data window at some future time instant. This algorithm has a number of advantages over competing algorithms, including balanced computational requirements for a variety of signals and the potential for reduced processing and storage requirements for long data windows. We show through both analysis and simulation that for an L-element running window, the new algorithm uses approximately three comparisons and 2logL+1 memory locations per output sample on average for i.i.d. signals, independent of the signal distribution.", "From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition,this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition,Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity,and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition,this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further,the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm,a design technique,an application area,or a related topic. The chapters are not dependent on one another,so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally,the new edition offers a 25 increase over the first edition in the number of problems,giving the book 155 problems and over 900 exercises thatreinforcethe concepts the students are learning." ] }
cs0610046
2951738093
The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.
@cite_2 proposed a fast algorithm based on anchors. They do not improve on the number of comparisons per element. For window sizes ranging from 10 to 30 and data values ranging from 0 to 255, their implementation has a running time lower than their implementation of the earlier filter by as much as 30%; it is faster by as much as 15% for window sizes larger than 15, but is outperformed similarly for smaller window sizes, and both are comparable for a window size equal to 15. The Droogenbroeck-Buckley filter pseudocode alone requires a full page compared to a few lines for the earlier algorithm. Their experiments did not consider window sizes beyond @math nor arbitrary floating point data values.
{ "cite_N": [ "@cite_2" ], "mid": [ "2047293099" ], "abstract": [ "Several efficient algorithms for computing erosions and openings have been proposed recently. They improve on van Herk's algorithm in terms of number of comparisons for large structuring elements. In this paper we introduce a theoretical framework of anchors that aims at a better understanding of the process involved in the computation of erosions and openings. It is shown that the knowledge of opening anchors of a signal f is sufficient to perform both the erosion and the opening of f. Then we propose an algorithm for one-dimensional erosions and openings which exploits opening anchors. This algorithm improves on the fastest algorithms available in literature by approximately 30 in terms of computation speed, for a range of structuring element sizes and image contents." ] }
cs0610105
1616788770
We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
Unlike statistical databases @cite_22 @cite_4 @cite_10 @cite_17 @cite_23 , micro-data datasets contain actual records of individuals even after anonymization. A popular approach to micro-data privacy is @math -anonymity @cite_18 @cite_8 @cite_20 . The data publisher must determine in advance which of the attributes are available to the adversary (these are called ``quasi-identifiers''), and which are the ``sensitive attributes'' to be protected. @math -anonymization ensures that each ``quasi-identifier'' tuple occurs in at least @math records in the anonymized database. It is well-known that @math -anonymity does not guarantee privacy, because the values of sensitive attributes associated with a given quasi-identifier may not be sufficiently diverse @cite_19 @cite_5 or because the adversary has access to background knowledge @cite_19 . Mere knowledge of the @math -anonymization algorithm may be sufficient to break privacy @cite_2 . Furthermore, @math -anonymization completely fails on high-dimensional datasets @cite_6 , such as the Netflix Prize dataset and most real-world datasets of individual recommendations and purchases.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_8", "@cite_6", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2159024459", "2113427031", "2044307594", "2119047901", "1606251440", "2134167315", "2010523825", "1964835534", "", "2128906841", "", "2169134473" ], "abstract": [ "Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.", "This paper considers the problem of providing security to statistical databases against disclosure of confidential information. Security-control methods suggested in the literature are classified into four general approaches: conceptual, query restriction, data perturbation, and output perturbation. Criteria for evaluating the performance of the various security-control methods are identified. Security-control methods that are based on each of the four approaches are discussed, together with their performance with respect to the identified evaluation criteria. A detailed comparative analysis of the most promising methods for protecting dynamic-online statistical databases is also presented. To date no single security-control method prevents both exact and partial disclosures. There are, however, a few perturbation-based methods that prevent exact disclosure and enable the database administrator to exercise \"statistical disclosure control.\" Some of these methods, however introduce bias into query responses or suffer from the 0 1 query-set-size problem (i.e., partial disclosure is possible in case of null query set or a query set of size 1). We recommend directing future research efforts toward developing new methods that prevent exact disclosure and provide statistical-disclosure control, while at the same time do not suffer from the bias problem and the 0 1 query-set-size problem. Furthermore, efforts directed toward developing a bias-correction mechanism and solving the general problem of small query-set-size would help salvage a few of the current perturbation-based methods.", "This note proposes a statistical perturbation scheme to protect a statistical database against compromise. The proposed scheme can handle the security of numerical as well as nonnumerical sensitive fields. Furthermore, knowledge of some records in a database does not help to compromise unknown records. We use Chebyshev's inequality to analyze the trade-offs among the magnitude of the perturbations, the error incurred by statistical queries, and the size of the query set to which they apply. 
We show that if the statistician is given absolute error guarantees, then a compromise is possible, but the cost is made exponential in the size of the database.", "Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection.", "In recent years, the wide availability of personal data has made the problem of privacy preserving data mining an important one. A number of methods have recently been proposed for privacy preserving data mining of multidimensional data records. One of the methods for privacy preserving data mining is that of anonymization, in which a record is released only if it is indistinguishable from k other entities in the data. We note that methods such as k-anonymity are highly dependent upon spatial locality in order to effectively implement the technique in a statistically robust way. In high dimensional space the data becomes sparse, and the concept of spatial locality is no longer easy to define from an application point of view. In this paper, we view the k-anonymization problem from the perspective of inference attacks over all possible combinations of attributes. We show that when the data contains a large number of attributes which may be considered quasi-identifiers, it becomes difficult to anonymize the data without an unacceptably high amount of information loss. This is because an exponential number of combinations of dimensions can be used to make precise inference attacks, even when individual attributes are partially specified within a range. We provide an analysis of the effect of dimensionality on k-anonymity methods. We conclude that when a data set contains a large number of attributes which are open to inference attacks, we are faced with a choice of either completely suppressing most of the data or losing the desired level of anonymity. Thus, this paper shows that the curse of high dimensionality also applies to the problem of privacy preserving data mining.", "Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. 
In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called e-diversity that can defend against such attacks. In addition to building a formal foundation for e-diversity, we show in an experimental evaluation that e-diversity is practical and can be implemented efficiently.", "We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to 0, 1 . The true answer is Σ ieS f(d i ), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large.We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the in the statistical query learning model [11].", "The problem of information disclosure has attracted much interest from the research community in recent years. When disclosing information, the challenge is to provide as much information as possible (optimality) while guaranteeing a desired safety property for privacy (such as l-diversity). A typical disclosure algorithm uses a sequence of disclosure schemas to output generalizations in the nonincreasing order of data utility; the algorithm releases the first generalization that satisfies the safety property. In this paper, we assert that the desired safety property cannot always be guaranteed if an adversary has the knowledge of the underlying disclosure algorithm. We propose a model for the additional information disclosed by an algorithm based on the definition of deterministic disclosure function (DDF), and provide definitions of p-safe and p-optimal DDFs. We give an analysis for the complexity to compute a p-optimal DDF. 
We show that deciding whether a DDF is p-optimal is an NP-hard problem, and only under specific conditions, we can solve the problem in polynomial time with respect to the size of the set of all possible database instances and the length of the disclosure generalization sequence. We then consider the problem of microdata disclosure and the safety condition of l-diversity. We relax the notion of p-optimality to weak p-optimality, and develop a weak p-optimal algorithm which is polynomial in the size of the original table and the length of the generalization sequence.", "", "A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data.", "", "We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering." ] }
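To pin down the k-anonymity property referred to above, the snippet below groups a toy table by its quasi-identifier columns and reports the smallest group size; the table is k-anonymous exactly when that minimum is at least k. The columns and records are invented for illustration.

```python
from collections import Counter

# Toy micro-data table: (zip prefix, age bracket) are the quasi-identifiers,
# the last field is the sensitive attribute.  All values are invented.
records = [
    ("021**", "20-29", "flu"),
    ("021**", "20-29", "flu"),
    ("021**", "20-29", "flu"),
    ("100**", "30-39", "asthma"),
    ("100**", "30-39", "flu"),
]

def k_anonymity_level(records, quasi_cols=(0, 1)):
    """Largest k for which the table is k-anonymous: the size of the smallest
    equivalence class of quasi-identifier tuples."""
    groups = Counter(tuple(r[c] for c in quasi_cols) for r in records)
    return min(groups.values())

print(f"table is {k_anonymity_level(records)}-anonymous")   # prints 2
# Even so, every record in the ("021**", "20-29") class carries the same
# sensitive value, so an adversary who knows those quasi-identifiers learns it
# anyway -- the lack-of-diversity failure mode mentioned in the paragraph above.
```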
cs0610105
1616788770
We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
Our main case study is the Netflix Prize dataset of movie ratings. We are aware of only one previous paper that considered privacy of movie ratings. In collaboration with the MovieLens recommendation service, Frankowski correlated public mentions of movies in the MovieLens discussion forum with the users' movie rating histories in the MovieLens dataset @cite_7 . The algorithm uses the entire public record as the background knowledge (29 ratings per user, on average), and is not robust if this knowledge is imprecise (e.g., if the user publicly mentioned movies which he did not rate).
{ "cite_N": [ "@cite_7" ], "mid": [ "1981794546" ], "abstract": [ "In today's data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people's intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address.This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn't work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven't rated." ] }
cs0610137
2950868392
The Software Transactional Memory (STM) model is an original approach for controlling concurrent accesses to ressources without the need for explicit lock-based synchronization mechanisms. A key feature of STM is to provide a way to group sequences of read and write actions inside atomic blocks, similar to database transactions, whose whole effect should occur atomically. In this paper, we investigate STM from a process algebra perspective and define an extension of asynchronous CCS with atomic blocks of actions. Our goal is not only to set a formal ground for reasoning on STM implementations but also to understand how this model fits with other concurrency control mechanisms. We also view this calculus as a test bed for extending process calculi with atomic transactions. This is an interesting direction for investigation since, for the most part, actual works that mix transactions with process calculi consider compensating transactions, a model that lacks all the well-known ACID properties. We show that the addition of atomic transactions results in a very expressive calculus, enough to easily encode other concurrent primitives such as guarded choice and multiset-synchronization ( a la join-calculus). The correctness of our encodings is proved using a suitable notion of bisimulation equivalence. The equivalence is then applied to prove interesting laws of transactions'' and to obtain a simple normal form for transactions.
Linked to the upsurge of works on Web Services (and on long running Web transactions), a larger body of work is concerned with formalizing compensating transactions. In this context, each transactive block of actions is associated with a compensation (code) that has to be run if a failure is detected. The purpose of compensation is to undo most of the visible actions that have been performed and, in this case, atomicity, isolation and durability are obviously violated. We give a brief survey of works that formalize compensable processes using process calculi. These works are of two types: (1) @cite_19 @cite_4 @cite_2 , which are extensions of process calculi (like @math or join-calculus) for describing transactional choreographies where composition takes place dynamically and where each service describes its possible interactions and compensations; (2) @cite_22 @cite_17 @cite_11 @cite_20 , where ad hoc process algebras are designed from scratch to describe the possible flow of control among services. These calculi are oriented towards the orchestration of services and service failures. This second approach is also followed in @cite_21 @cite_12 where two frameworks for composing transactional services are presented.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_22", "@cite_21", "@cite_19", "@cite_2", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "", "2150864072", "1974168649", "", "1587877137", "1580055894", "", "", "2460103410" ], "abstract": [ "", "In global computing applications the availability of a mechanism for some form of committed choice can be useful, and sometimes necessary. It can conveniently handle, e.g., distributed agreements and negotiations with nested choice points. We propose a linguistic extension of the Join calculus for programming nested commits, called Committed Join (cJoin). It provides primitives for explicit abort programmable compensations and interactions between negotiations. We give the operational semantics of cJoin in the reflexive CHAM style. Then we discuss its expressiveness on the basis of a few examples and encodings. Finally, we provide a big-step semantics for cJoin processes that can be typed as shallow and we show that shallow processes are serializable.", "A key aspect when aggregating business processes and web services is to assure transactional properties of process executions. Since transactions in this context may require long periods of time to complete, traditional mechanisms for guaranteeing atomicity are not always appropriate. Generally the concept of long running transactions relies on a weaker notion of atomicity based on compensations. For this reason, programming languages for service composition cannot leave out two key aspects: compensations, i.e. ad hoc activities that can undo the effects of a process that fails to complete, and transactional boundaries to delimit the scope of a transactional flow. This paper presents a hierarchy of transactional calculi with increasing expressiveness. We start from a very small language in which activities can only be composed sequentially. Then, we progressively introduce parallel composition, nesting, programmable compensations and exception handling. A running example illustrates the main features of each calculus in the hierarchy.", "", "We study long-running transactions in open component-based distributed applications, such as Web Services platforms. Long-running transactions describe time-extensive activities that involve several distributed components. Henceforth, in case of failure, it is usually not possible to restore the initial state, and firing a compensation process is preferable. Despite the interest of such transactional mechanisms, a formal modeling of them is still lacking. In this paper we address this issue by designing an extension of the asynchronous π-calculus with long-running transactions (and sequences) – the πt -calculus. We study the practice of πt-calculus, by discussing few paradigmatic examples, and its theory, by defining a semantics and providing a correct encoding of πt-calculus into asynchronous π-calculus.", "A timed extension of π-calculus with a transaction construct – the calculus Webπ – is studied. The underlying model of Webπ relies on networks of processes; time proceeds asynchronously at the network level, while it is constrained by the local urgency at the process level. Namely process reductions cannot be delayed to favour idle steps. The extensional model – the timed bisimilarity – copes with time and asynchrony in a different way with respect to previous proposals. In particular, the discriminating power of timed bisimilarity is weaker when local urgency is dropped. 
A labelled characterization of timed bisimilarity is also discussed.", "", "", "A long-running transaction is an interactive component of a distributed system which must be executed as if it were a single atomic action. In principle, it should not be interrupted or fail in the middle, and it must not be interleaved with other atomic actions of other concurrently executing components of the system. In practice, the illusion of atomicity for a long-running transaction is achieved with the aid of compensation actions supplied by the original programmer: because the transaction is interactive, familiar automatic techniques of check-pointing and rollback are no longer adequate. This paper constructs a model of long-running transactions within the framework of the CSP process algebra, showing how the compensations are orchestrated to achieve the illusion of atomicity. It introduces a method for declaring that a process is a transaction, and for declaring a compensation for it in case it needs to be rolled back after it has committed. The familiar operator of sequential composition is redefined to ensure that all necessary compensations will be called in the right order if a later failure makes this necessary. The techniques are designed to work well in a highly concurrent and distributed setting. In addition we define an angelic choice operation, implemented by speculative execution of alternatives; its judicious use can improve responsiveness of a system in the face of the unpredictable latencies of remote communication. Many of the familiar properties of process algebra are preserved by these new definitions, on reasonable assumptions of the correctness and independence of the programmer-declared compensations." ] }
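Since the compensation model recurs throughout the survey paragraph above, a minimal sketch may help: run a sequence of steps, record a compensation for each completed step, and on failure run the recorded compensations in reverse order. This is the generic saga-style discipline that the cited calculi formalize, not an encoding of any particular calculus; the booking example is invented.

```python
class CompensableSequence:
    """Run (action, compensation) pairs; on failure, run the compensations of
    the completed steps in reverse order.  Atomicity and isolation are *not*
    guaranteed: intermediate effects stay visible, which is exactly the point
    the surveyed calculi make about compensable transactions."""

    def __init__(self, steps):
        self.steps = steps                       # list of (action, compensation)

    def run(self):
        done = []
        try:
            for action, compensation in self.steps:
                action()
                done.append(compensation)
            return "committed"
        except Exception as failure:
            for compensation in reversed(done):  # undo in reverse order
                compensation()
            return f"aborted ({failure}); compensated {len(done)} step(s)"

log = []

def fail():
    raise RuntimeError("card declined")

booking = CompensableSequence([
    (lambda: log.append("reserve flight"), lambda: log.append("cancel flight")),
    (lambda: log.append("reserve hotel"),  lambda: log.append("cancel hotel")),
    (fail,                                 lambda: None),
])
print(booking.run())   # aborted (card declined); compensated 2 step(s)
print(log)             # the visible, non-atomic trace, compensations included
```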
math-ph0609072
2949598775
We study nodal sets for typical eigenfunctions of the Laplacian on the standard torus in 2 or more dimensions. Making use of the multiplicities in the spectrum of the Laplacian, we put a Gaussian measure on the eigenspaces and use it to average over the eigenspace. We consider a sequence of eigenvalues with multiplicity N tending to infinity. The quantity that we study is the Leray, or microcanonical, measure of the nodal set. We show that the expected value of the Leray measure of an eigenfunction is constant. Our main result is that the variance of Leray measure is asymptotically 1/(4 pi N), as N tends to infinity, at least in dimensions 2 and at least 5.
The study of nodal lines of random waves goes back to Longuet-Higgins @cite_7 @cite_13 who computed various statistics of nodal lines for Gaussian random waves in connection with the analysis of ocean waves. Berry @cite_10 suggested to model highly excited quantum states for classically chaotic systems by using various random wave models, and also computed fluctuations of various quantities in these models (see e.g. @cite_2 ). See also Zelditch @cite_11 . The idea of averaging over a single eigenspace in the presence of multiplicities appears in Bérard @cite_17 who computed the expected surface measure of the nodal set for eigenfunctions of the Laplacian on spheres. Neuheisel @cite_9 also worked on the sphere and studied the statistics of Leray measure. He gave an upper bound for the variance, which we believe is not sharp.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_9", "@cite_17", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2072863189", "2115189693", "", "2561781306", "2094161636", "2018458110", "183347149" ], "abstract": [ "A number of statistical properties of a random, moving surface are obtained in the special case when the surface is Gaussian and isotropic. The results may be stated with special simplicity for a ‘ring9 spectrum when the energy in the spectrum is confined to one particular wavelength y. In particular, the average density of maxima per unit area equals pie (2 3y 2 ), and the average length, per unit area, of the contour drawn at the mean level equals pie ( 2y)", "The following statistical properties are derived for a random, moving, Gaussian surface: (1) the probability distribution of the surface elevation and of the magnitude and orientation of the gradient; (2) the average number of zero-crossings per unit distance along a line in an arbitrary direction; (3) the average length of the contours per unit area, and the distribution of their direction; (4) the average density of maxima and minima per unit area of the surface, and the average density of specular points (i.e, points where the two components of gradient take given values); (5) the probability distribution of the velocities of zero-crossings along a given line; (6) the probability distribution of the velocities of contours and of specular points; (7) the probability distribution of the envelope and phase angle, and hence (8) when the spectrum is narrow, the probability distribution of the heights of maxima and minima and the distribution of the intervals between successive zero-crossings along an arbitrary line. All the results are expressed in terms of the two-dimensional energy spectrum of the surface, and are found to involve the moments of the spectrum up to a finite order only. (1), (3), (4), (5) and (6) are discussed in detail for the special case of a narrow spectrum. The converse problem is also studied and solved: given certain statistical properties of the surface, to find a convergent sequence of approximations to the energy spectrum. The problems arise in connexion with the statistical analysis of the sea surface. (More detailed summaries are given at the beginning of each part of the paper.)", "", "", "For real (time-reversal symmetric) quantum billiards, the mean length L of nodal line is calculated for the nth mode (n>>1), with wavenumber k, using a Gaussian random wave model adapted locally to satisfy Dirichlet or Neumann boundary conditions. The leading term is of order k (i.e. √n), and the first (perimeter) correction, dominated by an unanticipated long-range boundary effect, is of order log k (i.e. log n), with the same sign (negative) for both boundary conditions. The leading-order state-to-state fluctuations δL are of order √log k. For the curvature κ of nodal lines, |κ| and √κ2 are of order k, but |κ|3 and higher moments diverge. For complex (e.g. Aharonov-Bohm) billiards, the mean number N of nodal points (phase singularities) in the mode has a leading term of order k2 (i.e. n), the perimeter correction (again a long-range effect) is of order klog k (i.e. √nlog n) (and positive, notwithstanding nodal depletion near the boundary) and the fluctuations δN are of order k√log k. 
Generalizations of the results for mixed (Robin) boundary conditions are stated.", "The form of the wavefunction psi for a semiclassical regular quantum state (associated with classical motion on an N-dimensional torus in the 2N-dimensional phase space) is very different from the form of psi for an irregular state (associated with stochastic classical motion on all or part of the (2N-1)-dimensional energy surface in phase space). For regular states the local average probability density Pi rises to large values on caustics at the boundaries of the classically allowed region in coordinate space, and psi exhibits strong anisotropic interference oscillations. For irregular states Pi falls to zero (or in two dimensions stays constant) on 'anticaustics' at the boundary of the classically allowed region, and psi appears to be a Gaussian random function exhibiting more moderate interference oscillations which for ergodic classical motion are statistically isotropic with the autocorrelation of psi given by a Bessel function.", "An umbrella is convertible between an expanded and contracted position to adjust the area of coverage thereof when the umbrella is in the opened condition. The umbrella is formed with a pair of sticks which extend from the handle thereof with the sticks being pivoted at the handle for movement between a spread position whereby the sticks may be placed in a generally V-shaped configuration, and a position wherein the sticks are brought together to be generally parallel with each other. Each stick includes a slider which may move relative to the stick in order to effect opening and closing of the canopy of the umbrella, which canopy is generally supported upon the ends of the sticks by ribs. With the sticks in a generally parallel position, a smaller area of coverage is provided by the umbrella. By spreading the sticks apart into the V-shaped configuration by pivotal motion thereof about the handle, the canopy of the umbrella is spread to cover a larger area and the umbrella may be used, for example, by two persons." ] }
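For readability, here is a LaTeX restatement of the two ring-spectrum identities quoted (in garbled form) in the @cite_13 abstract above; the reconstruction of the garbled fractions is our reading of that text, with the single wavelength denoted by \lambda, so this is a transcription aid rather than a new result.

\[
  \text{mean density of maxima per unit area} \;=\; \frac{\pi}{2\sqrt{3}\,\lambda^{2}},
  \qquad
  \text{mean length per unit area of the mean-level contour} \;=\; \frac{\pi}{\sqrt{2}\,\lambda}.
\]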
cs0609026
2950493373
The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed the replacement of the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, on our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvements for efficient peer-to-peer file replication protocols.
@cite_3 study the file popularity, file availability, download performance, content lifetime and pollution level on a popular tracker site. This work is orthogonal to ours, as they do not study the core algorithms of BitTorrent, but rather focus on the contents distributed using BitTorrent and on the users' behavior. The work most closely related to our study was done by @cite_29 . In this paper, the authors provide seminal insights into BitTorrent based on data collected from the log of a single yet popular torrent, even if a sketch of a local view from a peer's perspective is presented. Their results provide information on peers' behavior, and show a correlation between the amounts of uploaded and downloaded data. Our work differs from @cite_29 in that we provide a thorough measurement-based analysis of the rarest first and choke algorithms (a toy sketch of rarest-first piece selection follows this record). We also study a large variety of torrents, which allows us not to be biased toward a particular type of torrent. Moreover, without pretending to answer all possible questions that arise from as simple yet powerful a protocol as BitTorrent, we provide new insights into the rarest first and choke algorithms.
{ "cite_N": [ "@cite_29", "@cite_3" ], "mid": [ "1853723677", "2166707941" ], "abstract": [ "Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, lager server farms or mirroring are used, both of which are expensive. An inexpensive alternative are peer-to-peer based replication systems, where users who retrieve the file, act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected on a five months long period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to the classical server-based content distribution.", "Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems." ] }
cs0609166
1640198638
We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm.
Other work in private communication-efficient protocols for specific functions includes the Private Information Retrieval problem @cite_9 @cite_6 @cite_2 , building decision trees @cite_10 , set intersection and matching @cite_12 , and @math 'th-ranked element @cite_0 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "", "2154654620", "1578837377", "1963094505", "2047370889", "2143087446" ], "abstract": [ "", "We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n sup spl epsiv ) communication complexity for any spl epsiv >0.", "Given two or more parties possessing large, confidential datasets, we consider the problem of securely computing the k th -ranked element of the union of the datasets, e.g. the median of the values in the datasets. We investigate protocols with sublinear computation and communication costs. In the two-party case, we show that the k th -ranked element can be computed in log k rounds, where the computation and communication costs of each round are O(log M), where log M is the number of bits needed to describe each element of the input data. The protocol can be made secure against a malicious adversary, and can hide the sizes of the original datasets. In the multi-party setting, we show that the k th -ranked element can be computed in log M rounds, with O(s log M) overhead per round, where s is the number of parties. The multi-party protocol can be used in the two-party case and can also be made secure against a malicious adversary.", "We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.", "In this paper we address the issue of privacy preserving data mining. Specifically, we consider a scenario in which two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. Our work is motivated by the need both to protect privileged information and to enable its use for research or other purposes. The above problem is a specific example of secure multi-party computation and, as such, can be solved using known generic protocols. However, data mining algorithms are typically complex and, furthermore, the input usually consists of massive data sets. The generic protocols in such a case are of no practical use and therefore more efficient protocols are required. We focus on the problem of decision tree learning with the popular ID3 algorithm. Our protocol is considerably more efficient than generic solutions and demands both very few rounds of communication and reasonable bandwidth.", "We consider the problem of computing the intersection of private datasets of two parties, where the datasets contain lists of elements taken from a large domain. This problem has many applications for online collaboration. We present protocols, based on the use of homomorphic encryption and balanced hashing, for both semi-honest and malicious environments. For lists of length k, we obtain O(k) communication overhead and O(k ln ln k) computation. The protocol for the semi-honest environment is secure in the standard model, while the protocol for the malicious environment is secure in the random oracle model. 
We also consider the problem of approximating the size of the intersection, show a linear lower-bound for the communication overhead of solving this problem, and provide a suitable secure protocol. Lastly, we investigate other variants of the matching problem, including extending the protocol to the multi-party setting as well as considering the problem of approximate matching." ] }
cs0609166
1640198638
We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm.
The breakthrough @cite_15 gives a general technique for converting any protocol into a private protocol with little communication overhead. This is not the end of the story, however, because the computation time may increase exponentially.
{ "cite_N": [ "@cite_15" ], "mid": [ "2069717895" ], "abstract": [ "A secure function evaluation protocol allows two parties to jointly compute a function f(x,y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: “any function f that can be computed using polynomial resources can be computed securely using polynomial resources” (where “resources” refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f . Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f . We propose a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f . We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, any function f that can be computed using communication complexity c can be can be computed securely using communication complexity that is polynomial in c and a security parameter''. We show several simple applications of this new methodology resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the Millionaires problem, where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation." ] }
cs0609166
1640198638
We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm.
Work on private approximations includes @cite_19 , which introduced the notion (as a conference paper in 2001) and gave several protocols. Some negative results were given in @cite_13 for approximations to NP-hard functions; more on NP-hard search problems appears in @cite_11 . Recently, @cite_18 gave a private approximation of the Euclidean norm that is central to our paper.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_13", "@cite_11" ], "mid": [ "2152590851", "1572949938", "", "2047515520" ], "abstract": [ "Approximation algorithms can sometimes provide efficient solutions when no efficient exact computation is known. In particular, approximations are often useful in a distributed setting where the inputs are held by different parties and may be extremely large. Furthermore, for some applications, the parties want to compute a function of their inputs securely without revealing more information than necessary. In this work, we study the question of simultaneously addressing the above efficiency and security concerns via what we call secure approximations.We start by extending standard definitions of secure (exact) computation to the setting of secure approximations. Our definitions guarantee that no additional information is revealed by the approximation beyond what follows from the output of the function being approximated. We then study the complexity of specific secure approximation problems. In particular, we obtain a sublinear-communication protocol for securely approximating the Hamming distance and a polynomial-time protocol for securely approximating the permanent and related #P-hard problems.", "In [12] a private approximation of a function f is defined to be another function F that approximates f in the usual sense, but does not reveal any information about x other than what can be deduced from f(x). We give the first two-party private approximation of the l2 distance with polylogarithmic communication. This, in particular, resolves the main open question of [12]. We then look at the private near neighbor problem in which Alice has a query point in 0,1 d and Bob a set of n points in 0,1 d, and Alice should privately learn the point closest to her query. We improve upon existing protocols, resolving open questions of [13,10]. Then, we relax the problem by defining the private approximate near neighbor problem, which requires introducing a notion of secure computation of approximations for functions that return sets of points rather than values. For this problem we give several protocols with sublinear communication.", "", "Many approximation algorithms have been presented in the last decades for hard search problems. The focus of this paper is on cryptographic applications, where it is desired to design algorithms which do not leak unnecessary information. Specifically, we are interested in private approximation algorithms -- efficient algorithms whose output does not leak information not implied by the optimal solutions to the search problems. Privacy requirements add constraints on the approximation algorithms; in particular, known approximation algorithms usually leak a lot of information.For functions, [, ICALP 2001] presented a natural requirement that a private algorithm should not leak information not implied by the original function. Generalizing this requirement to search problems is not straightforward as an input may have many different outputs. We present a new definition that captures a minimal privacy requirement from such algorithms -- applied to an input instance, it should not leak any information that is not implied by its collection of exact solutions. Although our privacy requirement seems minimal, we show that for well studied problems, as vertex cover and 3SAT, private approximation algorithms are unlikely to exist even for poor approximation ratios. 
Similar to [, STOC 2001], we define a relaxed notion of approximation algorithms that leak (little) information, and demonstrate the applicability of this notion by showing near optimal approximation algorithms for 3SAT that leak little information." ] }
cs0609166
1640198638
We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm.
Statistical work such as @cite_5 also addresses approximate summaries over large databases, but differs from our work in many parameters, such as the number of players and the allowable communication.
{ "cite_N": [ "@cite_5" ], "mid": [ "2169134473" ], "abstract": [ "We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering." ] }
cs0609166
1640198638
We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm.
There are many papers that address the Heavy Hitters problem and sketching in general, in a variety of contexts. Many of the needed ideas can be seen in @cite_8 , and other important papers include @cite_3 @cite_17 @cite_4 @cite_16 (a toy sketching example follows this record).
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_3", "@cite_16", "@cite_17" ], "mid": [ "2047424291", "", "2080745194", "2167973519", "2040063291" ], "abstract": [ "(MATH) A vector A of length N is defined implicitly, via a stream of updates of the form \"add 5 to A3.\" We give a sketching algorithm, that constructs a small sketch from the stream of updates, and a reconstruction algorithm, that produces a B-bucket piecewise-constant representation (histogram) H for A from the sketch, such that ||A—H||x(1+e)||A—Hopt|&#124, where the error ||A—H|| is either @math (absolute) or @math (root-mean-square) error. The time to process a single update, time to reconstruct the histogram, and size of the sketch are each bounded by poly(B,log(N),log||A,1 e. Our result is obtained in two steps. First we obtain what we call a robust histogram approximation for A, a histogram such that adding a small number of buckets does not help improve the representation quality significantly. From the robust histogram, we cull a histogram of desired accruacy and B buckets in the second step. This technique also provides similar results for Haar wavelet representations, under @math error. Our results have applications in summarizing data distributions fast and succinctly even in distributed settings.", "", "The frequency moments of a sequence containingmielements of typei, 1?i?n, are the numbersFk=?ni=1mki. We consider the space complexity of randomized algorithms that approximate the numbersFk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbersF0,F1, andF2can be approximated in logarithmic space, whereas the approximation ofFkfork?6 requiresn?(1)space. Applications to data bases are mentioned as well.", "Most database management systems maintain statistics on the underlying relation. One of the important statistics is that of the \"hot items\" in the relation: those that appear many times (most frequently, or more than some threshold). For example, end-biased histograms keep the hot items as part of the histogram and are used in selectivity estimation. Hot items are used as simple outliers in data mining, and in anomaly detection in networking applications.We present a new algorithm for dynamically determining the hot items at any time in the relation that is undergoing deletion operations as well as inserts. Our algorithm maintains a small space data structure that monitors the transactions on the relation, and when required, quickly outputs all hot items, without rescanning the relation in the database. With user-specified probability, it is able to report all hot items. Our algorithm relies on the idea of \"group testing\", is simple to implement, and has provable quality, space and time guarantees. Previously known algorithms for this problem that make similar quality and performance guarantees can not handle deletions, and those that handle deletions can not make similar guarantees without rescanning the database. Our experiments with real and synthetic data shows that our algorithm is remarkably accurate in dynamically tracking the hot items independent of the rate of insertions and deletions.", "This paper presents algorithms for tracking (approximate) join and self-join sizes in limited storage, in the presence of insertions and deletions to the data set(s). Such algorithms detect changes in join and self-join sizes without an expensive recomputation from the base data, and without the large space overhead required to maintain such sizes exactly. 
Query optimizers rely on fast, high-quality estimates of join sizes in order to select between various join plans, and estimates of self-join sizes are used to indicate the degree of skew in the data. For self-joins, we consider two approaches proposed in N. (The space complexity of approximating the frequency moments, J. Comput. System Sci.58 (1999), 137?147), which we denote tug-of-war and sample-count . We present fast algorithms for implementing these approaches, and extensions to handle deletions as well as insertions. We also report on the first experimental study of the two approaches, on a range of synthetic and real-world data sets. Our study shows that tug-of-war provides more accurate estimates for a given storage limit than sample-count, which in turn is far more accurate than a standard sampling-based approach. For example, tug-of-war needed only 4-256 memory words, depending on the data set, in order to estimate the self-join size to within a 15 relative error; on average, this is over 4 times (50 times) fewer memory words than needed by sample-count (standard sampling, resp.) to obtain a similar accuracy. For joins, we propose schemes based on maintaining a small signature of each relation independently, such that join sizes can be quickly and accurately estimated between any pair of relations using only these signatures. We show that taking random samples for join signatures can lead to an inaccurate estimation unless the sample size is quite large; moreover, we show that no other signature scheme can significantly improve upon sampling without further assumptions. These negative results are shown to hold even in the presence of sanity bounds. On the other hand, we present a fast join signature scheme based on tug-of-war signatures that provides guarantees on join size estimation as a function of the self-join sizes of the joining relations; this scheme can significantly improve upon the sampling scheme." ] }
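To make the sketching idea referenced above concrete, here is a minimal Count-Min-style sketch in Python. It is a generic illustration of linear sketching for approximate heavy hitters -- not the specific constructions of the cited papers and certainly not a private protocol -- and the width, depth and threshold are arbitrary choices for the example.

import random

class CountMinSketch:
    """Tiny Count-Min sketch: a linear, mergeable summary supporting insertions
    and deletions, from which item frequencies can be over-estimated."""
    def __init__(self, width=64, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.randrange(1 << 30) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        return [(r, hash((self.salts[r], item)) % self.width) for r in range(self.depth)]

    def update(self, item, delta=1):
        for r, c in self._cells(item):
            self.table[r][c] += delta

    def estimate(self, item):
        # Each row only over-counts because of collisions, so the minimum is tightest.
        return min(self.table[r][c] for r, c in self._cells(item))

# Approximate heavy hitters: report items whose estimated count clears a threshold.
sketch = CountMinSketch()
stream = ["a"] * 50 + ["b"] * 30 + list("cdefg")
for x in stream:
    sketch.update(x)
print(sorted(x for x in set(stream) if sketch.estimate(x) >= 0.1 * len(stream)))
# Expected output: ['a', 'b'] (rare items could sneak in only via unlucky collisions)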
cs0609122
2952623723
We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect.
Multiple antenna (MIMO) relay channels are studied in terms of ergodic capacity in @cite_47 and in terms of DMT in @cite_17 (the point-to-point MIMO DMT curve is recalled after this record). The latter considers the NAF protocol only, presents a lower bound on the DMT performance, and designs space-time block codes. This lower bound is not tight in general and is valid only if the number of relay antennas is less than or equal to the number of source antennas.
{ "cite_N": [ "@cite_47", "@cite_17" ], "mid": [ "2106801259", "2025889675" ], "abstract": [ "We study the capacity of multiple-input multiple- output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; and in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.", "In this work, we extend the nonorthogonal amplify-and-forward (NAF) cooperative diversity scheme to the multiple-input multiple-output (MIMO) channel. A family of space-time block codes for a half-duplex MIMO NAF fading cooperative channel with N relays is constructed. The code construction is based on the nonvanishing determinant (NVD) criterion and is shown to achieve the optimal diversity-multiplexing tradeoff (DMT) of the channel. We provide a general explicit algebraic construction, followed by some examples. In particular, in the single-relay case, it is proved that the Golden code and the 4times4 Perfect code are optimal for the single-antenna and two-antenna cases, respectively. Simulation results reveal that a significant gain (up to 10 dB) can be obtained with the proposed codes, especially in the single-antenna case" ] }
cs0609122
2952623723
We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect.
The multiple-access relay channel (MARC) is introduced in @cite_43 @cite_0 @cite_39 . In the MARC, the relay helps multiple sources simultaneously to reach a common destination. The DMT for the half-duplex MARC with single antenna nodes is studied in @cite_31 @cite_37 @cite_34 . In @cite_31 , the authors find that DDF is optimal for low multiplexing gains; however, this protocol remains suboptimal for high multiplexing gains, analogous to the single-source relay channel. This region, where DDF is suboptimal, is achieved by the multiple access amplify and forward (MAF) protocol @cite_37 @cite_34 .
{ "cite_N": [ "@cite_37", "@cite_39", "@cite_0", "@cite_43", "@cite_31", "@cite_34" ], "mid": [ "2139934818", "2097739512", "2098567664", "", "2121284405", "" ], "abstract": [ "This paper studies the diversity-multiplexing trade-off for the multiaccess relay channel (MARC) with static, flat fading. It develops two simple strategies, namely multiaccess amplify-and-forward (MAF) and multiaccess decode-and-forward (MDF), that help the users gain the benefit of cooperative diversity without changes in their devices. Results suggest that in the regime of light system loads, both strategies offer improved performance to each user as if no other users interfere or contend for the relay. However, they are inferior to the optimal dynamic decode forward (DDF) protocol in this regime. In the regime of heavy loads, the MARC with MAF offers better performance, and MARC with MDF degenerates into the multiaccess channel (MAC) without a relay. Moreover, the MAF protocol is optimal in the regime of high multiplexing.", "A three-tier hierarchical wireless sensor network is considered that consists of a cluster of sensors, an intermediate relay with better computing and communication capabilities than the sensors, and a central server or access point. Such a network can be modeled as a multiple-access relay channel (MARC) with additive white Gaussian noise and fading. Capacity bounds for this network are presented with and without constraints on simultaneous reception and transmission by the relay. The results identify cooperative strategies between the relay and sensors for increasing network capacity. These strategies also preserve limited battery resources by eliminating the need for cooperation between sensors.", "Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.", "", "In this correspondence, the performance of the automatic repeat request-dynamic decode and forward (ARQ-DDF) cooperation protocol is analyzed in two distinct scenarios. The first scenario is the multiple access relay channel where a single relay is dedicated to simultaneously help two multiple access users. 
For this setup, it is shown that the ARQ-DDF protocol achieves the channel's optimal diversity multiplexing tradeoff (DMT). The second scenario is the cooperative vector multiple access channel where two users cooperate in delivering their messages to a destination equipped with two receiving antennas. For this setup, a new variant of the ARQ-DDF protocol is developed where the two users are purposefully instructed not to cooperate in the first round of transmission. Lower and upper bounds on the achievable DMT are then derived. These bounds are shown to converge to the optimal tradeoff as the number of transmission rounds increases.", "" ] }
cs0609122
2952623723
We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect.
When multiple single antenna relays are present, the papers @cite_12 @cite_9 @cite_49 @cite_27 @cite_32 @cite_22 @cite_30 show that diversity gains similar to multi-input single-output (MISO) or single-input multi-output (SIMO) systems are achievable for Rayleigh fading channels. Similarly, @cite_3 @cite_48 @cite_18 @cite_29 upper bound the system behavior by MISO or SIMO if all links have Rayleigh fading (the corresponding tradeoff curve is recalled after this record). In other words, relay systems behave similarly to either transmit or receive antenna arrays. The cooperative system with multiple sources and multiple destinations is first analyzed in @cite_2 in terms of achievable rates only, where the authors compare a two-source two-destination cooperative system with a @math MIMO system and show that the former is multiplexing gain limited by 1, whereas the latter has a maximum multiplexing gain of 2.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_48", "@cite_9", "@cite_29", "@cite_32", "@cite_3", "@cite_27", "@cite_49", "@cite_2", "@cite_12" ], "mid": [ "2113346794", "2123517571", "2106453003", "2136209931", "2153727902", "2128295617", "", "2152121970", "", "2148158411", "2089318831", "2120448104" ], "abstract": [ "We examine a network consisting of one source, one destination and two amplifying and forwarding relays and consider a scenario in which destination and relays can have various processing limitations. For all possible diversity combining schemes at the relays and at the destination, we find diversity order results analytically and confirm our findings through numerical calculations of bit error rate (BER) versus signal-to-noise-ratio (SNR) curves. We compare our results with direct transmission, well known transmit diversity methods and traditional multihop transmission and conclude that diversity reception in multihop networks provides the lowest error rate.", "We consider a wireless network with multiple terminals (m spl ges 2) as well as multiple receive antennas at the destination (N spl ges 1) and obtain diversity multiplexing tradeoff upper bounds for some recently proposed cooperative diversity protocols. The results obtained show that irrespective of the coding scheme employed none of these protocols can yield a diversity order greater than N+m-1. Further, the tradeoff bounds indicate that the protocol yielding the best diversity order may change with the multiplexing gain. Using these bounds we propose a switching strategy that yields the best tradeoff bound which also significantly outperforms the optimal tradeoff curve for the baseline noncooperative system at all possible multiplexing gains.", "We investigate reliability exchange schemes that allow a group of radios to act as a distributed antenna array in which radio links are used to exchange information between the various radios. We consider a scenario in which multiple nodes receive independent copies of the same message. Each node independently decodes its message and then participates in a process of \"smart\" information exchange with the other nodes. Each node transmits its estimates of the a posteriori probabilities of some set of bits, and these estimates are used as a priori information by other receivers. The a priori information is used to perform maximum a posteriori decoding on the received sequence. This process of decoding and information exchange can be repeated several times. Simulation results show that the performance can be significantly improved by careful selection of the bits for which information is exchanged.", "We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. 
The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1 N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.", "Closed form expressions for the statistics of the harmonic mean of two independent exponential variates are presented. These statistical results are then applied to study the performance of wireless communication systems with non-regenerative relays over flat Rayleigh fading channels. It is shown that these results can either be exact or tight lower bounds on the performance of these systems depending on the choice of the relay gain. More specifically, outage probability formulas for noise limited systems are obtained. Furthermore, outage capacity and bit error rate (BER) expressions for binary differential phase shift keying are derived. Finally, comparisons between regenerative and non-regenerative systems are presented. Numerical results show that the former systems clearly outperform the latter ones for low average signal-to-noise-ratio (SNR). They also show that the two systems have similar performance at high average SNR.", "Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this \"best\" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). 
The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.", "", "We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another. We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols.", "", "When mobiles cannot support multiple antennas due to size or other constraints, conventional space-time coding cannot be used to provide uplink transmit diversity. To address this limitation, the concept of cooperation diversity has been introduced, where mobiles achieve uplink transmit diversity by relaying each other's messages. A particularly powerful variation of this principle is coded cooperation. Instead of a simple repetition relay, coded cooperation partitions the codewords of each mobile and transmits portions of each codeword through independent fading channels. This paper presents two extensions to the coded cooperation framework. First, we increase the diversity of coded cooperation in the fast-fading scenario via ideas borrowed from space-time codes. We calculate bounds for the bit- and block-error rates to demonstrate the resulting gains. Second, since cooperative coding contains two code components, it is natural to apply turbo codes to this framework. We investigate the application of turbo codes in coded cooperation and demonstrate the resulting gains via error bounds and simulations.", "At high SNR the capacity of a point-to point MIMO system with N T transmit antenna and NR receive antenna is min N T, NR log(SNR) + O(1). The factor in front of the log is called the multiplexing gain. In this paper we consider a network with 2N nodes (N source destination pairs) that each have only a single antenna. These single antenna nodes could cooperate to form larger virtual arrays, usually called cooperative diversity, user cooperation, or coded cooperation. The question we ask is: how large a multiplexing gain is possible. 
We prove that for N = 2 the multiplexing gain is 1, and consider generalizations to larger networks", "This paper presents theoretical characterizations and analysis for the physical layer of multihop wireless communications channels. Four channel models are considered and developed: the decoded relaying multihop channel; the amplified relaying multihop channel; the decoded relaying multihop diversity channel; and the amplified relaying multihop diversity channel. Two classifications are discussed: decoded relaying versus amplified relaying, and multihop channels versus multihop diversity channels. The channel models are compared, through analysis and simulations, with the \"singlehop\" (direct transmission) reference channel on the basis of signal-to-noise ratio, probability of outage, probability of error, and optimal power allocation. Each of the four channel models is shown to outperform the singlehop reference channel under the condition that the set of intermediate relaying terminals is selected intelligently. Multihop diversity channels are shown to outperform multihop channels. Amplified relaying is shown to outperform decoded relaying despite noise propagation. This is attributed to the fact that amplified relaying does not suffer from the error propagation which limits the performance of decoded relaying channels to that of their weakest link." ] }
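To make the MISO/SIMO comparison above explicit, recall -- standard background under Rayleigh fading, not a result of the cited works -- that an n-antenna MISO or SIMO link with a single antenna at the other end has the tradeoff

\[
  d_{\mathrm{MISO}}(r) \;=\; d_{\mathrm{SIMO}}(r) \;=\; n\,(1-r)^{+}, \qquad 0 \le r \le 1 ,
\]

so, in line with the upper bounds cited above, a single-antenna source-destination pair assisted by m single-antenna relays can at best mimic the (m+1)-branch curve: diversity is capped at m+1 and the multiplexing gain at one.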
quant-ph0608199
2951547182
Assume that two distant parties, Alice and Bob, as well as an adversary, Eve, have access to (quantum) systems prepared jointly according to a tripartite state. In addition, Alice and Bob can use local operations and authenticated public classical communication. Their goal is to establish a key which is unknown to Eve. We initiate the study of this scenario as a unification of two standard scenarios: (i) key distillation (agreement) from classical correlations and (ii) key distillation from pure tripartite quantum states. Firstly, we obtain generalisations of fundamental results related to scenarios (i) and (ii), including upper bounds on the key rate. Moreover, based on an embedding of classical distributions into quantum states, we are able to find new connections between protocols and quantities in the standard scenarios (i) and (ii). Secondly, we study specific properties of key distillation protocols. In particular, we show that every protocol that makes use of pre-shared key can be transformed into an equally efficient protocol which needs no pre-shared key. This result is of practical significance as it applies to quantum key distribution (QKD) protocols, but it also implies that the key rate cannot be locked with information on Eve's side. Finally, we exhibit an arbitrarily large separation between the key rate in the standard setting where Eve is equipped with quantum memory and the key rate in a setting where Eve is only given classical memory. This shows that assumptions on the nature of Eve's memory are important in order to determine the correct security threshold in QKD.
The first to spot a relation between the classical and the quantum development were Gisin and Wolf; in analogy to bound entanglement in quantum information theory, they conjectured the existence of bound information, namely classical correlations that can only be created from key but from which no key can be distilled @cite_37 . Their conjecture remains unsolved, but it has stimulated the community in its search for an answer.
{ "cite_N": [ "@cite_37" ], "mid": [ "1518891070" ], "abstract": [ "After carrying out a protocol for quantum key agreement over a noisy quantum channel, the parties Alice and Bob must process the raw key in order to end up with identical keys about which the adversary has virtually no information. In principle, both classical and quantum protocols can be used for this processing. It is a natural question which type of protocols is more powerful. We show that the limits of tolerable noise are identical for classical and quantum protocols in many cases. More specifically, we prove that a quantum state between two parties is entangled if and only if the classical random variables resulting from optimal measurements provide some mutual classical information between the parties. In addition, we present evidence which strongly suggests that the potentials of classical and of quantum protocols are equal in every situation. An important consequence, in the purely classical regime, of such a correspondence would be the existence of a classical counterpart of so-called bound entanglement, namely \"bound information\" that cannot be used for generating a secret key by any protocol. This stands in sharp contrast to what was previously believed." ] }
cs0608031
1573220314
With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals.
The RF-based secure positioning technique mainly discussed in this paper was proposed in @cite_11 @cite_2 . Distance bounding protocols that use bidirectional communication to upper-bound a claimant's distance were first introduced in @cite_6 , and the proposal in @cite_2 is based on the protocols of @cite_6 . For easier implementation, a secure positioning technique with a distance bounding protocol using ultrasound and radio communication was proposed in @cite_4 , but it is vulnerable to replay attacks due to its use of ultrasound. In @cite_14 , a distance bounding protocol for RFID is proposed; it uses duplex radio communication and is designed to keep the processing load on the RFID tag as small as possible.
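To make the timing argument above concrete, the following is a minimal Python sketch (illustrative names only; it is not taken from any of the cited protocol specifications) of the arithmetic by which a verifier turns one timed challenge/response round trip into an upper bound on the claimant's distance:

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_upper_bound(t_challenge_sent: float, t_response_received: float,
                         processing_time: float = 0.0) -> float:
    """Upper bound on the claimant's distance implied by one timed challenge/response.

    The claimant cannot reply before the challenge reaches it, so it can make itself
    look farther away (by delaying) but never closer. Secure designs keep
    processing_time negligible and fixed (e.g., a single-bit XOR), since any
    overstated processing time that is subtracted would let a claimant appear
    closer than it really is.
    """
    time_of_flight = max(t_response_received - t_challenge_sent - processing_time, 0.0) / 2.0
    return C * time_of_flight

# Example: a 200 ns round trip with negligible processing bounds the claimant
# to within roughly 30 m of the verifier.
print(distance_upper_bound(0.0, 200e-9))  # ~29.98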
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6", "@cite_2", "@cite_11" ], "mid": [ "2127785555", "2164947865", "1543653153", "2159796097", "" ], "abstract": [ "Radio-frequency identification tokens, such as contactless smartcards, are vulnerable to relay attacks if they are used for proximity authentication. Attackers can circumvent the limited range of the radio channel using transponders that forward exchanged signals over larger distances. Cryptographic distance-bounding protocols that measure accurately the round-trip delay of the radio signal provide a possible countermeasure. They infer an upper bound for the distance between the reader and the token from the fact that no information can propagate faster than at the speed of light. We propose a new distance-bounding protocol based on ultra-wideband pulse communication. Aimed at being implementable using only simple, asynchronous, low-power hardware in the token, it is particularly well suited for use in passive low-cost tokens, noisy environments and high-speed applications.", "With the growing prevalence of sensor and wireless networks comes a new demand for location-based access control mechanisms. We introduce the concept of secure location verification, and we show how it can be used for location-based access control. Then, we present the Echo protocol, a simple method for secure location verification. The Echo protocol is extremely lightweight: it does not require time synchronization, cryptography, or very precise clocks. Hence, we believe that it is well suited for use in small, cheap, mobile devices.", "It is often the case in applications of cryptographic protocols that one party would like to determine a practical upper-bound on the physical distance to the other party. For instance, when a person conducts a cryptographic identification protocol at an entrance to a building, the access control computer in the building would like to be ensured that the person giving the responses is no more than a few meters away. The \"distance bounding\" technique we introduce solves this problem by timing the delay between sending out a challenge bit and receiving back the corresponding response bit. It can be integrated into common identification protocols. The technique can also be applied in the three-party setting of \"wallets with observers\" in such a way that the intermediary party can prevent the other two from exchanging information, or even developing common coinflips.", "So far, the problem of positioning in wireless networks has been studied mainly in a nonadversarial setting. In this paper, we analyze the resistance of positioning techniques to position and distance spoofing attacks. We propose a mechanism for secure positioning of wireless devices, that we call verifiable multilateration. We then show how this mechanism can be used to secure positioning in sensor networks. We analyze our system through simulations.", "" ] }
cs0608031
1573220314
With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals.
A protocol called Temporal Leashes is proposed in @cite_0 for detecting a specific attack known as the wormhole attack. The protocol detects the attack by checking the packet transmission time, measured with the tightly synchronized clocks of the sender and the receiver.
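A rough sketch of the timing check behind such a temporal leash (illustrative only, assuming an authenticated sender timestamp and a known bound on clock synchronization error; the authentication machinery of the actual TIK protocol is not shown):

C = 299_792_458.0  # speed of light in vacuum, m/s

def within_temporal_leash(t_sent: float, t_received: float,
                          max_distance_m: float, clock_sync_error_s: float) -> bool:
    """Accept a packet only if it could not have travelled farther than allowed.

    t_sent is the timestamp the sender placed in the packet, t_received is read
    from the receiver's own clock; the two clocks are assumed synchronized to
    within clock_sync_error_s, which is granted to the packet in its favour.
    """
    max_travel_time = max_distance_m / C + clock_sync_error_s
    return (t_received - t_sent) <= max_travel_time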
{ "cite_N": [ "@cite_0" ], "mid": [ "2157921329" ], "abstract": [ "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies." ] }
cs0608031
1573220314
With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals.
On the other hand, there are location verification protocols that rely heavily on the physical properties of broadcast radio waves @cite_8 @cite_15 . The proposal in @cite_8 depends on the intensity and the directivity of broadcast radio waves for location verification. The proposal in @cite_15 , which uses duplex radio communication, assumes spatially isotropic propagation of radio waves from the mobile terminal's omni-directional antenna and exploits the resulting geometric relation for location verification. Both proposals, however, are vulnerable to malicious modification of the assumed physical properties of the radio waves. There are many ways, especially for a mobile terminal user, to physically modify the radio waves, e.g., by fraudulently using a directional antenna on the mobile terminal, or by surrounding the mobile terminal with carefully chosen media or materials.
{ "cite_N": [ "@cite_15", "@cite_8" ], "mid": [ "2162413906", "2150212044" ], "abstract": [ "Authentication in conventional networks (like the Internet) is usually based upon something you know (e.g., a password), something you have (e.g., a smartcard) or something you are (biometrics). In mobile ad-hoc networks, location information can also be used to authenticate devices and users. We focus on how a provers can securely show that (s)he is within a certain distance to a verifier. Brands and Chaum proposed the distance bounding protocol as a secure solution for this problem. However, this protocol is vulnerable to a so-called \"terrorist fraud attack\". In this paper, we explain how to modify the distance bounding protocol to make it resistant to this kind of attacks. Recently, two other secure distance bounding protocols were published. We discuss the properties of these protocols and show how to use it as a building block in a location verification scheme", "Secure location verification is a recently stated problem that has a number of practical applications. The problem requires a wireless sensor network to confirm that a potentially malicious prover is located in a designated area. The original solution to the problem, as well as solutions to related problems, exploits the difference between propagation speeds of radio and sound waves to estimate the position of the prover. In this paper, we propose a solution that leverages the broadcast nature of the radio signal emitted by the prover and the distributed topology of the network. The idea is to separate the functions of the sensors. Some sensors are placed such that they receive the signal from the prover if it is inside the protected area. The others are positioned so that they can only receive the signal from the prover outside the area. Hence, the latter sensors reject the prover if they hear its signal. Our solution is versatile and it deals with provers using either omni-directional or directional propagation of radio signals without requiring any special hardware besides a radio transceiver. We estimate the bounds on the number of sensors required to protect the areas of various shapes and extend our solution to handle complex radio signal propagation, optimize sensor placement, and operate without precise topology information" ] }
cs0608051
2951605536
Inspired by the classical theory of modules over a monoid, we give a first account of the natural notion of module over a monad. The associated notion of morphism of left modules ("Linear" natural transformations) captures an important property of compatibility with substitution, in the heterogeneous case where "terms" and variables therein could be of different types as well as in the homogeneous case. In this paper, we present basic constructions of modules and we show examples concerning in particular abstract syntax and lambda-calculus.
We have introduced the notion of module over a monad and, more importantly, the notion of linearity for transformations among such modules, and we have tried to show that this notion is ubiquitous wherever syntax and semantics are concerned. Our thesis is that the point of view of modules opens new room for initial algebra semantics, as we sketched for the typed @math -calculus (see also @cite_5 ).
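For readers unfamiliar with the terminology, here is a hedged restatement of the standard definitions involved (not quoted from the paper): given a monad (T, \eta, \mu) on a category, a left module is a functor M equipped with an action \sigma : M T \Rightarrow M satisfying the usual associativity and unit laws, and a morphism of modules \tau : M \Rightarrow N (a "linear" natural transformation) is one compatible with the actions:

\[
  \sigma \circ M\mu \;=\; \sigma \circ \sigma T, \qquad
  \sigma \circ M\eta \;=\; \mathrm{id}_M, \qquad
  \sigma^{N} \circ \tau T \;=\; \tau \circ \sigma^{M}.
\]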
{ "cite_N": [ "@cite_5" ], "mid": [ "1816420085" ], "abstract": [ "A process for the preparation of a textured protein-containing material in which an amylolytic fungus is grown on a moist starch based substrate which includes a nitrogen source assimilable by the fungus the substrate being provided in the form of small, partially gelatinized particles. During growth, the fungus degrades and utilizes a large proportion of the starch, resulting in a dense matrix of closely interwoven mycelia, randomly dispersed with substances containing the residual starch or starch degradation products. On the denaturation of the fungal mycelium, the product assumes a tough but resilient texture and when diced or minced has a similar appearance to meat." ] }
cs0607040
1663216048
This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both Shared Memory and Beowulf systems are presented and analyzed.
A rich body of research has been developed to investigate methodologies for the exploitation of or-parallelism from Prolog executions on SMPs. Comprehensive surveys describing and comparing these methodologies have appeared, e.g., @cite_44 @cite_32 @cite_12 .
{ "cite_N": [ "@cite_44", "@cite_32", "@cite_12" ], "mid": [ "27834061", "1985039455", "" ], "abstract": [ "From the Publisher: Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of Logic Programming Languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in Logic Programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs will be useful for people implementing parallel logic programming systems, parallel symbolic systems Parallel AI systems, and parallel theorem proving systems. This work will also be useful to people who wish to learn about implementation of parallel logic programming systems.", "Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.", "" ] }
cs0607040
1663216048
This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both Shared Memory and Beowulf systems are presented and analyzed.
A theoretical analysis of the properties of different methodologies has been presented in @cite_33 @cite_15 . These works abstract the environment representation problem as a data structure problem on dynamic trees. They identify unavoidable overheads in the dynamic management of environments in a parallel setting, and recognize methods with constant-time environment creation and access as optimal methods for environment representation. Methods such as stack-copying @cite_14 , binding arrays @cite_25 , and recomputation @cite_30 meet such requirements.
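As an illustration of the constant-time requirement mentioned above (a toy Python sketch, not an implementation of any of the cited systems), a binding-array-style environment gives each or-parallel worker a private array indexed by a counter assigned to a variable at creation time, so that creating, binding, dereferencing, and untrailing are all O(1):

UNBOUND = object()   # sentinel for an unbound conditional variable

class WorkerEnvironment:
    """Toy per-worker binding array: O(1) create/bind/lookup, trail for backtracking."""

    def __init__(self):
        self.binding_array = []   # slot i holds the binding of the variable with offset i
        self.trail = []           # offsets to reset when backtracking

    def new_variable(self) -> int:
        self.binding_array.append(UNBOUND)
        return len(self.binding_array) - 1    # the variable's fixed offset

    def bind(self, offset: int, value) -> None:
        self.binding_array[offset] = value    # O(1), no search-tree traversal
        self.trail.append(offset)

    def lookup(self, offset: int):
        return self.binding_array[offset]     # O(1)

    def backtrack_to(self, trail_mark: int) -> None:
        while len(self.trail) > trail_mark:
            self.binding_array[self.trail.pop()] = UNBOUND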
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_15", "@cite_25" ], "mid": [ "2063476219", "2016462478", "1983963066", "2018331535", "2112482891" ], "abstract": [ "Previous investigations have suggested the use of multiple communicating processors for executing logic programs. However, this strategy lacks efficiency due to competition for memory and communication bandwidth, and this is a problem that has been largely neglected. In this paper we propose a realistic model for executing logic programs with low overhead on multiple processors. Our proposal does not involve shared memory or copying computation state between processors. The model organises computations over the nondeterministic proof tree so that different processors explore unique deterministic computation paths independently, in order to exploit the “OR-parallelism” present in a program. We discuss the advantages of this approach over previous ones, and suggest control strategies for making it effective in practice.", "Muse (Multi-sequential Prolog engines) is a simple and efficient approach to Or-parallel execution of Prolog programs. It is based on having several sequential Prolog engines, each with its local address space, and some shared memory space. It is currently implemented on a 7-processors machine with local shared memory constructed at SICS, a 16-processors Sequent Symmetry, a 96-processors BBN Butterfly I, and a 45-processors BBN Butterfly II. The sequential SICStus Prolog system has been adapted to Or-parallel implementation. Extra overhead associated with this adaptation is very low in comparison with the other approaches. The speed-up factor is very close to the number of processors in the system for a large class of problems. The goal of this paper is to present the Muse execution model, some of its implementation issues, a variant of Prolog suitable for multiprocessor implementations, and some experimental results obtained from two different multiprocessor systems.", "We formalize the implementation mechanisms required to support or-parallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operationsn performed on the data structure, for the problem of guaranteeing correct semantics during or-parallel execution. The lower bound Ω(lgn) formally proves the impossibility of achieving an ideal implementation (i.e., parallel implementation with constant time overhead per operation). We also derive an upper bound of ( O ( [3] n ) ) per operation for or-parallel execution. This upper bound is far better than what has been achieved in the existing or-parallel systems and indicates that faster implementations may be feasible.", "The single most serious issue in the development of a parallel implementation of non-deterministic programming languages and systems (e.g., logic programming, constraint programming, search-based artificial intelligence systems) is the dynamic management of the binding environments-i.e., the ability to associate with each parallel computation the correct set of bindings values representing the solution generated by that particular branch of the non-deterministic computation. The problem has been abstracted and formally studied previously (ACM Trans. Program. Lang. Syst. 15(4) (1993) 659; New Generation Comput. 17(3) (1999) 285), but to date only relatively inefficient data structures (ACM Trans. Program. Lang. Syst. (2002); New Generation Comput. 17(3) (1999) 285; J. Funct. Logic Program. 
Special issue #1 (1999)) have been developed to solve it. We provide a very efficient solution to the problem (O(lgn) per operation). This is a significant improvement over previously best known @W(n3) solution. Our solution is provably optimal for the pointer machine model. We also show how the solution can be extended to handle the abstraction of search problems in object-oriented systems, with the same time complexity.", "1. High Performance Systems. An Example Program: Matrix Multiplication. Structure of a Compiler. 2. Programming Language Features. Languages for High Performance. Sequential and Parallel Loops. Roundoff Error. 3. Basic Graph Concepts. Sets, Tuples, Logic. Graphs. Control Dependence. 4. Review of Linear Algebra. Real Vectors and Matrices. Integer Matrices and Lattices. Linear System of Equations. System of Integer Equations. Systems of Linear Inequalities. Systems of Integer Linear Inequalities. Extreme Values of Affine Functions. 5. Data Dependence. Data Dependence in Loops. Data Dependence in Conditionals. Data Dependence in Parallel Loops. Program Dependence Graph. 6. Scalar Analysis with Factored Use-Def Chains. Constructing Factored Use-Def Chains. FUD Chains for Arrays. Finding All Reaching Definitions. Implicit References in FUD Chains. InductionVariables Using FUD Chains. Constant Propagation with FUD Chains. Data Dependence for Scalars. 7. Data Dependence Analysis for Arrays. Building the Dependence System. Dependence System Solvers. General Solver. Summary of Solvers. Complications. Run-time Dependence Testing. 8. Other Dependence Problems. Array Region Analysis. Pointer Analysis. I O Dependence. Procedure Calls. Interprocedural Analysis. 9. Loop Restructuring. Simpile Transformations. Loop Fusion. Loop Fission. Loop Reversal. Loop Interchanging. Loop Skewing. Linear Loop Transformations. Strip-Mining. Loop Tiling. Other Loop Transformations. Interprocedural Transformations. 10. Optimizing for Locality. Single Reference to Each Array. Multiple References. General Tiling. Fission and Fusion for Locality. 11. Concurrency Analysis. Code for Concurrent Loops. Concurrency from Sequential Loops. Concurrency from Parallel Loops. Nested Loops. Roundoff Error. Exceptions and Debuggers. 12. Vector Analysis. Vector Code. Vector Code from Sequential Loops. Vector Code from Forall Loops. Nested Loops. Roundoff Error, Exceptions, and Debuggers. Multivector Computers. 13. Message-Passing Machines. SIMD Machines. MIMD Machines. Data Layout. Parallel Code for Array Assignment. Remote Data Access. Automatic Data Layout. Multiple Array Assignments. Other Topics. 14. Scalable Shared-Memory Machines. Global Cache Coherence. Local Cache Coherence. Latency Tolerant Machines. Glossary. References. Author Index. Index. 0805327304T04062001" ] }
cs0607079
2086126338
The length-based approach is a heuristic for solving randomly generated equations in groups that possess a reasonably behaved length function. We describe several improvements of the previously suggested length-based algorithms, which make them applicable to Thompson's group with significant success rates. In particular, this shows that the Shpilrain-Ushakov public key cryp- tosystem based on Thompson's group is insecure, and suggests that no practical public key cryp- tosystem based on the difficulty of solving an equation in this group can be secure.
While we were finalizing our paper for publication, a very elegant specialized attack on the same cryptosystem was announced by Matucci @cite_15 . The main contribution of the present paper is thus the generalization of the length-based algorithms to make them applicable to a wider class of groups. Moreover, while our general attack can be easily adapted to other possible cryptosystems based on Thompson's group, this may not be the case for Matucci's specialized methods.
{ "cite_N": [ "@cite_15" ], "mid": [ "1530291176" ], "abstract": [ "The present invention relates to a method for the determination of the electron density in a part volume in a patient by means of an X-ray tube, said tube comprising an anode symmetrical with respect to rotation as an electron beam rotates relative to the axis of rotation. As a result an X-ray emission from several points is obtained. By means of the scattered X-rays the electron densities can be measured." ] }
cs0606044
2951892051
In set-system auctions , there are several overlapping teams of agents, and a task that can be completed by any of these teams. The buyer's goal is to hire a team and pay as little as possible. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this setting. Informally, the frugality ratio is the ratio of the total payment of a mechanism to perceived fair cost. In this paper, we study this together with alternative notions of fair cost, and how the resulting frugality ratios relate to each other for various kinds of set systems. We propose a new truthful polynomial-time auction for the vertex cover problem (where the feasible sets correspond to the vertex covers of a given graph), based on the local ratio algorithm of Bar-Yehuda and Even. The mechanism guarantees to find a winning set whose cost is at most twice the optimal. In this situation, even though it is NP-hard to find a lowest-cost feasible set, we show that local optimality of a solution can be used to derive frugality bounds that are within a constant factor of best possible. To prove this result, we use our alternative notions of frugality via a bootstrapping technique, which may be of independent interest.
Vertex-cover auctions have been studied in the past by Talwar @cite_9 and Calinescu @cite_3 . Both of these papers are based on the definition of frugality ratio used in @cite_5 ; as mentioned before, this means that their results only apply to bipartite graphs. Talwar @cite_9 shows that the frugality ratio of VCG is at most @math . However, since finding the cheapest vertex cover is an NP-hard problem, the VCG mechanism is computationally infeasible. The first (and, to the best of our knowledge, only) paper to investigate polynomial-time truthful mechanisms for vertex cover is @cite_3 . That paper studies an auction that is based on the greedy allocation algorithm, which has an approximation ratio of @math . While the main focus of @cite_3 is the more general set cover problem, the results of @cite_3 imply a frugality ratio of @math for vertex cover. Our results improve on those of @cite_9 as our mechanism is polynomial-time computable, as well as on those of @cite_3 , as our mechanism has a better approximation ratio, and we prove a stronger bound on the frugality ratio; moreover, this bound also applies to the mechanism of @cite_3 .
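Since the local ratio algorithm of Bar-Yehuda and Even is the allocation rule the auction above is reported to build on, here is a minimal Python sketch of that allocation step (the payment rule that makes the mechanism truthful is not shown; names are illustrative):

def local_ratio_vertex_cover(edges, weights):
    """Bar-Yehuda--Even local-ratio 2-approximation for weighted vertex cover.

    edges   -- iterable of (u, v) pairs
    weights -- dict: vertex -> non-negative cost (here, the agents' bids)
    Returns a vertex cover of total weight at most twice the minimum.
    """
    residual = dict(weights)
    for u, v in edges:
        delta = min(residual[u], residual[v])
        residual[u] -= delta    # "pay" the same local amount to both endpoints,
        residual[v] -= delta    # driving at least one of them to zero
    # Every edge now has at least one endpoint with zero residual weight,
    # so the zero-residual vertices form a feasible cover.
    return {v for v, r in residual.items() if r == 0}

# Example: the path a-b-c with unit weights yields the cover {'a', 'b'} of weight 2,
# versus the optimal cover {'b'} of weight 1.
print(local_ratio_vertex_cover([("a", "b"), ("b", "c")], {"a": 1, "b": 1, "c": 1}))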
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_3" ], "mid": [ "2061256213", "1537430179", "2046744707" ], "abstract": [ "We consider the problem of selecting a low-cost s - t path in a graph where the edge costs are a secret, known only to the various economic agents who own them. To solve this problem, Nisan and Ronen applied the celebrated Vickrey-Clarke-Groves (VCG) mechanism, which pays a premium to induce the edges so as to reveal their costs truthfully. We observe that this premium can be unacceptably high. There are simple instances where the mechanism pays Θ(n) times the actual cost of the path, even if there is an alternate path available that costs only (1 p e) times as much. This inspires the frugal path problem, which is to design a mechanism that selects a path and induces truthful cost revelation, without paying such a high premium. This article contributes negative results on the frugal path problem. On two large classes of graphs, including those having three node-disjoint s - t paths, we prove that no reasonable mechanism can always avoid paying a high premium to induce truthtelling. In particular, we introduce a general class of min function mechanisms, and show that all min function mechanisms can be forced to overpay just as badly as VCG. Meanwhile, we prove that every truthful mechanism satisfying some reasonable properties is a min function mechanism. Our results generalize to the problem of hiring a team to complete a task, where the analog of a path in the graph is a subset of the agents constituting a team capable of completing the task.", "The celebrated Vickrey-Clarke-Grove(VCG) mechanism induces selfish agents to behave truthfully by paying them a premium. In the process, it may end up paying more than the actual cost to the agents. For the minimum spanning tree problem, if the market is \"competitive\", one can show that VCG never pays too much. On the other hand, for the shortest s-t path problem, Archer and Tardos [5] showed that VCG can overpay by a factor of ?(n). A natural question that arises then is: For what problems does VCG overpay by a lot? We quantify this notion of overpayment, and show that the class of instances for which VCG never overpays is a natural generalization of matroids, that we call frugoids. We then give some sufficient conditions to upper bound and lower bound the overpayment in other cases, and apply these to several important combinatorial problems. We also relate the overpayment in an suitable model to the locality ratio of a natural local search procedure.", "In a STACS 2003 paper, Talwar analyzes the overpayment the VCG mechanism incurs for ensuring truthfulness in auctions. Among other results, he studies k-Set Cover (given a universe U and a collection of sets S 1 , S 2 , ? , S m , each having a cost c ( S i ) and at most k elements of U, find a minimum cost subcollection - a cover - whose union equals U) and shows that the payment of the VCG mechanism is at most k ? c ( OPT ' ) , where OPT ' is the best cover disjoint from the optimum cover OPT . The VCG mechanism requires finding an optimum cover. For k ? 3 , k-Set Cover is known to be NP-hard, and thus truthful mechanisms based on approximation algorithms are desirable. We show that the payment incurred by two mechanisms based on approximation algorithms (including the Greedy algorithm) is bounded by ( k - 1 ) c ( OPT ) + k ? c ( OPT ' ) . 
The same approximation algorithms have payment bounded by k ( c ( OPT ) + c ( OPT ' ) ) when applied to more general set systems, which include k-Polymatroid Cover, a problem related to Steiner Tree computations. If q is such that an element in a k-Set Cover instance appears in at most q sets, we show that the total payment based on our algorithms is bounded by q ? k 2 times the total payment of the VCG mechanism." ] }
cs0606124
1498463151
In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs.
Both of these problems have many practical applications; in particular, graph isomorphism has received a lot of attention in the area of computer vision. Images or objects can be represented as graphs, and a weighted graph can be used to formulate a structural description of an object @cite_24 . There have been two main approaches to solving graph isomorphism: state-space construction with searching, and nonlinear optimization. The first method consists of building the state space, which can then be searched. This method has an exponential running time in the worst case, but by employing heuristics the search can be reduced to a low-order polynomial for many types of graphs @cite_20 @cite_9 . Within the second approach (nonlinear optimization), the most successful techniques have been relaxation labeling @cite_27 , neural networks @cite_18 , linear programming @cite_8 , eigendecomposition @cite_0 , genetic algorithms @cite_4 , and Lagrangian relaxation @cite_21 .
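As one concrete instance of the nonlinear-optimization family listed above, the following is a hedged Python sketch in the spirit of the eigendecomposition approach of @cite_0 for matching two weighted undirected graphs of the same size (sign and ordering subtleties of the original method are glossed over; this is an illustration, not the cited algorithm verbatim):

import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Approximate vertex correspondence between two weighted undirected graphs.

    A, B -- symmetric adjacency (weight) matrices of the same size.
    Returns an array perm such that vertex i of the graph of B is matched to
    vertex perm[i] of the graph of A.
    """
    _, Ua = np.linalg.eigh(A)                  # orthonormal eigenvectors, eigenvalues ascending
    _, Ub = np.linalg.eigh(B)
    score = np.abs(Ub) @ np.abs(Ua).T          # similarity of the vertices' spectral "profiles"
    _, col = linear_sum_assignment(-score)     # assignment maximising total similarity
    return col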
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_24", "@cite_0", "@cite_27", "@cite_20" ], "mid": [ "1967974926", "1594073672", "2161444532", "", "2006836707", "2035143052", "2108182844", "2107792892", "2013563330" ], "abstract": [ "A generalization of subgraph isomorphism for the fault-tolerant interpretation of disturbed line images has been achieved. Object recognition is effected by optimal matching of a reference graph to the graph of a distorted image. This optimization is based on the solution of linear and quadratic assignment problems. The efficiency of the procedures developed for this objective has been proved in practical applications. NP-complete problems such as subgraph recognition need exhaustive computation if exact (branch-and-bound) algorithms are used. In contrast to this, heuristics are very fast and sufficiently reliable for less complex relational structures of the kind investigated in the first part of this paper. Constrained continuous optimization techniques, such as relaxation labeling and neural network strategies, solve recognition problems within a reasonable time, even in rather complex relational structures where heuristics can fail. They are also well suited to parallelism. The second part of this paper is devoted exclusively to them.", "Genetic algorithms (GA) can be exploited for optimal graph matching. Graphs represent powerful method of a pattern formal description. Globally optimal graph matching is a NP-complete problem. Pattern distortions and noise increase an optimal search difficulty which could be tackled using GA. This paper describes results of simple GA applied on a graph matching problem. As a conclusion, the suitable GA for an optimal graph \"isomorphism\" and \"monomorphism\" is proposed. Used coding resembles the travelling salesman problem (TSP). Consequently, performance of ordering operators has been tested. In contrast to the TSP, the fitness function depends on chromosome value positioning not ordering. It results in differences between optimal GA configuration for graph matching and for TSP. >", "A linear programming (LP) approach is proposed for the weighted graph matching problem. A linear program is obtained by formulating the graph matching problem in L sub 1 norm and then transforming the resulting quadratic optimization problem to a linear one. The linear program is solved using a simplex-based algorithm. Then, approximate 0-1 integer solutions are obtained by applying the Hungarian method on the real solutions of the linear program. The complexity of the proposed algorithm is polynomial time, and it is O(n sup 6 L) for matching graphs of size n. The developed algorithm is compared to two other algorithms. One is based on an eigendecomposition approach and the other on a symmetric polynomial transform. Experimental results showed that the LP approach is superior in matching graphs than both other methods. >", "", "A Lagrangian relaxation network for graph matching is presented. The problem is formulated as follows: given graphs G and g, find a permutation matrix M that brings the two sets of vertices into correspondence. Permutation matrix constraints are formulated in the framework of deterministic annealing. Our approach is in the same spirit as a Lagrangian decomposition approach in that the row and column constraints are satisfied separately with a Lagrange multiplier used to equate the two \"solutions\". 
Due to the unavoidable symmetries in graph isomorphism (resulting in multiple global minima), we add a symmetry-breaking self-amplification term in order to obtain a permutation matrix. With the application of a fixpoint preserving algebraic transformation to both the distance measure and self-amplification terms, we obtain a Lagrangian relaxation network. The network performs minimization with respect to the Lagrange parameters and maximization with respect to the permutation matrix variables. Simulation results are shown on 100 node random graphs and for a wide range of connectivities.", "In this paper we formally define the structural description of an object and the concepts of exact and inexact matching of two structural descriptions. We discuss the problems associated with a brute-force backtracking tree search for inexact matching and develop several different algorithms to make the tree search more efficient. We develop the formula for the expected number of nodes in the tree for backtracking alone and with a forward checking algorithm. Finally, we present experimental results showing that forward checking is the most efficient of the algorithms tested.", "An approximate solution to the weighted-graph-matching problem is discussed for both undirected and directed graphs. The weighted-graph-matching problem is that of finding the optimum matching between two weighted graphs, which are graphs with weights at each arc. The proposed method uses an analytic instead of a combinatorial or iterative approach to the optimum matching problem. Using the eigendecompositions of the adjacency matrices (in the case of the undirected-graph-matching problem) or Hermitian matrices derived from the adjacency matrices (in the case of the directed-graph-matching problem), a matching close to the optimum can be found efficiently when the graphs are sufficiently close to each other. Simulation results are given to evaluate the performance of the proposed method. >", "A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. This paper develops a theory to characterize the goal of relaxation labeling. The theory is founded on a definition of con-sistency in labelings, extending the notion of constraint satisfaction. In certain restricted circumstances, an explicit functional exists that can be maximized to guide the search for consistent labelings. This functional is used to derive a new relaxation labeling operator. When the restrictions are not satisfied, the theory relies on variational cal-culus. It is shown that the problem of finding consistent labelings is equivalent to solving a variational inequality. A procedure nearly identical to the relaxation operator derived under restricted circum-stances serves in the more general setting. Further, a local convergence result is established for this operator. The standard relaxation labeling formulas are shown to approximate our new operator, which leads us to conjecture that successful applications of the standard methods are explainable by the theory developed here. 
Observations about con-vergence and generalizations to higher order compatibility relations are described.", "Attributed relational graphs (ARGs) have shown superior qualities when used for image representation and analysis in computer vision systems. A new, efficient approach for calculating a global distance measure between attributed relational graphs is proposed, and its applications in computer vision are discussed. The distance measure is calculated by a global optimization algorithm that is shown to be very efficient for this problem. The approach shows good results for practical size ARGs. The technique is also suitable for parallel processing implementation." ] }
cs0606124
1498463151
In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs.
In @cite_6 , graph matching is applied to conceptual system matching for translation. The work is very similar to ontology alignment; however, the authors formalize their problem in terms of arbitrary conceptual systems rather than restricting the work specifically to an ontological formalization of a domain. They formalize conceptual systems as graphs, and introduce algorithms for matching both unweighted and weighted versions of these graphs.
{ "cite_N": [ "@cite_6" ], "mid": [ "1979842553" ], "abstract": [ "ABSURDIST II, an extension to ABSURDIST, is an algorithm using attributed graph matching to find translations between conceptual systems. It uses information about the internal structure of systems by itself, or in combination with external information about concept similarities across systems. It supports systems with multiple types of weighted or unweighted, directed or undirected relations between concepts. The algorithm exploits graph sparsity to improve computational efficiency. We present the results of experiments with a number of conceptual systems, including artificially constructed random graphs with introduced distortions." ] }
quant-ph0605181
2119968904
A celebrated important result due to Freedman, Larsen and Wang (2002 Commun. Math. Phys. 227 605–22) states that providing additive approximations of the Jones polynomial at the kth root of unity, for constant k=5 and k≥7, is BQP-hard. Together with the algorithmic results of Aharonov, Jones and Landau (2005) and Freedman, Kitaev and Wang (2002 Commun. Math. Phys. 227 587–603), this gives perhaps the most natural BQP-complete problem known today and motivates further study of the topic. In this paper, we focus on the universality proof; we extend the result of Freedman, Larsen and Wang (2002) to ks that grow polynomially with the number of strands and crossings in the link, thus extending the BQP-hardness of Jones polynomial approximations to all values to which the AJL algorithm applies (Aharonov, Jones and Landau 2005), proving that for all those values, the problems are BQP-complete. As a side benefit, we derive a fairly elementary proof of the density result, without referring to advanced results from Lie algebra representation theory, making this important result accessible to a wider audience in the computer science research community. We make use of two general lemmas we prove, the bridge lemma and the decoupling lemma, which provide tools for establishing the density of subgroups in SU(n). Those tools seem to be of independent interest in more general contexts of proving quantum universality. Our result also implies a completely classical statement, that the multiplicative approximations of the Jones polynomial, at exactly the same values, are #P-hard, via a recent result due to Kuperberg (2009 arXiv:0908.0512). Since the first publication of those results in their preliminary form (Aharonov and Arad 2006 arXiv:quant-ph 0605181), the methods we present here have been used in several other contexts (Aharonov and Arad 2007 arXiv:quant-ph 0702008; Shor and Jordan 2008 Quantum Inf. Comput. 8 681). The present paper is an improved and extended version of the results presented by Aharonov and Arad (2006) and includes discussions of the developments since then.
Since the first publication of the results presented here (in preliminary form) @cite_26 , they have already been used in several contexts: Shor and Jordan @cite_15 built on the methods we develop here to prove universality of a variant of the Jones polynomial approximation problem in the model of quantum computation with one clean qubit. In the extension of the AJL algorithm @cite_21 to the Potts model @cite_36 , Aharonov et al. build on those methods to prove universality of approximating the Jones polynomial at many other values, even at values that correspond to non-unitary representations. We hope that the methods we present here will be useful in other future contexts as well.
{ "cite_N": [ "@cite_36", "@cite_15", "@cite_26", "@cite_21" ], "mid": [ "1632584182", "1616071251", "2137459590", "45689951" ], "abstract": [ "In the first 36 pages of this paper, we provide polynomial quantum algorithms for additive approximations of the Tutte polynomial, at any point in the Tutte plane, for any planar graph. This includes as special cases the AJL algorithm for the Jones polynomial, the partition function of the Potts model for any weighted planer graph at any temperature, and many other combinatorial graph properties. In the second part of the paper we prove the quantum universality of many of the problems for which we provide an algorithm, thus providing a large set of new quantum-complete problems. Unfortunately, we do not know that this holds for the Potts model case; this is left as an important open problem. The main progress in this work is in our ability to handle non-unitary representations of the Temperley Lieb algebra, both when applying them in the algorithm, and, more importantly, in the proof of universality, when encoding quantum circuits using non-unitary operators. To this end we develop many new tools, that allow proving density and applying the Solovay Kitaev theorem in the case of non-unitary matrices. We hope that these tools will open up new possibilities of using non-unitary reps in other quantum computation contexts.", "It is known that evaluating a certain approximation to the Jones polynomial for the plat closure of a braid is a BQP-complete problem. That is, this problem exactly captures the power of the quantum circuit model[13, 3, 1]. The one clean qubit model is a model of quantum computation in which all but one qubit starts in the maximally mixed state. One clean qubit computers are believed to be strictly weaker than standard quantum computers, but still capable of solving some classically intractable problems [21]. Here we show that evaluating a certain approximation to the Jones polynomial at a fifth root of unity for the trace closure of a braid is a complete problem for the one clean qubit complexity class. That is, a one clean qubit computer can approximate these Jones polynomials in time polynomial in both the number of strands and number of crossings, and the problem of simulating a one clean qubit computer is reducible to approximating the Jones polynomial of the trace closure of a braid.", "IN THIS PAPER I construct a state model for the (original) Jones polynomial [5]. (In [6] a state model was constructed for the Conway polynomial.) As we shall see, this model for the Jones polynomial arises as a normalization of a regular isotopy invariant of unoriented knots and links, called here the bracket polynomial, and denoted 〈K〉 for a link projectionK . The concept of regular isotopy will be explained below. The bracket polynomial has a very simple state model. In §2 (Theorem 2.10) I use the bracket polynomial to prove (via Proposition 2.9 and an observation of Kunio Murasugi) that the number of crossings in a connected, reduced alternating projection of a link L is a topological invariant of L. (A projection is reduced if it has no isthmus in the sense of Fig. 5.) In other words, any two connected, reduced alternating projections of the link L have the same number of crossings. This is a remarkable application of our technique. It solves affirmatively a conjecture going back to the knot tabulations of Tait, Kirkman and Little over a century ago (see [6], [9], [10]). 
Along with this application to alternating links, we also use the bracket polynomial to obtain a necessary condition for an alternating reduced link diagram to be ambient isotopic to its mirror image (Theorem 3.1). One consequence of this theorem is that a reduced alternating diagram with twist number greater than or equal to one-third the number of crossings is necessarily chiral. The paper is organized as follows. In §2 the bracket polynomial is developed, and its relationship with the Jones polynomial is explained. This provides a self-contained introduction to the Jones polynomial and to our techniques. The last part of §2 contains the applications to alternating knots, and to bounds on the minimal and maximal degrees of the polynomial. §3 contains the results about chirality of alternating knots. §4 discusses the structure of our state model in the case of braids. Here the states have an algebraic structure related to Jones’s representation of the braid group into a Von Neumann Algebra.", "" ] }
cs0605080
2950591296
Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications.
In the overlay-router approach, exemplified by OMNI @cite_5 and TOMA @cite_13 , reliable servers are installed across the network to act as application-level multicast routers. The content is transmitted from the source to a set of receivers over a multicast tree consisting of the overlay servers. This approach is designed to be scalable since the receivers get the content from the application-level routers, thus alleviating bandwidth demand at the source. However, it requires dedicated infrastructure deployment and costly servers.
{ "cite_N": [ "@cite_5", "@cite_13" ], "mid": [ "2098995343", "1568206987" ], "abstract": [ "This paper presents an overlay architecture where service providers deploy a set of service nodes (called MSNs) in the network to efficiently implement media-streaming applications. These MSNs are organized into an overlay and act as application-layer multicast forwarding entities for a set of clients. We present a decentralized scheme that organizes the MSNs into an appropriate overlay structure that is particularly beneficial for real-time applications. We formulate our optimization criterion as a \"degree-constrained minimum average-latency problem\" which is known to be NP-hard. A key feature of this formulation is that it gives a dynamic priority to different MSNs based on the size of its service set. Our proposed approach iteratively modifies the overlay tree using localized transformations to adapt with changing distribution of MSNs, clients, as well as network conditions. We show that a centralized greedy approach to this problem does not perform quite as well, while our distributed iterative scheme efficiently converges to near-optimal solutions.", "In this paper, we propose a Two-tier Overlay Multicast Architecture (TOMA) to provide scalable and efficient multicast support for various group communication applications. In TOMA, Multicast Service Overlay Network (MSON) is advocated as the backbone service domain, while end users in the access domains form a number of small clusters, in which an application-layer multicast protocol is used for the communication between the clustered end users. TOMA is able to provide efficient resource utilization with less control overhead, especially for large-scale applications. It also alleviates the state scalability problem and simplifies multicast tree construction and maintenance when there are large numbers of groups in the networks. Simulation studies are conducted and the results demonstrate the promising performance of TOMA." ] }
cs0605080
2950591296
Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications.
The P2P approach requires no extra resources. Several proposals have been designed to handle small groups. Narada @cite_0 , MeshTree @cite_7 , and Hostcast @cite_10 are examples of distributed "mesh-first" algorithms in which nodes arrange themselves into a well-connected mesh on top of which a routing protocol is run to derive a delivery tree. These protocols rely on incremental improvements over time, adding and removing mesh links based on a utility function. Although they offer robustness properties (thanks to the mesh structure), they do not scale to large populations, due to the excessive overhead of the improvement process. The objective of LCC is to locate the newcomer before it joins the overlay and hence to perform only a small number of refinements during the multicast session.
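To illustrate the kind of utility-driven refinement these mesh-first protocols perform (a rough Python sketch of the idea only; the exact utility functions and thresholds differ per protocol and are not reproduced here):

def utility_gain(current_latency, latency_if_link_added):
    """Relative latency improvement, summed over the other group members.

    Both arguments are dicts mapping a member to the estimated overlay latency
    (in seconds) to that member, without and with the candidate mesh link.
    """
    gain = 0.0
    for member, now in current_latency.items():
        new = latency_if_link_added.get(member, now)
        if 0 < new < now:
            gain += (now - new) / now
    return gain

ADD_THRESHOLD = 0.5   # illustrative value; real systems tune this per deployment

def should_add_link(current_latency, latency_if_link_added) -> bool:
    return utility_gain(current_latency, latency_if_link_added) > ADD_THRESHOLD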
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_7" ], "mid": [ "2123732385", "", "1926875216" ], "abstract": [ "The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from end systems to routers significantly outweigh the performance penalty incurred.", "", "We study decentralised low delay degree-constrained overlay multicast tree construction for single source real-time applications. This optimisation problem is NP-hard even if computed centrally. We identify two problems in traditional distributed solutions, namely the greedy problem and delay-cost trade-off. By offering solutions to these problems, we propose a new self-organising distributed tree building protocol called MeshTree. The main idea is to embed the delivery tree in a degree-bounded mesh containing many low cost links. Our simulation results show that MeshTree is comparable to the centralised Compact Tree algorithm, and always outperforms existing distributed solutions in delay optimisation. In addition, it always yields trees with lower cost and traffic redundancy." ] }
cs0605080
2950591296
Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient, because a long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large-scale multicast applications.
Other ``tree-first'' protocols such as ZigZag @cite_17 and NICE @cite_8 are topology-aware, clustering-based protocols designed to support wide-area multicast for low-bandwidth applications. However, they do not consider the fan-out capability of individual nodes. Rather, they bound the overlay fan-out using a (global) cluster-size parameter. In particular, since both protocols consider only latency for cluster leader selection, they may experience problems if the cluster leader has insufficient fan-out (an illustrative sketch of this latency-only clustering is given below). Other proposals exploit the AS-level @cite_1 or the router-level @cite_2 underlying network topology information to build efficient overlay networks. However, these approaches assume some assistance from the IP layer (routers sending ICMP messages, or access to BGP information), which may be problematic. LCC does not require any extra assistance from entities that do not belong to the overlay.
{ "cite_N": [ "@cite_2", "@cite_8", "@cite_1", "@cite_17" ], "mid": [ "2118116239", "2129807746", "2106432824", "" ], "abstract": [ "We propose an application level multicast approach, Topology Aware Grouping (TAG), which exploits underlying network topology information to build efficient overlay networks among multicast group members. TAG uses information about path overlap among members to construct a tree that reduces the overlay relative delay penalty, and reduces the number of duplicate copies of a packet on the same link. We study the properties of TAG, and model and experiment with its economies of scale factor to quantify its benefits compared to unicast and IP multicast. We also compare the TAG approach with the ESM approach in a variety of simulation configurations including a number of real Internet topologies and generated topologies. Our results indicate the effectiveness of the algorithm in reducing delays and duplicate packets, with reasonable algorithm time and space complexities.", "We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties.We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25 ), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic.Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1 as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100.", "With the rise of peer-to-peer networks, two problems have become prominent: (1) significant network traffic among peers, to probe the latency among those peers to improve lookup performance; (2) the need for increased information flow across protocol layer boundaries, to allow for cross-layer adaptations. The particular work here focuses on improvements in structured peer-to-peer networks based on autonomous system information. We find that by using autonomous system information effectively we can achieve lookup performance approaching that based on proximity neighbor selection, but with much less network traffic. We also demonstrate improvements in replication in structured peer-to-peer networks using AS topology and scoping information. Finally, we review this approach in the context of network architecture", "" ] }
cs0605080
2950591296
Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient, because a long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large-scale multicast applications.
Landmark clustering is a general concept for constructing topology-aware overlays. The authors of @cite_9 use such an approach to build a topology-aware multicast CAN overlay network. Prior to joining the overlay network, a newcomer has to measure its distance to each landmark. The node then orders the landmarks according to its distance measurements. The main intuition is that nodes with the same landmark ordering are also quite likely to be close to each other topologically (a small sketch of this binning idea is given below). An immediate issue with such a landmark-based approach is that it can be rather coarse-grained, depending on the number of landmarks used and their distribution. Furthermore, requiring a fixed set of landmarks known by all participating nodes renders this approach unsuitable for dynamic networks.
{ "cite_N": [ "@cite_9" ], "mid": [ "2148647281" ], "abstract": [ "A number of large-scale distributed Internet applications could potentially benefit from some level of knowledge about the relative proximity between its participating host nodes. For example, the performance of large overlay networks could be improved if the application-level connectivity between the nodes in these networks is congruent with the underlying IP-level topology. Similarly, in the case of replicated Web content, client nodes could use topological information in selecting one of multiple available servers. For such applications, one need not find the optimal solution in order to achieve significant practical benefits. Thus, these applications, and presumably others like them, do not require exact topological information and can instead use sufficiently informative hints about the relative positions of Internet hosts. In this paper, we present a binning scheme whereby nodes partition themselves into bins such that nodes that fall within a given bin are relatively close to one another in terms of network latency. Our binning strategy is simple (requiring minimal support from any measurement infrastructure), scalable (requiring no form of global knowledge, each node only needs knowledge of a small number of well-known landmark nodes) and completely distributed (requiring no communication or cooperation between the nodes being binned). We apply this binning strategy to the two applications mentioned above: overlay network construction and server selection. We test our binning strategy and its application using simulation and Internet measurement traces. Our results indicate that the performance of these applications can be significantly improved by even the rather coarse-grained knowledge of topology offered by our binning scheme." ] }
cs0605097
1681051970
We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria.
The first formalisms designed for reasoning about cryptographic protocols were belief logics such as BAN logic @cite_27 , used by the Convince tool @cite_14 with the HOL theorem prover @cite_40 , and its generalizations (GNY @cite_32 , AT @cite_8 , and SVO logic @cite_37 , which the C3PO tool @cite_7 employs with the Isabelle theorem prover @cite_5 ). Belief logics are difficult to use since the logical form of a protocol does not correspond to the protocol itself in an obvious way: almost indistinguishable formulations of the same problem lead to different results. It is also hard to know whether a formulation is overconstrained or whether any important assumptions are missing. BAN logic and its derivatives cannot deal with security flaws resulting from interleaving of protocol steps @cite_31 and cannot express any properties of protocols other than authentication @cite_23 . To overcome these limitations, the knowledge flow formalism has, like other approaches @cite_18 @cite_28 @cite_21 @cite_11 @cite_1 , a concrete operational model of protocol execution. Our model also includes a description of how the honest participants in the protocol behave and a description of how an adversary can interfere with the execution of the protocol.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_18", "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_1", "@cite_32", "@cite_40", "@cite_27", "@cite_23", "@cite_5", "@cite_31", "@cite_11" ], "mid": [ "2169216703", "113821320", "2170718011", "2097842452", "1533679417", "2150682469", "2032577199", "2003915781", "2102922499", "1521083034", "2010939995", "2106857693", "", "2404159111", "1569881051" ], "abstract": [ "We present a logic for analyzing cryptographic protocols. This logic encompasses a unification of four of its predecessors in the BAN family of logics, namely those given by Li (1990); M. Abadi, M. Tuttle (1991); P.C. van Oorschot (1993); and BAN itself (M. , 1989). We also present a model-theoretic semantics with respect to which the logic is sound. The logic presented captures all of the desirable features of its predecessors and more; nonetheless, it accomplishes this with no more axioms or rules than the simplest of its predecessors. >", "This paper describes the Convince toolset for detecting common errors in cryptographic protocols, protocols of the sort used in electronic commerce. We describe using Convince to analyze confidentiality, authentication, and key distribution in a recently developed protocol proposed for incorporation into a network bill-payment system, a public-key version of the Kerberos authentication protocol. Convince incorporates a \"belief logic\" formalism into a theorem-proving environment that automatically proves whether a protocol can meet its goals. Convince allows an analyst to model a protocol using a tool originally designed for Computer-Aided Software Engineering (CASE).", "In recent years, a method for analyzing security protocols using the process algebra CSP (C.A.R. Hoare, 1985) and its model checker FDR (A.W Roscoe, 1994) has been developed. This technique has proved successful, and has been used to discover a number of attacks upon protocols. However the technique has required producing a CSP description of the protocol by hand; this has proved tedious and error prone. We describe Casper, a program that automatically produces the CSP description from a more abstract description, thus greatly simplifying the modelling and analysis process.", "We present an improved logic for analysing authentication properties of cryptographic protocols, based on the SVO logic of Syverson and van Oorschot (1994). Such logics are useful in electronic commerce, among other areas. We have constructed this logic in order to simplify automation, and we describe an implementation using the Isabelle theorem-proving system, and a GUI tool based on this implementation. The tool is typically operated by opening a list of propositions intended to be true, and clicking one button. Since the rules form a clean framework, the logic is easily extensible. We also present in detail a proof of soundness, using Kripke possible-worlds semantics.", "", "A methodology is presented for using a general-purpose state enumeration tool, Mur spl phi , to analyze cryptographic and security-related protocols. We illustrate the feasibility of the approach by analyzing the Needham-Schroeder (1978) protocol, finding a known bug in a few seconds of computation time, and analyzing variants of Kerberos and the faulty TMN protocol used in another comparative study. 
The efficiency of Mur spl phi also allows us to examine multiple terms of relatively short protocols, giving us the ability to detect replay attacks, or errors resulting from confusion between independent execution of a protocol by independent parties.", "Due to the rapid growth of the “Internet” and the “World Wide Web” security has become a very important concern in the design and implementation of software systems. Since security has become an important issue, the number of protocols in this domain has become very large. These protocols are very diverse in nature. If a software architect wants to deploy some of these protocols in a system, they have to be sure that the protocol has the right properties as dictated by the requirements of the system. In this article we present BRUTUS, a tool for verifying properties of security protocols. This tool can be viewed as a special-purpose model checker for security protocols. We also present reduction techniques that make the tool efficient. Experimental results are provided to demonstrate the efficiency of BRUTUS.", "The NRL Protocol Analyzer is a prototype special-purpose verification tool, written in Prolog, that has been developed for the analysis of cryptographic protocols that are used to authenticate principals and services and distribute keys in a network. In this paper we give an overview of how the Analyzer works and describe its achievements so far. We also show how our use of the Prolog language benefited us in the design and implementation of the Analyzer.", "A mechanism is presented for reasoning about belief as a systematic way to understand the working of cryptographic protocols. The mechanism captures more features of such protocols than that given by M. (1989) to which the proposals are a substantial extension. The notion of possession incorporated in the approach assumes that principles can include in messages data they do not believe in, but merely possess. This also enables conclusions such as 'Q possesses the shared key', as in an example to be derived. The approach places a strong emphasis on the separation between the content and the meaning of messages. This can increase consistency in the analysis and, more importantly, introduce the ability to reason at more than one level. The final position in a given run will depend on the level of mutual trust of the specified principles participating in that run. >", "Part I. Tutorial: 1. Introduction to ML 2. The HOL logic 3. Introduction to proof with HOL 4. Goal-oriented proof: tactics and tacticals 5. Example: a simple parity checker 6. How to program a proof tool 7. Example: the binomial theorem Part II. The Meta-Language ML: 8. The history of ML 9. Introduction and examples 10. Syntax of ML 11. Semantics of ML 12. ML types 13. Primitive ML identifier bindings 14. General purpose and list processing functions 15. ML system functions Part III. The Hol Logic: 16. Syntax and semantics 17. Theories Part IV. The Hol System: 18. The HOL logic in ML Part V. Theorem-Proving With HOL: 19. Derived inference rules 20. Conversions 21. Goal-directed proof: tactics and tacticals Appendices.", "Authentication protocols are the basis of security in many distributed systems, and it is therefore essential to ensure that these protocols function correctly. Unfortunately, their design has been extremely error prone. Most of the protocols found in the literature contain redundancies or security flaws. 
A simple logic has allowed us to describe the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication. We have been able to explain a variety of authentication protocols formally, to discover subtleties and errors in them, and to suggest improvements. In this paper we present the logic and then give the results of our analysis of four published protocols, chosen either because of their practical importance or because they serve to illustrate our method.", "The pioneering and well-known work of M. Burrows, M. Abadi and R. Needham (1989), (the BAN logic) which dominates the area of security protocol analysis is shown to take an approach which is not fully formal and which consequently permits approval of dangerous protocols. Measures to make the BAN logic formal are then proposed. The formalisation is found to be desirable not only for its potential in providing rigorous analysis of security protocols, but also for its readiness for supporting a computer-aided fashion of analysis. >", "", "In the past few years a lot of attention has been paid to the use of special logics to analyse cryptographic protocols, foremost among these being the logic of Burrows, Abadi and Needham (the BAN logic). These logics have been successful in finding weaknesses in various examples. In this paper a limitation of the BAN logic is illustrated with two examples. These show that it is easy for the BAN logic to approve protocols that are in practice unsound.", "We propose a new efficient automatic verification technique, Athena, for security protocol analysis. It uses a new efficient representation - our extension to the Strand Space Model, and utilizes techniques from both model checking and theorem proving approaches. Athena is fully automatic and is able to prove the correctness of many security protocols with arbitrary number of concurrent runs. The run time for a typical protocol from the literature, like the Needham-Schroeder protocol, is often a fraction of a second. Athena exploits several different techniques that enable it to analyze infinite sets of protocol runs and achieve such efficiency. Our extended Strand Space Model is a natural and efficient representation for the problem domain. The security properties are specified in a simple logic which permits both efficient proof search algorithms and has enough expressive power to specify interesting properties. The automatic proof search procedure borrows some efficient techniques from both model checking and theorem proving. We believe that it is the right combination of the new compact representation and all the techniques that actually makes Athena successful in fast and automatic verification of security protocols." ] }
cs0605097
1681051970
We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria.
Specialized model checkers such as Casper @cite_18 , Mur @math @cite_28 , Brutus @cite_21 , TAPS @cite_24 , and ProVerif @cite_15 have been successfully used to analyze security protocols. These tools are based on state-space exploration, which leads to exponential complexity. Athena @cite_11 is based on a modification of the strand space model @cite_16 ; even though it reduces the state-space explosion problem, it remains exponential. Multiset rewriting @cite_19 in combination with tree automata is used in Timbuk @cite_12 . The relation between multiset rewriting and strand spaces is analyzed in @cite_41 . The relation between multiset rewriting and process algebras @cite_36 @cite_0 is analyzed in @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_41", "@cite_36", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_0", "@cite_15", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2170718011", "2150682469", "2118034427", "1603799276", "2032577199", "1926128235", "2119617924", "2114497629", "1564764333", "2073385346", "2150426251", "", "1569881051" ], "abstract": [ "In recent years, a method for analyzing security protocols using the process algebra CSP (C.A.R. Hoare, 1985) and its model checker FDR (A.W Roscoe, 1994) has been developed. This technique has proved successful, and has been used to discover a number of attacks upon protocols. However the technique has required producing a CSP description of the protocol by hand; this has proved tedious and error prone. We describe Casper, a program that automatically produces the CSP description from a more abstract description, thus greatly simplifying the modelling and analysis process.", "A methodology is presented for using a general-purpose state enumeration tool, Mur spl phi , to analyze cryptographic and security-related protocols. We illustrate the feasibility of the approach by analyzing the Needham-Schroeder (1978) protocol, finding a known bug in a few seconds of computation time, and analyzing variants of Kerberos and the faulty TMN protocol used in another comparative study. The efficiency of Mur spl phi also allows us to examine multiple terms of relatively short protocols, giving us the ability to detect replay attacks, or errors resulting from confusion between independent execution of a protocol by independent parties.", "Formal analysis of security protocols is largely based on a set of assumptions commonly referred to as the Dolev-Yao model. Two formalisms that state the basic assumptions of this model are related here: strand spaces and multiset rewriting with existential quantification. Strand spaces provide a simple and economical approach to analysis of completed protocol runs by emphasizing causal interactions among protocol participants. The multiset rewriting formalism provides a very precise way of specifying finite-length protocols with unboundedly many instances of each protocol role, such as client, server, initiator, or responder. A number of modifications to each system are required to produce a meaningful comparison. In particular, we extend the strand formalism with a way of incrementally growing bundles in order to emulate an execution of a protocol with parametric strands. The correspondence between the modified formalisms directly relates the intruder theory from the multiset rewriting formalism to the penetrator strands. The relationship we illustrate here between multiset rewriting specifications and strand spaces thus suggests refinements to both frameworks, and deepens our understanding of the Dolev-Yao model.", "Glossary Part I. Communicating Systems: 1. Introduction 2. Behaviour of automata 3. Sequential processes and bisimulation 4. Concurrent processes and reaction 5. Transitions and strong equivalence 6. Observation equivalence: theory 7. Observation equivalence: examples Part II. The pi-Calculus: 8. What is mobility? 9. The pi-calculus and reaction 10. Applications of the pi-calculus 11. Sorts, objects and functions 12. Commitments and strong bisimulation 13. Observation equivalence and examples 14. 
Discussion and related work Bibliography Index.", "Due to the rapid growth of the “Internet” and the “World Wide Web” security has become a very important concern in the design and implementation of software systems. Since security has become an important issue, the number of protocols in this domain has become very large. These protocols are very diverse in nature. If a software architect wants to deploy some of these protocols in a system, they have to be sure that the protocol has the right properties as dictated by the requirements of the system. In this article we present BRUTUS, a tool for verifying properties of security protocols. This tool can be viewed as a special-purpose model checker for security protocols. We also present reduction techniques that make the tool efficient. Experimental results are provided to demonstrate the efficiency of BRUTUS.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication.", "We describe a proof method for cryptographic protocols, based on a strong secrecy invariant that catalogues conditions under which messages can be published. For typical protocols, a suitable first-order invariant can be generated automatically from the program text, independent of the properties being verified, allowing safety properties to be proved by ordinary first-order reasoning. We have implemented the method in an automatic verifier, TAPS, that proves safety properties roughly equivalent to those in published Isabelle verifications, but does so much faster (usually within a few seconds) and with little or no guidance from the user. We have used TAPS to analyze about 60 protocols, including all but three protocols from the Clark and Jacob survey; on average, these verifications each require less than 4 seconds of CPU time and less than 4 bytes of hints from the user.", "We formalize the Dolev-Yao model of security protocols, using a notation based on multiset rewriting with existentials. The goals are to provide a simple formal notation for describing security protocols, to formalize the assumptions of the Dolev-Yao model using this notation, and to analyze the complexity of the secrecy problem under various restrictions. We prove that, even for the case where we restrict the size of messages and the depth of message encryption, the secrecy problem is undecidable for the case of an unrestricted number of protocol roles and an unbounded number of new nonces. 
We also identify several decidable classes, including a DEXP-complete class when the number of nonces is restricted, and an NP-complete class when both the number of nonces and the number of roles is restricted. We point out a remaining open complexity problem, and discuss the implications these results have on the general topic of protocol analysis.", "The spi calculus is an extension of the pi calculus with constructs for encryption and decryption. This paper develops the theory of the spi calculus, focusing on techniques for establishing testing equivalence, and applying these techniques to the proof of authenticity and secrecy properties of cryptographic protocols.", "We study and further develop two language-based techniques for analyzing security protocols. One is based on a typed process calculus; the other, on untyped logic programs. Both focus on secrecy properties. We contribute to these two techniques, in particular by extending the former with a flexible, generic treatment of many cryptographic operations. We also establish an equivalence between the two techniques.", "A strand is a sequence of events; it represents either the execution of an action by a legitimate party in a security protocol or else a sequence of actions by a penetrator. A strand space is a collection of strands, equipped with a graph structure generated by causal interaction. In this framework, protocol correctness claims may be expressed in terms of the connections between strands of different kinds. In this paper, we develop the notion of a strand space. We then prove a generally useful lemma, as a sample result giving a general bound on the abilities of the penetrator in any protocol. We apply the strand space formalism to prove the correctness of the Needham-Schroeder-Lowe protocol (G. Lowe, 1995, 1996). Our approach gives a detailed view of the conditions under which the protocol achieves authentication and protects the secrecy of the values exchanged. We also use our proof methods to explain why the original Needham-Schroeder (1978) protocol fails. We believe that our approach is distinguished from other work on protocol verification by the simplicity of the model and the ease of producing intelligible and reliable proofs of protocol correctness even without automated support.", "", "We propose a new efficient automatic verification technique, Athena, for security protocol analysis. It uses a new efficient representation - our extension to the Strand Space Model, and utilizes techniques from both model checking and theorem proving approaches. Athena is fully automatic and is able to prove the correctness of many security protocols with arbitrary number of concurrent runs. The run time for a typical protocol from the literature, like the Needham-Schroeder protocol, is often a fraction of a second. Athena exploits several different techniques that enable it to analyze infinite sets of protocol runs and achieve such efficiency. Our extended Strand Space Model is a natural and efficient representation for the problem domain. The security properties are specified in a simple logic which permits both efficient proof search algorithms and has enough expressive power to specify interesting properties. The automatic proof search procedure borrows some efficient techniques from both model checking and theorem proving. We believe that it is the right combination of the new compact representation and all the techniques that actually makes Athena successful in fast and automatic verification of security protocols." 
] }
cs0605097
1681051970
We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria.
Proof-building tools such as the NRL Protocol Analyzer, based on Prolog @cite_1 , have also been helpful for analyzing security protocols. However, they are not fully automatic and often require extensive user intervention. Model checkers, in contrast, lead to completely automated tools that generate counterexamples if a protocol is flawed; for theorem-proving-based approaches, counterexamples are hard to produce.
{ "cite_N": [ "@cite_1" ], "mid": [ "2003915781" ], "abstract": [ "The NRL Protocol Analyzer is a prototype special-purpose verification tool, written in Prolog, that has been developed for the analysis of cryptographic protocols that are used to authenticate principals and services and distribute keys in a network. In this paper we give an overview of how the Analyzer works and describe its achievements so far. We also show how our use of the Prolog language benefited us in the design and implementation of the Analyzer." ] }
cs0605103
2951242165
Time series are difficult to monitor, summarize and predict. Segmentation organizes time series into a few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval varies (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.
While we focus on segmentation, there are many methods available for fitting models to continuous variables, such as regression, regression decision trees, Neural Networks @cite_23 , Wavelets @cite_39 , Adaptive Multivariate Splines @cite_46 , Free-Knot Splines @cite_26 , Hybrid Adaptive Splines @cite_19 , etc.
{ "cite_N": [ "@cite_26", "@cite_39", "@cite_19", "@cite_23", "@cite_46" ], "mid": [ "1986096508", "2158940042", "2069371995", "1554944419", "2102201073" ], "abstract": [ "Abstract Polynomial splines are often used in statistical regression models for smooth response functions. When the number and location of the knots are optimized, the approximating power of the spline is improved and the model is nonparametric with locally determined smoothness. However, finding the optimal knot locations is an historically difficult problem. We present a new estimation approach that improves computational properties by penalizing coalescing knots. The resulting estimator is easier to compute than the unpenalized estimates of knot positions, eliminates unnecessary “corners” in the fitted curve, and in simulation studies, shows no increase in the loss. A number of GCV and AIC type criteria for choosing the number of knots are evaluated via simulation.", "SUMMARY With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially-adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory which we call the oracle inequality shows that attained performance differs from ideal performance by at most a factor of approximately 2 log n, where n is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log 2 n of the performance of piecewise polynomial and variableknot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.", "Abstract An adaptive spline method for smoothing is proposed that combines features from both regression spline and smoothing spline approaches. One of its advantages is the ability to vary the amount of smoothing in response to the inhomogeneous “curvature” of true functions at different locations. This method can be applied to many multivariate function estimation problems, which is illustrated by an application to smoothing temperature data on the globe. The method's performance in a simulation study is found to be comparable to the wavelet shrinkage methods proposed by Donoho and Johnstone. The problem of how to count the degrees of freedom for an adaptively chosen set of basis functions is addressed. 
This issue arises also in the MARS procedure proposed by Friedman and other adaptive regression spline procedures.", "During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book. @PARASPLIT This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for wide'' data (p bigger than n), including multiple testing and false discovery rates. @PARASPLIT Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.", "" ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs, Uthaisombut, and Woeginger to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
El @cite_2 consider the wireless transmission problem when the packets have different power functions, giving an iterative algorithm that converges to an optimal solution. They also show how to extend their algorithm to handle the case when the buffer used to store active packets has bounded size and the case when packets have individual deadlines. Their algorithm can also be extended to schedule multiple transmitters, but this does not correspond to a processor scheduling problem.
{ "cite_N": [ "@cite_2" ], "mid": [ "2107314057" ], "abstract": [ "The paper develops algorithms for minimizing the energy required to transmit packets in a wireless environment. It is motivated by the following observation: In many channel coding schemes it is possible to significantly lower the transmission energy by transmitting packets over a long period of time. Based on this observation, we show that for a variety of scenarios the offline energy-efficient transmission scheduling problem reduces to a convex optimization problem. Unlike for the special case of a single transmitter-receiver pair studied by (see Prabhakar, Uysal-Biyikoglu and El Gamal. Proc. IEEE Infocom 2001), the problem does not, in general, admit a closed-form solution when there are multiple users. By exploiting the special structure of the problem, however, we are able to devise energy-efficient transmission schedules. For the downlink channel, with a single transmitter and multiple receivers, we devise an iterative algorithm, called MoveRight, that yields the optimal offline schedule. The MoveRight algorithm also optimally solves the downlink problem with additional constraints imposed by packet deadlines and finite transmit buffers. For the uplink (or multiaccess) problem MoveRight optimally determines the offline time-sharing schedule. A very efficient online algorithm, called MoveRightExpress, that uses a surprisingly small look-ahead buffer is proposed and is shown to perform competitively with the optimal offline schedule in terms of energy efficiency and delay." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs, Uthaisombut, and Woeginger to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
Pruhs, van Stee, and Uthaisombut @cite_10 consider the laptop problem version of minimizing makespan for jobs having precedence constraints where all jobs are released immediately and @math . Their main observation, which they call the power equality, is that the sum of the powers of the machines is constant over time in the optimal schedule. They use binary search to determine this value and then reduce the problem to scheduling on related fixed-speed machines (an illustrative sketch of this reduction is given below). Previously known @cite_8 @cite_1 approximations for the related fixed-speed machine problem then give an @math -approximation for power-aware makespan. This technique cannot be applied in our setting because the power equality does not hold for jobs with release dates.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_8" ], "mid": [ "2012989825", "2097541335", "2607875432" ], "abstract": [ "We give a new and efficient approximation algorithm for scheduling precedence-constrained jobs on machines with different speeds. The problem is as follows. We are given n jobs to be scheduled on a set of m machines. Jobs have processing times and machines have speeds. It takes pj si units of time for machine i with speed si to process job j with processing requirement pj. Precedence constraints between jobs are given in the form of a partial order. If j?k, processing of job k cannot start until job j's execution is completed. The objective is to find a non-preemptive schedule to minimize the makespan of the schedule. Chudak and Shmoys (1997, “Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA),” pp. 581?590) gave an algorithm with an approximation ratio of O(log m), significantly improving the earlier ratio of O(m) due to Jaffe (1980, Theoret. Comput. Sci.26, 1?17). Their algorithm is based on solving a linear programming relaxation. Building on some of their ideas, we present a combinatorial algorithm that achieves a similar approximation ratio but runs in O(n3) time. Our algorithm is based on a new and simple lower bound which we believe is of independent interest.", "We consider the problem of speed scaling to conserve energy in a multiprocessor setting where there are precedence constraints between tasks, and where the performance measure is the makespan. That is, we consider an energy bounded version of the classic problem Pm | prec | Cmax. We show that, without loss of generality, one need only consider constant power schedules. We then show how to reduce this problem to the problem Qm | prec | Cmax to obtain a poly-log(m)-approximation algorithm.", "We present new approximation algorithms for the problem of scheduling precedence-constrained jobs on parallel machines that are uniformly related. That is, there arenjobs andmmachines; each jobjrequirespjunits of processing, and is to be processed on one machine without interruption; if it is assigned to machinei, which runs at a given speedsi, it takespj sitime units. There also is a partial order ? on the jobs, wherej?kimplies that jobkmay not start processing until jobjhas been completed. We consider two objective functions:Cmax=maxjCj, whereCjdenotes the completion time of jobj, and ?jwjCj, wherewjis a weight that is given for each jobj. For the first objective, the best previously known result is anOm-approximation algorithm, which was shown by Jaffe more than 15 years ago. We give anO(logm)-approximation algorithm. We also show how to extend this result to obtain anO(logm)-approximation algorithm for the second objective, albeit with a somewhat larger constant. These results also extend to settings in which each jobjhas a release daterjbefore which the job may not begin processing. In addition, we obtain stronger performance guarantees if there are a limited number of distinct speeds. Our results are based on a new linear programming-based technique for estimating the speed at which each job should be run, and a variant of the list scheduling algorithm of Graham that can exploit this additional information." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs, Uthaisombut, and Woeginger to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
Minimizing the makespan of tasks with precedence constraints has also been studied in the context of project management. Speed scaling is possible when additional resources can be used to shorten some of the tasks. Pinedo @cite_0 gives heuristics for some variations of this problem.
{ "cite_N": [ "@cite_0" ], "mid": [ "1570584007" ], "abstract": [ "Introduction.- Manufacturing Models.- Service Models.- Project Planning and Scheduling.- Machine Scheduling and Job Shop Scheduling.- Scheduling of Flexible Assembly Systems.- Economic Lot Scheduling.- Planning and Scheduling in Supply Chains.- Interval Scheduling, Reservations, and Timetabling.- Planning and Scheduling in Sports and Entertainment.- Planning, Scheduling, and Timetabling in Transportation.- Workforce Scheduling.- Systems Design and Implementation.- Advanced Concepts in Systems Design.- What Lies Ahead?- Mathematical Programming Formulations.- Exact Optimization Methods.- Heuristic Methods.- Constraint Programing Methods.- Selected Scheduuling Sytems.- The LEKIN Systems User's Guide.- Notation.- References.- Index." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs, Uthaisombut, and Woeginger to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
The only previous power-aware algorithm to minimize total flow is by Pruhs, Uthaisombut, and Woeginger @cite_20 , who consider scheduling equal-work jobs on a uniprocessor. In this setting, they observe that jobs can be run in order of release time, and they prove structural relationships between the speeds of consecutive jobs in the optimal solution, which their algorithm then exploits.
{ "cite_N": [ "@cite_20" ], "mid": [ "1532943894" ], "abstract": [ "We consider the bi-criteria problem of minimizing the average flow time (average response time) of a collection of dynamically released equi-work processes subject to the constraint that a fixed amount of energy is available. We assume that the processor has the ability to dynamically scale the speed at which it runs, as do current microprocessors from AMD, Intel, and Transmeta. We first reveal the combinatorial structure of the optimal schedule. We then use these insights to devise a relatively simple polynomial time algorithm to simultaneously compute, for each possible energy, the schedule with optimal average flow time subject to this energy constraint." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs, Uthaisombut, and Woeginger to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
The idea of power-aware scheduling was proposed by @cite_17 , who use trace-based simulations to estimate how much energy could be saved by slowing the processor to remove idle time. @cite_5 formalize this problem by assuming each job has a deadline and seeking the minimum-energy schedule that satisfies all deadlines. They give an optimal offline algorithm (a sketch is given below) and propose two online algorithms. They show one is @math -competitive, i.e., it uses at most @math times the optimal energy. @cite_4 analyze the other, showing it is @math -competitive. @cite_4 also give another algorithm that is @math -competitive.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_17" ], "mid": [ "2099961254", "2131218499", "" ], "abstract": [ "The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function, of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s)=s sup p where p spl ges 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.", "We first consider online speed scaling algorithms to minimize the energy used subject to the constraint that every job finishes by its deadline. We assume that the power required to run at speed s is P(s) = s sup spl alpha . We provide a tight spl alpha sup spl alpha bound on the competitive ratio of the previously proposed optimal available algorithm. This improves the best known competitive ratio by a factor of 2 sup spl alpha . We then introduce an online algorithm, and show that this algorithm's competitive ratio is at most 2( spl alpha ( spl alpha - 1)) sup spl alpha e sup spl alpha . This competitive ratio is significantly better and is approximately 2e sup spl alpha +1 for large spl alpha . Our result is essentially tight for large spl alpha . In particular, as spl alpha approaches infinity, we show that any algorithm must have competitive ratio e sup spl alpha (up to lower order terms). We then turn to the problem of dynamic speed scaling to minimize the maximum temperature that the device ever reaches, again subject to the constraint that all jobs finish by their deadlines. We assume that the device cools according to Fourier's law. We show how to solve this problem in polynomial time, within any error bound, using the ellipsoid algorithm.", "" ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.
Power-aware scheduling of jobs with deadlines has also been considered with the goal of minimizing the CPU's maximum temperature. @cite_4 propose this problem and give an offline solution based on convex programming. Bansal and Pruhs @cite_3 analyze the online algorithms discussed above in the context of minimizing maximum temperature.
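For concreteness, the temperature model assumed in this line of work is Fourier's law of cooling; the form below is the one commonly used in these papers, with a, b > 0 treated here as unspecified device parameters (an illustrative statement of the model, not a result from the cited works).

    % Fourier-law cooling: temperature rises with dissipated power and
    % decays toward the (normalized) ambient temperature of zero.
    \[
      \frac{dT}{dt} \;=\; a\,P\bigl(s(t)\bigr) \;-\; b\,T(t),
      \qquad P(s) = s^{\alpha}.
    \]
    % The peak temperature depends on how energy use is spread over time,
    % which is why schedules that are good for total energy need not be
    % good for maximum temperature.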
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2131218499", "2180738807" ], "abstract": [ "We first consider online speed scaling algorithms to minimize the energy used subject to the constraint that every job finishes by its deadline. We assume that the power required to run at speed s is P(s) = s sup spl alpha . We provide a tight spl alpha sup spl alpha bound on the competitive ratio of the previously proposed optimal available algorithm. This improves the best known competitive ratio by a factor of 2 sup spl alpha . We then introduce an online algorithm, and show that this algorithm's competitive ratio is at most 2( spl alpha ( spl alpha - 1)) sup spl alpha e sup spl alpha . This competitive ratio is significantly better and is approximately 2e sup spl alpha +1 for large spl alpha . Our result is essentially tight for large spl alpha . In particular, as spl alpha approaches infinity, we show that any algorithm must have competitive ratio e sup spl alpha (up to lower order terms). We then turn to the problem of dynamic speed scaling to minimize the maximum temperature that the device ever reaches, again subject to the constraint that all jobs finish by their deadlines. We assume that the device cools according to Fourier's law. We show how to solve this problem in polynomial time, within any error bound, using the ellipsoid algorithm.", "We consider speed scaling algorithms to minimize device temperature subject to the constraint that every task finishes by its deadline. We assume that the device cools according to Fourier's law. We show that the optimal offline algorithm proposed in [18] for minimizing total energy (that we call YDS) is an O(1)-approximation with respect to temperature. Tangentially, we observe that the energy optimality of YDS is an elegant consequence of the well known KKT optimality conditions. Two online algorithms, AVR and Optimal Available, were proposed in [18] in the context of energy management. It was shown that these algorithms were O(1)-competitive with respect to energy in [18] and [2]. Here we show these algorithms are not O(1)-competitive with respect to temperature. This demonstratively illustrates the observation from practice that power management techniques that are effective for managing energy may not be effective for managing temperature. We show that the most intuitive temperature management algorithm, running at such a speed so that the temperature is constant, is surprisingly not O(1)-competitive with respect to temperature. In contrast, we show that the online algorithm BKP, proposed in [2], is O(1)-competitive with respect to temperature. This is the first O(1)-competitiveness analysis with respect to temperature for an online algorithm." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.
A different variation is to assume that the processor can only choose between discrete speeds. @cite_7 show that minimizing energy consumption in this setting while meeting all deadlines is NP-hard, but give approximations for some special cases.
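To make the discrete-speed setting concrete, the following brute-force sketch (an illustration of the problem, not an algorithm from the cited paper) handles the special case mentioned above in which all jobs share a common arrival time and, here, also a common deadline: each job is assigned a single allowed speed, and a speed assignment is feasible exactly when the jobs fit back to back into the window. The instance data below are made up.

    from itertools import product

    ALPHA = 3.0                # power function P(s) = s**ALPHA
    SPEEDS = [1.0, 2.0, 4.0]   # allowed discrete speeds
    WORKS = [3.0, 5.0, 2.0]    # work of each job (hypothetical instance)
    D = 6.0                    # common deadline, all jobs released at time 0

    def energy(work, speed, alpha=ALPHA):
        # Running `work` units at constant speed takes work/speed time and
        # consumes (work/speed) * speed**alpha = work * speed**(alpha - 1).
        return work * speed ** (alpha - 1)

    best = None
    for speeds in product(SPEEDS, repeat=len(WORKS)):
        makespan = sum(w / s for w, s in zip(WORKS, speeds))
        if makespan <= D:  # jobs run non-preemptively back to back
            total = sum(energy(w, s) for w, s in zip(WORKS, speeds))
            if best is None or total < best[0]:
                best = (total, speeds)

    print(best)  # (minimum energy, chosen speed per job), or None if infeasible

The exponential sweep over speed assignments is of course only viable for tiny instances; it is meant to show where the combinatorial difficulty comes from.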
{ "cite_N": [ "@cite_7" ], "mid": [ "1517415100" ], "abstract": [ "We study the problem of non-preemptive scheduling to minimize energy consumption for devices that allow dynamic voltage scaling. Specifically, consider a device that can process jobs in a non-preemptive manner. The input consists of (i) the set R of available speeds of the device, (ii) a set J of jobs, and (iii) a precedence constraint Π among J. Each job j in J, defined by its arrival time aj, deadline dj, and amount of computation cj, is supposed to be processed by the device at a speed in R. Under the assumption that a higher speed means higher energy consumption, the power-saving scheduling problem is to compute a feasible schedule with speed assignment for the jobs in J such that the required energy consumption is minimized. This paper focuses on the setting of weakly dynamic voltage scaling, i.e., speed change is not allowed in the middle of processing a job. To demonstrate that this restriction on many portable power-aware devices introduces hardness to the power-saving scheduling problem, we prove that the problem is NP-hard even if aj = aj ′ and dj = dj ′ hold for all j,j ′∈ Jand |R|=2. If |R|<∞, we also give fully polynomial-time approximation schemes for two cases of the general NP-hard problem: (a) all jobs share a common arrival time, and (b) Π = ∅ and for any j,j ′ ∈ J, aj ≤ aj ′ implies dj ≤ dj ′. To the best of our knowledge, there is no previously known approximation algorithm for any special case of the NP-hard problem." ] }
cs0605126
2951388088
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.
Another algorithmic approach to power management is to identify times when the processor or parts of it can be partially or completely powered down. Irani and Pruhs @cite_15 survey work along these lines as well as approaches based on speed scaling.
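The textbook example behind power-down policies (a standard illustration, not a result specific to the survey) is the break-even threshold for a single idle period, which also yields the classical 2-competitive online strategy via the ski-rental argument.

    % A device burns power P_idle while idle and pays a fixed energy cost
    % E_wake for powering down and back up. For an idle period of length t,
    % staying on costs P_idle * t and powering down costs E_wake, so the
    % break-even idle length is
    \[
      t^{*} \;=\; \frac{E_{\mathrm{wake}}}{P_{\mathrm{idle}}}.
    \]
    % The online rule "power down after idling for t^*" pays at most twice
    % the optimal cost on every idle period.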
{ "cite_N": [ "@cite_15" ], "mid": [ "2005012890" ], "abstract": [ "We survey recent research that has appeared in the theoretical computer science literature on algorithmic problems related to power management. We will try to highlight some open problem that we feel are interesting. This survey places more concentration on lines of research of the authors: managing power using the techniques of speed scaling and power-down which are also currently the dominant techniques in practice." ] }
cs0605135
2952537745
In this work we focus on the general relay channel. We investigate the application of estimate-and-forward (EAF) to different scenarios. Specifically, we consider assignments of the auxiliary random variables that always satisfy the feasibility constraints. We first consider the multiple relay channel and obtain an achievable rate without decoding at the relays. We demonstrate the benefits of this result via an explicit discrete memoryless multiple relay scenario where multi-relay EAF is superior to multi-relay decode-and-forward (DAF). We then consider the Gaussian relay channel with coded modulation, where we show that a three-level quantization outperforms the Gaussian quantization commonly used to evaluate the achievable rates in this scenario. Finally we consider the cooperative general broadcast scenario with a multi-step conference. We apply estimate-and-forward to obtain a general multi-step achievable rate region. We then give an explicit assignment of the auxiliary random variables, and use this result to obtain an explicit expression for the single common message broadcast scenario with a two-step conference.
An extension of the relay scenario to a hybrid broadcast-relay system was introduced in @cite_24 , in which the authors applied a combination of EAF and DAF strategies to the independent broadcast channel with a single common message and then extended this strategy to a multi-step conference. In @cite_5 we used both a single-step and a two-step conference with orthogonal conferencing channels in the discrete memoryless framework. A thorough investigation of the broadcast-relay channel was carried out in @cite_25 , where the authors applied the DAF strategy to the case in which only one user helps the other and also presented an upper bound for this case. They then analyzed the fully cooperative scenario, applying both the DAF and the EAF methods.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_25" ], "mid": [ "140150137", "2135306111", "" ], "abstract": [ "We develop communication strategies for the rate-constrained interactive decoding of a message broadcast to a group of interested users. This situation diers from the relay channel in that all users are interested in the transmitted message, and from the broadcast channel because no user can decode on its own. We focus on two-user scenarios, and describe a baseline strategy that uses ideas of coding with decoder side information. One user acts initially as a relay for the other. That other user then decodes the message and sends back random parity bits, enabling the first user to decode. We show how to improve on this scheme’s performance through a conversation consisting of multiple rounds of discussion. While there are now more messages, each message is shorter, lowering the overall rate of the conversation. Such multi-round conversations can be more ecient because earlier messages serve as side information known at both encoder and decoder. We illustrate these ideas for binary erasure channels. We show that multi-round conversations can decode using less overall rate than is possible with the single-round scheme.", "We consider the problem of communicating over the general discrete memoryless broadcast channel (DMBC) with partially cooperating receivers. In our setup, receivers are able to exchange messages over noiseless conference links of finite capacities, prior to decoding the messages sent from the transmitter. In this paper, we formulate the general problem of broadcast with cooperation. We first find the capacity region for the case where the BC is physically degraded. Then, we give achievability results for the general broadcast channel, for both the two independent messages case and the single common message case", "" ] }
quant-ph0604141
2949446743
We show that quantum circuits cannot be made fault-tolerant against a depolarizing noise level of approximately 45%, thereby improving on a previous bound of 50% (due to Razborov). Our precise quantum circuit model enables perfect gates from the Clifford group (CNOT, Hadamard, S, X, Y, Z) and arbitrary additional one-qubit gates that are subject to that much depolarizing noise. We prove that this set of gates cannot be universal for arbitrary (even classical) computation, from which the upper bound on the noise threshold for fault-tolerant quantum computation follows.
Finally, we note that our work is related to, and partly stimulated by, the circle of ideas surrounding measurement-based quantum computation that was largely initiated by @cite_1 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "2035993353", "1490521149" ], "abstract": [ "We present a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. The measurements are used to imprint a quantum logic circuit on the state, thereby destroying its entanglement at the same time. Cluster states are thus one-way quantum computers and the measurements form the program.", "Algorithms such as quantum factoring1 and quantum search2 illustrate the great theoretical promise of quantum computers; but the practical implementation of such devices will require careful consideration of the minimum resource requirements, together with the development of procedures to overcome inevitable residual imperfections in physical systems3,4,5. Many designs have been proposed, but none allow a large quantum computer to be built in the near future6. Moreover, the known protocols for constructing reliable quantum computers from unreliable components can be complicated, often requiring many operations to produce a desired transformation3,4,5,7,8. Here we show how a single technique—a generalization of quantum teleportation9—reduces resource requirements for quantum computers and unifies known protocols for fault-tolerant quantum computation. We show that single quantum bit (qubit) operations, Bell-basis measurements and certain entangled quantum states such as Greenberger–Horne–Zeilinger (GHZ) states10—all of which are within the reach of current technology—are sufficient to construct a universal quantum computer. We also present systematic constructions for an infinite class of reliable quantum gates that make the design of fault-tolerant quantum computers much more straightforward and methodical." ] }
cs0604015
2953251814
Although the Internet AS-level topology has been extensively studied over the past few years, little is known about the details of the AS taxonomy. An AS "node" can represent a wide variety of organizations, e.g., large ISP, or small private business, university, with vastly different network characteristics, external connectivity patterns, network growth tendencies, and other properties that we can hardly neglect while working on veracious Internet representations in simulation environments. In this paper, we introduce a radically new approach based on machine learning techniques to map all the ASes in the Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes with expected accuracy of 78.1%. We release to the community the AS-level topology dataset augmented with: 1) the AS taxonomy information and 2) the set of AS attributes we used to classify ASes. We believe that this dataset will serve as an invaluable addition to further understanding of the structure and evolution of the Internet.
Several works have developed techniques for decomposing the AS topology into different levels or tiers based on connectivity properties of BGP-derived AS graphs. Govindan and Reddy @cite_13 propose a classification of ASes into four levels based on their AS degree. Ge et al. @cite_5 classify ASes into seven tiers based on inferred customer-to-provider relationships; their classification exploits the idea that provider ASes should sit in higher tiers than their customers. Subramanian et al. @cite_0 classify ASes into five tiers based on inferred customer-to-provider as well as peer-to-peer relationships.
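As a toy version of the simplest of these schemes, degree-based classification, the sketch below bins ASes into four levels by their degree in an undirected AS graph; the thresholds and the example edges are made up for illustration and are not those used in the cited works.

    from collections import defaultdict

    # Hypothetical AS adjacencies (undirected edges of the AS graph).
    edges = [("AS1", "AS2"), ("AS1", "AS3"), ("AS2", "AS3"), ("AS3", "AS4")]

    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1

    def level(deg):
        # Illustrative cut-offs only: higher degree -> higher (coarser) tier.
        if deg >= 100:
            return 1   # large transit providers
        if deg >= 10:
            return 2   # regional providers
        if deg >= 3:
            return 3   # multi-homed edge networks
        return 4       # stubs

    tiers = {asn: level(d) for asn, d in degree.items()}
    print(tiers)  # e.g. {'AS1': 4, 'AS2': 4, 'AS3': 3, 'AS4': 4}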
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_13" ], "mid": [ "2160565743", "1846523235", "2165743744" ], "abstract": [ "The delivery of IP traffic through the Internet depends on the complex interactions between thousands of autonomous systems (AS) that exchange routing information using the border gateway protocol (BGP). This paper investigates the topological structure of the Internet in terms of customer-provider and peer-peer relationships between autonomous systems, as manifested in BGP routing policies. We describe a technique for inferring AS relationships by exploiting partial views of the AS graph available from different vantage points. Next we apply the technique to a collection of ten BGP routing tables to infer the relationships between neighboring autonomous systems. Based on these results, we analyze the hierarchical structure of the Internet and propose a five-level classification of AS. Our characterization differs from previous studies by focusing on the commercial relationships between autonomous systems rather than simply the connectivity between the nodes.", "The study of the Internet topology has recently received much attention from the research community. In particular, the observation that the network graph has interesting properties, such as power laws, that might be explored in a myriad of ways. Most of the work in characterizing the Internet graph is based on the physical network graph, i.e., the connectivity graph. In this paper we investigate how logical relationships between nodes of the AS graph can be used to gain insight to its structure. We characterize the logical graph using various metrics and identify the presence of power laws in the number of customers that a provider has. Using these logical relationships we define a structural model of the AS graph. The model highlights the hierarchical nature of logical relationships and the preferential connection to larger providers. We also investigate the consistency of this model over time and observe interesting properties of the hierarchical structure.", "The Internet routing fabric is partitioned into several domains. Each domain represents a region of the fabric administered by a single commercial entity. Over the past two years, the routing fabric has experienced significant growth. From more than a year's worth of inter-domain routing traces, we analyze the Internet inter-domain topology, its route stability behavior, and the effect of growth on these characteristics. Our analysis reveals several interesting results. Despite growth, the degree distribution and the diameter of the inter-domain topology have remained relatively unchanged. Furthermore, there exists a four-level hierarchy of Internet domains classified by degree. However, connectivity between domains is significantly non-hierarchical. Despite increased connectivity at higher levels in the topology, the distribution of paths to prefixes from the backbone remained relatively unchanged. There is evidence that both route availability and the mean reachability duration have degraded with Internet growth." ] }
math0603097
2949404783
We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra.
For a comprehensive bibliography on circle packings and circle patterns we refer to Stephenson's monograph @cite_17 . Here, we can only attempt to briefly discuss some of the most important and most closely related results.
{ "cite_N": [ "@cite_17" ], "mid": [ "2082107757" ], "abstract": [ "Part I. An Overview of Circle Packing: 1. A circle packing menagerie 2. Circle packings in the wild Part II. Rigidity: Maximal Packings: 3. Preliminaries: topology, combinatorics, and geometry 4. Statement of the fundamental result 5. Bookkeeping and monodromy 6. Proof for combinatorial closed discs 7. Proof for combinatorial spheres 8. Proof for combinatorial open discs 9. Proof for combinatorial surfaces Part III. Flexibility: Analytic Functions: 10. The intuitive landscape 11. Discrete analytic functions 12. Construction tools 13. Discrete analytic functions on the disc 14. Discrete entire functions 15. Discrete rational functions 16. Discrete analytic functions on Riemann surfaces 17. Discrete conformal structure 18. Random walks on circle packings Part IV: 19. Thurston's Conjecture 20. Extending the Rodin Sullivan theorem 21. Approximation of analytic functions 22. Approximation of conformal structures 23. Applications Appendix A. Primer on classical complex analysis Appendix B. The ring lemma Appendix C. Doyle spirals Appendix D. The brooks parameter Appendix E. Schwarz and buckyballs Appendix F. Inversive distance packings Appendix G. Graph embedding Appendix H. Square grid packings Appendix I. Experimenting with circle packings." ] }
math0603097
2949404783
We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra.
Recently, Schlenker has treated weighted Delaunay triangulations in piecewise flat and piecewise hyperbolic surfaces using a deformation method @cite_12 . He obtains an existence and uniqueness theorem (Theorem 1.4 of @cite_12 ) with the same scope as Theorem , but the conditions for existence are in terms of angle sums over paths as in Theorem . This seems to be the first time that conditions of this type were obtained for circle patterns with cone singularities. It would be interesting to show directly that the conditions of his theorem are equivalent to the conditions of Theorem .
{ "cite_N": [ "@cite_12" ], "mid": [ "2951776094" ], "abstract": [ "We consider hyperideal'' circle patterns, i.e. patterns of disks appearing in the definition of the Delaunay decomposition associated to a set of disjoint disks, possibly with cone singularities at the center of those disks. Hyperideal circle patterns are associated to hyperideal hyperbolic polyhedra. We describe the possible intersection angles and singular curvatures of those circle patterns, on Euclidean or hyperbolic surfaces with conical singularities. This is related to results on the dihedral angles of ideal or hyperideal hyperbolic polyhedra." ] }
math0603097
2949404783
We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra.
The research for this article was conducted almost entirely while I enjoyed the hospitality of the Mathematisches Forschungsinstitut Oberwolfach, where I participated in the Research in Pairs Program together with Jean-Marc Schlenker, who was working on his closely related paper @cite_12 . I am grateful for the excellent working conditions I experienced in Oberwolfach and for the extremely inspiring and fruitful discussions with Jean-Marc, who was closely involved in the work presented here.
{ "cite_N": [ "@cite_12" ], "mid": [ "2951776094" ], "abstract": [ "We consider hyperideal'' circle patterns, i.e. patterns of disks appearing in the definition of the Delaunay decomposition associated to a set of disjoint disks, possibly with cone singularities at the center of those disks. Hyperideal circle patterns are associated to hyperideal hyperbolic polyhedra. We describe the possible intersection angles and singular curvatures of those circle patterns, on Euclidean or hyperbolic surfaces with conical singularities. This is related to results on the dihedral angles of ideal or hyperideal hyperbolic polyhedra." ] }
cs0603115
1539159366
The Graphic Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphic processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results.
* Libraries based on a floating-point representation. The current trend in CPUs is to provide highly optimized floating-point operators. Some libraries, such as MPFUN @cite_8 , exploit these floating-point operators by representing a high-precision value as an array of floating-point numbers.
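The core trick behind representing one high-precision value as several machine floats is an error-free transformation: the rounding error of a floating-point addition can itself be captured exactly in a second float. The sketch below shows Knuth's TwoSum and a pair-of-floats addition in Python; it illustrates the idea only and is not code from MPFUN or from the GPU implementation described in the paper.

    def two_sum(a, b):
        # Error-free transformation: s + e equals a + b exactly,
        # with s = fl(a + b) and e the rounding error (Knuth's TwoSum).
        s = a + b
        bb = s - a
        e = (a - (s - bb)) + (b - bb)
        return s, e

    def pair_add(x, y):
        # Add two values, each stored as a (high, low) pair of floats.
        # This is the basic building block of "double-single"/"double-double"
        # style arithmetic; the final two_sum renormalizes the pair so that
        # most of the significance sits in the high word.
        hi, lo = two_sum(x[0], y[0])
        lo += x[1] + y[1]
        return two_sum(hi, lo)

    # Example: adding 0.1 and 0.2 as pairs; hi is the rounded sum and lo
    # captures the rounding error exactly, so hi + lo is the exact sum of
    # the two double values.
    a = (0.1, 0.0)
    b = (0.2, 0.0)
    print(pair_add(a, b))

The same construction applies to single-precision GPU floats, which is how a pair of 24-bit significands can emulate a wider format.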
{ "cite_N": [ "@cite_8" ], "mid": [ "2005242289" ], "abstract": [ "A new version of a Fortran multiprecision computation system, based on the Fortran 90 language, is described. With this new approach, a translator program is not required—translation of Fortran code for multiprecision is accomplished by merely utilizing advanced features of Fortran 90, such as derived data types and operator extensions. This approach results in more-reliable translation and permits programmers of multiprecision applications to utilize the full power of Fortran 90. Three multiprecision data types are supported in this system: multiprecision integer, real, and complex. All the usual Fortran conventions for mixed-mode operations are supported, and many of the Fortran intrinsics, such as SIN, EXP, and MOD, are supported with multiprecision arguments. An interesting application of this software, wherein new number-theoretic identities have been discovered by means of multiprecision computations, is included also." ] }