Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
0804.1696
1667425162
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms to compose aspects allow invasiveness as a mean to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such kind of languages are interesting because they allow performing complex operations and better introduce functionalities. In this report we present a classification of invasive patterns in AOP. This classification characterizes the aspects invasive behavior and allows developers to abstract about the aspect incidence over the program they crosscut.
In @cite_5 , categories of direct and indirect interactions between aspects and methods are identified. A direct interaction occurs when an advice interferes with the execution of a method, whereas an indirect interaction occurs when advices and methods may read or write the same fields. This classification is similar to ours; however, it addresses a different dimension: we identify invasiveness patterns instead of direct/indirect interactions. Katz @cite_3 recognizes that aspects can be harmful to the base code and that specifications are needed for aspect-oriented applications. Our approach agrees with his ideas, and we likewise propose a means to write such specifications. Furthermore, he describes three groups of advices according to their properties: aspects that do not influence the underlying computation, aspects that change the control flow but do not affect existing fields, and aspects that affect existing fields. This classification is similar to ours; however, our characterization is more fine-grained: the first two groups correspond to our behavioral classification and the last to our data access classification.
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "2149612550", "126340784" ], "abstract": [ "We present a new classification system for aspect-oriented programs. This system characterizes the interactions between aspects and methods and identifies classes of interactions that enable modular reasoning about the crosscut program. We argue that this system can help developers structure their understanding of aspect-oriented programs and promotes their ability to reason productively about the consequences of crosscutting a program with a given aspect. We have designed and implemented a program analysis system that automatically classifies interactions between aspects and methods and have applied this analysis to a set of benchmark programs. We found that our analysis is able to 1) identify interactions with desirable properties (such as lack of interference), 2) identify potentially problematic interactions (such as interference caused by the aspect and the method both writing the same field), and 3) direct the developer's attention to the causes of such interactions.", "Aspects are intended to add needed functionality to a system or to treat concerns of the system by augmenting or changing the existing code in a manner that cross-cuts the usual class or process hierarchy. However, sometimes aspects can invalidate some of the already existing desirable properties of the system. This paper shows how to automatically identify such situations. The importance of specifications of the underlying system is emphasized, and shown to clarify the degree of obliviousness appropriate for aspects. The use of regression testing is considered, and regression verification is recommended instead, with possible division into static analysis, deductive proofs, and aspect validation using model checking. Static analysis of only the aspect code is effective when strongly typed and clearly parameterized aspect languages are used. Spectative aspects can then be identified, and imply absence of harm for all safety and liveness properties involving only the variables and fields of the original system. Deductive proofs can be extended to show inductive invariants are not harmed by an aspect, also by treating only the aspect code. Aspect validation to establish lack of harm is defined and suggested as an optimal approach when the entire augmented system with the aspect woven in must be considered." ] }
0804.1696
1667425162
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms to compose aspects allow invasiveness as a mean to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such kind of languages are interesting because they allow performing complex operations and better introduce functionalities. In this report we present a classification of invasive patterns in AOP. This classification characterizes the aspects invasive behavior and allows developers to abstract about the aspect incidence over the program they crosscut.
Clifton and Leavens propose spectators and assistants @cite_4 . Spectators are advices that do not affect the control flow of the advised method and do not affect existing fields. Assistants can change the control flow of the advised method and affect existing fields. Spectators are similar to our classification categories that do not interfere with the mainline computation or write fields; all our other classification categories are equivalent to assistants. Nevertheless, our classification achieves a finer level of granularity.
{ "cite_N": [ "@cite_4" ], "mid": [ "1494264673" ], "abstract": [ "In general, aspect-oriented programs require a whole-program analysis to understand the semantics of a single method invocation. This property can make reasoning difficult, impeding maintenance efforts, contrary to a stated goal of aspect-oriented programming. We propose some simple modifications to AspectJ that permit modular reasoning. This eliminates the need for whole-program analysis and makes code easier to understand and maintain." ] }
0803.3395
1890461267
In the first part of the paper we generalize a descent technique due to Harish-Chandra to the case of a reductive group acting on a smooth affine variety both defined over arbitrary local field F of characteristic zero. Our main tool is Luna slice theorem. In the second part of the paper we apply this technique to symmetric pairs. In particular we prove that the pair (GL(n,C),GL(n,R)) is a Gelfand pair. We also prove that any conjugation invariant distribution on GL(n,F) is invariant with respect to transposition. For non-archimedean F the later is a classical theorem of Gelfand and Kazhdan. We use the techniques developed here in our subsequent work [AG3] where we prove an archimedean analog of the theorem on uniqueness of linear periods by H. Jacquet and S. Rallis.
Another generalization of Harish-Chandra descent using the Luna slice theorem has been carried out in the non-archimedean case in @cite_17 . In that paper Rader and Rallis investigated spherical characters of @math -distinguished representations of @math for symmetric pairs @math and checked the validity of what they call the "density principle" for rank one symmetric pairs. They found that it usually holds, but they also found counterexamples.
{ "cite_N": [ "@cite_17" ], "mid": [ "2006686635" ], "abstract": [ "Symmetric spaces over local fields, and the harmonic analysis of their class one representations, arise as the local calculation of Jacquet's theory of the relative trace formula. There is an extensive literature at the real place, but few general results for p-adic fields are known. The objective here is to carry over to symmetric spaces as much as possible of Queens and the prerequisite results of Howe, and to provide counterexamples for those things which do not generalize. We derive the Weyl integration formula, local constancy of the spherical character on the θ-regular set, Howe's conjecture for θ-groups, the germ expansion for spherical characters at the origin, and a spherical version of Howe's Kirillov theory for compact p-adic groups. We find that the density property of regular orbital integrals fails. Some of the basic ideas are nascent in Hakim's thesis (written under Herve Jacquet)." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with in-efficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity along with cost-effective heuristic-based divide and conquer attestation process which is O(ln n) in average -O(n) in the worst scenario- for further verification of aggregated results.
In end-to-end encryption schemes @cite_8 @cite_0 @cite_15 @cite_5 , intermediate aggregators apply aggregation functions to encrypted data that they cannot decrypt, because these intermediate aggregators do not have access to the keys, which are shared only between the data originators (usually leaf sensor nodes) and the BS. In CDA @cite_0 , sensor nodes share a common symmetric key with the BS that is kept hidden from middle-way aggregators. In @cite_8 , each leaf sensor shares a distinct long-term key with the BS; this key is originally derived from a master secret known only to the BS. These protocols show that aggregation of end-to-end encrypted data is possible by using an additive Privacy Homomorphism (PH) as the underlying encryption scheme. Although these protocols are supposed to provide maximum data secrecy along the paths between leaf sensor nodes and their sink, the overall secrecy resilience of a WSN is endangered if an adversary gains access to the master key in @cite_8 , or compromises even a single leaf sensor node in CDA to acquire the common symmetric key shared among all leaf nodes.
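To make the additive Privacy Homomorphism idea concrete, here is a minimal Python sketch of end-to-end encrypted aggregation in the spirit of the additively homomorphic stream cipher of @cite_8 : each leaf adds a keyed pad to its reading modulo M, aggregators add ciphertexts without any key, and the BS subtracts the sum of the pads. The HMAC-based pad derivation, key names and modulus are illustrative assumptions, not the cited protocol's exact construction.

```python
# Hedged sketch: additively homomorphic encrypted aggregation.
# Leaf i sends c_i = (m_i + k_i) mod M; aggregators add ciphertexts mod M;
# the BS, which can re-derive every pad k_i, recovers the plaintext sum.
import hmac, hashlib

M = 2 ** 32  # modulus; must exceed the largest possible aggregate (assumption)

def keystream(node_key: bytes, nonce: bytes) -> int:
    """Per-epoch pad derived from a node's key and a public nonce (assumed)."""
    digest = hmac.new(node_key, nonce, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % M

def encrypt(reading: int, node_key: bytes, nonce: bytes) -> int:
    return (reading + keystream(node_key, nonce)) % M

def aggregate(ciphertexts):
    """Intermediate aggregators only add ciphertexts; no key is needed."""
    return sum(ciphertexts) % M

def decrypt_sum(agg: int, node_keys, nonce: bytes) -> int:
    """BS removes the sum of all pads to obtain the plaintext sum."""
    pad_sum = sum(keystream(k, nonce) for k in node_keys) % M
    return (agg - pad_sum) % M

# Usage: three leaves report readings 10, 20, 30 for epoch nonce b"epoch-1".
keys = [b"k1", b"k2", b"k3"]
nonce = b"epoch-1"
cts = [encrypt(m, k, nonce) for m, k in zip([10, 20, 30], keys)]
assert decrypt_sum(aggregate(cts), keys, nonce) == 60
```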
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "", "1978885205", "2114248007", "2102832611" ], "abstract": [ "", "Data aggregation is a widely used technique in wireless sensor networks. The security issues, data confidentiality and integrity, in data aggregation become vital when the sensor network is deployed in a hostile environment. There has been many related work proposed to address these security issues. In this paper we survey these work and classify them into two cases: hop-by-hop encrypted data aggregation and end-to-end encrypted data aggregation. We also propose two general frameworks for the two cases respectively. The framework for end-to-end encrypted data aggregation has higher computation cost on the sensor nodes, but achieves stronger security, in comparison with the framework for hop-by-hop encrypted data aggregation.", "In-network data aggregation is a popular technique for reducing the energy consumption tied to data transmission in a multi-hop wireless sensor network. However, data aggregation in untrusted or even hostile environments becomes problematic when end-to-end privacy between sensors and the sink is desired. In this paper we revisit and investigate the applicability of additively homomorphic public-key encryption algorithms for certain classes of wireless sensor networks. Finally, we provide recommendations for selecting the most suitable public key schemes for different topologies and wireless sensor network scenarios.", "Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with in-efficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity along with cost-effective heuristic-based divide and conquer attestation process which is O(ln n) in average -O(n) in the worst scenario- for further verification of aggregated results.
In @cite_15 @cite_5 , public key encryption based on elliptic curves is used to conceal transient data from the leaf sensors to the BS. These schemes enhance the secrecy resilience of WSNs against individual sensor attacks, since compromising a single sensor node, or even a set of them, does not reveal the decryption key, which only the BS knows. An attractive feature of @cite_15 is the introduction of data integrity in end-to-end encrypted WSNs through Merkle hash trees of Message Authentication Codes (MACs). However, both schemes raise power consumption concerns, since the computational requirements of public key encryption are still considered high for WSNs @cite_13 .
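The Merkle-tree integrity idea can be sketched as follows: per-node MACs are committed to with a hash tree, and the BS later checks that a given node's MAC is consistent with the committed root. The SHA-256 hashing, MAC construction and proof layout below are assumptions for illustration, not the exact construction of @cite_15 .

```python
# Hedged sketch: a Merkle hash tree over per-node MACs for integrity checks.
import hmac, hashlib

def mac(node_key: bytes, ciphertext: bytes) -> bytes:
    return hmac.new(node_key, ciphertext, hashlib.sha256).digest()

def merkle_root(leaves):
    """Root over a list of leaf hashes; odd levels duplicate the last node."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level, proof = list(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    h = leaf
    for sibling, sibling_is_left in proof:
        h = hashlib.sha256(sibling + h if sibling_is_left else h + sibling).digest()
    return h == root

# Usage: the BS checks that node 2's MAC was committed to by the aggregator.
macs = [mac(k, ct) for k, ct in [(b"k1", b"c1"), (b"k2", b"c2"),
                                 (b"k3", b"c3"), (b"k4", b"c4")]]
root = merkle_root(macs)
assert verify(macs[2], merkle_proof(macs, 2), root)
```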
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "1978885205", "2114248007", "2033751220" ], "abstract": [ "Data aggregation is a widely used technique in wireless sensor networks. The security issues, data confidentiality and integrity, in data aggregation become vital when the sensor network is deployed in a hostile environment. There has been many related work proposed to address these security issues. In this paper we survey these work and classify them into two cases: hop-by-hop encrypted data aggregation and end-to-end encrypted data aggregation. We also propose two general frameworks for the two cases respectively. The framework for end-to-end encrypted data aggregation has higher computation cost on the sensor nodes, but achieves stronger security, in comparison with the framework for hop-by-hop encrypted data aggregation.", "In-network data aggregation is a popular technique for reducing the energy consumption tied to data transmission in a multi-hop wireless sensor network. However, data aggregation in untrusted or even hostile environments becomes problematic when end-to-end privacy between sensors and the sink is desired. In this paper we revisit and investigate the applicability of additively homomorphic public-key encryption algorithms for certain classes of wireless sensor networks. Finally, we provide recommendations for selecting the most suitable public key schemes for different topologies and wireless sensor network scenarios.", "As sensor networks edge closer towards wide-spread deployment, security issues become a central concern. So far, much research has focused on making sensor networks feasible and useful, and has not concentrated on security. We present a suite of security building blocks optimized for resource-constrained environments and wireless communication. SPINS has two secure building blocks: SNEP and μTESLA SNEP provides the following important baseline security primitives: Data confidentiality, two-party data authentication, and data freshness. A particularly hard problem is to provide efficient broadcast authentication, which is an important mechanism for sensor networks. μTESLA is a new protocol which provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with in-efficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity along with cost-effective heuristic-based divide and conquer attestation process which is O(ln n) in average -O(n) in the worst scenario- for further verification of aggregated results.
Many hop-by-hop aggregation protocols in WSNs, such as @cite_16 @cite_2 @cite_12 @cite_14 @cite_9 , provide more efficient aggregation operations and pay close attention to data integrity. However, since sensed data passed to non-leaf aggregators is revealed for the sake of middle-way aggregation, hop-by-hop aggregation protocols offer a weaker data confidentiality model than end-to-end aggregation protocols: the data secrecy of a partition can be revoked if a passive adversary obtains the key of that partition's root aggregator.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "2131924466", "2110889959", "2139890752", "2154625773", "2152765137" ], "abstract": [ "In sensor networks, data aggregation is a vital primitive enabling efficient data queries. An on-site aggregator device collects data from sensor nodes and produces a condensed summary which is forwarded to the off-site querier, thus reducing the communication cost of the query. Since the aggregator is on-site, it is vulnerable to physical compromise attacks. A compromised aggregator may report false aggregation results. Hence, it is essential that techniques are available to allow the querier to verify the integrity of the result returned by the aggregator node. We propose a novel framework for secure information aggregation in sensor networks. By constructing efficient random sampling mechanisms and interactive proofs, we enable the querier to verify that the answer given by the aggregator is a good approximation of the true value, even when the aggregator and a fraction of the sensor nodes are corrupted. In particular, we present efficient protocols for secure computation of the median and average of the measurements, for the estimation of the network size, for finding the minimum and maximum sensor reading, and for random sampling and leader election. Our protocols require only sublinear communication between the aggregator and the user.", "Hop-by-hop data aggregation is a very important technique for reducing the communication overhead and energy expenditure of sensor nodes during the process of data collection in a sensor network. However, because individual sensor readings are lost in the per-hop aggregation process, compromised nodes in the network may forge false values as the aggregation results of other nodes, tricking the base station into accepting spurious aggregation results. Here a fundamental challenge is: how can the base station obtain a good approximation of the fusion result when a fraction of sensor nodes are compromised.To answer this challenge, we propose SDAP, a Secure Hop-by-hop Data Aggregation Protocol for sensor networks. The design of SDAP is based on the principles of divide-and-conquer and commit-and-attest. First, SDAP uses a novel probabilistic grouping technique to dynamically partition the nodes in a tree topology into multiple logical groups (subtrees) of similar sizes. A commitment-based hop-by-hop aggregation is performed in each group to generate a group aggregate. The base station then identifies the suspicious groups based on the set of group aggregates. Finally, each group under suspect participates in an attestation process to prove the correctness of its group aggregate. Our analysis and simulations show that SDAP can achieve the level of efficiency close to an ordinary hop-by-hop aggregation protocol while providing certain assurance on the trustworthiness of the aggregation result. Moreover, SDAP is a general-purpose secure aggregation protocol applicable to multiple aggregation functions.", "An emerging class of important applications uses ad hoc wireless networks of low-power sensor devices to monitor and send information about a possibly hostile environment to a powerful base station connected to a wired network. To conserve power, intermediate network nodes should aggregate results from individual sensors. However, this opens the risk that a single compromised sensor device can render the network useless, or worse, mislead the operator into trusting a false reading. 
We present a protocol that provides a secure aggregation mechanism for wireless networks that is resilient to both intruder devices and single device key compromises. Our protocol is designed to work within the computation, memory and power consumption limits of inexpensive sensor devices, but takes advantage of the properties of wireless networking, as well as the power asymmetry between the devices and the base station.", "In-network aggregation is an essential primitive for performing queries on sensor network data. However, most aggregation algorithms assume that all intermediate nodes are trusted. In contrast, the standard threat model in sensor network security assumes that an attacker may control a fraction of the nodes, which may misbehave in an arbitrary (Byzantine) manner.We present the first algorithm for provably secure hierarchical in-network data aggregation. Our algorithm is guaranteed to detect any manipulation of the aggregate by the adversary beyond what is achievable through direct injection of data values at compromised nodes. In other words, the adversary can never gain any advantage from misrepresenting intermediate aggregation computations. Our algorithm incurs only O(Δ log2 n) node congestion, supports arbitrary tree-based aggregator topologies and retains its resistance against aggregation manipulation in the presence of arbitrary numbers of malicious nodes. The main algorithm is based on performing the sum aggregation securely by first forcing the adversary to commit to its choice of intermediate aggregation results, and then having the sensor nodes independently verify that their contributions to the aggregate are correctly incorporated. We show how to reduce secure median , count , and average to this primitive.", "Sensor networks include nodes with limited computation and communication capabilities. One of the basic functions of sensor networks is to sense and transmit data to the end users. The resource constraints and security issues pose a challenge to information aggregation in large sensor networks. Bootstrapping keys is another challenge because public key cryptosystems are unsuitable for use in resource-constrained sensor networks. In this paper, we propose a solution by dividing the problem in two domains. First, we present a protocol for establishing cluster keys in sensor networks using verifiable secret sharing. We chose elliptic curve cryptosystems for security because of their smaller key size, faster computations and reductions in processing power. Second, we develop a secure data aggregation and verification (SecureDAV) protocol that ensures that the base station never accepts faulty aggregate readings. An integrity check of the readings is done using Merkle hash trees, avoiding over-reliance on the cluster-heads." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
A standard centralized approach to tracking ( @cite_4 ) is ``sensor specific'', in the sense that it uses some smart, powerful sensors with high processing abilities. In particular, this algorithm assumes that each node is aware of its absolute location (e.g. via a GPS) or of a relative location. The sensors must be capable of estimating the distance of the target from the sensor readings. The process of tracking a target has three distinct steps: detecting the presence of the target, determining the direction of motion of the target, and alerting appropriate nodes in the network. Thus, in their approach a very large part of the network is actively involved in the tracking process, a fact that may lead to increased energy dissipation. Also, in contrast to our method, which can simultaneously handle multiple targets, their protocol can only track one target in the network at any time. Overall, their method has several strengths (reasonable estimation error, precise location of the tracked source, real time target tracking), but there are weaknesses as well (intensive computations, intensive radio transmissions).
{ "cite_N": [ "@cite_4" ], "mid": [ "2135916619" ], "abstract": [ "Networks of small, densely distributed wireless sensor nodes are capable of solving a variety of collaborative problems such as monitoring and surveillance. We develop a simple algorithm that detects and tracks a moving target, and alerts sensor nodes along the projected path of the target. The algorithm involves only simple computation and localizes communication only to the nodes in the vicinity of the target and its projected course. The algorithm is evaluated on a small-scale testbed of Berkeley motes using a light source as the moving target. The performance results are presented emphasizing the accuracy of the technique, along with a discussion about our experience in using such a platform for target tracking experiments." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Our method is entirely different from the network architecture design approach of centralized placement and distributed tracking (see e.g. the book @cite_0 for a nice overview). In that approach, optimal (or as efficient as possible) sensor deployment strategies are proposed to ensure maximum sensing coverage with a minimal number of sensors, as well as power conservation in sensor networks. One of the centralized methods ( @cite_3 ), which focuses on deployment optimization, performs a grid-like discretization of the space. Their method tries to find the gridpoint closest to the target, instead of finding the exact coordinates of the target. In such a setting, an optimized placement of sensors guarantees that every gridpoint in the area is covered by a unique subset of sensors.
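A small sketch of the uniqueness property behind this grid-based localization: a target can be resolved to a grid point only if every grid point is covered by a distinct, non-empty subset of sensors (the identifying-code idea of @cite_3 ). The grid size, candidate sensor positions and sensing radius below are illustrative assumptions.

```python
# Hedged sketch: check that a candidate placement lets a target be localized
# to a grid point, i.e. every grid point has a unique, non-empty covering set.
from itertools import product

def covering_set(point, sensors, sensing_radius):
    """Indices of sensors whose (disk) sensing range covers the grid point."""
    px, py = point
    return frozenset(i for i, (sx, sy) in enumerate(sensors)
                     if (px - sx) ** 2 + (py - sy) ** 2 <= sensing_radius ** 2)

def placement_identifies_targets(grid_points, sensors, sensing_radius):
    """True iff distinct grid points always see distinct covering sets."""
    signatures = {}
    for p in grid_points:
        sig = covering_set(p, sensors, sensing_radius)
        if not sig or sig in signatures:   # uncovered or ambiguous point
            return False
        signatures[sig] = p
    return True

grid = list(product(range(4), range(4)))            # 4x4 grid of target positions
sensors = [(0, 1), (1, 3), (2, 0), (3, 2), (1, 1)]  # candidate placement (assumed)
print(placement_identifies_targets(grid, sensors, sensing_radius=1.5))
```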
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "2168452204", "2108299551" ], "abstract": [ "This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.", "We present novel grid coverage strategies for effective surveillance and target location in distributed sensor networks. We represent the sensor field as a grid (two or three-dimensional) of points (coordinates) and use the term target location to refer to the problem of locating a target at a grid point at any instant in time. We first present an integer linear programming (ILP) solution for minimizing the cost of sensors for complete coverage of the sensor field. We solve the ILP model using a representative public-domain solver and present a divide-and-conquer approach for solving large problem instances. We then use the framework of identifying codes to determine sensor placement for unique target location, We provide coding-theoretic bounds on the number of sensors and present methods for determining their placement in the sensor field. We also show that grid-based sensor placement for single targets provides asymptotically complete (unambiguous) location of multiple targets in the grid." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Another network design approach to tracking is provided in @cite_6 , which tries to avoid an expensive massive deployment of sensors by taking advantage of possible coverage overlaps over space and time, introducing a novel combinatorial model (based on set covers) that captures such overlaps. The authors then use this model to design and analyze an efficient approximate method for sensor placement and operation that, with high probability and in polynomial expected time, achieves a @math approximation ratio to the optimal solution.
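As a building block for such combinatorial coverage models, the classical greedy heuristic for set cover already yields the logarithmic approximation factor mentioned above. The sketch below shows only this plain single-coverage version; the construction of @cite_6 additionally requires each point to be covered at least three times and uses randomization.

```python
# Hedged sketch: textbook greedy set cover, the O(log n)-approximate building
# block underlying combinatorial coverage models like the one described above.
def greedy_set_cover(universe, subsets):
    """Pick sets greedily by how many still-uncovered elements they add."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        gain = subsets[best] & uncovered
        if not gain:                      # remaining points are not coverable
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(best)
        uncovered -= gain
    return chosen

# Usage: sensing ranges of candidate sensors, as sets of covered grid points.
universe = range(6)
ranges = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {1, 4}]
print(greedy_set_cover(universe, ranges))   # [0, 2] covers all six points
```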
{ "cite_N": [ "@cite_6" ], "mid": [ "2115335002" ], "abstract": [ "We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a @Q(logn) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
In contrast to centralized processing, a distributed model spreads the computation among the sensor nodes. Each sensor unit acquires local, partial and relatively coarse information from its environment; the network then collaboratively determines a fairly precise estimate based on its coverage and the multiplicity of sensing modalities. Several such distributed approaches have been proposed. In @cite_7 , a cluster-based distributed tracking scheme is provided: the sensor network is logically partitioned into local collaborative groups, and each group is responsible for providing information on a target and tracking it. Sensors that can jointly provide the most accurate information on a target (in this case, those that are nearest to the target) form a group. As the target moves, the local region must move with it; hence groups are dynamic, with nodes dropping out and others joining in. Clearly, time synchronization is a major prerequisite for this approach to work. Furthermore, this algorithm works well for merging multiple tracks corresponding to the same target; however, if two targets come very close to each other, the described mechanism will be unable to distinguish between them.
{ "cite_N": [ "@cite_7" ], "mid": [ "1581531227" ], "abstract": [ "The tradeoff between performance and scalability is a fundamental issue in distributed sensor networks. In this paper, we propose a novel scheme to efficiently organize and utilize network resources for target localization. Motivated by the essential role of geographic proximity in sensing, sensors are organized into geographically local collaborative groups. In a target tracking context, we present a dynamic group management method to initiate and maintain multiple tracks in a distributed manner. Collaborative groups are formed, each responsible for tracking a single target. The sensor nodes within a group coordinate their behavior using geographically-limited message passing. Mechanisms such as these for managing local collaborations are essential building blocks for scalable sensor network applications." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Another nice distributed approach is the dynamic convoy tree-based collaboration (DCTC) framework proposed in @cite_8 . The convoy tree includes the sensor nodes around the detected target, and the tree progressively adapts itself to add nodes and prune others as the target moves. In particular, as the target moves, some nodes lying upstream of the moving path drift farther away from the target and are pruned from the convoy tree, while some free nodes lying on the projected moving path soon need to join the collaborative tracking. As the tree further adapts itself to the movement of the target, the root will eventually be too far away from the target, which introduces the need to elect a new root and reconfigure the convoy tree accordingly. If the moving target's trail is known a priori and each node has knowledge of the global network topology, it is possible for the tracking nodes to agree on an optimal convoy tree structure; these assumptions are at the same time the main weaknesses of the protocol, since in many real scenarios they are unrealistic.
{ "cite_N": [ "@cite_8" ], "mid": [ "2157123706" ], "abstract": [ "Sensor nodes have limited sensing range and are not very reliable. To obtain accurate sensing data, many sensor nodes should he deployed and then the collaboration among them becomes an important issue. In W. Zhang and G. Cao, a tree-based approach has been proposed to facilitate sensor nodes collaborating in detecting and tracking a mobile target. As the target moves, many nodes in the tree may become faraway from the root of the tree, and hence a large amount of energy may be wasted for them to send their sensing data to the root. We address the tree reconfiguration problem. We formalize it as finding a min-cost convoy tree sequence, and solve it by proposing an optimized complete reconfiguration scheme and an optimized interception-based reconfiguration scheme. Analysis and simulation are conducted to compare the proposed schemes with each other and with other reconfiguration schemes. The results show that the proposed schemes are more energy efficient than others." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Finally, a ``mobile'' agent approach is followed in @cite_1 : a master agent travels through the network, and two slave agents are assigned the task of participating in the trilateration. As opposed to our method, their approach is quite complicated, involving several sub-protocols (e.g. election protocols, trilateration, fusion and delivery of tracking results, maintaining a tracking history). Although the use of mobile agents can greatly reduce the sensing, computing and communication overheads, their approach is not scalable in randomly scattered networks, nor in well connected irregular networks, since a large amount of offline computation is needed. Finally, the base that receives the tracking results is assumed to be fixed (which can be a problem in a tracking application).
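For completeness, the trilateration step that the master and slave agents would perform can be sketched with the standard linearization of the three circle equations. The anchor coordinates and measured distances below are illustrative, and this is not the exact positioning procedure of @cite_1 .

```python
# Hedged sketch: 2D trilateration from three anchor sensors by linearizing
# the circle equations (subtract the first equation from the other two).
import numpy as np

def trilaterate(anchors, distances):
    """Solve for (x, y) given three anchor positions and range measurements."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Usage: a target at (2, 3) ranged by anchors at (0, 0), (5, 0) and (0, 5).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
target = np.array([2.0, 3.0])
dists = [np.linalg.norm(target - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))   # ~[2.0, 3.0]
```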
{ "cite_N": [ "@cite_1" ], "mid": [ "1702602994" ], "abstract": [ "The wireless sensor network is an emerging technology that may greatly facilitate human life by providing ubiquitous sensing, computing, and communication capability, through which people can more closely interact with the environment wherever he she goes. To be context-aware, one of the central issues in sensor networks is location tracking, whose goal is to monitor the roaming path of a moving object. While similar to the location-update problem in PCS networks, this problem is more challenging in two senses: (1) there are no central control mechanism and backbone network in such environment, and (2) the wireless communication bandwidth is very limited. In this paper, we propose a novel protocol based on the mobile agent paradigm. Once a new object is detected, a mobile agent will be initiated to track the roaming path of the object. The agent is mobile since it will choose the sensor closest to the object to stay. The agent may invite some nearby slave sensors to cooperatively position the object and inhibit other irrelevant (i.e., farther) sensors from tracking the object.As a result, the communication and sensing overheads are greatly reduced. Our prototyping of the location-tracking mobile agent based on IEEE 802.11b NICs and our experimental experiences are also reported." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
The interested reader is referred to @cite_2 , the nice book by F. Zhao and L. Guibas, which even presents the tracking problem as a ``canonical'' problem for wireless sensor networks. Several tracking approaches are also presented in @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2168452204", "2014101702" ], "abstract": [ "This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.", "Ch 1 Intro. Ch 2 Canonical Problem: Localization and Tracking Ch 3 Networking Sensor Networks Ch 4 Synchronization and Localization Ch 5 Sensor Tasking and Control Ch 6 Sensor Network Database Ch 7 Sensor Network Platforms and Tools Ch 8 Application and Future Direction" ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
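For reference, the standard Monge-patch relations between the gradient and Hessian of a height function z = f(x, y) and the unit normal, shape operator and curvatures are given below. These are textbook formulas of the kind the abstract refers to, stated here in generic notation rather than the paper's own derivation.

```latex
% Standard Monge-patch relations for z = f(x,y), with g = \nabla f and
% H = \nabla^2 f (textbook formulas; the paper's notation may differ).
\[
  \mathbf{n} = \frac{(-f_x,\,-f_y,\,1)}{\sqrt{1 + f_x^2 + f_y^2}},
  \qquad
  \mathrm{I} = I_2 + g\,g^{\mathsf T},
  \qquad
  \mathrm{II} = \frac{H}{\sqrt{1 + \lVert g\rVert^2}},
\]
\[
  S = \mathrm{I}^{-1}\,\mathrm{II}
    = \bigl(I_2 + g\,g^{\mathsf T}\bigr)^{-1}\frac{H}{\sqrt{1 + \lVert g\rVert^2}},
  \qquad
  K = \det S = \frac{\det H}{\bigl(1 + \lVert g\rVert^2\bigr)^2},
  \qquad
  \kappa_{\mathrm{mean}} = \tfrac{1}{2}\operatorname{tr} S .
\]
```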
Many methods have been proposed for computing or estimating the first- and second-order differential quantities of a surface. In recent years, there has been significant interest in the convergence and consistency of these methods. We do not attempt to give a comprehensive review but consider only a few methods that are most relevant to our proposed approach; readers are referred to @cite_2 and @cite_1 for comprehensive surveys. Many of the existing methods estimate the different quantities separately from each other. For the estimation of normals, a common practice is to estimate vertex normals as a weighted average of face normals, using, for example, area or angle weighting. These methods are in general only first-order accurate, although they are the most efficient.
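A minimal numerical sketch of the local fitting framework described above: fit a quadratic height function over a point's neighbors by weighted least squares, read off the gradient and Hessian, and convert them to a normal and curvatures via the Monge-patch relations shown earlier. The weighting, neighborhood sampling and test surface are illustrative assumptions, not the paper's exact iterative fitting scheme.

```python
# Hedged sketch: weighted least-squares quadratic fitting of a local height
# function, then conversion of gradient/Hessian to normal and curvatures.
import numpy as np

def fit_height_quadratic(uv, z, weights):
    """Fit z ~ a*u + b*v + 0.5*(h11*u^2 + 2*h12*u*v + h22*v^2) by WLS."""
    u, v = uv[:, 0], uv[:, 1]
    V = np.column_stack([u, v, 0.5 * u**2, u * v, 0.5 * v**2])
    W = np.sqrt(weights)[:, None]
    coeff, *_ = np.linalg.lstsq(W * V, np.sqrt(weights) * z, rcond=None)
    grad = coeff[:2]
    hess = np.array([[coeff[2], coeff[3]],
                     [coeff[3], coeff[4]]])
    return grad, hess

def monge_quantities(grad, hess):
    """Unit normal (in the local frame) plus Gaussian and mean curvature."""
    gx, gy = grad
    w = np.sqrt(1.0 + gx**2 + gy**2)
    normal = np.array([-gx, -gy, 1.0]) / w
    I = np.array([[1 + gx**2, gx * gy], [gx * gy, 1 + gy**2]])  # 1st fund. form
    shape_op = np.linalg.solve(I, hess / w)                     # Weingarten map
    return normal, np.linalg.det(shape_op), 0.5 * np.trace(shape_op)

# Usage: sample the paraboloid z = 0.5*(u^2 + v^2); at the origin K = 1, H = 1.
rng = np.random.default_rng(0)
uv = rng.uniform(-0.1, 0.1, size=(30, 2))
z = 0.5 * (uv[:, 0]**2 + uv[:, 1]**2)
w = np.exp(-np.sum(uv**2, axis=1))            # illustrative distance weights
grad, hess = fit_height_quadratic(uv, z, w)
normal, K, H = monge_quantities(grad, hess)
print(normal, K, H)   # ~[0, 0, 1], ~1.0, ~1.0
```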
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2155553113", "2048436819" ], "abstract": [ "This paper takes a systematic look at methods for estimating the curvature of surfaces represented by triangular meshes. We have developed a suite of test cases for assessing both the detailed behavior of these methods, and the error statistics that occur for samples from a general mesh. Detailed behavior is represented by the sensitivity of curvature calculation methods to noise, mesh resolution, and mesh regularity factors. Statistical analysis breaks out the effects of valence, triangle shape, and curvature sign. These tests are applied to existing discrete curvature approximation techniques and common surface fitting methods. We provide a summary of existing curvature estimation methods, and also look at alternatives to the standard parameterization techniques. The results illustrate the impact of noise and mesh related issues on the accuracy of these methods and provide guidance in choosing an appropriate method for applications requiring curvature estimates.", "In a variety of practical situations such as reverse engineering of boundary representation from depth maps of scanned objects, range data analysis, model-based recognition and algebraic surface design, there is a need to recover the shape of visible surfaces of a dense 3D point set. In particular, it is desirable to identify and fit simple surfaces of known type wherever these are in reasonable agreement with the data. We are interested in the class of quadric surfaces, that is, algebraic surfaces of degree 2, instances of which are the sphere, the cylinder and the cone. A comprehensive survey of the recent work in each subtask pertaining to the extraction of quadric surfaces from triangulations is presented." ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
Vertex-based quadratic or higher-order polynomial fittings can produce convergent normal and curvature estimations. Meek and Walton studied the convergence properties of a number of estimators for normals and curvatures, and this was further generalized to higher-degree polynomial fittings by Cazals and Pouget. These methods are most closely related to our approach. It is well known that such methods may encounter numerical difficulties at low-valence vertices or for special arrangements of vertices @cite_17 , which we address in this paper. Razdan and Bae proposed a scheme to estimate curvatures using biquadratic Bézier patches. Some methods have also been proposed to improve the robustness of curvature estimation under noise, for example by fitting the surface implicitly with multi-level meshes or, more recently, by adapting the neighborhood sizes. These methods in general only provide curvature estimations that are meaningful in some average sense and do not necessarily guarantee convergence of pointwise estimates.
{ "cite_N": [ "@cite_17" ], "mid": [ "1644886288" ], "abstract": [ "The purpose of this book is to reveal to the interested (but perhaps mathematically unsophisticated) user the foundations and major features of several basic methods for curve and surface fitting that are currently in use." ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
Some methods estimate the second-order differential quantities from the surface normals. Goldfeather and Interrante proposed a cubic-order formula that fits the positions and normals of the surface simultaneously to estimate curvatures and principal directions. A face-based approach for computing shape operators using linear interpolation of normals has also been proposed, and Rusinkiewicz proposed a similar face-based curvature estimator from vertex normals. These methods rely on good normal estimations for reliable results. Zorin and coworkers @cite_6 @cite_10 proposed to compute a shape operator using mid-edge normals, which resembles and ``corrects'' the formula of @cite_7 . Good results were obtained in practice, but there was no theoretical guarantee of the order of convergence.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_6" ], "mid": [ "2151997105", "1485873939", "2100013455" ], "abstract": [ "Discrete curvature and shape operators, which capture complete information about directional curvatures at a point, are essential in a variety of applications: simulation of deformable two-dimensional objects, variational modeling and geometric data processing. In many of these applications, objects are represented by meshes. Currently, a spectrum of approaches for formulating curvature operators for meshes exists, ranging from highly accurate but computationally expensive methods used in engineering applications to efficient but less accurate techniques popular in simulation for computer graphics. We propose a simple and efficient formulation for the shape operator for variational problems on general meshes, using degrees of freedom associated with normals. On the one hand, it is similar in its simplicity to some of the discrete curvature operators commonly used in graphics; on the other hand, it passes a number of important convergence tests and produces consistent results for different types of meshes and mesh refinement.", "", "Curvature-based energy and forces are used in a broad variety of contexts, ranging from modeling of thin plates and shells to surface fairing and variational surface design. The approaches to discretization preferred in different areas often have little in common: engineering shell analysis is dominated by finite elements, while spring-particle models are often preferred for animation and qualitative simulation due to their simplicity and low computational cost. Both types of approaches have found applications in geometric modeling. While there is a well-established theory for finite element methods, alternative discretizations are less well understood: many questions about mesh dependence, convergence and accuracy remain unanswered. We discuss the general principles for defining curvature-based energy on discrete surfaces based on geometric invariance and convergence considerations. We show how these principles can be used to understand the behavior of some commonly used discretizations, to establish relations between some well-known discrete geometry and finite element formulations and to derive new simple and efficient discretizations." ] }
0803.2559
2952534738
We study the problem of deciding satisfiability of first order logic queries over views, our aim being to delimit the boundary between the decidable and the undecidable fragments of this language. Views currently occupy a central place in database research, due to their role in applications such as information integration and data warehousing. Our main result is the identification of a decidable class of first order queries over unary conjunctive views that generalises the decidability of the classical class of first order sentences over unary relations, known as the Löwenheim class. We then demonstrate how various extensions of this class lead to undecidability and also provide some expressivity results. Besides its theoretical interest, our new decidable class is potentially interesting for use in applications such as deciding implication of complex dependencies, analysis of a restricted class of active database rules, and ontology reasoning.
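To make the setting concrete, here is an illustrative instance of the class in question (an example constructed for this note, not taken from the paper): unary conjunctive views over base relations, and a first-order sentence that mentions only the views.

```latex
% Illustrative instance (not from the paper): two unary conjunctive views over
% base relations R, S, and a first-order sentence that refers only to the views.
\[
  V_1(x) \;\equiv\; \exists y\,\bigl(R(x,y) \wedge S(y)\bigr),
  \qquad
  V_2(x) \;\equiv\; \exists y\,\exists z\,\bigl(R(x,y) \wedge R(y,z)\bigr),
\]
\[
  \varphi \;\equiv\; \forall x\,\bigl(V_1(x) \rightarrow \neg V_2(x)\bigr)
  \quad \text{(satisfiability asks whether some database instance makes $\varphi$ true).}
\]
```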
As observed earlier, description logics are important logics for expressing constraints on desired models. In @cite_22 , the query containment problem is studied in the context of the description logic @math . There are certain similarities between this and the first order (unary) view languages we have studied in this paper. The key difference appears to be that although @math can be used to define view constraints, these constraints cannot express unary conjunctive views (since assertions do not allow arbitrary projection). Furthermore, @math can express functional dependencies on a single attribute, a feature which would make the UCV language undecidable (see proof of theorem ). There is a result in @cite_22 , however, showing undecidability for a fragment of @math with inequality, which could be adapted to give an alternative proof of theorem (although inequality is used there in a slightly more powerful way).
{ "cite_N": [ "@cite_22" ], "mid": [ "2013409229" ], "abstract": [ "Query containment under constraints is the problem of checking whether for every database satisfying a given set of constraints, the result of one query is a subset of the result of another query. Recent research points out that this is a central problem in several database applications, and we address it within a setting where constraints are specified in the form of special inclusion dependencies over complex expressions, built by using intersection and difference of relations, special forms of quantification, regular expressions over binary relations, and cardinality constraints. These types of constraints capture a great variety of data models, including the relational, the entity-relational, and the object-oriented model. We study the problem of checking whether q is contained in q′ with respect to the constraints specified in a schema S, where q and q′ are nonrecursive Datalog programs whose atoms are complex expressions. We present the following results on query containment. For the case where q does not contain regular expressions, we provide a method for deciding query containment, and analyze its computational complexity. We do the same for the case where neither S nor q, q′ contain number restrictions. To the best of our knowledge, this yields the first decidability result on containment of conjunctive queries with regular expressions. Finally, we prove that the problem is undecidable for the case where we admit inequalities in q′." ] }
0803.2824
2949727990
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
A first LWO algorithm for a given intradomain traffic matrix was proposed in @cite_4 . It is based on a tabu-search metaheuristic and finds a nearly-optimal set of link weights that minimizes a particular objective function, namely the sum over all links of a convex function of the link loads and/or utilizations. This problem was later generalized to take several traffic matrices @cite_21 and some link failures @cite_11 into account. A heuristic that takes possible link failure scenarios into account when choosing weights is also proposed in @cite_20 . In our LWO we reuse the heuristic detailed in @cite_4 , but we have adapted this algorithm to consider the effect of hot-potato routing. All the later improvements to this algorithm (i.e., multiple traffic matrices, link failures) could be integrated into our new LWO in a similar way.
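For illustration, the sketch below shows the kind of network-wide objective such a link weights optimizer minimizes: a sum over all links of a piecewise-linear, convex, increasing cost of the link load. The breakpoints and slopes are the values commonly associated with the cost function of @cite_4 and should be treated as an assumption here; the function and constant names are placeholders of this sketch.

```python
# Piecewise-linear, convex, increasing link cost in the style of the objective
# minimized by the LWO heuristic of @cite_4.  The breakpoints and slopes below
# are the values commonly quoted for that cost function and are an assumption
# of this sketch, not a quotation of the paper.
BREAKPOINTS = [1 / 3, 2 / 3, 9 / 10, 1.0, 11 / 10]   # utilization thresholds
SLOPES = [1, 3, 10, 70, 500, 5000]                   # marginal cost per segment

def link_cost(load, capacity):
    """Convex increasing cost of a single link as a function of its load."""
    utilization = load / capacity
    u_prev, cost = 0.0, 0.0
    for u_next, slope in zip(BREAKPOINTS + [float("inf")], SLOPES):
        segment = min(utilization, u_next) - u_prev
        if segment <= 0:
            break
        cost += slope * segment * capacity   # cost expressed in load units
        u_prev = u_next
    return cost

def network_cost(loads, capacities):
    """Network-wide objective: sum over all links of the per-link convex cost."""
    return sum(link_cost(l, c) for l, c in zip(loads, capacities))

print(network_cost([40, 95], [100, 100]))   # the nearly full second link dominates
```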
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_20", "@cite_11" ], "mid": [ "2163907500", "", "1582342575", "1764504968" ], "abstract": [ "A system of techniques is presented for optimizing open shortest path first (OSPF) or intermediate system-intermediate system (IS-IS) weights for intradomain routing in a changing world, the goal being to avoid overloaded links. We address predicted periodic changes in traffic as well as problems arising from link failures and emerging hot spots.", "", "Intra-domain routing in IP backbone networks relies on link-state protocols such as IS-IS or OSPF. These protocols associate a weight (or cost) with each network link, and compute traffic routes based on these weight. However, proposed methods for selecting link weights largely ignore the issue of failures which arise as part of everyday network operations (maintenance, accidental, etc.). Changing link weights during a short-lived failure is impractical. However such failures are frequent enough to impact network performance. We propose a Tabu-search heuristic for choosing link weights which allow a network to function almost optimally during short link failures. The heuristic takes into account possible link failure scearios when choosing weights, thereby mitigating the effect of such failures. We find that the weights chosen by the heuristic can reduce link overload during transient link failures by as much as 40 at the cost of a small performance degradation in the absence of failures (10 ).", "In this paper, we adapt the heuristic of Fortz and Thorup for optimizing the weights of Shortest Path First protocols suchas Open Shortest Path First (OSPF) or Intermediate System-Intermediate System (IS-IS), in order to take into account failurescenarios.More precisely, we want to find a set of weights that is robust to all single link failures. A direct application of the originalheuristic, evaluating all the link failures, is too time consuming for realistic networks, so we developed a method based on acritical set of scenarios aimed to be representative of the whole set of scenarios. This allows us to make the problem manageableand achieve very robust solutions." ] }
0803.2824
2949727990
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
The authors of @cite_14 have already shown that the link weights found by a LWO may change the intradomain TM considered as input. In that paper they also show that applying the LWO recursively on the resulting intradomain TM may not converge. They propose a method that keeps track of the series of resulting TMs and, at each iteration, optimizes the weights for the previously obtained intradomain TMs simultaneously. However, they do not consider the general problem with multiple exit points for each destination prefix, let alone taking advantage of it.
{ "cite_N": [ "@cite_14" ], "mid": [ "1739694831" ], "abstract": [ "Link weight optimization is shown to be a key issue in engineering of IGPs using shortest path first routing. The IGP weight optimization problem seeks a weight array resulting an optimal load distribution in the network based on the topology information and a traffic demand matrix. Several solution methods for various kinds of this problem have been proposed in the literature. However, the interaction of IGP with BGP is generally neglected in these studies. In reality, the optimized weights may not perform as well as expected, since updated link weights can cause shifts in the traffic demand matrix by hot-potato routing in the decision process of BGP. Hot-potato routing occurs when BGP decides the egress router for a destination prefix according to the IGP lengths. This paper mainly investigates the possible degradation of an IGP weight optimization tool due to hot-potato routing under a worst-case example and some experiments which are carried out by using an open source traffic engineering toolbox. Furthermore, it proposes an approach based on robust optimization to overcome the negative effect of hot-potato routing and analyzes its performance" ] }
0803.3699
1621497470
We investigate Quantum Key Distribution (QKD) relaying models. Firstly, we propose a novel quasi-trusted QKD relaying model. The quasi-trusted relays are defined as follows: (i) they are honest enough to correctly follow a given multi-party finite-time communication protocol; (ii) however, they are under the monitoring of eavesdroppers. We develop a simple 3-party quasi-trusted model, called the Quantum Quasi-Trusted Bridge (QQTB) model, to show that we could securely extend up to two times the limited range of single-photon based QKD schemes. We also develop the Quantum Quasi-Trusted Relay (QQTR) model to show that we could securely distribute QKD keys over arbitrarily long distances. The QQTR model requires EPR pair sources, but does not use entanglement swapping or entanglement purification schemes. Secondly, we show that our quasi-trusted models could be improved to become untrusted models in which the security is not compromised even though attackers have full control over some relaying nodes. We call our two improved models the Quantum Untrusted Bridge (QUB) and Quantum Untrusted Relay (QUR) models. The QUB model works on single photons and allows us to securely extend up to two times the limited QKD range. The QUR model works on entangled photons but does not use entanglement swapping or entanglement purification operations. This model allows us to securely transmit shared keys over arbitrarily long distances without dramatically decreasing the key rate of the original QKD schemes.
Since the range of QKD is limited, QKD relaying methods are necessary. They become indispensable when one aims at building QKD networks, as has been done in recent years. All QKD relaying methods proposed so far introduce some undesirable drawbacks. The most practical method is based on a trusted-relay model. This method has been applied in two famous QKD networks, the DARPA and SECOQC networks @cite_20 @cite_14 @cite_15 @cite_10 . In this method, all the relaying nodes must be assumed to be perfectly secure. Such an assumption is critical since passive attacks or eavesdropping on intermediate nodes are very difficult to detect. Even a small number of intermediate nodes could thus lead to a great vulnerability in practice. Consequently, one wants to limit the number of trusted nodes in QKD networks.
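To illustrate why every relay must be trusted in this approach, the sketch below shows a common hop-by-hop key-relaying construction in which each intermediate node necessarily learns the end-to-end key. It is a generic illustration only and is not claimed to be the exact scheme deployed in the DARPA or SECOQC networks.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A common trusted-relay construction: the end-to-end key K is relayed hop by
# hop, one-time-pad encrypted with each per-link QKD key, so every intermediate
# node recovers K in the clear -- which is exactly the trust assumption above.
def relay_end_to_end_key(K: bytes, link_keys: list) -> bytes:
    material = K
    for k_link in link_keys:
        ciphertext = xor(material, k_link)   # sent over the classical channel
        material = xor(ciphertext, k_link)   # the relay decrypts -- and sees K
    return material                          # K as recovered by the receiver

link_keys = [os.urandom(16) for _ in range(3)]   # per-hop QKD link keys
K = os.urandom(16)
assert relay_end_to_end_key(K, link_keys) == K   # every relay saw K along the way
```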
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_10", "@cite_20" ], "mid": [ "2141779755", "1580663165", "", "2006155898" ], "abstract": [ "The performances of Quantum Key Distribution (QKD) systems have notably progressed since the early experimental demonstrations and several recent works indicate that the pace of this progression is very likely to be maintained -if not increased- in the future years. In parallel to this fast progression of QKD techniques, commercial products are also being developed, making QKD deployment for securization of some specific ?real? data networks more and more likely to occur. It is the goal to the European project Secoqc to deploy a secure long-distance network based on quantum cryptography. It implies the conception of a specific architecture able to connect multiple users that may possibly be very far away from each other while QKD links are currently?point-to-point only? and intrinsically limited in distance.", "QKD networks are of much interest due to their capacity of providing extremely high security keys to network participants. Most QKD network studies so far focus on trusted models where all the network nodes are assumed to be perfectly secured. This restricts QKD networks to be small. In this paper, we first develop a novel model dedicated to large-scale QKD networks, some of whose nodes could be eavesdropped secretely. Then, we investigate the key transmission problem in the new model by an approach based on percolation theory and stochastic routing. Analyses show that under computable conditions large-scale QKD networks could protect secret keys with an extremely high probability. Simulations validate our results.", "", "We show how quantum key distribution (QKD) techniques can be employed within realistic, highly secure communications systems, using the internet architecture for a specific example. We also discuss how certain drawbacks in existing QKD point-to-point links can be mitigated by building QKD networks, where such networks can be composed of trusted relays or untrusted photonic switches." ] }
0803.0929
2952822878
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph @math and a parameter @math , we produce a weighted subgraph @math of @math such that @math and for all vectors @math @math This improves upon the sparsifiers constructed by Spielman and Teng, which had @math edges for some large constant @math , and upon those of Benczúr and Karger, which only satisfied (*) for @math . A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in @math time.
In addition to the graph sparsifiers of @cite_6 @cite_9 @cite_20 , there is a large body of work on sparse @cite_3 @cite_1 and low-rank @cite_4 @cite_1 @cite_0 @cite_23 @cite_16 approximations for general matrices. The algorithms in this literature provide guarantees of the form @math , where @math is the original matrix and @math is obtained by entrywise or columnwise sampling of @math . This is analogous to satisfying (*) only for vectors @math in the span of the dominant eigenvectors of @math ; thus, if we were to use these sparsifiers on graphs, they would only preserve the large cuts. Interestingly, our proof uses some of the same machinery as the low-rank approximation result of Rudelson and Vershynin @cite_0 --- the sampling of edges in our algorithm corresponds to picking @math columns at random from a certain rank @math matrix of dimension @math (this is the matrix @math introduced in Section 3).
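To make the sampling rule concrete, the following sketch samples edges with probability proportional to weight times effective resistance and reweights the sampled copies. The effective resistances are computed here with a dense Laplacian pseudoinverse purely for clarity, whereas the paper's contribution is estimating them in nearly-linear time; the function name and interface are assumptions of this sketch.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, weights, q, seed=0):
    """Sample q edges i.i.d. with probability proportional to weight times
    effective resistance and reweight the sampled copies.  Dense Laplacian
    pseudoinverse used only for clarity, not the paper's fast data structure."""
    weights = np.asarray(weights, dtype=float)
    rng = np.random.default_rng(seed)

    # Weighted graph Laplacian and its pseudoinverse.
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lpinv = np.linalg.pinv(L)

    # Effective resistance of each edge: R_uv = (e_u - e_v)^T L^+ (e_u - e_v).
    R = np.array([Lpinv[u, u] + Lpinv[v, v] - 2.0 * Lpinv[u, v] for (u, v) in edges])

    # Sampling probabilities p_e proportional to w_e * R_e; each sampled copy
    # contributes weight w_e / (q * p_e) to the sparsifier.
    p = weights * R
    p /= p.sum()
    new_w = np.zeros(len(edges))
    for e in rng.choice(len(edges), size=q, p=p):
        new_w[e] += weights[e] / (q * p[e])
    return [(edges[e], new_w[e]) for e in range(len(edges)) if new_w[e] > 0]

# Toy usage: sparsify a small cycle plus a chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(sparsify_by_effective_resistance(4, edges, [1, 1, 1, 1, 1], q=8))
```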
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "", "", "1970950689", "1581656968", "2051540665", "1998058722", "1587887312", "", "2045107949" ], "abstract": [ "", "", "Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A p N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm.", "We describe a simple random-sampling based procedure for producing sparse matrix approximations. Our procedure and analysis are extremely simple: the analysis uses nothing more than the Chernoff-Hoeffding bounds. Despite the simplicity, the approximation is comparable and sometimes better than previous work. Our algorithm computes the sparse matrix approximation in a single pass over the data. Further, most of the entries in the output matrix are quantized, and can be succinctly represented by a bit vector, thus leading to much savings in space.", "", "We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(rlog r) with a small error in the spectral norm, where r e VAV2F VAV22 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.", "Given an m ? n matrix A and an n ? p matrix B, we present 2 simple and intuitive algorithms to compute an approximation P to the product A ? B, with provable bounds for the norm of the \"error matrix\" P - A ? B. Both algorithms run in 0(mp+mn+np) time. In both algorithms, we randomly pick s = 0(1) columns of A to form an m ? s matrix S and the corresponding rows of B to form an s ? p matrix R. After scaling the columns of S and the rows of R, we multiply them together to obtain our approximation P. The choice of the probability distribution we use for picking the columns of A and the scaling are the crucial features which enable us to fairly elementary proofs of the error bounds. 
Our first algorithm can be implemented without storing the matrices A and B in Random Access Memory, provided we can make two passes through the matrices (stored in external memory). The second algorithm has a smaller bound on the 2-norm of the error matrix, but requires storage of A and B in RAM. We also present a fast algorithm that \"describes\" P as a sum of rank one matrices if B = AT.", "", "We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy e in time linear in their number of non-zeros and log (κ f (A) e), where κ f (A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning." ] }
0803.0929
2952822878
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph @math and a parameter @math , we produce a weighted subgraph @math of @math such that @math and for all vectors @math @math This improves upon the sparsifiers constructed by Spielman and Teng, which had @math edges for some large constant @math , and upon those of Benczúr and Karger, which only satisfied (*) for @math . A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in @math time.
The use of effective resistance as a distance in graphs has recently gained attention as it is often more useful than the ordinary geodesic distance in a graph. For example, in small-world graphs, all vertices will be close to one another, but those with a smaller effective resistance distance are connected by more short paths. See, for instance, @cite_19 @cite_21 , which use effective resistance (commute time) as a distance measure in social network graphs.
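For reference, the standard relation behind this usage: effective resistance can be read off the Laplacian pseudoinverse, and commute time is proportional to it, so the two induce the same metric up to a global scale.

```latex
% Effective resistance from the Laplacian pseudoinverse L^{+}, and its relation
% to the commute time used as a distance in the cited works.
\[
  R_{\mathrm{eff}}(u,v) \;=\; (e_u - e_v)^{\top} L^{+} (e_u - e_v),
  \qquad
  C(u,v) \;=\; \mathrm{vol}(G)\; R_{\mathrm{eff}}(u,v),
\]
where $\mathrm{vol}(G)$ is the sum of the (weighted) degrees ($2m$ for an unweighted
graph with $m$ edges), so commute time and effective resistance define the same
metric up to a global scale factor.
```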
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2161984370", "2003672460" ], "abstract": [ "This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the \"length\" of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation to the \"Fiedler vector,\" widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called \"statistical relational learning\" framework, could also be used to compute document or word similarities, and, more generally, it could be applied to machine-learning and pattern-recognition tasks involving a relational database", "In the era of globalization, traditional theories and models of social systems are shifting their focus from isolation and independence to networks and connectedness. Analyzing these new complex social models is a growing, and computationally demanding area of research. In this study, we investigate the integration of genetic algorithms (GAs) with a random-walk-based distance measure to find subgroups in social networks. We test our approach by synthetically generating realistic social network data sets. Our clustering experiments using random-walk-based distances reveal exceptionally accurate results compared with the experiments using Euclidean distances." ] }
0803.1520
2949171548
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving the replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strength and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
In recent years, significant progress has been made towards building practical Byzantine fault tolerant systems, as shown in a series of seminal papers such as @cite_19 @cite_0 @cite_3 @cite_9 . This makes it possible to reconcile the requirement of strong replica consistency with the preservation of each replica's randomness for real-world applications that require both high availability and a high degree of security. We believe the work presented in this paper is an important step towards solving this challenging problem.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_9", "@cite_3" ], "mid": [ "1561840858", "2114579022", "2126789306", "2139359217" ], "abstract": [ "", "Our growing reliance on online services accessible on the Internet demands highly available systems that provide correct service without interruptions. Software bugs, operator mistakes, and malicious attacks are a major cause of service interruptions and they can cause arbitrary behavior, that is, Byzantine faults. This article describes a new replication algorithm, BFT, that can be used to build highly available systems that tolerate Byzantine faults. BFT can be used in practice to implement real services: it performs well, it is safe in asynchronous environments such as the Internet, it incorporates mechanisms to defend against Byzantine-faulty clients, and it recovers replicas proactively. The recovery mechanism allows the algorithm to tolerate any number of faults over the lifetime of the system provided fewer than 1 3 of the replicas become faulty within a small window of vulnerability. BFT has been implemented as a generic program library with a simple interface. We used the library to implement the first Byzantine-fault-tolerant NFS file system, BFS. The BFT library and BFS perform well because the library incorporates several important optimizations, the most important of which is the use of symmetric cryptography to authenticate messages. The performance results show that BFS performs 2p faster to 24p slower than production implementations of the NFS protocol that are not replicated. This supports our claim that the BFT library can be used to build practical systems that tolerate Byzantine faults.", "We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.", "We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. 
This approach allows Zyzzyva to reduce replication overheads to near their theoretical minimal." ] }
0803.1520
2949171548
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving the replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strength and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
The CT-algorithm is inspired by the work of Cachin, Kursawe and Shoup @cite_13 , in particular, the idea of exploiting threshold signature techniques for agreement. However, we have adapted this idea to solve a totally different problem: it is used to reach integrity-preserving strong replica consistency. Furthermore, we carefully studied what to sign for each request so that the final random number obtained is not vulnerable to attacks.
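The underlying idea can be sketched as follows: each replica derives the per-request random value by hashing a unique signature on a per-request coin name, so all correct replicas obtain the same, unpredictable value. To keep the sketch runnable, the (t, n)-threshold signature of @cite_13 is replaced by a single keyed MAC, which is only a stand-in: the whole point of the threshold scheme is that no single replica holds such a key, and this is not the paper's actual construction.

```python
import hashlib, hmac

# Sketch of the coin-tossing idea: a common random value per request is derived
# by hashing a *unique* signature on a per-request coin name.  The threshold
# signature is replaced here by a keyed MAC purely to make the sketch runnable;
# in the real scheme no single replica can compute this value on its own.
def coin_value(request_id: bytes, group_signing_key: bytes, modulus: int) -> int:
    unique_sig = hmac.new(group_signing_key, b"coin:" + request_id,
                          hashlib.sha256).digest()   # stand-in for the threshold signature
    return int.from_bytes(hashlib.sha256(unique_sig).digest(), "big") % modulus

# Every replica that can reconstruct the signature derives the same value:
key = b"reconstructed-by-threshold-shares-in-the-real-scheme"
assert coin_value(b"req-42", key, 10**6) == coin_value(b"req-42", key, 10**6)
```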
{ "cite_N": [ "@cite_13" ], "mid": [ "2106004025" ], "abstract": [ "Byzantine agreement requires a set of parties in a distributed system to agree on a value even if some parties are maliciously misbehaving. A new protocol for Byzantine agreement in a completely asynchronous network is presented that makes use of new cryptographic protocols, specifically protocols for threshold signatures and coin-tossing. These cryptographic protocols have practical and provably secure implementations in the random oracle model. In particular, a coin-tossing protocol based on the Diffie-Hellman problem is presented and analyzed. The resulting asynchronous Byzantine agreement protocol is both practical and theoretically optimal because it tolerates the maximum number of corrupted parties, runs in constant expected rounds, has message and communication complexity close to the optimum, and uses a trusted dealer only once in a setup phase, after which it can process a virtually unlimited number of transactions. The protocol is formulated as a transaction processing service in a cryptographic security model, which differs from the standard information-theoretic formalization and may be of independent interest." ] }
0803.1521
2951002888
In this paper, we describe a novel proactive recovery scheme based on service migration for long-running Byzantine fault tolerant systems. Proactive recovery is an essential method for ensuring long term reliability of fault tolerant systems that are under continuous threats from malicious adversaries. The primary benefit of our proactive recovery scheme is a reduced vulnerability window. This is achieved by removing the time-consuming reboot step from the critical path of proactive recovery. Our migration-based proactive recovery is coordinated among the replicas, therefore, it can automatically adjust to different system loads and avoid the problem of excessive concurrent proactive recoveries that may occur in previous work with fixed watchdog timeouts. Moreover, the fast proactive recovery also significantly improves the system availability in the presence of faults.
Finally, the reliance on extra nodes beyond the @math active nodes in our scheme may somewhat relate to the use of @math additional witness replicas in the fast Byzantine consensus algorithm @cite_13 . However, the extra nodes are needed for completely different purposes: in our scheme, they are required for the proactive recovery of long-running Byzantine fault tolerant systems, whereas in @cite_13 they are needed to reach Byzantine consensus in fewer message delays.
{ "cite_N": [ "@cite_13" ], "mid": [ "2058322902" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes" ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Traditional client-server solutions for security monitoring and protection of large-scale networks rely on the deployment of multiple sensors. These sensors locally collect audit data and forward it to a central server, where it is further analyzed. Early intrusion detection systems such as DIDS @cite_15 and STAT @cite_6 use this architecture and process the monitoring data in one central node. DIDS (Distributed Intrusion Detection System), for instance, is one of the first systems referred to in the literature that uses this monitoring architecture @cite_15 . The main components of DIDS are a central analyzer component called the DIDS director, a set of host-based sensors installed on each monitored host within the protected network, and a set of network-based sensors installed on each broadcast segment of the target system. The communication channels between the central analyzer and the distributed sensors are bidirectional. This way, the sensors can push their reports asynchronously to the central analyzer while the director is still able to actively request more details from the sensors.
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "1541939527", "2107409339" ], "abstract": [ "Intrusion detection is the problem of identifying unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. The proliferation of heterogeneous computer networks provides additional implications for the intrusion detection problem. Namely, the increased connectivity of computer systems gives greater access to outsiders, and makes it easier for intruders to avoid detection. IDS’s are based on the belief that an intruder’s behavior will be noticeably different from that of a legitimate user. We are designing and implementing a prototype Distributed Intrusion Detection System (DIDS) that combines distributed monitoring and data reduction (through individual host and LAN monitors) with centralized data analysis (through the DIDS director) to monitor a heterogeneous network of computers. This approach is unique among current IDS’s. A main problem considered in this paper is the Network-user Identification problem, which is concerned with tracking a user moving across the network, possibly with a new user-id on each computer. Initial system prototypes have provided quite favorable results on this problem and the detection of attacks on a network. This paper provides an overview of the motivation behind DIDS, the system architecture and capabilities, and a discussion of the early prototype.", "The paper presents a new approach to representing and detecting computer penetrations in real time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule based expert system for detecting penetrations, called the state transition analysis tool (STAT). The design and implementation of a Unix specific prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools. >" ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The issue of sensor distribution is the focus of NetSTAT @cite_18 , an application of STAT (State Transition Analysis Technique) @cite_6 to network-based detection. It is based on NSTAT @cite_19 and comprises several extensions. Based on the attack scenarios and the network facts, modeled as a hypergraph, NetSTAT automatically chooses the places to probe network activities and applies an analysis of state transitions. This way, it is able to decide what information needs to be collected within the protected network. Although NetSTAT collects network events in a distributed way, it analyzes them in a centralized fashion, similarly to DIDS.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_6" ], "mid": [ "", "1603713701", "2107409339" ], "abstract": [ "", "The Reliable Software Group at UCSB has developed a new approach to representing computer penetrations. This approach models penetrations as a series of state transitions described in terms of signature actions and state assertions. State transition representations are written to correspond to the states of an actual computer system, and they form the basis of a rule-based expert system for detecting penetrations. The system is called the State Transition Analysis Tool (STAT). On a network filesystem where the files are distributed on many hosts and where each host mounts directories from the others, actions on each host computer need to be audited. A natural extension of the STAT effort is to run the system on audit data collected by multiple hosts. This means an audit mechanism needs to be run on each host. However, running an implementation of STAT on each host would result in inefficient use of computer resources. In addition, the possibility of having cooperative attacks on different hosts would make detection difficult. Therefore, for the distributed version of STAT, called NSTAT, there is a single STAT process with a single, chronological audit trail. We are currently designing a client server approach to the problem. The client side has two threads: a producer that reads and filters the audit trail and a consumer that sends it to the server. The server side merges the filtered information from the various clients and performs the analysis.", "The paper presents a new approach to representing and detecting computer penetrations in real time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule based expert system for detecting penetrations, called the state transition analysis tool (STAT). The design and implementation of a Unix specific prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools. >" ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Some approaches published later try to overcome these disadvantages. GrIDS @cite_8 , EMERALD @cite_2 , and AAfID @cite_9 , for example, propose the use of layered structures, where data is locally pre-processed and filtered, and further analyzed by intermediate components in a hierarchical fashion. The computational and network load is distributed over multiple analyzers and managers as well as over the different domains to be analyzed. The analyzers and managers of each domain perform their detection for just a small part of the whole network. They forward the processed information to the entity at the top of the hierarchy, i.e., a master node which finally analyzes all the incidents reported in the system.
{ "cite_N": [ "@cite_9", "@cite_2", "@cite_8" ], "mid": [ "2124365372", "2288766236", "1575709188" ], "abstract": [ "AAFID is a distributed intrusion detection architecture and system, developed in CERIAS at Purdue University. AAFID was the first architecture that proposed the use of autonomous agents for doing intrusion detection. With its prototype implementation, it constitutes a useful framework for the research and testing of intrusion detection algorithms and mechanisms. We describe the AAFID architecture and the existing prototype, as well as some design and implementation experiences and future research issues. ” 2000 Elsevier Science B.V. All rights reserved.", "The EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment is a distributed scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade of intrusion detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable, surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors contribute to a streamlined event-analysis system that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability that can counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers' interface that enhances its ability to integrate with heterogeneous target hosts and provides a high degree of interoperability with third-party tool suites.", "" ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Similar to GrIDS, EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) extends the work of IDES (Intrusion Detection Expert System) @cite_25 and NIDES (Next-Generation Intrusion Detection Expert System) @cite_24 by implementing a recursive framework in which generic building blocks can be deployed in a hierarchical fashion @cite_2 . It combines host- and network-based sensors as well as anomaly- and misuse-based analyzers. EMERALD focuses on the protection of large-scale enterprise networks that are divided into independent domains, each with its own security policy. The authors claim to rely on a very efficient communication infrastructure for the exchange of information between the system components. Unfortunately, they also provide only a few details regarding their implementation. Thus, a general statement regarding the performance of their infrastructure cannot be made.
{ "cite_N": [ "@cite_24", "@cite_25", "@cite_2" ], "mid": [ "34688585", "2132400386", "2288766236" ], "abstract": [ "", "This paper describes a real-time intrusion-detection expert system (IDES) that observes user behavior on a monitored computer system and adaptively learns what is normal for individual users, groups, remote hosts, and the overall system behavior. Observed behavior is flagged a5 a potential intrusion if it deviates significantly from the expected behavior or if it triggers a rule in the expert-system rule base.", "The EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment is a distributed scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade of intrusion detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable, surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors contribute to a streamlined event-analysis system that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability that can counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers' interface that enhances its ability to integrate with heterogeneous target hosts and provides a high degree of interoperability with third-party tool suites." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The AAfID (Architecture for Intrusion Detection using Autonomous Agents) also presents a hierarchical approach to remove the limitations of centralized approaches and, particularly, to provide better resistance to denial of service attacks @cite_9 . It consists of four main components called agents, filters, transceivers, and monitors, organized in a tree structure where child and parent components communicate with each other. The communication subsystem of AAfID exhibits a very simplistic design and does not seem to be resistant to a denial of service attack as intended. Although the set of agents may communicate with each other to agree upon a common suspicion level regarding every host, all relevant data is simply forwarded to the monitors via transceivers, and human interaction is required in order to detect distributed intrusions.
{ "cite_N": [ "@cite_9" ], "mid": [ "2124365372" ], "abstract": [ "AAFID is a distributed intrusion detection architecture and system, developed in CERIAS at Purdue University. AAFID was the first architecture that proposed the use of autonomous agents for doing intrusion detection. With its prototype implementation, it constitutes a useful framework for the research and testing of intrusion detection algorithms and mechanisms. We describe the AAFID architecture and the existing prototype, as well as some design and implementation experiences and future research issues. ” 2000 Elsevier Science B.V. All rights reserved." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Most of these limitations can be solved efficiently by using a distributed publish subscribe middleware. The advantage of publish subscribe communication for our problem domain over other communication paradigms is that it keeps the producers of messages decoupled from the consumers and that the communication is information-driven. This way, it is possible to avoid the scalability and management problems inherent in other designs by means of a network of publishers, brokers, and subscribers. A publisher in a publish subscribe system does not need to have any knowledge about any of the entities that consume the published information since the communication is anonymous. Likewise, the subscribers do not need to know anything about the publishers. New services can simply be added without any impact on or interruption of the service to other users. In @cite_3 @cite_13 , we presented an infrastructure inspired by the decentralized architectures discussed above, with a focus on removing the discussed limitations. In the following sections, we present further details on our work.
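A minimal in-process sketch of the topic-based publish subscribe pattern follows, only to make the decoupling argument concrete; it is not the message-oriented middleware used in our implementation, and the class, topic, and event names are made up for this illustration.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process topic-based publish/subscribe broker: sensors publish
    alerts without knowing which analyzers exist, and analyzers subscribe by
    topic without knowing the publishers."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher never learns who (if anyone) consumes the event.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
broker.subscribe("alert/ssh", lambda e: print("correlator received:", e))
broker.publish("alert/ssh", {"sensor": "host-17", "msg": "repeated failed logins"})
```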
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2544168507", "95147369" ], "abstract": [ "The cooperation between the different entities of a decentralized prevention system can be solved efficiently using the publish subscribe communication model. Here, clients can share and correlate alert information about the systems they monitor. In this paper, we present the advantages and convenience in using this communication model for a general decentralized prevention framework. Additionally, we outline the design for a specific architecture, and evaluate our design using a freely available publish subscribe message oriented middleware", "Distributed and coordinated attacks can disrupt electronic commerce applica- tions and cause large revenue losses. The prevention of these attacks is not possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to react against the different actions of such an attack. We are currently working on a decentralized attack prevention framework that is targeted at detecting as well as reacting to these attacks. The cooperation between the different entities of this system has been efficiently solved through the use of a publish subscribe model. In this paper we first present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Then, we present the design for our specific approach. Finally, we shortly discuss our implementation based on a freely available publish subscribe message oriented middleware." ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
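For reference, the approximate value iteration scheme that the paper modifies can be written in its standard form as follows; the paper's specific projection operator and sampling scheme differ as described above.

```latex
% Standard approximate value iteration with a projected Bellman update.
\[
  (TV)(s) \;=\; \max_{a}\Bigl[\, R(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\,V(s') \Bigr],
  \qquad
  V_{t+1} \;=\; \Pi\, T V_t ,
\]
where $\Pi$ projects onto the span of the basis functions. $T$ is a
$\gamma$-contraction in max-norm, while ordinary least-squares projection is a
non-expansion in $\ell_2$ but can expand the max-norm; making $\Pi$ max-norm
non-expansive (as the paper does) keeps the composition $\Pi T$ a max-norm
contraction, hence convergent.
```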
The exact solution of factored MDPs is infeasible. The idea of representing a large MDP using a factored model was first proposed by Koller & Parr @cite_3 , but similar ideas already appear in the works of Boutilier, Dearden, & Goldszmidt @cite_2 @cite_8 . More recently, the framework (and some of the algorithms) was extended to fMDPs with hybrid continuous-discrete variables @cite_9 and to factored partially observable MDPs @cite_1 . Furthermore, the framework has also been applied to structured MDPs with alternative representations, e.g., relational MDPs @cite_14 and first-order MDPs @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_13" ], "mid": [ "2134153324", "1997477668", "2165304603", "1554120378", "1982333717", "1650504995", "2158479468" ], "abstract": [ "A longstanding goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning. Such generalization can both reduce planning time and allow us to tackle larger domains than the ones tractable for direct planning. In this paper, we present an approach to the generalization problem based on a new framework of relational Markov Decision Processes (RMDPs). An RMDP can model a set of similar environments by representing objects as instances of different classes. In order to generalize plans to multiple environments, we define an approximate value function specified in terms of classes of objects and, in a multiagent setting, by classes of agents. This class-based approximate value function is optimized relative to a sampled subset of environments, and computed using an efficient linear programming method. We prove that a polynomial number of sampled environments suffices to achieve performance close to the performance achievable when optimizing over the entire space. Our experimental results show that our method generalizes plans successfully to new, significantly larger, environments, with minimal loss of performance relative to environment-specific planning. We demonstrate our approach on a real strategic computer war game.", "Abstract Markov decision processes (MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a decision-tree representation of rewards. Based on this representation, we develop versions of standard dynamic programming algorithms that directly manipulate decision-tree representations of policies and value functions. This generally obviates the need for state-by-state computation, aggregating states at the leaves of these trees and requiring computations only for each aggregate state. The key to these algorithms is a decision-theoretic generalization of classic regression analysis, in which we determine the features relevant to predicting expected value. We demonstrate the method empirically on several planning problems, showing significant savings for certain types of domains. We also identify certain classes of problems for which this technique fails to perform well and suggest extensions and related ideas that may prove useful in such circumstances. We also briefly describe an approximation scheme based on this approach.", "Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. 
In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solutions. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.", "Learning to act optimally in a complex, dynamic and noisy environment is a hard problem. Various threads of research from reinforcement learning, animal conditioning, operations research, machine learning, statistics and optimal control are beginning to come together to offer solutions to this problem. I present a thesis in which novel algorithms are presented for learning the dynamics, learning the value function, and selecting good actions for Markov decision processes. The problems considered have high-dimensional factored state and action spaces, and are either fully or partially observable. The approach I take is to recognize similarities between the problems being solved in the reinforcement learning and graphical models literature, and to use and combine techniques from the two fields in novel ways. In particular I present two new algorithms. First, the DBN algorithm learns a compact representation of the core process of a partially observable MDP. Because inference in the DBN is intractable, I use approximate inference to maintain the belief state. A belief state action-value function is learned using reinforcement learning. I show that this DBN algorithm can solve POMDPs with very large state spaces and useful hidden state. Second, the PoE algorithm learns an approximation to value functions over large factored state-action spaces. The algorithm approximates values as (negative) free energies in a product of experts model. The model parameters can be learned efficiently because inference is tractable in a product of experts. I show that good actions can be found even in large factored action spaces by the use of brief Gibbs sampling. These two new algorithms take techniques from the machine learning community and apply them in new ways to reinforcement learning problems. Simulation results show that these new methods can be used to solve very large problems. The DBN method is used to solve a POMDP with a hidden state space and an observation space of size greater than 2180. The DBN model of the core process has 232 states represented as 32 binary variables. The PoE method is used to find actions in action spaces of size 240 .", "Abstract Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, finding the most probable explanation, and expected utility maximization. 
These algorithms share the same performance guarantees; all are time and space exponential in the induced-width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called “conditioning search” require only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set , or a cutset . Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.", "Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small states spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy Iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and prepositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods.", "In the linear programming approach to approximate dynamic programming, one tries to solve a certain linear program--the ALP--that has a relatively small numberK of variables but an intractable numberM of constraints. In this paper, we study a scheme that samples and imposes a subset ofm <" ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
Markov decision processes were first formulated as LP tasks by Schweitzer and Seidmann . The approximate LP form is due to de Farias and van Roy . It has been shown that the maximum of local-scope functions can be computed by rephrasing the task as a non-serial dynamic programming task and eliminating variables one by one. Therefore, ) can be transformed to an equivalent, more compact linear program. The gain may be exponential, but this is not necessarily so in all cases: according to @cite_10 , as shown by Dechter , ``[the cost of the transformation] is exponential in the induced width of the cost network, the undirected graph defined over the variables @math , with an edge between @math and @math if they appear together in one of the original functions @math . The complexity of this algorithm is, of course, dependent on the variable elimination order and the problem structure. Computing the optimal elimination order is an NP-hard problem and elimination orders yielding low induced tree width do not exist for some problems.'' Furthermore, for the approximate LP task ), the solution is no longer independent of @math and the optimal choice of the @math values is not known.
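The variable-elimination idea quoted above can be illustrated with a small self-contained sketch: to maximize a sum of local-scope functions, eliminate one variable at a time, replacing the factors that mention it by a new factor over the remaining variables. The binary domain and variable names are illustrative only.

```python
# Toy "non-serial dynamic programming": maximize a sum of local-scope functions
# over binary variables by eliminating variables one by one. A factor is a pair
# (scope, f) where f maps an assignment dict covering at least `scope` to a number.

def eliminate_variable(factors, var):
    touching = [(s, f) for s, f in factors if var in s]
    rest = [(s, f) for s, f in factors if var not in s]
    new_scope = tuple(sorted({v for s, _ in touching for v in s} - {var}))

    def new_factor(assign, touching=touching, var=var):
        best = float("-inf")
        for b in (0, 1):                      # maximize over the eliminated variable
            a = dict(assign)
            a[var] = b
            best = max(best, sum(f(a) for _, f in touching))
        return best

    # (a table-based version would materialize new_factor over new_scope; the size of
    # that table is what the quoted induced-width bound refers to)
    return rest + [(new_scope, new_factor)]

def maximize(factors, order):
    for var in order:
        factors = eliminate_variable(factors, var)
    return sum(f({}) for _, f in factors)     # all remaining scopes are empty

# max over x1,x2,x3 of (2*x1 - x2) + (x2 + 3*x3)  ->  5
f1 = (("x1", "x2"), lambda a: 2 * a["x1"] - a["x2"])
f2 = (("x2", "x3"), lambda a: a["x2"] + 3 * a["x3"])
print(maximize([f1, f2], ["x1", "x3", "x2"]))
```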
{ "cite_N": [ "@cite_10" ], "mid": [ "2170400507" ], "abstract": [ "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 1040 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time." ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
The approximate policy iteration algorithm also uses an approximate LP reformulation, but it is based on the policy-evaluation Bellman equation ). Policy-evaluation equations are, however, linear and do not contain the maximum operator, so there is no need for the second, costly transformation step. On the other hand, the algorithm needs an explicit decision tree representation of the policy. Liberatore @cite_6 has shown that the size of the decision tree representation can grow exponentially.
{ "cite_N": [ "@cite_6" ], "mid": [ "2149896419" ], "abstract": [ "Policies of Markov Decision Processes (MDPs) tell the next action to execute, given the current state and (possibly) the history of actions executed so far. Factorization is used when the number of states is exponentially large: both the MDP and the policy can be then represented using a compact form, for example employing circuits. We prove that there are MDPs whose optimal policies require exponential space evenin factored form." ] }
0801.2575
1818109327
This paper reviews the fully complete hypergames model of system @math , presented a decade ago in the author's thesis. Instantiating type variables is modelled by allowing ``games as moves''. The uniformity of a quantified type variable @math is modelled by copycat expansion: @math represents an unknown game, a kind of black box, so all the player can do is copy moves between a positive occurrence and a negative occurrence of @math . This presentation is based on slides for a talk entitled ``Hypergame semantics: ten years later'' given at `Games for Logic and Programming Languages', Seattle, August 2006.
Affine linear polymorphism was modelled in @cite_20 (Samson Abramsky's course at this summer school, during the summer before my D.Phil., is in part what inspired my choice of thesis topic) with PER-like ``intersections'' of first-order games of the form @cite_4 @cite_13 . Abramsky and Lenisa have explored systematic ways of modelling quantifiers so that, in the limited case in which all quantifiers are outermost (so in particular positive), models are fully complete @cite_5 . (See subsection for a simple example of a type at which full completeness fails.)
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_4", "@cite_20" ], "mid": [ "2165446401", "2124501788", "2094685149", "1526036637" ], "abstract": [ "An intensional model for the programming language PCF is described in which the types of PCF are interpreted by games and the terms by certain history-free strategies. This model is shown to capture definability in PCF. More precisely, every compact strategy in the model is definable in a certain simple extension of PCF. We then introduce an intrinsic preorder on strategies and show that it satisfies some striking properties such that the intrinsic preorder on function types coincides with the pointwise preorder. We then obtain an order-extensional fully abstract model of PCF by quotienting the intensional model by the intrinsic preorder. This is the first syntax-independent description of the fully abstract model for PCF. (Hyland and Ong have obtained very similar results by a somewhat different route, independently and at the same time.) We then consider the effective version of our model and prove a universality theorem: every element of the effective extensional model is definable in PCF. Equivalently, every recursive strategy is definable up to observational equivalence.", "We present a linear realizability technique for building Partial Equivalence Relations (PER) categories over Linear Combinatory Algebras. These PER categories turn out to be linear categories and to form an adjoint model with their co-Kleisli categories. We show that a special linear combinatory algebra of partial involutions, arising from Geometry of Interaction constructions, gives rise to a fully and faithfully complete modelfor ML polymorphic types of system F.", "We present a game semantics for Linear Logic, in which formulas denote games and proofs denote winning strategies. We show that our semantics yields a categorical model of Linear Logic and prove full completeness for Multiplicative Linear Logic with the MIX rule: every winning strategy is the denotation of a unique cut-free proof net. A key role is played by the notion of history-free strategy: strong connections are made between history-free strategies and the Geometry of Interaction. Our semantics incorporates a natural notion of polarity, leading to a refined treatment of the additives. We make comparisons with related work by Joyal, Blass, et al", "" ] }
0801.3372
2057867606
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
A similar approach to our geometric analysis of the MP atom selection rule has been proposed in @cite_10 . In that paper, a dictionary of ( @math -normalized) wavelets is seen as a manifold associated with a Riemannian metric. However, the authors restrict their work to wavelet parametrizations inherited from Lie groups (such as the affine group). They also work only with the @math (dictionary) distance between dictionary atoms and do not introduce an intrinsic geodesic distance. They define a discretization of the parametrization @math such that, in our notations, @math , with @math the local width of the cell localized on @math . There is, however, no analysis of the effect of this discretization on the MP rate of convergence.
{ "cite_N": [ "@cite_10" ], "mid": [ "1971186456" ], "abstract": [ "Abstract A new method of wavelet packet analysis is presented where the wavelet packets are chosen from a manifold rather than a discrete grid. A generalisation of the wavelet transform is defined on this manifold by correlation of the wavelet packets with the signal or image, and a discrete subset of the wavelet packets is then chosen from local maxima in the modulus of this function as a form of signal or image feature extraction. We show that consideration of the geometry of the manifold aids the search for these local maxima. We also show that the resulting wavelet characterisation is the best local approximation to the signal or image and represents signal and image components with the greatest signal to noise ratio, and is thus useful to surveillance and detection." ] }
0801.3372
2057867606
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
In @cite_33 , the author uses a 4-dimensional Gaussian chirp dictionary to analyze 1-D signals with the MP algorithm. He develops a fast procedure to find the best atom of this dictionary for the representation of the current MP residual by applying a two-step search. First, by setting the chirp rate parameter to zero, the best common Gabor atom is found with a full search procedure taking advantage of the FFT algorithm. Next, a ridge theorem proves that, starting from this Gabor atom, the best Gaussian chirp atom can be approximated with a controlled error. The whole method is similar to the development of our optimized matching pursuit since we also start from a discrete parametrization to find a better atom in the continuous one. However, our approach is more general since we are not restricted to a specific dictionary. We use the intrinsic geometry of any smooth dictionary manifold to perform an optimization driven by a geometric gradient ascent.
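The coarse-then-refine pattern described here can be sketched as follows; the Gaussian atom parametrization, the parameter grid, and the use of a generic Nelder-Mead refinement (instead of the intrinsic geometric gradient ascent advocated in the paper) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def atom(params, t):
    # hypothetical Gaussian atom parametrized by position u and log-scale
    u, log_s = params
    g = np.exp(-0.5 * ((t - u) / np.exp(log_s)) ** 2)
    return g / (np.linalg.norm(g) + 1e-12)

def mp_step(residual, t, grid):
    # 1) discrete search: best correlation over a coarse parameter grid
    best = max(grid, key=lambda p: abs(atom(p, t) @ residual))
    # 2) local continuous refinement of the correlation around the discrete winner
    refined = minimize(lambda p: -abs(atom(p, t) @ residual),
                       x0=np.array(best, dtype=float), method="Nelder-Mead").x
    coeff = atom(refined, t) @ residual
    return residual - coeff * atom(refined, t), (refined, coeff)

# one pursuit iteration on a toy signal
t = np.linspace(0.0, 1.0, 256)
signal = np.exp(-0.5 * ((t - 0.31) / 0.05) ** 2)
grid = [(u, np.log(s)) for u in np.linspace(0, 1, 16) for s in (0.02, 0.1, 0.5)]
residual, (params, coeff) = mp_step(signal, t, grid)
```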
{ "cite_N": [ "@cite_33" ], "mid": [ "2141660238" ], "abstract": [ "We introduce a modified matching pursuit algorithm, called fast ridge pursuit, to approximate N-dimensional signals with M Gaussian chirps at a computational cost O(MN) instead of the expected O(MN sup 2 logN). At each iteration of the pursuit, the best Gabor atom is first selected, and then, its scale and chirp rate are locally optimized so as to get a \"good\" chirp atom, i.e., one for which the correlation with the residual is locally maximized. A ridge theorem of the Gaussian chirp dictionary is proved, from which an estimate of the locally optimal scale and chirp is built. The procedure is restricted to a sub-dictionary of local maxima of the Gaussian Gabor dictionary to accelerate the pursuit further. The efficiency and speed of the method is demonstrated on a sound signal." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
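Since the spanning ratio is the quantity every result in this abstract bounds, a small self-contained check of it may be helpful. The concrete additively weighted distance d(p, q) = |pq| - w_p - w_q used below is an assumption (the formula is abbreviated in the abstract), and the sketch assumes non-intersecting disks and a connected graph so that all distances are positive and finite.

```python
import itertools

def stretch_factor(points, weights, edges):
    """Spanning ratio of a graph over additively weighted points (illustrative only)."""
    n = len(points)

    def dist(i, j):
        (x1, y1), (x2, y2) = points[i], points[j]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 - weights[i] - weights[j]

    INF = float("inf")
    sp = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j in edges:
        sp[i][j] = sp[j][i] = dist(i, j)
    for k in range(n):                      # Floyd-Warshall shortest paths in the graph
        for i in range(n):
            for j in range(n):
                if sp[i][k] + sp[k][j] < sp[i][j]:
                    sp[i][j] = sp[i][k] + sp[k][j]
    return max(sp[i][j] / dist(i, j)
               for i, j in itertools.combinations(range(n), 2))

# tiny example: three disjoint disks connected by the path 0-1-2
print(stretch_factor([(0, 0), (3, 2), (6, 0)], [0.5, 0.5, 0.5], [(0, 1), (1, 2)]))
```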
The Voronoi diagram @cite_18 of a finite set of points @math is a partition of the plane into @math regions such that each region contains exactly those points having the same nearest neighbor in @math . The points in @math are also called sites. It is well known that the Voronoi diagram of a set of points is the face dual of the Delaunay graph of that set of points @cite_18 , i.e., two points have adjacent Voronoi regions if and only if they share an edge in the Delaunay graph (see Figure ).
{ "cite_N": [ "@cite_18" ], "mid": [ "2149906774" ], "abstract": [ "This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Let @math be a real number. Two sets of points @math and @math in @math are well-separated if there exist two @math -dimensional balls @math and @math of the same radius @math , respectively containing the bounding boxes of @math and @math , such that the distance between @math and @math is greater than or equal to @math . The distance between @math and @math is defined as the distance between their centers minus @math . A well-separated pair decomposition (WSPD) @cite_3 @cite_29 is a set of unordered pairs @math of subsets of @math that are well-separated with respect to @math , with the additional property that for every two points @math there is exactly one pair @math such that @math and @math . It has been shown that for @math , every point set admits a WSPD with separation ratio @math of @math size that can be computed in @math time. Choosing one edge per pair allows one to construct a @math -spanner that has @math size with @math .
{ "cite_N": [ "@cite_29", "@cite_3" ], "mid": [ "1505970908", "2054011861" ], "abstract": [ "Part I. Introduction: 1. Introduction 2. Algorithms and graphs 3. The algebraic computation-tree model Part II. Spanners Based on Simplical Cones: 4. Spanners based on the Q-graph 5. Cones in higher dimensional space and Q-graphs 6. Geometric analysis: the gap property 7. The gap-greedy algorithm 8. Enumerating distances using spanners of bounded degree Part III. The Well Separated Pair Decomposition and its Applications: 9. The well-separated pair decomposition 10. Applications of well-separated pairs 11. The Dumbbell theorem 12. Shortcutting trees and spanners with low spanner diameter 13. Approximating the stretch factor of Euclidean graphs Part IV. The Path Greedy Algorithm: 14. Geometric analysis: the leapfrog property 15. The path-greedy algorithm Part V. Further Results and Applications: 16. The distance range hierarchy 17. Approximating shortest paths in spanners 18. Fault-tolerant spanners 19. Designing approximation algorithms with spanners 20. Further results and open problems.", "We define the notion of a well-separated pair decomposition of points in d -dimensional space. We then develop efficient sequential and parallel algorithms for computing such a decomposition. We apply the resulting decomposition to the efficient computation of k -nearest neighbors and n -body potential fields." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Unit disk graphs @cite_17 @cite_8 have received a lot of attention from the wireless community. A unit disk graph is a graph whose nodes are points in the plane and whose edges join two points whose distance is at most one unit. It is well known that intersecting a unit disk graph with the Delaunay or the Yao graph of the points provides a @math -spanner of the unit disk graph @cite_12 , where the constant @math is the same as that of the original graph. However, this simple strategy does not work with all spanners. In particular, it does not work with the @math -graph @cite_25 . Unit disk graphs can be seen as intersection graphs of disks of the same radius in the plane. The general problem of computing spanners for geometric intersection graphs has also been studied.
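A minimal sketch of the "intersect with the Delaunay graph" construction mentioned above: compute the Delaunay triangulation with SciPy and keep only the edges of length at most one. The point format and the unit threshold are the only assumptions; the spanning-ratio guarantee itself comes from the cited result, not from this code.

```python
import numpy as np
from scipy.spatial import Delaunay

def unit_disk_delaunay_edges(points):
    """Delaunay edges of length <= 1, i.e. the Delaunay graph restricted to the
    unit disk graph."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:              # each simplex is a triangle (a, b, c)
        for k in range(3):
            i, j = sorted((int(simplex[k]), int(simplex[(k + 1) % 3])))
            if np.linalg.norm(pts[i] - pts[j]) <= 1.0:
                edges.add((i, j))
    return edges

print(unit_disk_delaunay_edges([(0, 0), (0.8, 0.1), (1.6, 0), (0.7, 0.9)]))
```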
{ "cite_N": [ "@cite_8", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2063572899", "2949856235", "2079037434", "2056979696" ], "abstract": [ "Unit disk graphs are the intersection graphs of equal sized circles in the plane: they provide a graph-theoretic model for broadcast networks (cellular networks) and for some problems in computational geometry. We show that many standard graph theoretic problems remain NP-complete on unit disk graphs, including coloring, independent set, domination, independent domination, and connected domination; NP-completeness for the domination problem is shown to hold even for grid graphs, a subclass of unit disk graphs. In contrast, we give a polynomial time algorithm for finding cliques when the geometric representation (circles in the plane) is provided.", "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "In a geometric bottleneck shortest path problem, we are given a set S of n points in the plane, and want to answer queries of the following type: given two points p and q of S and a real number L, compute (or approximate) a shortest path between p and q in the subgraph of the complete graph on S consisting of all edges whose lengths are less than or equal to L. We present efficient algorithms for answering several query problems of this type. Our solutions are based on Euclidean minimum spanning trees, spanners, and the Delaunay triangulation. A result of independent interest is the following. For any two points p and q of S, there is a path between p and q in the Delaunay triangulation, whose length is less than or equal to 2π (3 cos(π 6)) times the Euclidean distance |pq| between p and q, and all of whose edges have length at most |pq|.", "In this paper we introduce the minimum-order approach to frequency assignment and present a theory which relates this approach to the traditional one. This new approach is potentially more desirable than the traditional one. We model assignment problems as both frequency-distance constrained and frequency constrained optimization problems. The frequency constrained approach should be avoided if distance separation is employed to mitigate interference. A restricted class of graphs, called disk graphs, plays a central role in frequency-distance constrained problems. We introduce two generalizations of chromatic number and show that many frequency assignment problems are equivalent to generalized graph coloring problems. Using these equivalences and recent results concerning the complexity of graph coloring, we classify many frequency assignment problems according to the \"execution time efficiency\" of algorithms that may be devised for their solution. We discuss applications to important real world problems and identify areas for further work." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Another graph that has been looked at is the complete @math -partite geometric graph. In that case, points are assigned a unique color (which may be thought of as a positive integer) between 1 and @math , and there is an edge between two points if and only if they are assigned different colors. Bose et al. @cite_21 showed that the WSPD can be adapted to compute a @math -spanner of that graph that has @math edges for arbitrary values of @math strictly greater than 5.
{ "cite_N": [ "@cite_21" ], "mid": [ "1780187217" ], "abstract": [ "We address the following problem: Given a complete @math -partite geometric graph @math whose vertex set is a set of @math points in @math , compute a spanner of @math that has a “small” stretch factor and “few” edges. We present two algorithms for this problem. The first algorithm computes a @math -spanner of @math with @math edges in @math time. The second algorithm computes a @math -spanner of @math with @math edges in @math time. The latter result is optimal: We show that for any @math , spanners with @math edges and stretch factor less than 3 do not exist for all complete @math -partite geometric graphs." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
For spanners of arbitrary geometric graphs, much less is known. Althöfer et al. @cite_26 have shown that for any @math , every weighted graph @math with @math vertices contains a subgraph with @math edges, which is a @math -spanner of @math . Observe that this result holds for any weighted graph; in particular, it is valid for any geometric graph. For geometric graphs, a lower bound was given by Gudmundsson and Smid @cite_6 : they proved that for every real number @math with @math , there exists a geometric graph @math with @math vertices, such that every @math -spanner of @math contains @math edges. Thus, if we are looking for spanners with @math edges of arbitrary geometric graphs, then the best spanning ratio we can obtain is @math .
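The upper bound above is typically established with the classical greedy construction: process pairs by increasing length and keep an edge only when the current graph distance between its endpoints still exceeds t times the pair's length. The sketch below is a plain O(n^4) illustration of that idea for points in the plane, not an implementation from any of the cited papers.

```python
import itertools

def greedy_spanner(points, t):
    """Classical greedy t-spanner of the complete Euclidean graph (O(n^4) sketch)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    n = len(points)
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    edges = []
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: d(points[e[0]], points[e[1]]))
    for i, j in pairs:
        w = d(points[i], points[j])
        if dist[i][j] > t * w:          # current detour too long: keep the edge
            edges.append((i, j))
            for a in range(n):          # relax all pairs through the new edge
                for b in range(n):
                    via = min(dist[a][i] + w + dist[j][b],
                              dist[a][j] + w + dist[i][b])
                    if via < dist[a][b]:
                        dist[a][b] = dist[b][a] = via
    return edges
```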
{ "cite_N": [ "@cite_26", "@cite_6" ], "mid": [ "2002041206", "1496182974" ], "abstract": [ "Given a graphG, a subgraphG' is at-spanner ofG if, for everyu,v ?V, the distance fromu tov inG' is at mostt times longer than the distance inG. In this paper we give a simple algorithm for constructing sparse spanners for arbitrary weighted graphs. We then apply this algorithm to obtain specific results for planar graphs and Euclidean graphs. We discuss the optimality of our results and present several nearly matching lower bounds.", "Given a connected geometric graph G, we consider the problem of constructing a t-spanner of G having the minimum number of edges. We prove that for every t with @math , there exists a connected geometric graph G with n vertices, such that every t-spanner of G contains Ω( n1+1 t ) edges. This bound almost matches the known upper bound, which states that every connected weighted graph with n vertices contains a t-spanner with O(tn1+2 (t+1)) edges. We also prove that the problem of deciding whether a given geometric graph contains a t-spanner with at most K edges is NP-hard. Previously, this NP-hardness result was only known for non-geometric graphs" ] }
0801.0523
1663280704
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
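As a toy illustration of the interval-based bound propagation this abstract refers to, here is a minimal interval type; it tracks ranges only, performs no outward rounding, and has no connection to Gappa's actual engine.

```python
class Interval:
    """Toy interval type: tracks ranges only, with no outward rounding
    (a rigorous implementation would round bounds outward)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# bound x * (1 + e) for x in [1, 2] and a rounding term e in [-2^-53, 2^-53]
x = Interval(1.0, 2.0)
e = Interval(-2.0 ** -53, 2.0 ** -53)
print(x * (Interval(1.0, 1.0) + e))
```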
As a summary, proofs written for versions of the project up to version 0.8 @cite_24 are typically composed of several pages of paper proof and several pages of supporting Maple for a few lines of code. This provides excellent documentation and helps maintain the code, but experience has consistently shown that such proofs are extremely error-prone. Implementing the error computation in Maple was a first step towards the automation of this process, but while it helps avoid computation mistakes, it does not prevent methodological mistakes. Gappa was designed, among other objectives, to fill this void.
{ "cite_N": [ "@cite_24" ], "mid": [ "2624158749" ], "abstract": [ "The crlibm project aims at developing a portable, proven, correctly rounded, and efficient mathematical library (libm) for double precision. Current libm implementation do not always return the floating-point number that is closest to the exact mathematical result. As a consequence, different libm implementation will return different results for the same input, which prevents full portability of floating-point ap- plications. In addition, few libraries support but the round-to-nearest mode of the IEEE754 IEC 60559 standard for floating-point arithmetic (hereafter usually referred to as the IEEE-754 stan- dard). crlibm provides the four rounding modes: To nearest, to +∞, to −∞ and to zero." ] }
0801.0523
1663280704
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
There have been other attempts at assisted proofs of elementary functions or similar floating-point code. The pure formal proof approach of Harrison @cite_18 @cite_19 @cite_7 goes deeper than the Gappa approach, as it accounts for approximation errors. However, it is accessible only to experts in formal proof, and it is fragile in case of a change to the code. The approach of Krämer @cite_30 @cite_12 relies on operator overloading and does not provide a formal proof.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_19", "@cite_12" ], "mid": [ "", "1672719503", "2008863866", "1509799206", "2041313350" ], "abstract": [ "", "Since they often embody compact but mathematically sophisticated algorithms, operations for computing the common transcendental functions in floating point arithmetic seem good targets for formal verification using a mechanical theorem prover. We discuss some of the general issues that arise in verifications of this class, and then present a machine-checked verification of an algorithm for computing the exponential function in IEEE-754 standard binary floating point arithmetic. We confirm (indeed strengthen) the main result of a previously published error analysis, though we uncover a minor error in the hand proof and are forced to confront several subtle issues that might easily be overlooked informally.", "We discuss the formal verification of some low-level mathematical software for the Intel® Itanium® architecture. A number of important algorithms have been proven correct using the HOL Light theorem prover. After briefly surveying some of our formal verification work, we discuss in more detail the verification of a square root algorithm, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications.", "We have formal verified a number of algorithms for evaluating transcendental functions in double-extended precision floating point arithmetic in the Intel® IA-64 architecture. These algorithms are used in the Itanium? processor to provide compatibility with IA-32 (x86) hard-ware transcendentals, and similar ones are used in mathematical software libraries. In this paper we describe in some depth the formal verification of the sin and cos functions, including the initial range reduction step. This illustrates the different facets of verification in this field, covering both pure mathematics and the detailed analysis of floating point rounding.", "filibpp is an extension of the interval library filib originally developed at the University of Karlsruhe. The most important aim of filib is the fast computation of guaranteed bounds for interval versions of a comprehensive set of elementary functions. filibpp extends this library in two aspects. First, it adds a second mode, the extended mode, that extends the exception-free computation mode (using special values to represent infinities and NaNs known from the IEEE floating-point standard 754) to intervals. In this mode, the so-called containment sets are computed to enclose the topological closure of a range of a function over an interval. Second, our new design uses templates and traits classes to obtain an efficient, easily extendable, and portable Cpp library." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The ``size-change principle'' from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's ``minimum'' algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
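For reference, the first-order size-change test that this abstract generalizes can be sketched in a few lines: close the set of size-change graphs under composition and require that every idempotent loop graph has a strictly decreasing parameter. The graph encoding below is an illustrative assumption and does not cover the lambda-calculus extension developed in the paper.

```python
from itertools import product

# A size-change graph: (source_fn, target_fn, frozenset of arcs (param, param, strict)),
# with at most one arc per parameter pair (the strict one if both would apply).

def compose(g1, g2):
    src, _, arcs1 = g1
    _, tgt, arcs2 = g2
    best = {}
    for (a, b, s1), (c, d, s2) in product(arcs1, arcs2):
        if b == c:
            best[(a, d)] = best.get((a, d), False) or s1 or s2
    return (src, tgt, frozenset((a, d, s) for (a, d), s in best.items()))

def sct_terminates(graphs):
    closure = set(graphs)
    changed = True
    while changed:
        changed = False
        for g1, g2 in list(product(closure, closure)):
            if g1[1] == g2[0]:
                g = compose(g1, g2)
                if g not in closure:
                    closure.add(g)
                    changed = True
    # every idempotent loop graph must have a strictly decreasing parameter
    return all(any(a == b and s for (a, b, s) in arcs)
               for (src, tgt, arcs) in closure
               if src == tgt and compose((src, tgt, arcs), (src, tgt, arcs)) == (src, tgt, arcs))

# Example: f(x, y) calls f(y - 1, x) -- parameters are swapped and one decreases.
g = ("f", "f", frozenset({("x", "y", False), ("y", "x", True)}))
print(sct_terminates([g]))   # True
```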
Jones @cite_3 was an early paper on control-flow analysis of the untyped @math -calculus. Shivers' thesis and subsequent work @cite_12 @cite_19 on CFA (control flow analysis) developed this approach considerably further and applied it to the Scheme programming language. This line is closely related to the approximate semantics (static control graph) of Section @cite_3 .
{ "cite_N": [ "@cite_19", "@cite_12", "@cite_3" ], "mid": [ "2071990711", "", "2725654452" ], "abstract": [ "Traditional flow analysis techniques, such as the ones typically employed by optimising Fortran compilers, do not work for Scheme-like languages. This paper presents a flow analysis technique --- control flow analysis --- which is applicable to Scheme-like languages. As a demonstration application, the information gathered by control flow analysis is used to perform a traditional flow analysis problem, induction variable elimination. Extensions and limitations are discussed.The techniques presented in this paper are backed up by working code. They are applicable not only to Scheme, but also to related languages, such as Common Lisp and ML.", "", "We describe a method to analyze the data and control flow during mechanical evaluation of lambda expressions. The method produces a finite approximate description of the set of all states entered by a call-by-value lambda-calculus interpreter; a similar approach can easily be seen to work for call-by-name. A proof is given that the approximation is ''safe'' i.e. that it includes descriptions of every intermediate lambda-expression which occurs in the evaluation. From a programming languages point of view the method extends previously developed interprocedural analysis methods to include both local and global variables, call-by-name or call-by-value parameter transmission and the use of procedures both as arguments to other procedures and as the results returned by them." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The ``size-change principle'' from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's ``minimum'' algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
We had anticipated from the start that our framework could naturally be extended to higher-order functional programs, e.g., functional subsets of Scheme or ML. This has since been confirmed by Sereni and Jones, first reported in @cite_25 . Sereni's Ph.D. thesis @cite_0 develops this direction in considerably more detail with full proofs, and also investigates problems with lazy (call-by-name) languages. Independently and a bit later, Giesl and coauthors have addressed the analysis of the lazy functional language Haskell @cite_7 .
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_7" ], "mid": [ "2144538202", "", "2168393711" ], "abstract": [ "Size-change termination (SCT) automatically identifies termination of first-order functional programs. The SCT principle: a program terminates if every infinite control flow sequence would cause an infinite descent in a well-founded data value (POPL 2001). More recent work (RTA 2004) developed a termination analysis of the pure untyped λ-calculus using a similar approach, but an entirely different notion of size was needed to compare higher-order values. Again this is a powerful analysis, even proving termination of certain λ-expressions containing the fixpoint combinator Y. However the language analysed is tiny, not even containing constants. These techniques are unified and extended significantly, to yield a termination analyser for higher-order, call-by-value programs as in ML’s purely functional core or similar functional languages. Our analyser has been proven correct, and implemented for a substantial subset of OCaml.", "", "There are many powerful techniques for automated termination analysis of term rewriting. However, up to now they have hardly been used for real programming languages. We present a new approach which permits the application of existing techniques from term rewriting in order to prove termination of programs in the functional language Haskell. In particular, we show how termination techniques for ordinary rewriting can be used to handle those features of Haskell which are missing in term rewriting (e.g., lazy evaluation, polymorphic types, and higher-order functions). We implemented our results in the termination prover AProVE and successfully evaluated them on existing Haskell-libraries." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The ``size-change principle'' from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's ``minimum'' algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
Term rewriting systems: The popular ``dependency pair'' method was developed by Arts and Giesl @cite_24 for first-order programs in TRS form. This community has begun to study termination of higher-order term rewriting systems, including research by Giesl et al. @cite_4 @cite_7 , Toyama @cite_26 , and others.
{ "cite_N": [ "@cite_24", "@cite_26", "@cite_4", "@cite_7" ], "mid": [ "2090855107", "1568506451", "1508261706", "2168393711" ], "abstract": [ "We present techniques to prove termination and innermost termination of term rewriting systems automatically. In contrast to previous approaches, we do not compare left- and right-hand sides of rewrite rules, but introduce the notion of dependency pairs to compare left-hand sides with special subterms of the right-hand sides. This results in a technique which allows to apply existing methods for automated termination proofs to term rewriting systems where they failed up to now. In particular, there are numerous term rewriting systems where a direct termination proof with simplification orderings is not possible, but in combination with our technique, well-known simplification orderings (such as the recursive path ordering, polynomial orderings, or the Knuth–Bendix ordering) can now be used to prove termination automatically. Unlike previous methods, our technique for proving innermost termination automatically can also be applied to prove innermost termination of term rewriting systems that are not terminating. Moreover, as innermost termination implies termination for certain classes of term rewriting systems, this technique can also be used for termination proofs of such systems.", "This paper expands the termination proof techniques based on the lexicographic path ordering to term rewriting systems over varyadic terms, in which each function symbol may have more than one arity. By removing the deletion property from the usual notion of the embedding relation, we adapt Kruskal’s tree theorem to the lexicographic comparison over varyadic terms. The result presented is that finite term rewriting systems over varyadic terms are terminating whenever they are compatible with the lexicographic path order. The ordering is simple, but powerful enough to handle most of higher-order rewriting systems without λ-abstraction, expressed as S-expression rewriting systems.", "The dependency pair technique is a powerful modular method for automated termination proofs of term rewrite systems (TRSs). We present two important extensions of this technique: First, we show how to prove termination of higher-order functions using dependency pairs. To this end, the dependency pair technique is extended to handle (untyped) applicative TRSs. Second, we introduce a method to prove non-termination with dependency pairs, while up to now dependency pairs were only used to verify termination. Our results lead to a framework for combining termination and non-termination techniques for first- and higher-order functions in a very flexible way. We implemented and evaluated our results in the automated termination prover AProVE.", "There are many powerful techniques for automated termination analysis of term rewriting. However, up to now they have hardly been used for real programming languages. We present a new approach which permits the application of existing techniques from term rewriting in order to prove termination of programs in the functional language Haskell. In particular, we show how termination techniques for ordinary rewriting can be used to handle those features of Haskell which are missing in term rewriting (e.g., lazy evaluation, polymorphic types, and higher-order functions). We implemented our results in the termination prover AProVE and successfully evaluated them on existing Haskell-libraries." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Recently there has been a tremendous amount of work on summarizing sentiment @cite_9 , in particular on extracting and aggregating sentiment over ratable aspects. Many methods have been proposed, ranging from unsupervised to fully supervised systems.
{ "cite_N": [ "@cite_9" ], "mid": [ "2121752782" ], "abstract": [ "We introduce the idea of a sentiment summary, a single passage from a document that captures an author’ s opinion about his or her subject. Using supervised data from the Rotten Tomatoes website, we examine features that appear to be helpful in locating a good summary sentence. These features are used to fit Naive Bayes and regularized logistic regression models for summary extraction." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
In terms of unsupervised aspect extraction, in which this work can be categorized, the system of Hu and Liu @cite_32 @cite_23 was one of the earliest endeavors. In that study, association mining is used to extract product aspects that can be rated. Hu and Liu defined an aspect as simply a string, and there was no attempt to cluster or infer aspects that are mentioned implicitly, e.g., "The amount of stains in the room was overwhelming" is about an aspect of hotels that is never named explicitly. Similar work by Popescu and Etzioni @cite_0 also extracts explicit aspect mentions without describing how implicit mentions are extracted and clustered, though they imply that this is done somewhere in their system. Clustering can be of particular importance for domains in which aspects are described with a large vocabulary, such as restaurants or hotels. Both implicit mentions and clustering arise naturally out of the topic model formulation, requiring no additional augmentations.
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_23" ], "mid": [ "2081375810", "2160660844", "1581485226" ], "abstract": [ "Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products.Compared to previous work, Opine achieves 22 higher precision (with only 3 lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds. This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. 
Our experimental results show that these techniques are highly effective." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
@cite_11 present an unsupervised system that does incorporate clustering; however, their method clusters sentences rather than individual aspects to produce a sentence-based summary. Sentence clusters are labeled with the most frequent non-stop-word stem in the cluster. @cite_27 present a weakly supervised model that uses the algorithms of Hu and Liu @cite_32 @cite_23 to extract explicit aspect mentions from reviews. The method is extended through a user-supplied aspect hierarchy of a product class. Extracted aspects are clustered by placing them into the hierarchy using various string and semantic similarity metrics. This method is then used to compare extractive versus abstractive summarization for sentiment @cite_6 .
{ "cite_N": [ "@cite_32", "@cite_6", "@cite_27", "@cite_23", "@cite_11" ], "mid": [ "2160660844", "1936155969", "1999912290", "1581485226", "1573641422" ], "abstract": [ "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "In many decision-making scenarios, people can benefit from knowing what other people's opinions are. As more and more evaluative documents are posted on the Web, summarizing these useful resources becomes a critical task for many organizations and individuals. This paper presents a framework for summarizing a corpus of evaluative documents about a single entity by a natural language summary. We propose two summarizers: an extractive summarizer and an abstractive one. As an additional contribution, we show how our abstractive summarizer can be modified to generate summaries tailored to a model of the user preferences that is solidly grounded in decision theory and can be effectively elicited from users. We have tested our framework in three user studies. In the first one, we compared the two summarizers. They performed equally well relative to each other quantitatively, while significantly outperforming a baseline standard approach to multidocument summarization. Trends in the results as well as qualitative comments from participants suggest that the summarizers have different strengths and weaknesses. After this initial user study, we realized that the diversity of opinions expressed in the corpus (i.e., its controversiality) might play a critical role in comparing abstraction versus extraction. To clearly pinpoint the role of controversiality, we ran a second user study in which we controlled for the degree of controversiality of the corpora that were summarized for the participants. The outcome of this study indicates that for evaluative text abstraction tends to be more effective than extraction, particularly when the corpus is controversial. 
In the third user study we assessed the effectiveness of our user tailoring strategy. The results of this experiment confirm that user tailored summaries are more informative than untailored ones.", "Capturing knowledge from free-form evaluative texts about an entity is a challenging task. New techniques of feature extraction, polarity determination and strength evaluation have been proposed. Feature extraction is particularly important to the task as it provides the underpinnings of the extracted knowledge. The work in this paper introduces an improved method for feature extraction that draws on an existing unsupervised method. By including user-specific prior knowledge of the evaluated entity, we turn the task of feature extraction into one of term similarity by mapping crude (learned) features into a user-defined taxonomy of the entity's features. Results show promise both in terms of the accuracy of the mapping as well as the reduction in the semantic redundancy of crude features.", "It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds. This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. Our experimental results show that these techniques are highly effective.", "We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
There have also been some studies of supervised aspect extraction methods. For example, @cite_20 work on sentiment summarization for movie reviews. In that work, aspects are extracted and clustered, but both steps are done manually through the examination of a labeled data set. The shortcoming of such an approach is that it requires a labeled corpus for every domain of interest.
{ "cite_N": [ "@cite_20" ], "mid": [ "2112744748" ], "abstract": [ "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
A key point of note is that our topic model approach is orthogonal to most of the methods mentioned above. For example, the topic model can be used to help cluster explicit aspects extracted by @cite_32 @cite_23 @cite_0 , or to improve the recall of knowledge-driven approaches that require domain-specific ontologies @cite_27 or labeled data @cite_20 .
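As a rough illustration of this orthogonality, the following sketch (toy review data, a standard LDA implementation from scikit-learn rather than the multi-grain model, and an assumed list of already-extracted aspect strings) groups externally extracted aspect terms by their most probable topic:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# hypothetical toy corpus of review sentences
reviews = [
    "the waitress was friendly and the bartender was quick",
    "our waiter forgot the order but the staff apologized",
    "the pizza was great and the pasta was fresh",
    "delicious food, especially the dessert",
    "cheap prices and good value for the money",
    "the bill was reasonable and the price fair",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# cluster externally extracted aspect terms by their most probable topic
aspects = ["waitress", "bartender", "staff", "pizza", "pasta", "price"]
vocab = vec.vocabulary_
clusters = {}
for a in aspects:
    if a in vocab:
        topic = int(np.argmax(lda.components_[:, vocab[a]]))
        clusters.setdefault(topic, []).append(a)
print(clusters)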
{ "cite_N": [ "@cite_32", "@cite_0", "@cite_27", "@cite_23", "@cite_20" ], "mid": [ "2160660844", "2081375810", "1999912290", "1581485226", "2112744748" ], "abstract": [ "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products.Compared to previous work, Opine achieves 22 higher precision (with only 3 lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity.", "Capturing knowledge from free-form evaluative texts about an entity is a challenging task. New techniques of feature extraction, polarity determination and strength evaluation have been proposed. Feature extraction is particularly important to the task as it provides the underpinnings of the extracted knowledge. The work in this paper introduces an improved method for feature extraction that draws on an existing unsupervised method. By including user-specific prior knowledge of the evaluated entity, we turn the task of feature extraction into one of term similarity by mapping crude (learned) features into a user-defined taxonomy of the entity's features. Results show promise both in terms of the accuracy of the mapping as well as the reduction in the semantic redundancy of crude features.", "It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. 
As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds. This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. Our experimental results show that these techniques are highly effective.", "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Several models have been proposed to overcome the bag-of-words assumption by explicitly modeling topic transitions @cite_17 @cite_25 @cite_30 @cite_10 @cite_26 @cite_24 . In our MG-LDA model we instead propose sliding windows to model local topics, as this is computationally less expensive and leads to good results. The model of Blei and Moreno @cite_17 also uses windows, but their windows are not overlapping and, therefore, it is known a priori from which window a word is going to be sampled; they explicitly model topic transitions between these windows. In our case, the distribution of sentences over overlapping windows @math is responsible for modeling transitions. However, it is possible to construct a multi-grain model which uses an n-gram topic model for local topics and a distribution fixed per document for global topics.
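The following toy generative sketch illustrates the sliding-window idea: a word is drawn either from a document-level global topic or from a local topic of one of the overlapping windows covering its sentence. The parameter names, the window width, and the fixed 0.5 global/local mixing weight are illustrative assumptions and not the actual model specification or inference code.

import numpy as np

def generate_document(n_sent, words_per_sent, vocab,
                      n_global=5, n_local=10, win=3, seed=0):
    rng = np.random.default_rng(seed)
    n_windows = n_sent + win - 1                       # overlapping windows
    theta_gl = rng.dirichlet(np.ones(n_global))        # document-level global mixture
    theta_loc = rng.dirichlet(np.ones(n_local), size=n_windows)  # per-window local mixtures
    phi_gl = rng.dirichlet(np.ones(len(vocab)), size=n_global)
    phi_loc = rng.dirichlet(np.ones(len(vocab)), size=n_local)
    doc = []
    for s in range(n_sent):
        covering = list(range(s, s + win))             # windows covering sentence s
        psi = rng.dirichlet(np.ones(win))              # sentence's distribution over its windows
        sentence = []
        for _ in range(words_per_sent):
            if rng.random() < 0.5:                     # global granularity
                z = rng.choice(n_global, p=theta_gl)
                w = rng.choice(len(vocab), p=phi_gl[z])
            else:                                      # local granularity via a window
                v = covering[rng.choice(win, p=psi)]
                z = rng.choice(n_local, p=theta_loc[v])
                w = rng.choice(len(vocab), p=phi_loc[z])
            sentence.append(vocab[w])
        doc.append(sentence)
    return doc

vocab = ["service", "staff", "food", "price", "location", "room"]
print(generate_document(n_sent=4, words_per_sent=6, vocab=vocab))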
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_24", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "106244805", "2124585778", "1498269992", "2104210067", "2112971401", "2053569739" ], "abstract": [ "Abstract : Most of the popular topic models (such as Latent Dirichlet Allocation) have an underlying assumption: bag of words. However, text is indeed a sequence of discrete word tokens, and without considering the order of words (in another word, the nearby context where a word is located), the accurate meaning of language cannot be exactly captured by word co-occurrences only. In this sense, collocations of words (phrases) have to be considered. However, like individual words, phrases sometimes show polysemy as well depending on the context. More noticeably, a composition of two (or more) words is a phrase in some contexts, but not in other contexts. In this paper, the authors propose a new probabilistic generative model that automatically determines unigram words and phrases based on context and simultaneously associates them with a mixture of topics. They present very interesting results on large text corpora.", "We present a method for unsupervised topic modelling which adapts methods used in document classification (, 2003; Griffiths and Steyvers, 2004) to unsegmented multi-party discourse transcripts. We show how Bayesian inference in this generative model can be used to simultaneously address the problems of topic segmentation and topic identification: automatically segmenting multi-party meetings into topically coherent segments with performance which compares well with previous unsupervised segmentation-only methods (, 2003) while simultaneously extracting topics which rate highly when assessed for coherence by human judges. We also show that this method appears robust in the face of off-topic dialogue and speech recognition errors.", "Algorithms such as Latent Dirichlet Allocation (LDA) have achieved significant progress in modeling word document relationships. These algorithms assume each word in the document was generated by a hidden topic and explicitly model the word distribution of each topic as well as the prior distribution over topics in the document. Given these parameters, the topics of all words in the same document are assumed to be independent. In this paper, we propose modeling the topics of words in the document as a Markov chain. Specifically, we assume that all words in the same sentence have the same topic, and successive sentences are more likely to have the same topics. Since the topics are hidden, this leads to using the well-known tools of Hidden Markov Models for learning and inference. We show that incorporating this dependency allows us to learn better topics and to disambiguate words that can belong to different topics. Quantitatively, we show that we obtain better perplexity in modeling documents with only a modest increase in learning and inference complexity.", "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. 
On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.", "Statistical approaches to language learning typically focus on either short-range syntactic dependencies or long-range semantic dependencies between words. We present a generative model that uses both kinds of dependencies, and can be used to simultaneously find syntactic classes and semantic topics despite having no representation of syntax or semantics beyond statistical dependency. This model is competitive on tasks like part-of-speech tagging and document classification with models that exclusively use short- and long-range dependencies respectively.", "We present a novel probabilistic method for topic segmentation on unstructured text. One previous approach to this problem utilizes the hidden Markov model (HMM) method for probabilistically modeling sequence data [7]. The HMM treats a document as mutually independent sets of words generated by a latent topic variable in a time series. We extend this idea by embedding Hofmann's aspect model for text [5] into the segmenting HMM to form an aspect HMM (AHMM). In doing so, we provide an intuitive topical dependency between words and a cohesive segmentation model. We apply this method to segment unbroken streams of New York Times articles as well as noisy transcripts of radio programs on SpeechBot , an online audio archive indexed by an automatic speech recognition engine. We provide experimental comparisons which show that the AHMM outperforms the HMM for this task." ] }
0712.3936
1949066771
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.
Much work has been done on covering problems because of both their simple and elegant formulation and their pervasiveness in different application areas. In its most general form the problem, also known as Set Cover, cannot be approximated within @math unless @math @cite_2 . Due to this hardness, easier special cases have been studied.
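For intuition, the classical greedy algorithm for unweighted Set Cover achieves an approximation ratio of roughly ln n, essentially matching this threshold. A minimal, purely illustrative sketch follows:

def greedy_set_cover(universe, subsets):
    # repeatedly pick the subset covering the most still-uncovered elements
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("the given subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

universe = set(range(10))
subsets = [set(range(5)), set(range(4, 8)), {7, 8, 9}, {0, 9}]
print(greedy_set_cover(universe, subsets))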
{ "cite_N": [ "@cite_2" ], "mid": [ "2143996311" ], "abstract": [ "Given a collection F of subsets of S = 1,…, n , setcover is the problem of selecting as few as possiblesubsets from F such that their union covers S, , and maxk-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems areNP-hard. We prove that (1 - o (1)) ln n is a threshold below which setcover cannot be approximated efficiently, unless NP has slightlysuperpolynomial time algorithms. This closes the gap (up to low-orderterms) between the ratio of approximation achievable by the greedyalogorithm (which is (1 - o (1)) lnn), and provious results of Lund and Yanakakis, that showed hardness ofapproximation within a ratio of log 2 n 2s0.72 ln n . For max k -cover, we show an approximationthreshold of (1 - 1 e )(up tolow-order terms), under assumption that P≠NP ." ] }
0712.4279
2950218338
We show that disjointness requires randomized communication Omega(n^ 1 (k+1) 2^ 2^k ) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n) (k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k=log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
Beame, Pitassi, Segerlind, and Wigderson @cite_26 devised a method based on a direct product theorem to show a @math bound on the complexity of three-party disjointness in a model stronger than one-way, in which the first player speaks once and then the two remaining players interact arbitrarily.
{ "cite_N": [ "@cite_26" ], "mid": [ "1985293484" ], "abstract": [ "The \"Number on the Forehead\" model of multi-party communication complexity was first suggested by Chandra, Furst and Lipton. The best known lower bound, for an explicit function (in this model), is a lower bound of ( (n 2^k) ), where n is the size of the input of each player, and k is the number of players (first proved by Babai, Nisan and Szegedy). This lower bound has many applications in complexity theory. Proving a better lower bound, for an explicit function, is a major open problem. Based on the result of BNS, Chung gave a sufficient criterion for a function to have large multi-party communication complexity (up to ( (n 2^k) )). In this paper, we use some of the ideas of BNS and Chung, together with some new ideas, resulting in a new (easier and more modular) proof for the results of BNS and Chung. This gives a simpler way to prove lower bounds for the multi-party communication complexity of a function." ] }
0712.4279
2950218338
We show that disjointness requires randomized communication Omega(n^ 1 (k+1) 2^ 2^k ) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n) (k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k=log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
Following up on our work, David, Pitassi, and Viola @cite_22 gave an explicit function which separates nondeterministic and randomized communication complexity for up to @math players. For any constant @math , they are also able to give a function computable in @math which separates them for up to @math players. Note that disjointness can be computed in @math , but our bounds are already trivial for @math players. Even more recently, Beame and Huynh-Ngoc @cite_7 have shown a bound of @math on the @math -party complexity of disjointness. This bound remains non-trivial for up to @math players, but is not as strong as our bound for few players.
{ "cite_N": [ "@cite_22", "@cite_7" ], "mid": [ "2035762900", "2015924804" ], "abstract": [ "We exhibit an explicit function f : 0, 1 n → 0, 1 that can be computed by a nondeterministic number-on-forehead protocol communicating O(logn) bits, but that requires nΩ(1) bits of communication for randomized number-on-forehead protocols with k = Δ·logn players, for any fixed Δ We also show that for any k = A ·loglogn the above function f is computable by a small circuit whose depth is constant whenever A is a (possibly large) constant. Recent results again give such functions but only when the number of players is k", "We prove n (1) lower bounds on the multiparty communication complexity of AC 0 functions in the number-on-forehead (NOF) model for up to �(logn) players. These are the first lower bounds for any AC 0 function for !(loglogn) players. In particular we show that there are families of depth 3 read-once AC 0 formulas having k-player randomized multiparty NOF communication complexity n (1) 2 O(k) . We show similar lower bounds for depth 4 read-once AC 0 formulas that have nondeterministic communication complexity O(log 2 n), yielding exponential separations between k-party nondeterministic and randomized communication complexity for AC 0 functions. As a consequence of the latter bound, we obtain an n (1 k) 2 O(k) lower bound on the k-party NOF communication complexity of set disjointness. This is non-trivial for up to �( p logn) players which is significantly larger than the up to �(loglogn) players allowed in the best previous lower bounds for multiparty set disjointness given by Lee and Shraibman [LS08] and Chattopadhyay and Ada [CA08] (though our complexity bounds themselves are not as strong as those in [LS08, CA08] for o(loglogn) players). We derive these results by extending the k-party generalization in [CA08, LS08] of the pattern matrix method of Sherstov [She07, She08]. Using this technique, we derive a new sufficient criterion for strong communication complexity lower bounds based on functions having many diverse subfunctions that do not have good low-degree polynomial approximations. This criterion guarantees that such functions have orthogonalizing distributions that are “max-smooth” as opposed to the “min-smooth” orthogonalizing distributions used by Razborov and Sherstov [RS08] to analyze the sign-rank of AC 0 ." ] }
0712.2682
2152668278
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+2 under L"1-norm for 0-1 valued matrices, and of 2 under L"2-norm for real valued matrices.
This basic algorithmic problem and several variations were initially presented in @cite_9 under the name of direct clustering. The same problem and its variations have also been referred to as two-way clustering, co-clustering, or subspace clustering. In practice, finding highly homogeneous biclusters has important applications in biological data analysis (see @cite_7 for a review and references), where a bicluster may, for example, correspond to an activation pattern common to a group of genes only under specific experimental conditions.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2036328877", "2144544802" ], "abstract": [ "Abstract Clustering algorithms are now in widespread use for sorting heterogeneous data into homogeneous blocks. If the data consist of a number of variables taking values over a number of cases, these algorithms may be used either to construct clusters of variables (using, say, correlation as a measure of distance between variables) or clusters of cases. This article presents a model, and a technique, for clustering cases and variables simultaneously. The principal advantage in this approach is the direct interpretation of the clusters on the data.", "A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix has been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred in the literature as coclustering and direct clustering, among others names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering, and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications." ] }
0712.2682
2152668278
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+2 under L"1-norm for 0-1 valued matrices, and of 2 under L"2-norm for real valued matrices.
An alternative definition of the basic biclustering problem described in the introduction consists of finding the maximal bicluster in a given matrix. A well-known connection of this alternative formulation is its reduction to the problem of finding a biclique in a bipartite graph @cite_4 . Algorithms for detecting bicliques enumerate them in the graph by using the monotonicity property that a subset of a biclique is also a biclique @cite_3 @cite_6 . These algorithms usually have high computational complexity.
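The following small Python sketch (exponential-time, meant only for tiny illustrative instances; the graph encoding is an assumption) enumerates maximal bicliques by exploiting exactly this monotonicity: growing one side can only shrink the common neighborhood of the other side.

from itertools import combinations

def maximal_bicliques(left, right, edges):
    adj = {u: {v for (x, v) in edges if x == u} for u in left}
    found = set()
    for k in range(1, len(left) + 1):
        for A in combinations(sorted(left), k):
            B = set(right)
            for u in A:
                B &= adj[u]            # common neighborhood shrinks monotonically
                if not B:
                    break
            if not B:
                continue
            # close A against B to obtain a maximal biclique
            A_closed = frozenset(u for u in left if B <= adj[u])
            found.add((A_closed, frozenset(B)))
    return found

# Example bipartite graph
L = {'a', 'b', 'c'}
R = {1, 2, 3}
E = {('a', 1), ('a', 2), ('b', 1), ('b', 2), ('c', 2), ('c', 3)}
for A, B in maximal_bicliques(L, R, E):
    print(sorted(A), sorted(B))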
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_3" ], "mid": [ "2051006653", "2079535727", "1980503920" ], "abstract": [ "We present here 2-approximation algorithms for several node deletion and edge deletion biclique problems and for an edge deletion clique problem. The biclique problem is to find a node induced subgraph that is bipartite and complete. The objective is to minimize the total weight of nodes or edges deleted so that the remaining subgraph is bipartite complete. Several variants of the biclique problem are studied here, where the problem is defined on bipartite graph or on general graphs with or without the requirement that each side of the bipartition forms an independent set. The maximum clique problem is formulated as maximizing the number (or weight) of edges in the complete subgraph. A 2-approximation algorithm is given for the minimum edge deletion version of this problem. The approximation algorithms given here are derived as a special case of an approximation technique devised for a class of formulations introduced by Hochbaum. All approximation algorithms described (and the polynomial algorithms for two versions of the node biclique problem) involve calls to a minimum cut algorithm. One conclusion of our analysis of the NP-hard problems here is that all of these problems are MAX SNP-hard and at least as difficult to approximate as the vertex cover problem. Another conclusion is that the problem of finding the minimum node cut-set, the removal of which leaves two cliques in the graph, is NP-hard and 2-approximable.", "In graphs of bounded arboricity, the total complexity of all maximal complete bipartite subgraphs is O(n). We described a linear time algorithm to list such subgraphs. The arboricity bound is necessary: for any constant k and any n there exists an n-vertex graph with O(n) edges and (nlog n)k maximal complete bipartite subgraphs Kk,l.", "We describe a new algorithm for generating all maximal bicliques (i.e. complete bipartite, not necessarily induced subgraphs) of a graph. The algorithm is inspired by, and is quite similar to, the consensus method used in propositional logic. We show that some variants of the algorithm are totally polynomial, and even incrementally polynomial. The total complexity of the most efficient variant of the algorithms presented here is polynomial in the input size, and only linear in the output size. Computational experiments demonstrate its high efficiency on randomly generated graphs with up to 2000 vertices and 20,000 edges." ] }
0712.3113
1662211127
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
The compiler of Mercury, a purely declarative Prolog variant, performs predicate reordering according to the I/O modes of the predicates, as described in @cite_7 . The mode system of Mercury is much more expressive than the mode system of SINTAGMA's Query Optimizer: our in and out modes are easily handled by the Mercury compiler. On the other hand, it does not offer optimizations similar to those of our optimizer; it only reorders the predicates according to their I/O modes.
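The following sketch illustrates the general idea of mode-based reordering; it is a simplified, hypothetical greedy scheduler and does not reflect the actual Mercury or SINTAGMA implementations. Each goal declares which arguments it needs bound (in) and which it produces (out), and goals are scheduled only once all of their inputs are available.

def reorder(goals, initially_bound=()):
    """goals: list of (name, in_vars, out_vars) triples."""
    bound = set(initially_bound)
    remaining = list(goals)
    ordered = []
    while remaining:
        for g in remaining:
            name, ins, outs = g
            if set(ins) <= bound:          # all input arguments already bound
                ordered.append(g)
                bound |= set(outs)
                remaining.remove(g)
                break
        else:
            raise ValueError("no admissible ordering for the given modes")
    return ordered

# Hypothetical example: price(Item, Price) needs Item bound; item_of(Order, Item) needs Order.
goals = [
    ("price",   ["Item"],  ["Price"]),
    ("item_of", ["Order"], ["Item"]),
]
print([g[0] for g in reorder(goals, initially_bound=["Order"])])
# -> ['item_of', 'price']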
{ "cite_N": [ "@cite_7" ], "mid": [ "1982243747" ], "abstract": [ "We introduce Mercury, a new purely declarative logic programming language designed to provide the support that groups of application programmers need when building large programs. Mercury's strong type, mode, and determinism systems improve program reliability by catching many errors at compile time. We present a new and relatively simple execution model that takes advantage of the information these systems provide, yielding very efficient code. The Mercury compiler uses this execution model to generate portable C code. Our benchmarking shows that the code generated by our implementation is significantly faster than the code generated by mature optimizing implementations of other logic programming languages." ] }
0712.3113
1662211127
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
The SIMS and Infomaster information integration systems have a query optimizer component, as described in @cite_4 and @cite_6 ; however, these optimizers have a different task than ours. In those systems, the query optimizer takes advantage of semantic knowledge about the information sources to choose, among the plans which answer the user query, a plan that needs the fewest information source accesses. In the Mediator of SINTAGMA, this is the task of the Query Planner, and the Query Optimizer optimizes only the query execution plan.
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2106082581", "1710121815" ], "abstract": [ "New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization.", "Information integration systems, also knows as mediators, information brokers, or information gathering agents, provide uniform user interfaces to varieties of different information sources. With corporate databases getting connected by intranets, and vast amounts of information becoming available over the Internet, the need for information integration systems is increasing steadily. Our work focuses on query planning in such systems. Query planning is the task of transforming a user query, represented in the user's interface language and vocabulary, into queries that can be executed by the information sources. Every information source might require a different query language and might use different vocabularies. The resulting answers of the information sources need to be translated and combined before the final answer can be reported to the user. We show that query plans with a fixed number of database operations are insufficient to extract all information from the sources, if functional dependencies or limitations on binding patterns are present. Dependencies complicate query planning because they allow query plans that would otherwise be invalid. We present an algorithm that constructs query plans that are guaranteed to extract all available information in these more general cases. This algorithm is also able to handle datalog user queries. We examine further extensions of the languages allowed for user queries and for describing information sources: disjunction, recursion and negation in source descriptions, negation and inequality in user queries. For these more expressive cases, we determine the data complexity required of languages able to represent \"best possible\" query plans." ] }
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
Alon and Kahale @cite_7 were the first to employ spectral techniques for 3-coloring sparse random graphs. They present a spectral heuristic and show that it finds a 3-coloring in the so-called "planted solution model". This model is somewhat more difficult to deal with algorithmically than the @math model that we study in the present work: while in the @math -model each vertex @math has @math neighbors in each of the other color classes @math , in the planted solution model of Alon and Kahale the number of neighbors of @math in @math has a Poisson distribution with mean @math . In effect, the spectral algorithm in @cite_7 is more sophisticated than the spectral heuristic from . In particular, the Alon-Kahale algorithm succeeds on @math -regular graphs (and hence on @math w.h.p.).
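For concreteness, the following purely illustrative sketch generates an instance of the planted solution model: the vertices are split into three equal color classes and vertices of distinct classes are joined independently with probability p, so each vertex receives a Binomial (approximately Poisson, for small p) number of neighbors in each other class.

import random

def planted_3colorable(n, p, seed=0):
    rng = random.Random(seed)
    vertices = list(range(3 * n))
    color = {v: v % 3 for v in vertices}           # planted proper 3-coloring
    edges = []
    for u in vertices:
        for v in vertices:
            if u < v and color[u] != color[v] and rng.random() < p:
                edges.append((u, v))
    return color, edges

color, edges = planted_3colorable(n=100, p=0.05)
print(len(edges), "edges; any edge inside a color class:",
      any(color[u] == color[v] for u, v in edges))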
{ "cite_N": [ "@cite_7" ], "mid": [ "2079035346" ], "abstract": [ "Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c." ] }
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
There are numerous papers on the performance of message-passing algorithms (e.g., Belief Propagation and Survey Propagation) for constraint satisfaction problems by authors from the statistical physics community (cf. @cite_6 @cite_4 @cite_9 and the references therein). While these papers provide rather plausible (and insightful) explanations for the success of message-passing algorithms on problem instances such as random graphs @math or random @math -SAT formulae, the arguments (e.g., the replica or the cavity method) are mathematically non-rigorous. To the best of our knowledge, no connection between spectral methods and BP has been established in the physics literature.
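For reference, the BP (sum-product) update for proper q-coloring, written here in a standard textbook form (the exact normalization conventions of @cite_6 are not reproduced, so treat this as an illustrative formulation): each vertex v sends to every neighbor w a distribution over the q colors, and marginals are estimated from the incoming messages.

```latex
% Illustrative BP update for proper q-coloring (normalization conventions vary).
\[
\mu_{v \to w}^{(t+1)}(c) =
  \frac{\prod_{u \in N(v) \setminus \{w\}} \bigl( 1 - \mu_{u \to v}^{(t)}(c) \bigr)}
       {\sum_{c'=1}^{q} \prod_{u \in N(v) \setminus \{w\}} \bigl( 1 - \mu_{u \to v}^{(t)}(c') \bigr)} ,
\qquad
\mu_{v}^{(t)}(c) \propto \prod_{u \in N(v)} \bigl( 1 - \mu_{u \to v}^{(t)}(c) \bigr) ,
\]
```

where \mu_v^{(t)}(c) denotes the BP estimate, after t iterations, of the marginal probability that vertex v receives color c.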
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_6" ], "mid": [ "2168290833", "1982531027", "1582487958" ], "abstract": [ "An instance of a random constraint satisfaction problem defines a random subset 𝒮 (the set of solutions) of a large product space X N (the set of assignments). We consider two prototypical problem ensembles (random k -satisfiability and q -coloring of random regular graphs) and study the uniform measure with support on S . As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states (“clusters”) and subsequently condensates over the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp. We determine their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performances of many search sampling algorithms. Empirical evidence suggests that local Monte Carlo Markov chain strategies are effective up to the clustering phase transition and belief propagation up to the condensation point. Finally, refined message passing techniques (such as survey propagation) may also beat this threshold.", "We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when α = M N is close to the experimental threshold αc separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) αc. We introduce a new type of message passing algorithm which allows to find efficiently a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005", "Survey Propagation is an algorithm designed for solving typical instances of random constraint satisfiability problems. It has been successfully tested on random 3-SAT and random @math graph 3-coloring, in the hard region of the parameter space. Here we provide a generic formalism which applies to a wide class of discrete Constraint Satisfaction Problems." ] }
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
Feige, Mossel, and Vilenchik @cite_15 showed that the Warning Propagation (WP) algorithm for 3-SAT converges in polynomial time to a satisfying assignment on a model of random 3-SAT instances with a planted solution. Since the messages in WP are additive in nature, and not multiplicative as in BP, the WP algorithm is conceptually much simpler. Moreover, on the model studied in @cite_15 a fairly simple combinatorial algorithm (based on the "majority vote" algorithm) is known to succeed. By contrast, no purely combinatorial algorithm (that does not rely on spectral methods or semi-definite programming) is known to 3-color @math or even arbitrary @math -regular instances.
{ "cite_N": [ "@cite_15" ], "mid": [ "2139919528" ], "abstract": [ "Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results." ] }
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
A very recent paper by Yamamoto and Watanabe @cite_10 deals with a spectral approach to analyzing BP for the Minimum Bisection problem. Their work is similar to ours in that they point out that a BP-related algorithm, pseudo-bp, emulates spectral methods. However, a significant difference is that pseudo-bp is a simplified version of BP that is easier to analyze, whereas in the present work we make a point of analyzing the BP algorithm for coloring as it is stated in @cite_6 (cf. Remark for more detailed comments). Nonetheless, an interesting aspect of @cite_10 is certainly that this paper shows that BP can be applied to an actual optimization problem, rather than to the problem of just finding any feasible solution (e.g., a @math -coloring).
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2807232057", "1582487958" ], "abstract": [ "We address the question of convergence in the loopy belief propagation (LBP) algorithm. Specifically, we relate convergence of LBP to the existence of a weak limit for a sequence of Gibbs measures defined on the LBP's associated computation tree. Using tools from the theory of Gibbs measures we develop easily testable sufficient conditions for convergence. The failure of convergence of LBP implies the existence of multiple phases for the associated Gibbs specification. These results give new insight into the mechanics of the algorithm.", "Survey Propagation is an algorithm designed for solving typical instances of random constraint satisfiability problems. It has been successfully tested on random 3-SAT and random @math graph 3-coloring, in the hard region of the parameter space. Here we provide a generic formalism which applies to a wide class of discrete Constraint Satisfaction Problems." ] }
0711.4562
2951599038
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide a better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95% of the approximately 58,000 AS topology links. The inference becomes stable when using a week's worth of data and as few as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
Di Battista et al. @cite_16 showed that the decision version of the ToR problem (ToR-D) is NP-complete in the general case. Motivated by the hardness of the general problem, they proposed approximation algorithms and reduced the ToR-D problem to a 2SAT formula by mapping any two adjacent edges in all input AS-level routing paths into a clause with two literals, while adding heuristics-based inference.
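To convey the flavor of this reduction, the following is an illustrative sketch only: it encodes just the valley-free constraint over customer-provider edges, whereas the construction in @cite_16 also treats peering and sibling edges and is combined with heuristics; the variable convention below is ours.

```python
# Hedged sketch of a 2SAT encoding of the valley-free constraint:
# one Boolean variable per undirected AS edge, read as
# "the smaller AS number is a customer of the larger one".
def tor_to_2sat(as_paths):
    def var(a, b):                       # canonical variable for edge {a, b}
        return (min(a, b), max(a, b))

    def uphill(a, b, positive=True):
        # literal that is true iff traversing a -> b goes customer -> provider
        sign = (a < b)
        return (var(a, b), positive == sign)

    clauses = []
    for path in as_paths:
        for a, b, c in zip(path, path[1:], path[2:]):
            # forbid "downhill then uphill": uphill(a,b) OR NOT uphill(b,c)
            clauses.append([uphill(a, b, True), uphill(b, c, False)])
    return clauses

# Example: the single observed path 1-2-3-4 yields two 2-literal clauses.
print(tor_to_2sat([[1, 2, 3, 4]]))
```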
{ "cite_N": [ "@cite_16" ], "mid": [ "2130725804" ], "abstract": [ "The problem of computing the types of the relationships between Internet autonomous systems is investigated. We refer to the model introduced in (ref.1), (ref.2) that bases the discovery of such relationships on the analysis of the AS paths extracted from the BGP routing tables. We characterize the time complexity of the above problem, showing both NP-completeness results and efficient algorithms for solving specific cases. Motivated by the hardness of the general problem, we propose heuristics based on a novel paradigm and show their effectiveness against publicly available data sets. The experiments put in evidence that our heuristics performs significantly better than state of the art heuristics." ] }
0711.4562
2951599038
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide a better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95% of the approximately 58,000 AS topology links. The inference becomes stable when using a week's worth of data and as few as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
Dimitropoulos et al. @cite_1 addressed a problem in current ToR algorithms. They showed that although ToR algorithms produce a directed Internet graph with a very small number of invalid paths, the resulting AS relationships are far from reality. This led them to the conclusion that simply trying to maximize the number of valid paths (namely, improving the result of the ToR algorithms) does not produce realistic results. Later in @cite_17 they showed that the ToR formulation has no means to deterministically select the most realistic solution when facing multiple possible solutions. In order to solve this problem, the authors suggested a new objective function by adding a notion of "AS importance", which is the AS degree "gradient" in the original undirected Internet graph. The modified ToR algorithm directs the edges from a low-importance AS to a higher-importance one. The authors showed that although they achieve a high success rate in p2c inference (96.5%) and in s2s inference (90.3%), the success rate for p2p inference is lower (82.8%). They also mention that for some of the surveyed ASes, the BGP tables, which are the source of AS-level routing paths for most works in this research field, miss up to 86.2% of the true adjacencies, most of which are of p2p type.
{ "cite_N": [ "@cite_1", "@cite_17" ], "mid": [ "2125832781", "2120514843" ], "abstract": [ "Recent techniques for inferring business relationships between ASs [1,2] have yielded maps that have extremely few invalid BGP paths in the terminology of Gao[3]. However, some relationships inferred by these newer algorithms are incorrect, leading to the deduction of unrealistic AS hierarchies. We investigate this problem and discover what causes it. Having obtained such insight, we generalize the problem of AS relationship inference as a multiobjective optimization problem with node-degree-based corrections to the original objective function of minimizing the number of invalid paths. We solve the generalized version of the problem using the semidefinite programming relaxation of the MAX2SAT problem. Keeping the number of invalid paths small, we obtain a more veracious solution than that yielded by recent heuristics.", "Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5 customer to provider (c2p), 82.8 peer to peer (p2p), and 90.3 sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2 of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality (Min-ATSP). This algorithm achieves a ratio of log n + eps.
Ehrgott @cite_28 considered a variant of multi-criteria TSP in which all objectives are encoded into a single objective by using some norm. He proved approximation ratios between @math and @math for this problem, where the ratio depends on the norm used.
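Concretely, such a norm-based scalarization evaluates a tour T with objective values w_1(T), ..., w_k(T) by a single number, for instance as below (the exact family of norms considered in @cite_28 may be broader than this illustrative l_p form):

```latex
% Illustrative l_p scalarization of k tour objectives.
\[
w_p(T) = \bigl\| \bigl( w_1(T), \dots, w_k(T) \bigr) \bigr\|_p
       = \Bigl( \sum_{i=1}^{k} w_i(T)^{\,p} \Bigr)^{1/p} ,
\qquad 1 \le p \le \infty .
\]
```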
{ "cite_N": [ "@cite_28" ], "mid": [ "1963770385" ], "abstract": [ "Abstract The computational complexity of combinatorial multiple objective programming problems is investigated. NP -completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well-known tree and Christofides’ heuristics for the traveling salesman problem is investigated in the multicriteria case with respect to the two definitions of approximability." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality (Min-ATSP). This algorithm achieves a ratio of log n + eps.
Manthey and Ram @cite_6 designed a @math approximation algorithm for and an approximation algorithm for , which achieves a constant ratio but works only for @math . They left open the existence of approximation algorithms for , , and .
{ "cite_N": [ "@cite_6" ], "mid": [ "1902458947" ], "abstract": [ "We analyze approximation algorithms for several variants of the traveling salesman problem with multiple objective functions. First, we consider the symmetric TSP (STSP) with γ-triangle inequality. For this problem, we present a deterministic polynomial-time algorithm that achieves an approximation ratio of @math and a randomized approximation algorithm that achieves a ratio of @math . In particular, we obtain a 2+e approximation for multi-criteria metric STSP. Then we show that multi-criteria cycle cover problems admit fully polynomial-time randomized approximation schemes. Based on these schemes, we present randomized approximation algorithms for STSP with γ-triangle inequality (ratio @math ), asymmetric TSP (ATSP) with γ-triangle inequality (ratio @math ), STSP with weights one and two (ratio 4 3) and ATSP with weights one and two (ratio 3 2)." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality (Min-ATSP). This algorithm achieves a ratio of log n + eps.
Bläser et al. @cite_19 devised the first randomized approximation algorithms for and . Their algorithms achieve ratios of @math for k and @math for k. They argue that with their approach, only approximation ratios of @math can be achieved. Nevertheless, they conjectured that approximation ratios of @math are possible.
{ "cite_N": [ "@cite_19" ], "mid": [ "2952926320" ], "abstract": [ "We present randomized approximation algorithms for multi-criteria Max-TSP. For Max-STSP with k > 1 objective functions, we obtain an approximation ratio of @math for arbitrarily small @math . For Max-ATSP with k objective functions, we obtain an approximation ratio of @math ." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality (Min-ATSP). This algorithm achieves a ratio of log n + eps.
For an overview of the literature about multi-criteria optimization, including multi-criteria TSP, we refer to Ehrgott and Gandibleux @cite_22 .
{ "cite_N": [ "@cite_22" ], "mid": [ "2083743691" ], "abstract": [ "This paper provides a survey of the research in and an annotated bibliography of multiple objective combinatorial optimization, MOCO. We present a general formulation of MOCO problems, describe the main characteristics of MOCO problems, and review the main properties and theoretical results for these problems. The main parts of the paper are a section on the review of the available solution methodology, both exact and heuristic, and a section on the annotation of the existing literature in the field organized problem by problem. We conclude the paper by stating open questions and areas of future research." ] }
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type of iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
As relative deviation is essentially equivalent to absolute deviation on a logarithmically scaled objective space, this choice should not affect the convergence results obtained; which notion is preferable rather depends on the actual application problem at hand. A nice property of relative deviation is that it makes it possible to prove that, under very mild assumptions, there is always an @math -Pareto set whose size is polynomial in the input length @cite_5 @cite_11 . Further approximation results for particular combinatorial multiobjective optimization problems are given in @cite_10 , where the question was how well a single solution can approximate the whole Pareto set, which is a special case of our question restricted to @math and with a focus on deterministic algorithms.
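For concreteness, the relative-deviation notion alluded to here is commonly formalized via multiplicative epsilon-dominance as below (minimization of all m objectives and strictly positive objective values are assumed; the paper's exact conventions may differ):

```latex
% Illustrative definition of multiplicative epsilon-dominance and
% epsilon-Pareto sets (minimization of m objectives assumed).
\[
a \preceq_{\varepsilon} x
  \;:\Longleftrightarrow\;
  f_i(a) \le (1+\varepsilon)\, f_i(x) \quad \text{for all } i = 1, \dots, m ,
\qquad
A \subseteq X \text{ is an } \varepsilon\text{-Pareto set}
  \;:\Longleftrightarrow\;
  \forall x \in X \;\exists\, a \in A : \; a \preceq_{\varepsilon} x .
\]
```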
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "", "1963770385", "2060645964" ], "abstract": [ "", "Abstract The computational complexity of combinatorial multiple objective programming problems is investigated. NP -completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well-known tree and Christofides’ heuristics for the traveling salesman problem is investigated in the multicriteria case with respect to the two definitions of approximability.", "For multiobjective optimization problems, it is meaningful to compute a set of solutions covering all possible trade-offs between the different objectives. The multiobjective knapsack problem is a generalization of the classical knapsack problem in which each item has several profit values. For this problem, efficient algorithms for computing a provably good approximation to the set of all nondominated feasible solutions, the Pareto frontier, are studied.For the multiobjective one-dimensional knapsack problem, a practical fully polynomial-time approximation scheme (FPTAS) is derived. It is based on a new approach to the single-objective knapsack problem using a partition of the profit space into intervals of exponentially increasing length. For the multiobjectivem-dimensional knapsack problem, the first known polynomial-time approximation scheme (PTAS), based on linear programming, is presented." ] }
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type of iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
Despite the existence of suitable approximation concepts, investigations of the convergence of particular algorithms towards such approximation sets, that is, their ability to obtain a suitable Pareto set approximation in the limit, have remained rare. In @cite_15 @cite_1 the stochastic search procedure proposed earlier in @cite_7 was analyzed and proved to converge to an @math -Pareto set with @math in the case of a finite search space. Obviously, the solution set maintained by this algorithm might in the worst case grow as large as the Pareto set @math itself. Thus, a different version with a bounded memory of at most @math elements was proposed and shown to converge to some subset of @math of size at most @math , but no guarantee about the approximation quality could be given. Similar results were obtained in @cite_8 for continuous search spaces.
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2168196445", "2150881546", "70304834", "1980272771" ], "abstract": [ "The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding Pareto-optimal points of a multicriteria optimization problem. It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function and that the selection operator obeys the so-called ‘elite preservation strategy.’", "We present four abstract evolutionary algorithms for multi-objective optimization and theoretical results that characterize their convergence behavior. Thanks to these results it is easy to verify whether or not a particular instantiation of these abstract evolutionary algorithms offers the desired limit behavior. Several examples are given.", "", "Abstract We consider the usage of evolutionary algorithms for multiobjective programming (MOP), i.e. for decision problems with alternatives taken from a real-valued vector space and evaluated according to a vector-valued objective function. Selection mechanisms, possibilities of temporary fitness deterioration, and problems of unreachable alternatives for such multiobjective evolutionary algorithms (MOEAs) are studied. Theoretical properties of MOEAs such as stochastic convergence with probability 1 are analyzed." ] }
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type of iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
One option to control the approximation quality under size restrictions is to define a quality indicator which maps each possible solution set to a real value that can then be used to decide on the inclusion of a new search point. Several algorithms have been proposed that implement this concept @cite_20 @cite_16 . If such a quality indicator fulfils certain monotonicity conditions, it can be used as a potential function in the convergence analysis. As shown in @cite_9 @cite_17 , this entails convergence to a subset of the Pareto set as a local optimum of the quality indicator, but it remained open how such a local optimum relates to a guarantee on the approximation quality @math . @cite_17 also analyzed an adaptive grid archiving method proposed in @cite_19 and proved that after finite time, even though the solution set itself might permanently oscillate, it will always represent an @math -approximation whose approximation quality depends on the granularity of the adaptive grid and on the number of allowed solutions. The results depend on the additional assumption that the grid boundaries converge after finite time, which is fulfilled in certain special cases.
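To illustrate the grid-based mechanism discussed above, the following is a simplified sketch of epsilon-dominance archiving on a single logarithmic grid; unlike the adaptive grid of @cite_19 or the bounded-size multilevel grid analyzed in this paper, it does not enforce a prescribed maximum archive size k. The function names and the assumption of strictly positive objective values to be minimized are ours.

```python
# Hedged sketch of single-grid epsilon-dominance archiving (unbounded size).
import math

def box(f, eps):
    """Box index of a strictly positive objective vector on a (1+eps)-grid."""
    return tuple(math.floor(math.log(v) / math.log(1.0 + eps)) for v in f)

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, f_new, eps):
    """Keep at most one representative per non-dominated grid box."""
    b_new = box(f_new, eps)
    for f in archive:
        b = box(f, eps)
        if dominates(b, b_new) or (b == b_new and not dominates(f_new, f)):
            return archive            # reject: f_new is already (box-)covered
    return [f for f in archive
            if box(f, eps) != b_new and not dominates(b_new, box(f, eps))] + [f_new]
```

The intended invariant, as in the epsilon-Pareto archives discussed above, is that every point seen so far is epsilon-dominated by some archived point.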
{ "cite_N": [ "@cite_9", "@cite_19", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "90390084", "2053900989", "2038420231", "1588375755", "2165626989" ], "abstract": [ "", "We introduce a simple evolution scheme for multiobjective optimization problems, called the Pareto Archived Evolution Strategy (PAES). We argue that PAES may represent the simplest possible nontrivial algorithm capable of generating diverse solutions in the Pareto optimal set. The algorithm, in its simplest form, is a (1 + 1) evolution strategy employing local search but using a reference archive of previously found solutions in order to identify the approximate dominance ranking of the current and candidate solution vectors. (1 + 1)-PAES is intended to be a baseline approach against which more involved methods may be compared. It may also serve well in some real-world applications when local search seems superior to or competitive with population-based methods. We introduce (1 + λ) and (μ | λ) variants of PAES as extensions to the basic algorithm. Six variants of PAES are compared to variants of the Niched Pareto Genetic Algorithm and the Nondominated Sorting Genetic Algorithm over a diverse suite of six test functions. Results are analyzed and presented using techniques that reduce the attainment surfaces generated from several optimization runs into a set of univariate distributions. This allows standard statistical analysis to be carried out for comparative purposes. Our results provide strong evidence that PAES performs consistently well on a range of multiobjective optimization tasks.", "Abstract The hypervolume measure (or S metric) is a frequently applied quality measure for comparing the results of evolutionary multiobjective optimisation algorithms (EMOA). The new idea is to aim explicitly for the maximisation of the dominated hypervolume within the optimisation process. A steady-state EMOA is proposed that features a selection operator based on the hypervolume measure combined with the concept of non-dominated sorting. The algorithm’s population evolves to a well-distributed set of solutions, thereby focussing on interesting regions of the Pareto front. The performance of the devised S metric selection EMOA ( SMS-EMOA ) is compared to state-of-the-art methods on two- and three-objective benchmark suites as well as on aeronautical real-world applications.", "This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures.", "Search algorithms for Pareto optimization are designed to obtain multiple solutions, each offering a different trade-off of the problem objectives. To make the different solutions available at the end of an algorithm run, procedures are needed for storing them, one by one, as they are found. 
In a simple case, this may be achieved by placing each point that is found into an \"archive\" which maintains only nondominated points and discards all others. However, even a set of mutually nondominated points is potentially very large, necessitating a bound on the archive's capacity. But with such a bound in place, it is no longer obvious which points should be maintained and which discarded; we would like the archive to maintain a representative and well-distributed subset of the points generated by the search algorithm, and also that this set converges. To achieve these objectives, we propose an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, which has various useful properties as follows. It maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and we are able to prove a form of convergence. The method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions but not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by \"good spread of points\" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
A wrapper is a tool that extracts information (entities or values) from a document, or a set of documents, with the purpose of reusing that information in another system. A lot of research has been carried out in this field by the database community, mostly in relation to querying heterogeneous databases @cite_24 @cite_21 @cite_15 @cite_5 . More recently, wrappers have also been built to extract information from web pages with different applications in mind, such as product comparison, reuse of information in virtual documents, or building experimental data sets. Most web wrappers are either based on scripting languages @cite_24 @cite_21 , which are very close to current XML query languages, or on wrapper induction @cite_15 @cite_5 , which learns rules for extracting information.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_15", "@cite_21" ], "mid": [ "1583156127", "", "2020998101", "1602270052" ], "abstract": [ "The importance of reuse is well recognised for electronic document writing. However, it is rarely achieved satisfactorily because of the complexity of the task: integrating different formats, handling updates of information, addressing document author’s need for intuitiveness and simplicity, etc. In this paper, we present a language for information reuse that allows users to write virtual documents, where dynamic information objects can be retrieved from various sources, transformed, and included along with static information in SGML documents. The language uses a tree-like structure for the representation of information objects, and allows querying without a complete knowledge of the structure or the types of information. The data structures and the syntax of the language are presented through an example application. A major strength of our approach is to treat the document as a non-monolithic set of reusable information objects.", "", "This paper describes a tool, called Nodose, we have developed to expedite the creation of robust wrappers. Nodose allows non-programmers to build components that can convert data from the source format to XML or another generic format. Further, the generated code performs a set of statistical checks at runtime that attempt to find extraction errors before they are propogated back to users.", "The Web has become a major conduit to information repositories of all kinds. Today, more than 80 of information published on the Web is generated by underlying databases (however access is granted through a Web gateway using forms as a query language and HTML as a display vehicle) and this proportion keeps increasing. But Web data sources also consist of standalone HTML pages hand-coded by individuals, that provide very useful information such as reviews, digests, links, etc. As for the information that also exists in underlying databases, the HTML interface is often the only one available for many would-be clients." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
To prevent wrappers from breaking over time without notice when pages change, @cite_9 propose using machine learning for wrapper verification and re-induction. Rather than repairing a wrapper when the underlying web data changes, Callan and Mitamura @cite_7 propose generating the wrapper dynamically, that is, at the time of wrapping, using data previously extracted and stored in a database. The extraction rules are based on heuristics around a few pre-defined lexico-syntactic HTML patterns such as lists, tables, and links. The patterns are weighted according to the number of examples they recognise; the best patterns are used to dynamically extract new data.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2115770258", "1968067123" ], "abstract": [ "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.", "The usual approach to named-entity detection is to learn extraction rules that rely on linguistic, syntactic, or document format patterns that are consistent across a set of documents. However, when there is no consistency among documents, it may be more effective to learn document-specific extraction rules.This paper presents a knowledge-based approach to learning rules for named-entity extraction. Document-specific extraction rules are created using a generate-and-test paradigm and a database of known named-entities. Experimental results show that this approach is effective on Web documents that are difficult for the usual methods." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
Other approaches to entity extraction are based on the use of external resources, such as an ontology or a dictionary. @cite_18 use a populated ontology for entity extraction, while Cohen and Sarawagi @cite_19 exploit a dictionary for named entity extraction. @cite_2 use an ontology for automatic semantic annotation of web pages. Their system first identifies the syntactic structure that characterises an entity in a page, and then uses subsumption to identify the most specific concept to be associated with this entity.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_2" ], "mid": [ "2048468185", "14559458", "1511347393" ], "abstract": [ "We consider the problem of improving named entity recognition (NER) systems by using external dictionaries---more specifically, the problem of extending state-of-the-art NER systems by incorporating information about the similarity of extracted entities to entities in an external dictionary. This is difficult because most high-performance named entity recognition systems operate by sequentially classifying words as to whether or not they participate in an entity name; however, the most useful similarity measures score entire candidate names. To correct this mismatch we formalize a semi-Markov extraction process, which is based on sequentially classifying segments of several adjacent words, rather than single words. In addition to allowing a natural way of coupling high-performance NER methods and high-performance similarity functions, this formalism also allows the direct use of other useful entity-level features, and provides a more natural formulation of the NER problem than sequential word classification. Experiments in multiple domains show that the new model can substantially improve extraction performance over previous methods for using external dictionaries in NER.", "The approach towards Semantic Web Information Extraction (IE) presented here is implemented in KIM – a platform for semantic indexing, annotation, and retrieval. It combines IE based on the mature text engineering platform (GATE1) with Semantic Web-compliant knowledge representation and management. The cornerstone is automatic generation of named-entity (NE) annotations with class and instance references to a semantic repository. Simplistic upper-level ontology, providing detailed coverage of the most popular entity types (Person, Organization, Location, etc.; more than 250 classes) is designed and used. A knowledge base (KB) with de-facto exhaustive coverage of real-world entities of general importance is maintained, used, and constantly enriched. Extensions of the ontology and KB take care of handling all the lexical resources used for IE, most notable, instead of gazetteer lists, aliases of specific entities are kept together with them in the KB. A Semantic Gazetteer uses the KB to generate lookup annotations. Ontologyaware pattern-matching grammars allow precise class information to be handled via rules at the optimal level of generality. The grammars are used to recognize NE, with class and instance information referring to the KIM ontology and KB. Recognition of identity relations between the entities is used to unify their references to the KB. Based on the recognized NE, template relation construction is performed via grammar rules. As a result of the latter, the KB is being enriched with the recognized relations between entities. At the final phase of the IE process, previously unknown aliases and entities are being added to the KB with their specific types.", "Cet article presente un systeme automatique d'annotation semantique de pages web. Les systemes d'annotation automatique existants sont essentiellement syntaxiques, meme lorsque les travaux visent a produire une annotation semantique. 
La prise en compte d'informations semantiques sur le domaine pour l'annotation d'un element dans une page web a partir d'une ontologie suppose d'aborder conjointement deux problemes : (1) l'identification de la structure syntaxique caracterisant cet element dans la page web et (2) l'identification du concept le plus specifique (en termes de subsumption) dans l'ontologie dont l'instance sera utilisee pour annoter cet element. Notre demarche repose sur la mise en oeuvre d'une technique d'apprentissage issue initialement des wrappers que nous avons articulee avec des raisonnements exploitant la structure formelle de l'ontologie." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
@cite_0 use a "populated ontology" to assist in the disambiguation of entities, such as names of authors, using their published papers or domain of interest. They use text proximity between entities to disambiguate names (e.g. an organisation name would appear close to the author's name). They also use text co-occurrence, for example for topics relevant to an author. Their algorithm is thus tuned for their particular ontology, while our algorithm is based more on the categories and the structural properties of Wikipedia.
{ "cite_N": [ "@cite_0" ], "mid": [ "1482174963" ], "abstract": [ "Precisely identifying entities in web documents is essential for document indexing, web search and data integration. Entity disambiguation is the challenge of determining the correct entity out of various candidate entities. Our novel method utilizes background knowledge in the form of a populated ontology. Additionally, it does not rely on the existence of any structure in a document or the appearance of data items that can provide strong evidence, such as email addresses, for disambiguating person names. Originality of our method is demonstrated in the way it uses different relationships in a document as well as from the ontology to provide clues in determining the correct entity. We demonstrate the applicability of our method by disambiguating names of researchers appearing in a collection of DBWorld posts using a large scale, real-world ontology extracted from the DBLP bibliography website. The precision and recall measurements provide encouraging results." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
Cucerzan @cite_12 uses Wikipedia data for named entity disambiguation. He first pre-processed a version of the Wikipedia collection (September 2006), and extracted more than 1.4 million entities with an average of 2.4 surface forms per entity. He also extracted more than one million (entity, category) pairs that were further filtered down to 540 thousand pairs. Lexico-syntactic patterns, such as titles, links, paragraphs and lists, are used to build co-references of entities in limited contexts. The knowledge extracted from Wikipedia is then used for improving entity disambiguation in the context of web and news search.
{ "cite_N": [ "@cite_12" ], "mid": [ "86887328" ], "abstract": [ "This paper presents a large-scale system for the recognition and semantic disambiguation of named entities based on information extracted from a large encyclopedic collection and Web search results. It describes in detail the disambiguation paradigm employed and the information extraction process from Wikipedia. Through a process of maximizing the agreement between the contextual information extracted from Wikipedia and the context of a document, as well as the agreement among the category tags associated with the candidate entities, the implemented system shows high disambiguation accuracy on both news stories and Wikipedia articles." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability to extract named entities from plain text using natural language processing techniques and intensive training on large document collections. Examples of named entities include organisations, people, locations, and dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
PageRank, an algorithm proposed by Brin and Page @cite_3 , is a link analysis algorithm that assigns a numerical weight to each page of a hyperlinked set of web pages. The idea behind PageRank is that a web page is a good page if it is popular, that is, if many other (preferably also popular) web pages refer to it.
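A compact power-iteration sketch of the PageRank idea follows; the damping factor of 0.85 and the toy link structure are illustrative assumptions, not part of the cited paper.

```python
# Power-iteration sketch of PageRank on a small directed graph.
# Damping factor and toy web graph are illustrative.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new_rank[q] += damping * rank[p] / len(outs)
        rank = new_rank
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))  # C attracts the most links, hence the highest rank
```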
{ "cite_N": [ "@cite_3" ], "mid": [ "2066636486" ], "abstract": [ "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http: google.stanford.edu . To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want." ] }
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
In section , we show that the two sided Bregman centroids @math and @math with respect to the Bregman divergence @math are unique and easily obtained as generalized means for the identity and @math functions, respectively. We extend Sibson's notion of information radius @cite_28 to these sided centroids, and show that they are both equal to the @math -Jensen difference, a generalized Jensen-Shannon divergence @cite_7 also known as a Burbea-Rao divergence @cite_19 .
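As a small numerical illustration of the "generalized means" statement above, here is a sketch of the two sided centroids for the discrete Kullback-Leibler divergence: one sided centroid is the plain arithmetic mean, the other is the generalized mean taken in gradient space, which for KL on the simplex amounts to a normalized geometric mean. The left/right naming depends on the divergence convention, so the labels below are assumptions, not the cited paper's notation.

```python
# Sketch: sided centroids of discrete distributions under the KL divergence.

import numpy as np

def arithmetic_centroid(points):
    return np.mean(points, axis=0)                # one sided centroid

def geometric_centroid(points):
    g = np.exp(np.mean(np.log(points), axis=0))   # coordinate-wise geometric mean
    return g / g.sum()                            # renormalize onto the simplex

P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8]])
print("arithmetic mean centroid:        ", arithmetic_centroid(P))
print("normalized geometric mean centroid:", geometric_centroid(P))
```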
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_7" ], "mid": [ "", "2070134780", "2146950091" ], "abstract": [ "", "Three measures of divergence between vectors in a convex set of a n -dimensional real vector space are defined in terms of certain types of entropy functions, and their convexity property is studied. Among other results, a classification of the entropies of degree is obtained by the convexity of these measures. These results have applications in information theory and biological studies.", "A novel class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, their close relationship with the variational distance and the probability of misclassification error are established in terms of bounds. These bounds are crucial in many applications of divergence measures. The measures are also well characterized by the properties of nonnegativity, finiteness, semiboundedness, and boundedness. >" ] }
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
The symmetrized Kullback-Leibler divergence ( @math -divergence) and the symmetrized Itakura-Saito divergence (COSH distance) are often used in sound and image applications, where our fast geodesic dichotomic walk algorithm, which converges to the unique symmetrized Bregman centroid, comes in handy compared with former complex ad hoc methods @cite_9 @cite_15 @cite_27 @cite_18 @cite_29 . We consider applications of the generic geodesic-walk algorithm to two cases: The symmetrized Kullback-Leibler divergence for probability mass functions represented as @math -dimensional points lying in the @math -dimensional simplex @math . These discrete distributions are handled as multinomials of the exponential families @cite_23 with @math degrees of freedom. We instantiate the generic geodesic-walk algorithm for that setting, show how it compares favorably with the prior convex optimization work of Veldhuis @cite_14 @cite_18 , and formally validate experimental remarks made by Veldhuis.
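A sketch of the "walk along the path linking the two sided centroids" idea is given below, assuming (as the uniqueness result suggests) that the symmetrized objective is well behaved along that path. The log-space interpolation and the ternary search used here are illustrative stand-ins for the paper's exact dichotomic criterion, not its actual algorithm.

```python
# Sketch: approximate the symmetrized KL centroid of discrete distributions
# by a 1-D search along a path linking the two sided centroids.

import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def skl_cost(c, points):
    return sum(0.5 * (kl(p, c) + kl(c, p)) for p in points)

def symmetrized_centroid(points, iters=60):
    right = np.mean(points, axis=0)                  # arithmetic mean
    left = np.exp(np.mean(np.log(points), axis=0))
    left /= left.sum()                               # normalized geometric mean

    def on_path(lam):                                # log-space interpolation
        c = left ** (1.0 - lam) * right ** lam
        return c / c.sum()

    lo, hi = 0.0, 1.0
    for _ in range(iters):                           # ternary search on [0, 1]
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if skl_cost(on_path(m1), points) < skl_cost(on_path(m2), points):
            hi = m2
        else:
            lo = m1
    return on_path(0.5 * (lo + hi))

P = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]])
print(symmetrized_centroid(P))
```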
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_9", "@cite_29", "@cite_27", "@cite_23", "@cite_15" ], "mid": [ "2115685217", "2132512523", "143794185", "2149175990", "1887401733", "2116327295", "2133672829" ], "abstract": [ "This paper investigates the use of features based on posterior probabilities of subword units such as phonemes. These features are typically transformed when used as inputs for a hidden Markov model with mixture of Gaussians as emission distribution (HMM GMM). In this work, we introduce a novel acoustic model that avoids the Gaussian assumption and directly uses posterior features without any transformation. This model is described by a finite state machine where each state is characterized by a target distribution and the cost function associated to each state is given by the Kullback-Leibler (KL) divergence between its target distribution and the posterior features. Furthermore, hybrid HMM ANN system can be seen as a particular case of this KL-based model where state target distributions are predefined. A recursive training algorithm to estimate the state target distributions is also presented.", "This paper discusses the computation of the centroid induced by the symmetrical Kullback-Leibler distance. It is shown that it is the unique zeroing argument of a function which only depends on the arithmetic and the normalized geometric mean of the cluster. An efficient algorithm for its computation is presented. Speech spectra are used as an example.", "A locking cone chassis and several use methods to eliminate double and triple handling of shipping freight containers. The chassis device comprises two parallel I-beam main rails, a plurality of transverse ribs, a forward cone receiver formed from steel square tubing affixed to a forward end of the main rails, and a similar rear cone receiver affixed to the rear end of the main rails, the only essential difference between the receivers being that the forward cone receiver has an upper flange to guide and to stop a freight container. Each receiver has two box-shaped end portions, which operate like the corner pockets of a freight container-that is, each box-shaped end portion receives a locking cone through an upper surface cone receiving aperture. Each end portion also has access apertures for manually unlocking a cone.", "Maximum a posteriori (MAP) estimation has been successfully applied to speaker adaptation in speech recognition systems using hidden Markov models. When the amount of data is sufficiently large, MAP estimation yields recognition performance as good as that obtained using maximum-likelihood (ML) estimation. This paper describes a structural maximum a posteriori (SMAP) approach to improve the MAP estimates obtained when the amount of adaptation data is small. A hierarchical structure in the model parameter space is assumed and the probability density functions for model parameters at one level are used as priors for those of the parameters at adjacent levels. Results of supervised adaptation experiments using nonnative speakers' utterances showed that SMAP estimation reduced error rates by 61 when ten utterances were used for adaptation and that it yielded the same accuracy as MAP and ML estimation when the amount of data was sufficiently large. Furthermore, the recognition results obtained in unsupervised adaptation experiments showed that SMAP estimation was effective even when only one utterance from a new speaker was used for adaptation. 
An effective way to combine rapid supervised adaptation and on-line unsupervised adaptation was also investigated.", "Concatenative speech synthesis systems attempt to minimize audible signal discontinuities between two successive concatenated units. An objective distance measure which is able to predict audible discontinuities is therefore very important, particularly in unit selection synthesis, for which units are selected from among a large inventory at run time. In this paper, we describe a perceptual test to measure the detection rate of concatenation discontinuity by humans, and then we evaluate 13 different objective distance measures based on their ability to predict the human results. Criteria used to classify these distances include the detection rate, the Bhattacharyya measure of separability of two distributions, and receiver operating characteristic (ROC) curves. Results show that the Kullback-Leibler distance on power spectra has the higher detection rate followed by the Euclidean distance on Mel-frequency cepstral coefficients (MFCC).", "The Voronoi diagram of a finite set of objects is a fundamental geometric structure that subdivides the embedding space into regions, each region consisting of the points that are closer to a given object than to the others. We may define many variants of Voronoi diagrams depending on the class of objects, the distance functions and the embedding space. In this paper, we investigate a framework for defining and building Voronoi diagrams for a broad class of distance functions called Bregman divergences. Bregman divergences include not only the traditional (squared) Euclidean distance but also various divergence measures based on entropic functions. Accordingly, Bregman Voronoi diagrams allow to define information-theoretic Voronoi diagrams in statistical parametric spaces based on the relative entropy of distributions. We define several types of Bregman diagrams, establish correspondences between those diagrams (using the Legendre transformation), and show how to compute them efficiently. We also introduce extensions of these diagrams, e.g. k-order and k-bag Bregman Voronoi diagrams, and introduce Bregman triangulations of a set of points and their connexion with Bregman Voronoi diagrams. We show that these triangulations capture many of the properties of the celebrated Delaunay triangulation. Finally, we give some applications of Bregman Voronoi diagrams which are of interest in the context of computational geometry and machine learning.", "The directed divergence, which is a measure based on the discrimination information between two signal classes, is investigated. A simplified expression for computing the directed divergence is derived for comparing two Gaussian autoregressive processes such as those found in speech. This expression alleviates both the computational cost (reduced by two thirds) and the numerical problems encountered in computing the directed divergence. In addition, the simplified expression is compared with the Itakura-Saito distance (which asymptotically approaches the directed divergence). Although the expressions for these two distances closely resemble each other, only moderate correlations between the two were found on a set of actual speech data. >" ] }
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
The symmetrized Kullback-Leibler divergence of multivariate normal distributions. We describe the geodesic walk for this particular mixed-type exponential family of multivariate normals, and explain the Legendre mixed-type vector/matrix dual convex conjugates defining the corresponding Bregman divergences. This yields a simple, fast and elegant geometric method compared to the former, overly complex method of Myrvoll and Soong @cite_9 , which relies on solving Riccati matrix equations.
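For reference, the closed-form KL divergence between two multivariate normals, and its symmetrization (the quantity whose centroid is discussed above), can be sketched as follows; this is standard textbook material, independent of the cited centroid algorithm, and the example parameters are arbitrary.

```python
# Closed-form KL divergence between multivariate normals and its symmetrization.

import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def symmetrized_kl_gauss(mu0, cov0, mu1, cov1):
    return 0.5 * (kl_gauss(mu0, cov0, mu1, cov1) + kl_gauss(mu1, cov1, mu0, cov0))

mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
print(symmetrized_kl_gauss(mu_a, cov_a, mu_b, cov_b))
```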
{ "cite_N": [ "@cite_9" ], "mid": [ "143794185" ], "abstract": [ "A locking cone chassis and several use methods to eliminate double and triple handling of shipping freight containers. The chassis device comprises two parallel I-beam main rails, a plurality of transverse ribs, a forward cone receiver formed from steel square tubing affixed to a forward end of the main rails, and a similar rear cone receiver affixed to the rear end of the main rails, the only essential difference between the receivers being that the forward cone receiver has an upper flange to guide and to stop a freight container. Each receiver has two box-shaped end portions, which operate like the corner pockets of a freight container-that is, each box-shaped end portion receives a locking cone through an upper surface cone receiving aperture. Each end portion also has access apertures for manually unlocking a cone." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Competitive algorithms for advanced reservation networks are the focus of @cite_14 . This work discusses the lazy ftp problem, where reservations are made for channels for the transfer of different files. The algorithm presented there is 4-competitive with respect to the makespan (the total completion time). However, @cite_14 focuses on the case of fixed routes. When routing is also to be considered, the time complexity of the algorithm presented there may be exponential in the network size.
{ "cite_N": [ "@cite_14" ], "mid": [ "2056870183" ], "abstract": [ "In this paper we consider the online ftp problem. The goal is to service a sequence of file transfer requests given bandwidth constraints of the underlying communication network. The main result of the paper is a technique that leads to algorithms that optimize several natural metrics, such as max-stretch, total flow time, max flow time, and total completion time. In particular, we show how to achieve optimum total flow time and optimum max-stretch if we increase the capacity of the underlying network by a logarithmic factor. We show that the resource augmentation is necessary by proving polynomial lower bounds on the max-stretch and total flow time for the case where online and offline algorithms are using same-capacity edges. Moreover, we also give poly-logarithmic lower bounds on the resource augmentation factor necessary in order to keep the total flow time and max-stretch within a constant factor of optimum." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Another recent work @cite_28 , focusing on routing in packet-switched networks in an adversarial setting, discusses choosing routes for fixed-size packets injected by an adversary. It enforces regularity limitations on the adversary that are stronger than the ones required here, and achieves the network capacity with a guarantee on the maximum queue size. It does not discuss the case of advance reservation with different job sizes or bandwidth requirements. It is based upon approximating an integer program, which may not be extensible to the case where path reservation, rather than packet-based routing, is involved.
{ "cite_N": [ "@cite_28" ], "mid": [ "2133049312" ], "abstract": [ "We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms.When the paths are known (either given by the adversary or computed as above), our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this article, we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet.Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Most works on competitive approaches to routing have focused mainly on call admission, without the ability to make advance reservations. For some results in this field see, e.g., @cite_20 @cite_3 . Some results involving advance reservation are presented in @cite_6 . However, the path selection there is based on several alternatives supplied by the user in the request, rather than on an automated mechanism attempting to optimize performance, as discussed here. In @cite_27 a combination of call admission and circuit switching is used to obtain a routing scheme with a logarithmic competitive ratio on the total revenue received. A competitive routing scheme, in terms of the number of failed routes in the setting of ad-hoc networks, is presented in @cite_15 . A survey of on-line routing results is presented in @cite_22 . A competitive algorithm for admission and routing in a multicasting setting is presented in @cite_18 . Most of the other existing work in this area consists of heuristic approaches whose main emphasis is on algorithm correctness and computational complexity, without throughput guarantees.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_3", "@cite_6", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "2114641109", "2173141774", "2008126942", "2546896278", "2096909875", "2118792918", "2099684070" ], "abstract": [ "We present the first polylog-competitive online algorithm for the general multicast admission control and routing problem in the throughput model. The ratio of the number of requests accepted by the optimum offline algorithm to the expected number of requests accepted by our algorithm is O((logn + loglogM)(logn + logM)logn), where M is the number of multicast groups and n is the number of nodes in the graph. We show that this is close to optimum by presenting an Ω(lognlogM) lower bound on this ratio for any randomized online algorithm against an oblivious adversary, when M is much larger than the link capacities. Our lower bound applies even in the restricted case where the link capacities are much larger than bandwidth requested by a single multicast. We also present a simple proof showing that it is impossible to be competitive against an adaptive online adversary.As in the previous online routing algorithms, our algorithm uses edge-costs when deciding on which is the best path to use. In contrast to the previous competitive algorithms in the throughput model, our cost is not a direct function of the edge load. The new cost definition allows us to decouple the effects of routing and admission decisions of different multicast groups.", "In this chapter we have described competitive on-line algorithms for on-line network routing problems. We have concentrated on routing in electrical and optical networks, presented algorithms for load minimization and throughput maximization problems, and mentioned some of the most popular open problems in the area.", "We study the on-line call admission problem in optical networks. We present a general technique that allows us to reduce the problem of call admission and wavelength selection to the call admission problem. We then give randomized algorithms with logarithmic competitive ratios for specific topologies in switchless and reconfigurable optical networks. We conclude by considering full duplex communications.", "", "Classical routing and admission control strategies achieve provably good performance by relying on an assumption that the virtual circuits arrival pattern can be described by some a priori known probabilistic model. A new on-line routing framework, based on the notion of competitive analysis, was proposed. This framework is geared toward design of strategies that have provably good performance even in the case where there are no statistical assumptions on the arrival pattern and parameters of the virtual circuits. The on-line strategies motivated by this framework are quite different from the min-hop and reservation-based strategies. This paper surveys the on-line routing framework, the proposed routing and admission control strategies, and discusses some of the implementation issues. >", "An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range communicate via intermediary nodes. Routing in ad hoc networks is a challenging problem as a result of highly dynamic topology as well as bandwidth and energy constraints. In addition, security is critical in these networks due to the accessibility of the shared wireless medium and the cooperative nature of ad hoc networks. 
However, none of the existing routing algorithms can withstand a dynamic proactive adversarial attack. The routing protocol presented in this work attempts to provide throughput-competitive route selection against an adaptive adversary. A proof of the convergence time of our algorithm is presented as well as preliminary simulation results.", "The authors examine routing strategies for fast packet switching networks based on flooding and predefined routes. The concern is to get both efficient routing and an even balanced use of network resources. They present efficient algorithms for assigning weights to edges in a controlled flooding scheme but show that the flooding scheme is not likely to yield a balanced use of the resources. Efficient algorithms are presented for choosing routes along breadth-first search trees and shortest paths. It is shown that in both cases a balanced use of network resources can be guaranteed. >" ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
In @cite_1 a rate-achieving scheme for packet switching at the switch level is presented. Their scheme is based on convergence to the optimal multicommodity flow using delayed decisions for queued packets. Their results somewhat resemble our @math algorithm. However, their scheme depends on the existence of an average packet size, whereas our scheme addresses the full routing-scheduling question for any size distribution and any (adversarial) arrival schedule. In @cite_11 a queuing analysis of several optical transport network architectures is conducted, and it is shown that, under some conditions on the arrival process, some of the schemes can achieve the maximum network rate. Like the previous work, this paper does not address the full routing-scheduling question discussed here and does not handle unbounded job sizes. Another difference is that our paper provides an algorithm, @math , discussed below, that guarantees the completion time of a job at the time of arrival, which, as far as we know, is the first such algorithm.
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2125953414", "2164101046" ], "abstract": [ "Input Queued (IQ) switches have been very well studied in the recent past. The main problem in the IQ switches concerns scheduling. The main focus of the research has been the fixed length packet-known as cells-case. The scheduling decision becomes relatively easier for cells compared to the variable length packet case as scheduling needs to be done at a regular interval of fixed cell time. In real traffic dividing the variable packets into cells at the input side of the switch and then reassembling these cells into packets on the output side achieve it. The disadvantages of this cell-based approach are the following: (a) bandwidth is lost as division of a packet may generate incomplete cells, and (b) additional overhead of segmentation and reassembling cells into packets. This motivates the packet scheduling: scheduling is done in units of arriving packet sizes and in nonpreemptive fashion. In M.A. (2001) the problem of packet scheduling was first considered. They show that under any admissible Bernoulli i.i.d. arrival traffic a simple modification of maximum weight matching (MWM) algorithm is stable, similar to cell-based MWM. In this paper, we study the stability properties of packet based scheduling algorithm for general admissible arrival traffic pattern. We first show that the result of extends to general regenerative traffic model instead of just admissible traffic, that is, packet based MWM is stable. Next we show that there exists an admissible traffic pattern under which any work-conserving (that is maximal type) scheduling algorithm will be unstable. This suggests that the packet based MWM will be unstable too. To overcome this difficulty we propose a new class of \"waiting\" algorithms. We show that \"waiting\"-MWM algorithm is stable for any admissible traffic using fluid limit technique.", "We compare three optical transport network architectures—optical packet switching (OPS), optical flow switching (OFS), and optical burst switching (OBS)—based on a notion of network capacity as the set of exogenous traffic rates that can be stably supported by a network under its operational constraints. We characterize the capacity regions of the transport architectures, and show that the capacity region of OPS dominates that of OFS, and that the capacity region of OFS dominates that of OBS. We then apply these results to two important network topologies—bidirectional rings and Moore graphs—under uniform all-to-all traffic. Motivated by the incommensurate complexity cost of comparable transport architectures, we also investigate the dependence of the relative capacity performance of the switching architectures on the number of switch ports per fiber at core nodes." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Many papers have discussed the issue of path dispersion and attempted to achieve good throughput with limited dispersion; a survey of some results in this field is given in @cite_9 . In @cite_23 @cite_21 heuristic methods for controlling multipath routing, together with some quantitative measures, are presented. As far as we know, our work proposes the first formal treatment allowing the approximation of a flow using a limited number of paths to any desired accuracy.
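The claim that a flow can be carried on a number of paths proportional to the number of edges rests on the standard flow-decomposition argument: repeatedly peel off a path carrying the bottleneck amount, which zeroes out at least one edge per iteration. A minimal sketch is given below; the graph encoding, the greedy path choice, and the assumption of an acyclic flow are illustrative simplifications.

```python
# Sketch: decompose an s-t flow into at most |E| paths by peeling off
# bottleneck paths.  Assumes the flow is acyclic (no flow on cycles).

def decompose_flow(flow, source, sink):
    """flow: dict {(u, v): amount} describing a valid s-t flow."""
    flow = {e: f for e, f in flow.items() if f > 1e-12}
    paths = []
    while True:
        path, node, seen = [source], source, {source}
        while node != sink:                     # follow positive-flow edges
            nxt = next((v for (u, v) in flow if u == node and v not in seen), None)
            if nxt is None:
                break
            path.append(nxt)
            seen.add(nxt)
            node = nxt
        if node != sink:
            break
        bottleneck = min(flow[(u, v)] for u, v in zip(path, path[1:]))
        for e in zip(path, path[1:]):           # each peel zeroes >= 1 edge
            flow[e] -= bottleneck
            if flow[e] <= 1e-12:
                del flow[e]
        paths.append((path, bottleneck))
    return paths

f = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 3, ("b", "t"): 2}
print(decompose_flow(f, "s", "t"))  # s-a-t carrying 3, s-b-t carrying 2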
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_23" ], "mid": [ "2063029333", "2151962283", "1481933765" ], "abstract": [ "Aggregation of resources is a means to improve performance and efficiency in statistically shared systems in general, and communication networks in particular. One approach to this is traffic dispersion, which means that the traffic from a source is spread over multiple paths and transmitted in parallel through the network. Traffic dispersion may help in utilizing network resources to their full potential, while providing quality-of-service guarantees. It is a topic gaining interest, and much work has been done in the field. The results are, however, difficult to find, since the technique appears under many different labels. This article is therefore an attempt to gather and report on the work done on traffic dispersion in communication networks. It looks at a specific instance of this general method. The processes are communication processes, and the resource is the link capacity in a packet switched network.", "Traditional multimedia streaming techniques usually assume single-path (unicast) data delivery. But when the aggregate traffic between 2 nodes exceeds the bandwidth capacity of single link path, a feasible solution is to appropriately disperse the aggregate traffic over multiple paths between these 2 nodes. In this paper we propose a set of multi-path streaming models for MPEG video traffic transmission. In addition to the attributes (such as load balancing and security) inherited from conventional data dispersion models, the proposed multimedia dispersion models are designed to achieve high error-free frame rate based on the characteristics of MPEG video structure. Our simulation results show that significant quality improvement can be observed if the proposed streaming models are employed appropriately.", "Internet service provider faces a daunting challenge in provisioning network efficiently. We introduce a proactive multipath routing scheme that tries to route traffic according to its built-in properties. Based on mathematical analysis, our approach disperses incoming traffic flows onto multiple paths according to path qualities. Long-lived flows are detected and migrated to the shortest path if their QoS could be guaranteed there. Suggesting nondisjoint path set, four types of dispersion policies are analyzed, and flow classification policy which relates flow trigger with link state update period is investigated. Simulation experiments show that our approach outperforms traditional single path routing significantly." ] }
0711.1242
1843364394
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
A large body of recent work (initiated mainly by Roughgarden and Tardos @cite_2 @cite_19 ) has studied, from a game-theoretic perspective, how selfishness can degrade the overall performance of a system with multiple (selfish) users. Much of this work has focused on situations where users have access to shared resources, and the cost of using a resource increases as the resource attracts more usage. Our focus here is on the ``parallel links'' network topology, also referred to as scheduling jobs on a set of load-dependent machines, which is one of the most commonly studied models (e.g. @cite_10 @cite_11 @cite_16 @cite_21 @cite_6 @cite_20 ). Papers such as @cite_1 @cite_4 @cite_16 have studied the price of anarchy for these games in the ``unsplittable flow'' setting, where each user may only use a single resource. In contrast, we study the ``splittable flow'' setting of @cite_15 . This version (finitely many players, splittable flow) was shown in @cite_15 @cite_8 to possess unique pure Nash equilibria (see Definition ). @cite_14 study the cost of selfish behaviour in this model, and compare it with the cost of selfish behaviour in the Wardrop model (i.e. infinitely many infinitesimal users).
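A small numerical sketch of the splittable-flow setting on two parallel links with linear latencies follows: each player's best response is a closed-form quadratic minimization, and iterating best responses settles at the pure Nash equilibrium in this simple instance. The latency coefficients, demands, and iteration scheme are illustrative assumptions, not taken from the cited papers.

```python
# Sketch: splittable demands on two parallel links with linear latencies
# l_e(x) = a_e * x + b_e; best-response iteration to the Nash equilibrium.

def best_response(d, other_on_1, other_on_2, a1, b1, a2, b2):
    """Amount this player puts on link 1 (the rest goes on link 2)."""
    x = (2 * a2 * d + a2 * other_on_2 + b2 - a1 * other_on_1 - b1) / (2 * (a1 + a2))
    return min(max(x, 0.0), d)

def nash_by_best_response(demands, a1=1.0, b1=0.0, a2=2.0, b2=1.0, iters=200):
    splits = [dem / 2 for dem in demands]          # start with an even split
    for _ in range(iters):
        for i, dem in enumerate(demands):
            oth1 = sum(s for j, s in enumerate(splits) if j != i)
            oth2 = sum(demands) - dem - oth1
            splits[i] = best_response(dem, oth1, oth2, a1, b1, a2, b2)
    return splits

print(nash_by_best_response([2.0, 1.0]))  # each player's flow on link 1 at equilibrium
```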
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2140489133", "", "", "", "2017227260", "", "", "2112269231", "2113692632", "", "1966132455", "", "" ], "abstract": [ "In this paper we initiate the study of how collusion alters the quality of solutions obtained in competitive games. The price of anarchy aims to measure the cost of the lack of coordination by comparing the quality of a Nash equilibrium to that of a centrally designed optimal solution. This notion assumes that players act not only selfishly, but also independently. We propose a framework for modeling groups of colluding players, in which members of a coalition cooperate so as to selfishly maximize their collective welfare. Clearly, such coalitions can improve the social welfare of the participants, but they can also harm the welfare of those outside the coalition. One might hope that the improvement for the coalition participants outweighs the negative effects on the others. This would imply that increased cooperation can only improved the overall solution quality of stable outcomes. However, increases in coordination can actually lead to significant decreases in total social welfare. In light of this, we propose the price of collusion as a measure of the possible negative effect of collusion, specifying the factor by which solution quality can deteriorate in the presence of coalitions. We give examples to show that the price of collusion can be arbitrarily high even in convex games. Our main results show that in the context of load-balancing games, the price of collusion depends upon the disparity in market power among the game participants. We show that in some symmetric nonatomic games (where all users have access to the same set of strategies) increased cooperation always improves the solution quality, and in the discrete analogs of such games, the price of collusion is bounded by two.", "", "", "", "The essence of the routing problem in real networks is that the traffic demand from a source to destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete version of the problem is more realistic yet is more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable.In this paper, we assume this more realistic unsplittable model, and investigate the \"price of anarchy\", or deterioration of network performance measured in total traffic latency under the selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly @math 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic \"congestion games\", where paths are replaced by general subsets. We also show that for polynomials of degree d edge latency functions the price of anarchy is dδ(d). Our results hold also for mixed strategies.Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4 3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). 
Note that under the assumption of negligible traffic pure and mixed strategies are equivalent and also splittable and unsplittable models are equivalent.", "", "", "We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times---the total latency---is minimized.In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a \"selfishly motivated\" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance.In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4 3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by unregulated selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic.", "The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions. >", "", "We study the problem of traffic routing in noncooperative networks. In such networks, users may follow selfish strategies to optimize their own performance measure and therefore, their behavior does not have to lead to optimal performance of the entire network. In this article we investigate the worst-case coordination ratio, which is a game-theoretic measure aiming to reflect the price of selfish routing. Following a line of previous work, we focus on the most basic networks consisting of parallel links with linear latency functions. Our main result is that the worst-case coordination ratio on m parallel links of possibly different speeds is Θ(log m log log log m). In fact, we are able to give an exact description of the worst-case coordination ratio, depending on the number of links and ratio of speed of the fastest link over the speed of the slowest link. 
For example, for the special case in which all m parallel links have the same speed, we can prove that the worst-case coordination ratio is Γ(−1) (m) p Θ(1), with Γ denoting the Gamma (factorial) function. Our bounds entirely resolve an open problem posed recently by Koutsoupias and Papadimitriou [1999].", "", "" ] }
0711.1242
1843364394
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
Stackelberg leadership refers to a game-theoretic situation where one player (the ``leader'') selects his action first and commits to it. The other player(s) then choose their own actions based on the choice made by the leader. Recent work on Stackelberg scheduling in the context of network flow (e.g. @cite_7 @cite_0 @cite_17 ) has studied it as a tool to mitigate the performance degradation due to selfish users. The flow that is controlled by the leader is routed so as to minimise social cost in the presence of followers who minimise their own costs. In contrast, here we consider what happens when the leading flow is controlled by another selfish agent. We show that the price of decentralised behaviour goes up even further in the presence of a Stackelberg leader.
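The selfish-leader scenario can be sketched on the same kind of two-link instance: the leader commits to a split (here found by grid search over its own cost), the follower best-responds in closed form, and the resulting leader and social costs can be compared with the simultaneous-play outcome. All coefficients, demands, and the grid-search resolution are illustrative assumptions.

```python
# Sketch: a selfish Stackelberg leader on two parallel links with linear
# latencies l1(x) = x and l2(x) = 2x + 1 (illustrative choices).

A1, B1, A2, B2 = 1.0, 0.0, 2.0, 1.0

def player_cost(own1, own2, oth1, oth2):
    return own1 * (A1 * (own1 + oth1) + B1) + own2 * (A2 * (own2 + oth2) + B2)

def follower_best_response(d, lead1, lead2):
    x = (2 * A2 * d + A2 * lead2 + B2 - A1 * lead1 - B1) / (2 * (A1 + A2))
    return min(max(x, 0.0), d)

def stackelberg(leader_demand, follower_demand, grid=2001):
    best = None
    for k in range(grid):                          # leader commits first
        l1 = leader_demand * k / (grid - 1)
        l2 = leader_demand - l1
        f1 = follower_best_response(follower_demand, l1, l2)
        f2 = follower_demand - f1
        lead_cost = player_cost(l1, l2, f1, f2)
        if best is None or lead_cost < best[0]:
            social = lead_cost + player_cost(f1, f2, l1, l2)
            best = (lead_cost, social, l1, f1)
    return best

# (leader cost, social cost, leader flow on link 1, follower flow on link 1)
print(stackelberg(2.0, 1.0))
```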
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_17" ], "mid": [ "2029456721", "169126583", "2145512846" ], "abstract": [ "We study the problem of optimizing the performance of a system shared by selfish, noncooperative users. We consider the concrete setting of scheduling small jobs on a set of shared machines possessing latency functions that specify the amount of time needed to complete a job, given the machine load. We measure system performance by the total latency of the system. Assigning jobs according to the selfish interests of individual users, who wish to minimize only the latency that their own jobs experience, typically results in suboptimal system performance. However, in many systems of this type there is a mixture of \"selfishly controlled\" and \"centrally controlled\" jobs. The congestion due to centrally controlled jobs will influence the actions of selfish users, and we thus aspire to contain the degradation in system performance due to selfish behavior by scheduling the centrally controlled jobs in the best possible way. We formulate this goal as an optimization problem via Stackelberg games, games in which one player acts a leader (here, the centralized authority interested in optimizing system performance) and the rest as followers (the selfish users). The problem is then to compute a strategy for the leader (a Stackelberg strategy) that induces the followers to react in a way that (approximately) minimizes the total latency in the system. In this paper, we prove that it is NP-hard to compute an optimal Stackelberg strategy and present simple strategies with provably good performance guarantees. More precisely, we give a simple algorithm that computes a strategy inducing a job assignment with total latency no more than a constant times that of the optimal assignment of all of the jobs; in the absence of centrally controlled jobs and a Stackelberg strategy, no result of this type is possible. We also prove stronger performance guarantees in the special case where every machine latency function is linear in the machine load.", "We consider network games with atomic players, which indicates that some players control a positive amount of flow. Instead of studying Nash equilibria as previous work has done, we consider that players with considerable market power will make decisions before the others because they can predict the decisions of players without market power. This description fits the framework of Stackelberg games, where those with market power are leaders and the rest are price-taking followers. As Stackelberg equilibria are difficult to characterize, we prove bounds on the inefficiency of the solutions that arise when the leader uses a heuristic that approximate its optimal strategy.", "It is well known that in a network with arbitrary (convex) latency functions that are a function of edge traffic, the worst-case ratio, over all inputs, of the system delay caused due to selfish behavior versus the system delay of the optimal centralized solution may be unbounded even if the system consists of only two parallel links. This ratio is called the price of anarchy (PoA). In this paper, we investigate ways by which one can reduce the performance degradation due to selfish behavior. 
We investigate two primary methods (a) Stackelberg routing strategies, where a central authority, e.g., network manager, controls a fixed fraction of the flow, and can route this flow in any desired way so as to influence the flow of selfish users; and (b) network tolls, where tolls are imposed on the edges to modify the latencies of the edges, and thereby influence the induced Nash equilibrium. We obtain results demonstrating the effectiveness of both Stackelberg strategies and tolls in controlling the price of anarchy. For Stackelberg strategies, we obtain the first results for nonatomic routing in graphs more general than parallel-link graphs, and strengthen existing results for parallel-link graphs, (i) In series-parallel graphs, we show that Stackelberg routing reduces the PoA to a constant (depending on the fraction of flow controlled). (ii) For general graphs, we obtain latency-class specific bounds on the PoA with Stackelberg routing, which give a continuous trade-off between the fraction of flow controlled and the price of anarchy, (iii) In parallel-link graphs, we show that for any given class L of latency functions, Stackelberg routing reduces the PoA to at most α + (1 - α) · ρ(L), where α is the fraction of flow controlled and ρ(L) is the PoA of class L (when α = 0). For network tolls, motivated by the known strong results for nonatomic games, we consider the more general setting of atomic splittable routing games. We show that tolls inducing an optimal flow always exist, even for general asymmetric games with heterogeneous users, and can be computed efficiently by solving a convex program. Furthermore, we give a complete characterization of flows that can be induced via tolls. These are the first results on the effectiveness of tolls for atomic splittable games." ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
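The reweighting scheme described in this abstract can be sketched as follows: each iteration solves a weighted l1 problem and then sets the next weights inversely proportional to the current coefficient magnitudes. The LP reformulation via scipy.optimize.linprog and the toy sensing matrix are illustrative choices of this sketch, not prescribed by the paper.

```python
# Sketch of iteratively reweighted l1 minimization:
# repeat { solve min sum_i w_i |x_i| s.t. A x = b;  set w_i = 1 / (|x_i| + eps) }.

import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """Solve min sum_i w_i |x_i| s.t. A x = b via the standard LP lift [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    A_eq = np.hstack([A, np.zeros((m, n))])
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])   # |x_i| <= t_i
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

def reweighted_l1(A, b, iters=5, eps=0.1):
    w = np.ones(A.shape[1])
    x = weighted_l1(A, b, w)
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)        # small coefficients get large weights
        x = weighted_l1(A, b, w)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 31]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = reweighted_l1(A, b)
print(np.round(x_hat[[3, 17, 31]], 3), np.linalg.norm(x_hat - x_true))
```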
Gorodnitsky and Rao @cite_3 propose FOCUSS as an iterative method for finding sparse solutions to underdetermined systems. At each iteration, FOCUSS solves a reweighted @math minimization with weights for @math . For nonzero signal coefficients, it is shown that each step of FOCUSS is equivalent to a step of the modified Newton's method for minimizing the function subject to @math . As the iterations proceed, it is suggested to identify those coefficients apparently converging to zero, remove them from subsequent iterations, and constrain them instead to be identically zero.
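A FOCUSS-style iteration can be sketched as below: each step solves a weighted minimum-norm problem whose weights come from the previous iterate. The exact weighting exponent and the pruning of near-zero coefficients are simplified here, so this is an illustrative sketch rather than the algorithm of the cited paper.

```python
# Sketch of a FOCUSS-style reweighted minimum-norm iteration:
#   x_k = W (A W)^+ b  with  W = diag(|x_{k-1}|).

import numpy as np

def focuss(A, b, iters=15, tol=1e-8):
    x = np.linalg.pinv(A) @ b               # minimum-norm initial estimate
    for _ in range(iters):
        w = np.abs(x)
        w[w < tol] = 0.0                    # coefficients near zero stay at zero
        W = np.diag(w)
        x = W @ np.linalg.pinv(A @ W) @ b   # reweighted minimum-norm solution
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40); x_true[[5, 12]] = [1.0, -0.8]
b = A @ x_true
print(np.round(focuss(A, b)[[5, 12]], 3))
```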
{ "cite_N": [ "@cite_3" ], "mid": [ "2122315118" ], "abstract": [ "We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging." ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
Harikumar and Bresler @cite_4 propose an iterative algorithm that can be viewed as a generalization of FOCUSS. At each stage, the algorithm solves a convex optimization problem with a reweighted @math cost function that encourages sparse solutions. The algorithm allows for different reweighting rules; for a given choice of reweighting rule, the algorithm converges to a local minimum of some concave objective function (analogous to the log-sum penalty function). These methods build upon @math minimization rather than @math minimization.
{ "cite_N": [ "@cite_4" ], "mid": [ "2164630728" ], "abstract": [ "We present an iterative algorithm for computing sparse solutions (or sparse approximate solutions) to linear inverse problems. The algorithm is intended to supplement the existing arsenal of techniques. It is shown to converge to the local minima of a function of the form used for picking out sparse solutions, and its connection with existing techniques explained. Finally, it is demonstrated on subset selection and deconvolution examples. The fact that the proposed algorithm is sometimes successful when existing greedy algorithms fail is also demonstrated." ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
Delaney and Bresler @cite_41 also propose a general algorithm for minimizing functionals having concave regularization penalties, again by solving a sequence of reweighted convex optimization problems (though not necessarily @math problems) with weights that decrease as a function of the prior estimate. With the particular choice of a log-sum regularization penalty, the algorithm resembles the noise-aware reweighted @math minimization discussed in the present paper.
{ "cite_N": [ "@cite_41" ], "mid": [ "2101455165" ], "abstract": [ "We introduce a generalization of a deterministic relaxation algorithm for edge-preserving regularization in linear inverse problems. This algorithm transforms the original (possibly nonconvex) optimization problem into a sequence of quadratic optimization problems, and has been shown to converge under certain conditions when the original cost functional being minimized is strictly convex. We prove that our more general algorithm is globally convergent (i.e., converges to a local minimum from any initialization) under less restrictive conditions, even when the original cost functional is nonconvex. We apply this algorithm to tomographic reconstruction from limited-angle data by formulating the problem as one of regularized least-squares optimization. The results demonstrate that the constraint of piecewise smoothness, applied through the use of edge-preserving regularization, can provide excellent limited-angle tomographic reconstructions. Two edge-preserving regularizers-one convex, the other nonconvex-are used in numerous simulations to demonstrate the effectiveness of the algorithm under various limited-angle scenarios, and to explore how factors, such as the choice of error norm, angular sampling rate and amount of noise, affect the reconstruction quality and algorithm performance. These simulation results show that for this application, the nonconvex regularizer produces consistently superior results." ] }
0710.2505
2135346163
Trace semantics has been defined for various kinds of state-based systems, notably with different forms of branching such as non-determinism vs. probability. In this paper we claim to identify one underlying mathematical structure behind these "trace semantics," namely coinduction in a Kleisli category. This claim is based on our technical result that, under a suitably order-enriched setting, a final coalgebra in a Kleisli category is given by an initial algebra in the category Sets. Formerly the theory of coalgebras has been employed mostly in Sets where coinduction yields a finer process semantics of bisimilarity. Therefore this paper extends the application field of coalgebras, providing a new instance of the principle "process semantics via coinduction."
In the different context of functional programming, the work @cite_42 also studies initial algebras and final coalgebras in a Kleisli category. The motivation there is to combine recursion on datatypes and computational effects. More specifically, an initial algebra and a final coalgebra support the fold and the unfold operators, respectively, used in recursive programs over datatypes. A computational effect is presented as a monad, and its Kleisli category is the category of effectful computations. The difference between @cite_42 and the current work is as follows. In @cite_42 , the original category of pure functions is already algebraically compact; the paper studies the conditions under which this algebraic compactness is carried over to Kleisli categories. In contrast, in the current work, it is a monad---with a suitable order structure, embodying the essence of ``branching''---which yields the initial algebra-final coalgebra coincidence on a Kleisli category; the coincidence is not present in the original category @math .
{ "cite_N": [ "@cite_42" ], "mid": [ "2010907903" ], "abstract": [ "Fusion laws permit to eliminate various of the intermediate data structures that are created in function compositions. The fusion laws associated with the traditional recursive operators on datatypes cannot, in general, be used to transform recursive programs with effects. Motivated by this fact, this paper addresses the definition of two recursive operators on datatypes that capture functional programs with effects. Effects are assumed to be modeled by monads. The main goal is thus the derivation of fusion laws for the new operators. One of the new operators is called monadic unfold. It captures programs (with effects) that generate a data structure in a standard way. The other operator is called monadic hylomorphism, and corresponds to programs formed by the composition of a monadic unfold followed by a function defined by structural induction on the data structure that the monadic unfold generates." ] }
0710.3392
2950145130
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A)), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
Finally, in @cite_4 , Barannikov constructs from any modular operad a BV-style master equation (equation (5.5) of @cite_4 ) whose solutions are equivalent to algebras over the Feynman transform of that operad. When one sets the modular operad to be the operad denoted by @math in ( 9), one obtains a slight modification of the BV algebra defined above for the case of a quiver with one vertex, given by adding a parameter @math and keeping track of a genus grading.
{ "cite_N": [ "@cite_4" ], "mid": [ "2043706520" ], "abstract": [ "I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras." ] }
0710.3392
2950145130
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A)), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
One may also form a directed analogue of the construction of @cite_4 , replacing modular operads with wheeled PROPs (which include commutative wheelgebras), by replacing undirected graphs with directed graphs. Here, one can additionally keep track of a genus grading at vertices.
{ "cite_N": [ "@cite_4" ], "mid": [ "2043706520" ], "abstract": [ "I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras." ] }
0710.3777
1802395297
We present a deterministic channel model which captures several key features of multiuser wireless communication. We consider a model for a wireless network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes. This result is a natural generalization of the max-flow min-cut theorem for wireline networks. Finally to demonstrate the connections between deterministic model and Gaussian model, we look at two examples: the single-relay channel and the diamond network. We show that in each of these two examples, the capacity-achieving scheme in the corresponding deterministic model naturally suggests a scheme in the Gaussian model that is within 1 bit and 2 bit respectively from cut-set upper bound, for all values of the channel gains. This is the first part of a two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut theorem of a class of deterministic networks of which our model is a special case.
Finite-field addition makes the model much more tractable, and neglecting the 1-bit carryover from one level to the next introduces only a small error when the SNR is high. Other works @cite_7 have also exploited the simplicity of finite-field addition over real addition. Aref @cite_1 is one of the earliest works to use deterministic models for relay networks, and he proved a capacity result for the single-source, single-destination case. However, his model captures only the broadcast aspect, not the superposition aspect. This work was later extended to the multicast setting by Ratnakar and Kramer @cite_8 . Aref and El Gamal @cite_0 also computed the capacity of the semi-deterministic relay channel, but only with a single relay. The model in @cite_4 also uses finite-field deterministic addition to capture the superposition property, but it lacks the notion of signal scale and of the channel pushing some signal scales below the noise level; instead, noise is modeled by random erasures.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_0" ], "mid": [ "2124163667", "", "2051479030", "1616017690", "2155150064" ], "abstract": [ "We consider a finite-field model for the wireless broadcast and additive interference network (WBAIN), both in the presence and absence of fading. We show that the single-source unicast capacity (with extension to multicast) of a WBAIN with or without fading can be upper bounded by the capacity of an equivalent broadcast erasure network. We further present a coding strategy for WBAINs with i.i.d. and uniform fading based on random linear coding at each node that achieves a rate differing from the upper bound by no more than O(1 q), where q is the field size. Using these results, we show that channel fading in conjunction with network coding can lead to large gains in the unicast (multicast) capacity as compared to no fading", "", "The multicast capacity is determined for networks that have deterministic channels with broadcasting at the transmitters and no interference at the receivers. The multicast capacity is shown to have a cut-set interpretation. It is further shown that one cannot always layer channel and network coding in such networks. The proof of the latter result partially generalizes to discrete memoryless broadcast channels and is used to bound the common rate for problems where one achieves a cut bound on throughput.", "A centrifugating device for biological liquids, e.g. blood, in which a rotatable container carries a specially shaped seal that surrounds and bears on a fixed assembly with a minimum area of interface between the fixed and rotating parts. This seal is disposed outside the path of the liquid to be treated. The fixed assembly, in turn, is releasably carried by a bracket, the bracket being selectively longitudinally extensible as well as selectively adjustably swingable about a vertical axis of oscillation eccentric to the centrifuge, thereby to permit exact positioning of the fixed assembly coaxially of the rotatable container. The parts are so simple and inexpensive in construction that at least some of them can be used once and thrown away. Moreover, the fixed assembly is easily insertable in sealed relationship in any of a variety of containers, by the simplest of manual assembly and disassembly operations.", "The capacity of the class of relay channels with sender x_ 1 , a relay sender x_ 2 , a relay receiver y_ 1 =f(x_ 1 ,x_ 2 ) , and ultimate receiver y is proved to be C = p(x_ 1 ,x_ 2 ) I(X_ 1 , X_ 2 ; Y), H(Y_ 1 |X_ 2 )+I(X_ 1 ;Y|X_ 2 ,Y_ 1 ) ." ] }
0710.3824
2953239217
Properly locating sensor nodes is an important building block for a large subset of wireless sensor networks (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does rely on a subset of nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most @math faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most @math misbehaving sensors. On the negative side, we prove that no deterministic protocol can identify faking sensors if their number is @math . Thus our scheme is almost optimal with respect to the number of faking sensors. We discuss application of our technique in the trusted sensor model. More precisely our results can be used to minimize the number of trusted sensors that are needed to defeat faking ones.
Relaxing the assumption of trusted nodes makes the problem more challenging and, to our knowledge, has only been investigated very recently @cite_19 . We call this model, in which no trusted node preexists, the (or ) model. The approach of @cite_19 is randomized and consists of two phases: distance measurement and filtering. In the distance measurement phase, sensors measure their distances to their neighbors, with faking sensors allowed to corrupt the distance measurement technique. In the filtering phase, each correct sensor randomly picks @math so-called pivot sensors. Each sensor @math then uses trilateration with respect to the chosen pivot sensors to compute the location of its neighbor @math . If the announced location matches the computed location, the @math link is added to the network; otherwise it is discarded. Of course, the chosen pivot sensors could themselves be faking and lying, so the protocol can only give a probabilistic guarantee.
{ "cite_N": [ "@cite_19" ], "mid": [ "2162079464" ], "abstract": [ "In an adversarial environment, various kinds of security attacks become possible if malicious nodes could claim fake locations that are different from where they are physically located. In this paper, we propose a secure localization mechanism that detects the existence of these nodes, termed as phantom nodes, without relying on any trusted entities, an approach significantly different from the existing ones. The proposed mechanism enjoys a set of nice features. First, it does not have any central point of attack. All nodes play the role of verifier, by generating local map, i.e. a view constructed based on ranging information from its neighbors. Second, this distributed and localized construction results in quite strong results: even when the number of phantom nodes is greater than that of honest nodes, we can Alter out most phantom nodes. Our analysis and simulations under realistic noisy settings demonstrate our scheme is effective in the presence of a large number of phantom nodes." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Operational transformation (OT) @cite_11 considers collaborative editing based on non-commutative single-character operations. To this end, OT transforms the arguments of remote operations to take into account the effects of concurrent executions. OT requires two correctness conditions @cite_11 : the transformation should enable concurrent operations to execute in either order, and furthermore, transformation functions themselves must commute. The former is relatively easy. The latter is more complex, and @cite_15 prove that all existing transformations violate it.
{ "cite_N": [ "@cite_15", "@cite_11" ], "mid": [ "335169515", "1996958808" ], "abstract": [ "Operational transformation (OT) is an approach which allows to build real-time groupware tools. This approach requires correct transformation functions regarding two conditions called TP1 and TP2. Proving correctness of these transformation functions is very complex and error prone. In this paper, we show how a theorem prover can address this serious bottleneck. To validate our approach, we verifed correctness of state-of-art transformation functions de ned on strings of characters with surprising results. Counter-examples provided by the theorem prover helped us to design the tombstone transformation functions. These functions verify TP1 and TP2, preserve intentions and ensure multi-effect relationships.", "Rd-time group editors dow a group of users to view and edit, the same document at the same time horn geograpbicdy di. ersed sites connected by communication networks. Consistency maintenance is one of the most si@cant &alwiges in the design and implementation of thwe types of systems. R=earch on rd-time group editors in the past decade has invented au inuolative tetique for consistency maintenance, ded operational transformation This paper presents an integrative review of the evolution of operational tra=formation techniques, with the go of identifying the major is-m s, dgotiths, achievements, and remaining Mlenges. In addition, this paper contribut= a new optimized generic operational transformation control algorithm. Ke vords Consistency maint enauce, operational transformation, convergence, CauS*ty pras ation, intention pre tion, group e&tors, groupware, distributed computing." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
A number of papers study the advantages of commutativity for concurrency and consistency control (for instance, syn:alg:1466 and syn:1470). Systems such as Psync @cite_10 , Generalized Paxos @cite_12 , Generic Broadcast @cite_7 and IceCube @cite_2 make use of commutativity information to relax consistency or scheduling requirements. However, these works do not address the issue of achieving commutativity.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_12", "@cite_2" ], "mid": [ "2158145873", "2144606173", "2106670435", "2118871086" ], "abstract": [ "Psync is an IPC protocol that explicitly preserves the partial order of messages exchanged among a set of processes. A description is given of how Psync can be used to implement replicated objects in the presence of network and host failures. Unlike conventional algorithms that depend on an underlying mechanism that totally orders messages for implementing replicated objects, the authors' approach exploits the partial order provided by Psync to achieve additional concurrency. >", "Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely \"syntactic,\" that is, message \"semantics\" is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast.", "Theoretician’s Abstract Consensus has been regarded as the fundamental problem that must be solved to implement a fault-tolerant distributed system. However, only a weaker problem than traditional consensus need be solved. We generalize the consensus problem to include both traditional consensus and this weaker version. A straightforward generalization of the Paxos consensus algorithm implements general consensus. The generalizations of consensus and of the Paxos algorithm require a mathematical detour de force into a type of object called a command-structure set.", "IceCube is a system for optimistic replication, supporting collaborative work and mobile computing. It lets users write to shared data with no mutual synchronisation; however replicas diverge and must be reconciled. IceCube is a general-purpose reconciliation engine, parameterised by “constraints” capturing data semantics and user intents. IceCube combines logs of disconnected actions into near-optimal reconciliation schedules that honour the constraints. IceCube features a simple, high-level, systematic API . It seamlessly integrates diverse applications, sharing various data, and run by concurrent users. This paper focus on the IceCube API and algorithms. Application experience indicates that IceCube simplifies application design, supports a wide variety of application semantics, and seamlessly integrates diverse applications. On a realistic benchmark, IceCube runs at reasonable speeds and scales to large input sets." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Weihl @cite_1 distinguishes between forward and backward commutativity. They differ only when operations fail their pre-condition. In this work, we consider only operations that succeed at the submission site, and ensure by design that they won't fail at replay sites.
{ "cite_N": [ "@cite_1" ], "mid": [ "2157092502" ], "abstract": [ "Two novel concurrency algorithms for abstract data types are presented that ensure serializability of transactions. It is proved that both algorithms ensure a local atomicity property called dynamic atomicity. The algorithms are quite general, permitting operations to be both partial and nondeterministic. The results returned by operations can be used in determining conflicts, thus allowing higher levels of concurrency than otherwise possible. The descriptions and proofs encompass recovery as well as concurrency control. The two algorithms use different recovery methods: one uses intentions lists, and the other uses undo logs. It is shown that conflict relations that work with one recovery method do not necessarily work with the other. A general correctness condition that must be satisfied by the combination of a recovery method and a conflict relation is identified. >" ] }