Dataset columns: id (string, length 1–5), document_id (string, length 1–5), text_1 (string, length 78–2.56k), text_2 (string, length 95–23.3k), text_1_name (1 class), text_2_name (1 class)
29501
29500
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms for composing aspects allow invasiveness as a means of integrating concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionality to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the invasive behavior of aspects and allows developers to reason abstractly about the impact of an aspect on the programs it crosscuts.
We present a new classification system for aspect-oriented programs. This system characterizes the interactions between aspects and methods and identifies classes of interactions that enable modular reasoning about the crosscut program. We argue that this system can help developers structure their understanding of aspect-oriented programs and promotes their ability to reason productively about the consequences of crosscutting a program with a given aspect. We have designed and implemented a program analysis system that automatically classifies interactions between aspects and methods and have applied this analysis to a set of benchmark programs. We found that our analysis is able to 1) identify interactions with desirable properties (such as lack of interference), 2) identify potentially problematic interactions (such as interference caused by the aspect and the method both writing the same field), and 3) direct the developer's attention to the causes of such interactions. Aspects are intended to add needed functionality to a system or to treat concerns of the system by augmenting or changing the existing code in a manner that cross-cuts the usual class or process hierarchy. However, sometimes aspects can invalidate some of the already existing desirable properties of the system. This paper shows how to automatically identify such situations. The importance of specifications of the underlying system is emphasized, and shown to clarify the degree of obliviousness appropriate for aspects. The use of regression testing is considered, and regression verification is recommended instead, with possible division into static analysis, deductive proofs, and aspect validation using model checking. Static analysis of only the aspect code is effective when strongly typed and clearly parameterized aspect languages are used. Spectative aspects can then be identified, and imply absence of harm for all safety and liveness properties involving only the variables and fields of the original system. Deductive proofs can be extended to show inductive invariants are not harmed by an aspect, also by treating only the aspect code. Aspect validation to establish lack of harm is defined and suggested as an optimal approach when the entire augmented system with the aspect woven in must be considered.
Abstract of query paper
Cite abstracts
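The two abstracts in the row above classify aspect/method interactions, for example flagging interference when an aspect and a method write the same field. A minimal sketch of that idea, using hypothetical read/write-set summaries rather than any real AspectJ analysis:

```python
# Toy classifier for aspect/method interactions based on read/write sets. The
# buckets loosely follow the spectative vs. interfering distinction made above;
# the data model is hypothetical and not tied to AspectJ.
def classify_interaction(method_reads, method_writes, aspect_reads, aspect_writes):
    """Return a coarse label for how an aspect's advice interacts with a method."""
    method_reads, method_writes = set(method_reads), set(method_writes)
    aspect_reads, aspect_writes = set(aspect_reads), set(aspect_writes)

    if not aspect_writes:
        return "spectative: the aspect only observes, so it cannot interfere"
    if aspect_writes & method_writes:
        return "write/write interference: both write the same field"
    if aspect_writes & method_reads:
        return "data-flow interference: the aspect writes state the method reads"
    if aspect_reads & method_writes:
        return "observing: the aspect reads state the method writes, but writes elsewhere"
    return "independent: the aspect and the method touch disjoint state"

if __name__ == "__main__":
    print(classify_interaction({"balance"}, {"balance"}, {"balance"}, set()))
    print(classify_interaction({"balance"}, {"balance"}, set(), {"balance"}))
    print(classify_interaction({"balance"}, {"balance"}, set(), {"audit_log"}))
```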
29502
29501
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms for composing aspects allow invasiveness as a means of integrating concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionality to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the invasive behavior of aspects and allows developers to reason abstractly about the impact of an aspect on the programs it crosscuts.
In general, aspect-oriented programs require a whole-program analysis to understand the semantics of a single method invocation. This property can make reasoning difficult, impeding maintenance efforts, contrary to a stated goal of aspect-oriented programming. We propose some simple modifications to AspectJ that permit modular reasoning. This eliminates the need for whole-program analysis and makes code easier to understand and maintain.
Abstract of query paper
Cite abstracts
29503
29502
In the first part of the paper we generalize a descent technique due to Harish-Chandra to the case of a reductive group acting on a smooth affine variety, both defined over an arbitrary local field F of characteristic zero. Our main tool is the Luna slice theorem. In the second part of the paper we apply this technique to symmetric pairs. In particular we prove that the pair (GL(n,C), GL(n,R)) is a Gelfand pair. We also prove that any conjugation-invariant distribution on GL(n,F) is invariant with respect to transposition. For non-archimedean F the latter is a classical theorem of Gelfand and Kazhdan. We use the techniques developed here in our subsequent work [AG3], where we prove an archimedean analog of the theorem on uniqueness of linear periods by H. Jacquet and S. Rallis.
Symmetric spaces over local fields, and the harmonic analysis of their class one representations, arise as the local calculation of Jacquet's theory of the relative trace formula. There is an extensive literature at the real place, but few general results for p-adic fields are known. The objective here is to carry over to symmetric spaces as much as possible of Queens and the prerequisite results of Howe, and to provide counterexamples for those things which do not generalize. We derive the Weyl integration formula, local constancy of the spherical character on the θ-regular set, Howe's conjecture for θ-groups, the germ expansion for spherical characters at the origin, and a spherical version of Howe's Kirillov theory for compact p-adic groups. We find that the density property of regular orbital integrals fails. Some of the basic ideas are nascent in Hakim's thesis (written under Herve Jacquet).
Abstract of query paper
Cite abstracts
29504
29503
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
Data aggregation is a widely used technique in wireless sensor networks. The security issues of data confidentiality and integrity in data aggregation become vital when the sensor network is deployed in a hostile environment. Many related works have been proposed to address these security issues. In this paper we survey these works and classify them into two cases: hop-by-hop encrypted data aggregation and end-to-end encrypted data aggregation. We also propose two general frameworks for the two cases respectively. The framework for end-to-end encrypted data aggregation has higher computation cost on the sensor nodes, but achieves stronger security, in comparison with the framework for hop-by-hop encrypted data aggregation. In-network data aggregation is a popular technique for reducing the energy consumption tied to data transmission in a multi-hop wireless sensor network. However, data aggregation in untrusted or even hostile environments becomes problematic when end-to-end privacy between sensors and the sink is desired. In this paper we revisit and investigate the applicability of additively homomorphic public-key encryption algorithms for certain classes of wireless sensor networks. Finally, we provide recommendations for selecting the most suitable public key schemes for different topologies and wireless sensor network scenarios. Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.
Abstract of query paper
Cite abstracts
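The last cited abstract above describes an additively homomorphic stream cipher in which encryption is a modular addition, so ciphertexts can be summed in the network and only the sink decrypts. A minimal sketch under illustrative assumptions (hash-based keystream derivation, 32-bit modulus); it is not the cited paper's exact construction:

```python
# Additively homomorphic stream cipher sketch: encryption is addition of a keyed
# stream modulo M, so ciphertexts can be summed in the network and the sink removes
# the combined keystream. Hash-based key derivation and the 32-bit modulus are
# illustrative assumptions.
import hashlib

M = 2**32  # modulus; must exceed the largest possible aggregate value

def keystream(key: bytes, epoch: int) -> int:
    digest = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % M

def encrypt(reading: int, key: bytes, epoch: int) -> int:
    return (reading + keystream(key, epoch)) % M

def aggregate(ciphertexts):
    return sum(ciphertexts) % M            # additive homomorphism: no decryption en route

def decrypt_sum(agg: int, keys, epoch: int) -> int:
    return (agg - sum(keystream(k, epoch) for k in keys)) % M

if __name__ == "__main__":
    keys = [b"sensor-1", b"sensor-2", b"sensor-3"]
    readings = [21, 23, 19]
    cts = [encrypt(r, k, epoch=7) for r, k in zip(readings, keys)]
    recovered = decrypt_sum(aggregate(cts), keys, epoch=7)
    assert recovered == sum(readings)
    print("sum of plaintext readings recovered from encrypted aggregate:", recovered)
```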
29505
29504
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
Data aggregation is a widely used technique in wireless sensor networks. The security issues of data confidentiality and integrity in data aggregation become vital when the sensor network is deployed in a hostile environment. Many related works have been proposed to address these security issues. In this paper we survey these works and classify them into two cases: hop-by-hop encrypted data aggregation and end-to-end encrypted data aggregation. We also propose two general frameworks for the two cases respectively. The framework for end-to-end encrypted data aggregation has higher computation cost on the sensor nodes, but achieves stronger security, in comparison with the framework for hop-by-hop encrypted data aggregation. In-network data aggregation is a popular technique for reducing the energy consumption tied to data transmission in a multi-hop wireless sensor network. However, data aggregation in untrusted or even hostile environments becomes problematic when end-to-end privacy between sensors and the sink is desired. In this paper we revisit and investigate the applicability of additively homomorphic public-key encryption algorithms for certain classes of wireless sensor networks. Finally, we provide recommendations for selecting the most suitable public key schemes for different topologies and wireless sensor network scenarios. As sensor networks edge closer towards widespread deployment, security issues become a central concern. So far, much research has focused on making sensor networks feasible and useful, and has not concentrated on security. We present a suite of security building blocks optimized for resource-constrained environments and wireless communication. SPINS has two secure building blocks: SNEP and μTESLA. SNEP provides the following important baseline security primitives: data confidentiality, two-party data authentication, and data freshness. A particularly hard problem is to provide efficient broadcast authentication, which is an important mechanism for sensor networks. μTESLA is a new protocol which provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols.
Abstract of query paper
Cite abstracts
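The SPINS abstract above mentions μTESLA, which authenticates broadcasts through delayed disclosure of keys drawn from a one-way hash chain. The sketch below shows only the chain generation and the check that a disclosed key hashes back to an authenticated commitment; the MAC computation and the time-interval discipline are omitted, and the key-derivation details are illustrative:

```python
# One-way key chain sketch for delayed-disclosure broadcast authentication (as in
# muTESLA): keys are generated by repeated hashing and used in reverse order, so a
# receiver holding an authenticated commitment K_0 can check any later-disclosed key.
import hashlib

def F(k: bytes) -> bytes:
    """One-way function used to derive the chain."""
    return hashlib.sha256(k).digest()

def make_chain(seed: bytes, n: int):
    """Return [K_0, K_1, ..., K_n] with K_i = F(K_{i+1}); K_0 is the public commitment."""
    chain = [hashlib.sha256(seed).digest()]        # this is K_n
    for _ in range(n):
        chain.append(F(chain[-1]))
    chain.reverse()
    return chain

def verify_disclosed_key(disclosed: bytes, j: int, trusted: bytes, i: int) -> bool:
    """Check that the key disclosed for interval j hashes back to the trusted key K_i (i < j)."""
    k = disclosed
    for _ in range(j - i):
        k = F(k)
    return k == trusted

if __name__ == "__main__":
    chain = make_chain(b"base-station-secret", n=10)
    commitment = chain[0]                          # distributed to sensors beforehand
    assert verify_disclosed_key(chain[4], 4, commitment, 0)
    assert not verify_disclosed_key(b"\x00" * 32, 4, commitment, 0)
    print("disclosed key for interval 4 verified against the commitment")
```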
29506
29505
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
In sensor networks, data aggregation is a vital primitive enabling efficient data queries. An on-site aggregator device collects data from sensor nodes and produces a condensed summary which is forwarded to the off-site querier, thus reducing the communication cost of the query. Since the aggregator is on-site, it is vulnerable to physical compromise attacks. A compromised aggregator may report false aggregation results. Hence, it is essential that techniques are available to allow the querier to verify the integrity of the result returned by the aggregator node. We propose a novel framework for secure information aggregation in sensor networks. By constructing efficient random sampling mechanisms and interactive proofs, we enable the querier to verify that the answer given by the aggregator is a good approximation of the true value, even when the aggregator and a fraction of the sensor nodes are corrupted. In particular, we present efficient protocols for secure computation of the median and average of the measurements, for the estimation of the network size, for finding the minimum and maximum sensor reading, and for random sampling and leader election. Our protocols require only sublinear communication between the aggregator and the user. Hop-by-hop data aggregation is a very important technique for reducing the communication overhead and energy expenditure of sensor nodes during the process of data collection in a sensor network. However, because individual sensor readings are lost in the per-hop aggregation process, compromised nodes in the network may forge false values as the aggregation results of other nodes, tricking the base station into accepting spurious aggregation results. Here a fundamental challenge is: how can the base station obtain a good approximation of the fusion result when a fraction of sensor nodes are compromised? To answer this challenge, we propose SDAP, a Secure Hop-by-hop Data Aggregation Protocol for sensor networks. The design of SDAP is based on the principles of divide-and-conquer and commit-and-attest. First, SDAP uses a novel probabilistic grouping technique to dynamically partition the nodes in a tree topology into multiple logical groups (subtrees) of similar sizes. A commitment-based hop-by-hop aggregation is performed in each group to generate a group aggregate. The base station then identifies the suspicious groups based on the set of group aggregates. Finally, each group under suspicion participates in an attestation process to prove the correctness of its group aggregate. Our analysis and simulations show that SDAP can achieve a level of efficiency close to an ordinary hop-by-hop aggregation protocol while providing certain assurance on the trustworthiness of the aggregation result. Moreover, SDAP is a general-purpose secure aggregation protocol applicable to multiple aggregation functions. An emerging class of important applications uses ad hoc wireless networks of low-power sensor devices to monitor and send information about a possibly hostile environment to a powerful base station connected to a wired network. To conserve power, intermediate network nodes should aggregate results from individual sensors. However, this opens the risk that a single compromised sensor device can render the network useless, or worse, mislead the operator into trusting a false reading. We present a protocol that provides a secure aggregation mechanism for wireless networks that is resilient to both intruder devices and single device key compromises.
Our protocol is designed to work within the computation, memory and power consumption limits of inexpensive sensor devices, but takes advantage of the properties of wireless networking, as well as the power asymmetry between the devices and the base station. In-network aggregation is an essential primitive for performing queries on sensor network data. However, most aggregation algorithms assume that all intermediate nodes are trusted. In contrast, the standard threat model in sensor network security assumes that an attacker may control a fraction of the nodes, which may misbehave in an arbitrary (Byzantine) manner. We present the first algorithm for provably secure hierarchical in-network data aggregation. Our algorithm is guaranteed to detect any manipulation of the aggregate by the adversary beyond what is achievable through direct injection of data values at compromised nodes. In other words, the adversary can never gain any advantage from misrepresenting intermediate aggregation computations. Our algorithm incurs only O(Δ log² n) node congestion, supports arbitrary tree-based aggregator topologies and retains its resistance against aggregation manipulation in the presence of arbitrary numbers of malicious nodes. The main algorithm is based on performing the sum aggregation securely by first forcing the adversary to commit to its choice of intermediate aggregation results, and then having the sensor nodes independently verify that their contributions to the aggregate are correctly incorporated. We show how to reduce secure median, count, and average to this primitive. Sensor networks include nodes with limited computation and communication capabilities. One of the basic functions of sensor networks is to sense and transmit data to the end users. The resource constraints and security issues pose a challenge to information aggregation in large sensor networks. Bootstrapping keys is another challenge because public key cryptosystems are unsuitable for use in resource-constrained sensor networks. In this paper, we propose a solution by dividing the problem in two domains. First, we present a protocol for establishing cluster keys in sensor networks using verifiable secret sharing. We chose elliptic curve cryptosystems for security because of their smaller key size, faster computations and reductions in processing power. Second, we develop a secure data aggregation and verification (SecureDAV) protocol that ensures that the base station never accepts faulty aggregate readings. An integrity check of the readings is done using Merkle hash trees, avoiding over-reliance on the cluster-heads.
Abstract of query paper
Cite abstracts
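Several of the protocols cited in the row above (e.g. SecureDAV, and the commit-and-attest idea in SDAP) rely on hash-tree commitments so that individual contributions to an aggregate can later be attested. Below is a generic Merkle-tree sketch of that building block, not the exact construction of any of the cited protocols:

```python
# Generic Merkle-tree sketch: the aggregator commits to the set of readings with a
# root hash and can later prove that any individual reading was included.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                               # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))          # (hash, sibling-is-on-the-left)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    h = H(leaf)
    for sibling, is_left in proof:
        h = H(sibling + h) if is_left else H(h + sibling)
    return h == root

if __name__ == "__main__":
    readings = [b"21", b"23", b"19", b"22", b"20"]
    root = merkle_root(readings)
    proof = merkle_proof(readings, 2)
    assert verify(b"19", proof, root)
    assert not verify(b"99", proof, root)
    print("reading 19 proven to be part of the committed set")
```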
29507
29506
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Networks of small, densely distributed wireless sensor nodes are capable of solving a variety of collaborative problems such as monitoring and surveillance. We develop a simple algorithm that detects and tracks a moving target, and alerts sensor nodes along the projected path of the target. The algorithm involves only simple computation and localizes communication only to the nodes in the vicinity of the target and its projected course. The algorithm is evaluated on a small-scale testbed of Berkeley motes using a light source as the moving target. The performance results are presented emphasizing the accuracy of the technique, along with a discussion about our experience in using such a platform for target tracking experiments.
Abstract of query paper
Cite abstracts
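The query abstract in the row above has sensors passively timestamp target sightings while a tracking agent follows the freshest traces. The toy grid simulation below is only meant to illustrate that greedy trace-following idea; the random-walk target, grid size, and tie-breaking rules are invented for the example:

```python
# Toy illustration of trace-following: sensors on a grid only timestamp target
# sightings; the tracking agent moves towards the freshest trace it sees, or
# searches randomly when no trace is nearby. All parameters are invented.
import random

SIZE = 10
traces = {}                       # (x, y) -> time of the last recorded sighting
target, agent = (0, 0), (SIZE - 1, SIZE - 1)

def neighbours(p):
    x, y = p
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < SIZE and 0 <= b < SIZE]

random.seed(0)
for t in range(300):
    traces[target] = t                                # passive sensing: record only
    target = random.choice(neighbours(target))        # target performs a random walk
    options = neighbours(agent)
    if all(traces.get(p, -1) < 0 for p in options):
        agent = random.choice(options)                # no trace nearby: keep searching
    else:
        agent = max(options, key=lambda p: traces.get(p, -1))  # follow the freshest trace
    if agent == target:
        print(f"agent caught up with the target at step {t}")
        break
else:
    print("agent is still following traces after 300 steps")
```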
29508
29507
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed. We present novel grid coverage strategies for effective surveillance and target location in distributed sensor networks. We represent the sensor field as a grid (two or three-dimensional) of points (coordinates) and use the term target location to refer to the problem of locating a target at a grid point at any instant in time. We first present an integer linear programming (ILP) solution for minimizing the cost of sensors for complete coverage of the sensor field. We solve the ILP model using a representative public-domain solver and present a divide-and-conquer approach for solving large problem instances. We then use the framework of identifying codes to determine sensor placement for unique target location, We provide coding-theoretic bounds on the number of sensors and present methods for determining their placement in the sensor field. We also show that grid-based sensor placement for single targets provides asymptotically complete (unambiguous) location of multiple targets in the grid.
Abstract of query paper
Cite abstracts
29509
29508
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective, i.e., towards estimating the least possible number of sensors to be deployed, their positions, and the operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem as a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and can thus be localized). We then design and analyze an efficient approximate method for sensor placement and operation that, with high probability and in polynomial expected time, achieves a Θ(log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
Abstract of query paper
Cite abstracts
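The cited abstract above reduces tracking-network design to covering every point of the field by at least three sensors, solved approximately by a greedy method with a logarithmic guarantee. A small sketch of such a greedy set multi-cover (the candidate positions and coverage sets are illustrative, not the paper's model):

```python
# Greedy set multi-cover sketch: every point must be covered by at least k = 3 of
# the chosen sensors; repeatedly pick the candidate that covers the most points
# whose coverage requirement is still unmet (the classic O(log n)-approximation).
def greedy_multicover(points, candidates, k=3):
    """candidates maps a sensor name to the set of points it covers."""
    deficit = {p: k for p in points}
    chosen, remaining = [], dict(candidates)
    while any(deficit.values()) and remaining:
        best = max(remaining, key=lambda c: sum(1 for p in remaining[c] if deficit[p] > 0))
        if not any(deficit[p] > 0 for p in remaining[best]):
            break                                     # no candidate helps: infeasible input
        chosen.append(best)
        for p in remaining.pop(best):
            if deficit[p] > 0:
                deficit[p] -= 1
    return chosen, deficit

if __name__ == "__main__":
    # 1-D toy field: points 0..9; candidate sensor s_i covers positions i-2 .. i+2.
    points = set(range(10))
    candidates = {f"s{i}": {p for p in range(i - 2, i + 3) if 0 <= p < 10} for i in range(10)}
    chosen, deficit = greedy_multicover(points, candidates, k=3)
    print("sensors chosen:", sorted(chosen), "| fully 3-covered:", not any(deficit.values()))
```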
29510
29509
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
The tradeoff between performance and scalability is a fundamental issue in distributed sensor networks. In this paper, we propose a novel scheme to efficiently organize and utilize network resources for target localization. Motivated by the essential role of geographic proximity in sensing, sensors are organized into geographically local collaborative groups. In a target tracking context, we present a dynamic group management method to initiate and maintain multiple tracks in a distributed manner. Collaborative groups are formed, each responsible for tracking a single target. The sensor nodes within a group coordinate their behavior using geographically-limited message passing. Mechanisms such as these for managing local collaborations are essential building blocks for scalable sensor network applications.
Abstract of query paper
Cite abstracts
29511
29510
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Sensor nodes have limited sensing range and are not very reliable. To obtain accurate sensing data, many sensor nodes should be deployed, and the collaboration among them then becomes an important issue. In the work of W. Zhang and G. Cao, a tree-based approach has been proposed to facilitate sensor nodes collaborating in detecting and tracking a mobile target. As the target moves, many nodes in the tree may become far away from the root of the tree, and hence a large amount of energy may be wasted for them to send their sensing data to the root. We address the tree reconfiguration problem. We formalize it as finding a min-cost convoy tree sequence, and solve it by proposing an optimized complete reconfiguration scheme and an optimized interception-based reconfiguration scheme. Analysis and simulation are conducted to compare the proposed schemes with each other and with other reconfiguration schemes. The results show that the proposed schemes are more energy efficient than others.
Abstract of query paper
Cite abstracts
29512
29511
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
The wireless sensor network is an emerging technology that may greatly facilitate human life by providing ubiquitous sensing, computing, and communication capability, through which people can more closely interact with the environment wherever they go. To be context-aware, one of the central issues in sensor networks is location tracking, whose goal is to monitor the roaming path of a moving object. While similar to the location-update problem in PCS networks, this problem is more challenging in two senses: (1) there are no central control mechanism and backbone network in such an environment, and (2) the wireless communication bandwidth is very limited. In this paper, we propose a novel protocol based on the mobile agent paradigm. Once a new object is detected, a mobile agent will be initiated to track the roaming path of the object. The agent is mobile since it will choose the sensor closest to the object to stay at. The agent may invite some nearby slave sensors to cooperatively position the object and inhibit other irrelevant (i.e., farther) sensors from tracking the object. As a result, the communication and sensing overheads are greatly reduced. Our prototyping of the location-tracking mobile agent based on IEEE 802.11b NICs and our experimental experiences are also reported.
Abstract of query paper
Cite abstracts
29513
29512
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed. Ch. 1: Intro; Ch. 2: Canonical Problem: Localization and Tracking; Ch. 3: Networking Sensor Networks; Ch. 4: Synchronization and Localization; Ch. 5: Sensor Tasking and Control; Ch. 6: Sensor Network Database; Ch. 7: Sensor Network Platforms and Tools; Ch. 8: Application and Future Direction
Abstract of query paper
Cite abstracts
29514
29513
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
This paper takes a systematic look at methods for estimating the curvature of surfaces represented by triangular meshes. We have developed a suite of test cases for assessing both the detailed behavior of these methods, and the error statistics that occur for samples from a general mesh. Detailed behavior is represented by the sensitivity of curvature calculation methods to noise, mesh resolution, and mesh regularity factors. Statistical analysis breaks out the effects of valence, triangle shape, and curvature sign. These tests are applied to existing discrete curvature approximation techniques and common surface fitting methods. We provide a summary of existing curvature estimation methods, and also look at alternatives to the standard parameterization techniques. The results illustrate the impact of noise and mesh related issues on the accuracy of these methods and provide guidance in choosing an appropriate method for applications requiring curvature estimates. In a variety of practical situations such as reverse engineering of boundary representation from depth maps of scanned objects, range data analysis, model-based recognition and algebraic surface design, there is a need to recover the shape of visible surfaces of a dense 3D point set. In particular, it is desirable to identify and fit simple surfaces of known type wherever these are in reasonable agreement with the data. We are interested in the class of quadric surfaces, that is, algebraic surfaces of degree 2, instances of which are the sphere, the cylinder and the cone. A comprehensive survey of the recent work in each subtask pertaining to the extraction of quadric surfaces from triangulations is presented.
Abstract of query paper
Cite abstracts
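The query abstract in the row above derives normals and curvatures from the gradient and Hessian of a local height function estimated by weighted least-squares polynomial fitting. The sketch below implements the standard graph-surface formulas and a plain local quadratic fit; the weighting scheme, neighborhood, and sphere test are illustrative choices rather than the paper's exact framework:

```python
# Height-function sketch: fit a local quadratic by least squares, then obtain the
# normal and the Gaussian/mean curvatures from its gradient and Hessian using the
# standard graph-surface formulas.
import numpy as np

def fit_height_quadratic(xy, z, w=None):
    """Least-squares quadratic through the origin: h ~ a*x + b*y + (c*x^2 + 2*d*x*y + e*y^2)/2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, 0.5 * x**2, x * y, 0.5 * y**2])
    if w is not None:                      # optional weighted least squares
        A, z = A * w[:, None], z * w
    hx, hy, hxx, hxy, hyy = np.linalg.lstsq(A, z, rcond=None)[0]
    return np.array([hx, hy]), np.array([[hxx, hxy], [hxy, hyy]])

def differential_quantities(grad, hess):
    hx, hy = grad
    g = 1.0 + hx**2 + hy**2
    normal = np.array([-hx, -hy, 1.0]) / np.sqrt(g)
    K = np.linalg.det(hess) / g**2                                 # Gaussian curvature
    H = ((1 + hy**2) * hess[0, 0] - 2 * hx * hy * hess[0, 1]
         + (1 + hx**2) * hess[1, 1]) / (2 * g**1.5)                # mean curvature
    return normal, K, H

if __name__ == "__main__":
    # Patch of the unit sphere around its north pole: z = sqrt(1 - x^2 - y^2) - 1.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.1, 0.1, size=(60, 2))
    z = np.sqrt(1 - (xy**2).sum(axis=1)) - 1.0
    normal, K, H = differential_quantities(*fit_height_quadratic(xy, z))
    # Expect normal ~ (0, 0, 1), K ~ 1, H ~ -1 (sign depends on the orientation convention).
    print("normal ~", np.round(normal, 3), " K ~", round(K, 3), " H ~", round(H, 3))
```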
29515
29514
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
The purpose of this book is to reveal to the interested (but perhaps mathematically unsophisticated) user the foundations and major features of several basic methods for curve and surface fitting that are currently in use.
Abstract of query paper
Cite abstracts
29516
29515
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
Discrete curvature and shape operators, which capture complete information about directional curvatures at a point, are essential in a variety of applications: simulation of deformable two-dimensional objects, variational modeling and geometric data processing. In many of these applications, objects are represented by meshes. Currently, a spectrum of approaches for formulating curvature operators for meshes exists, ranging from highly accurate but computationally expensive methods used in engineering applications to efficient but less accurate techniques popular in simulation for computer graphics. We propose a simple and efficient formulation for the shape operator for variational problems on general meshes, using degrees of freedom associated with normals. On the one hand, it is similar in its simplicity to some of the discrete curvature operators commonly used in graphics; on the other hand, it passes a number of important convergence tests and produces consistent results for different types of meshes and mesh refinement. Curvature-based energy and forces are used in a broad variety of contexts, ranging from modeling of thin plates and shells to surface fairing and variational surface design. The approaches to discretization preferred in different areas often have little in common: engineering shell analysis is dominated by finite elements, while spring-particle models are often preferred for animation and qualitative simulation due to their simplicity and low computational cost. Both types of approaches have found applications in geometric modeling. While there is a well-established theory for finite element methods, alternative discretizations are less well understood: many questions about mesh dependence, convergence and accuracy remain unanswered. We discuss the general principles for defining curvature-based energy on discrete surfaces based on geometric invariance and convergence considerations. We show how these principles can be used to understand the behavior of some commonly used discretizations, to establish relations between some well-known discrete geometry and finite element formulations and to derive new simple and efficient discretizations.
Abstract of query paper
Cite abstracts
29517
29516
We study the problem of deciding satisfiability of first order logic queries over views, our aim being to delimit the boundary between the decidable and the undecidable fragments of this language. Views currently occupy a central place in database research, due to their role in applications such as information integration and data warehousing. Our main result is the identification of a decidable class of first order queries over unary conjunctive views that generalises the decidability of the classical class of first order sentences over unary relations, known as the Lowenheim class. We then demonstrate how various extensions of this class lead to undecidability and also provide some expressivity results. Besides its theoretical interest, our new decidable class is potentially interesting for use in applications such as deciding implication of complex dependencies, analysis of a restricted class of active database rules, and ontology reasoning.
Query containment under constraints is the problem of checking whether for every database satisfying a given set of constraints, the result of one query is a subset of the result of another query. Recent research points out that this is a central problem in several database applications, and we address it within a setting where constraints are specified in the form of special inclusion dependencies over complex expressions, built by using intersection and difference of relations, special forms of quantification, regular expressions over binary relations, and cardinality constraints. These types of constraints capture a great variety of data models, including the relational, the entity-relational, and the object-oriented model. We study the problem of checking whether q is contained in q′ with respect to the constraints specified in a schema S, where q and q′ are nonrecursive Datalog programs whose atoms are complex expressions. We present the following results on query containment. For the case where q does not contain regular expressions, we provide a method for deciding query containment, and analyze its computational complexity. We do the same for the case where neither S nor q, q′ contain number restrictions. To the best of our knowledge, this yields the first decidability result on containment of conjunctive queries with regular expressions. Finally, we prove that the problem is undecidable for the case where we admit inequalities in q′.
Abstract of query paper
Cite abstracts
29518
29517
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
A system of techniques is presented for optimizing open shortest path first (OSPF) or intermediate system-intermediate system (IS-IS) weights for intradomain routing in a changing world, the goal being to avoid overloaded links. We address predicted periodic changes in traffic as well as problems arising from link failures and emerging hot spots. Intra-domain routing in IP backbone networks relies on link-state protocols such as IS-IS or OSPF. These protocols associate a weight (or cost) with each network link, and compute traffic routes based on these weights. However, proposed methods for selecting link weights largely ignore the issue of failures which arise as part of everyday network operations (maintenance, accidental, etc.). Changing link weights during a short-lived failure is impractical. However such failures are frequent enough to impact network performance. We propose a Tabu-search heuristic for choosing link weights which allow a network to function almost optimally during short link failures. The heuristic takes into account possible link failure scenarios when choosing weights, thereby mitigating the effect of such failures. We find that the weights chosen by the heuristic can reduce link overload during transient link failures by as much as 40% at the cost of a small performance degradation in the absence of failures (10%). In this paper, we adapt the heuristic of Fortz and Thorup for optimizing the weights of Shortest Path First protocols such as Open Shortest Path First (OSPF) or Intermediate System-Intermediate System (IS-IS), in order to take into account failure scenarios. More precisely, we want to find a set of weights that is robust to all single link failures. A direct application of the original heuristic, evaluating all the link failures, is too time consuming for realistic networks, so we developed a method based on a critical set of scenarios aimed to be representative of the whole set of scenarios. This allows us to make the problem manageable and achieve very robust solutions.
Abstract of query paper
Cite abstracts
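The abstracts in the row above optimize OSPF/IS-IS link weights so that shortest-path routing avoids overloaded links. The sketch below shows only the evaluation step of such a search: route demands on shortest paths for a candidate weight setting and score link utilizations with the convex piecewise-linear penalty used in the Fortz-Thorup line of work (slopes 1, 3, 10, 70, 500, 5000). The tiny topology is made up, and ECMP splitting and the weight-search heuristic itself are omitted:

```python
# Evaluation step of shortest-path-based link-weight optimization: route demands
# with Dijkstra under candidate weights and score link utilizations with a convex
# piecewise-linear penalty that grows steeply as utilization approaches 1.
import heapq

def shortest_path(adj, weights, src, dst):
    """Plain Dijkstra; adj maps a node to its neighbours, weights maps (u, v) to a metric."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def phi(load, cap):
    """Convex piecewise-linear penalty of the link load."""
    u = load / cap
    breakpoints = [0.0, 1 / 3, 2 / 3, 9 / 10, 1.0, 11 / 10, float("inf")]
    slopes = [1, 3, 10, 70, 500, 5000]
    cost = 0.0
    for lo, hi, s in zip(breakpoints, breakpoints[1:], slopes):
        if u <= lo:
            break
        cost += s * (min(u, hi) - lo) * cap
    return cost

def evaluate(weights, adj, caps, demands):
    load = {e: 0.0 for e in weights}
    for (s, t), vol in demands.items():
        path = shortest_path(adj, weights, s, t)
        for u, v in zip(path, path[1:]):
            load[(u, v)] += vol
    return sum(phi(load[e], caps[e]) for e in weights)

if __name__ == "__main__":
    adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
    edges = [(u, v) for u in adj for v in adj[u]]
    caps = {e: 10.0 for e in edges}
    caps[("A", "C")] = caps[("C", "A")] = 5.0            # the direct A-C link is thin
    demands = {("A", "C"): 8.0}
    for w_ac in (1, 3):                                  # weight 3 pushes traffic via B
        weights = {e: 1 for e in edges}
        weights[("A", "C")] = weights[("C", "A")] = w_ac
        print(f"w(A,C)={w_ac}: total cost =", round(evaluate(weights, adj, caps, demands), 1))
```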
29519
29518
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
Link weight optimization is shown to be a key issue in the engineering of IGPs using shortest path first routing. The IGP weight optimization problem seeks a weight array resulting in an optimal load distribution in the network, based on the topology information and a traffic demand matrix. Several solution methods for various versions of this problem have been proposed in the literature. However, the interaction of the IGP with BGP is generally neglected in these studies. In reality, the optimized weights may not perform as well as expected, since updated link weights can cause shifts in the traffic demand matrix through hot-potato routing in the decision process of BGP. Hot-potato routing occurs when BGP decides the egress router for a destination prefix according to the IGP lengths. This paper investigates the possible degradation of an IGP weight optimization tool due to hot-potato routing, using a worst-case example and experiments carried out with an open-source traffic engineering toolbox. Furthermore, it proposes an approach based on robust optimization to overcome the negative effect of hot-potato routing and analyzes its performance.
Abstract of query paper
Cite abstracts
29520
29519
We investigate Quantum Key Distribution (QKD) relaying models. Firstly, we propose a novel quasi-trusted QKD relaying model. The quasi-trusted relays are defined as follows: (i) they are honest enough to correctly follow a given multi-party finite-time communication protocol; (ii) however, they are under the monitoring of eavesdroppers. We develop a simple 3-party quasi-trusted model, called the Quantum Quasi-Trusted Bridge (QQTB) model, to show that we can securely extend the limited range of single-photon-based QKD schemes by up to a factor of two. We also develop the Quantum Quasi-Trusted Relay (QQTR) model to show that we can securely distribute QKD keys over arbitrarily long distances. The QQTR model requires EPR pair sources, but does not use entanglement swapping or entanglement purification schemes. Secondly, we show that our quasi-trusted models can be improved to become untrusted models in which the security is not compromised even though attackers have full control over some relaying nodes. We call our two improved models the Quantum Untrusted Bridge (QUB) and Quantum Untrusted Relay (QUR) models. The QUB model works on single photons and allows the limited QKD range to be securely extended by up to a factor of two. The QUR model works on entangled photons but does not use entanglement swapping or entanglement purification operations. This model allows shared keys to be securely transmitted over arbitrarily long distances without dramatically decreasing the key rate of the original QKD schemes.
The performance of Quantum Key Distribution (QKD) systems has notably progressed since the early experimental demonstrations, and several recent works indicate that the pace of this progression is very likely to be maintained, if not increased, in the coming years. In parallel to this fast progression of QKD techniques, commercial products are also being developed, making QKD deployment for securing some specific 'real' data networks more and more likely to occur. It is the goal of the European project Secoqc to deploy a secure long-distance network based on quantum cryptography. This implies the conception of a specific architecture able to connect multiple users that may possibly be very far away from each other, while QKD links are currently 'point-to-point only' and intrinsically limited in distance. QKD networks are of much interest due to their capacity to provide extremely high-security keys to network participants. Most QKD network studies so far focus on trusted models where all the network nodes are assumed to be perfectly secured. This restricts QKD networks to small scales. In this paper, we first develop a novel model dedicated to large-scale QKD networks, some of whose nodes could be secretly eavesdropped. Then, we investigate the key transmission problem in the new model with an approach based on percolation theory and stochastic routing. Analyses show that under computable conditions large-scale QKD networks can protect secret keys with an extremely high probability. Simulations validate our results. We show how quantum key distribution (QKD) techniques can be employed within realistic, highly secure communications systems, using the internet architecture for a specific example. We also discuss how certain drawbacks in existing QKD point-to-point links can be mitigated by building QKD networks, where such networks can be composed of trusted relays or untrusted photonic switches.
Abstract of query paper
Cite abstracts
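The cited abstracts above discuss QKD networks built from relays because point-to-point QKD links are distance-limited. The sketch below illustrates only the generic hop-by-hop one-time-pad key relay used in trusted-relay QKD networks; it is not the quasi-trusted QQTB/QQTR protocols of the query paper, which are designed precisely to weaken this trust assumption:

```python
# Generic trusted-relay key forwarding: the relay uses its pairwise QKD link keys
# as one-time pads so that the two endpoints end up sharing a key that never
# appears in the clear on the wire. Link keys are simulated with random bytes.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Pairwise keys established by point-to-point QKD links (simulated here).
k_ab = secrets.token_bytes(32)   # shared by Alice and the relay
k_bc = secrets.token_bytes(32)   # shared by the relay and Bob

# Alice picks the end-to-end key and sends it to the relay under k_ab.
k_end_to_end = secrets.token_bytes(32)
to_relay = xor(k_end_to_end, k_ab)

# The relay strips its key with Alice and re-encrypts for Bob.
to_bob = xor(xor(to_relay, k_ab), k_bc)

# Bob recovers the end-to-end key with his own link key.
assert xor(to_bob, k_bc) == k_end_to_end
print("Alice and Bob share a key relayed over two QKD links")
```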
29521
29520
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph G = (V, E, w) and a parameter ε > 0, we produce a weighted subgraph H = (V, Ẽ, w̃) of G such that |Ẽ| = O(n log n / ε²) and, for all vectors x ∈ R^V, (1 − ε) Σ_{uv ∈ E} w_uv (x(u) − x(v))² ≤ Σ_{uv ∈ Ẽ} w̃_uv (x(u) − x(v))² ≤ (1 + ε) Σ_{uv ∈ E} w_uv (x(u) − x(v))². (*) This improves upon the sparsifiers constructed by Spielman and Teng, which had O(n log^c n) edges for some large constant c, and upon those of Benczúr and Karger, which only satisfied (*) for x ∈ {0, 1}^V. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time.
Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and/or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and/or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A + N is computed. We give high-probability bounds on the quality of our approximation both in the Frobenius and the 2-norm. We describe a simple random-sampling based procedure for producing sparse matrix approximations. Our procedure and analysis are extremely simple: the analysis uses nothing more than the Chernoff-Hoeffding bounds. Despite the simplicity, the approximation is comparable and sometimes better than previous work. Our algorithm computes the sparse matrix approximation in a single pass over the data. Further, most of the entries in the output matrix are quantized, and can be succinctly represented by a bit vector, thus leading to much savings in space. We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ‖A‖_F² / ‖A‖_2² is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables. Given an m × n matrix A and an n × p matrix B, we present 2 simple and intuitive algorithms to compute an approximation P to the product A·B, with provable bounds for the norm of the "error matrix" P − A·B. Both algorithms run in O(mp + mn + np) time. In both algorithms, we randomly pick s = O(1) columns of A to form an m × s matrix S and the corresponding rows of B to form an s × p matrix R. After scaling the columns of S and the rows of R, we multiply them together to obtain our approximation P. The choice of the probability distribution we use for picking the columns of A and the scaling are the crucial features which enable us to give fairly elementary proofs of the error bounds. Our first algorithm can be implemented without storing the matrices A and B in Random Access Memory, provided we can make two passes through the matrices (stored in external memory). The second algorithm has a smaller bound on the 2-norm of the error matrix, but requires storage of A and B in RAM. We also present a fast algorithm that "describes" P as a sum of rank-one matrices if B = A^T.
We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ε in time linear in their number of non-zeros and log(κ_f(A)/ε), where κ_f(A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.
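The approximate matrix-multiplication abstract above describes picking s columns of A and the corresponding rows of B, rescaling them, and summing the resulting rank-one products. A hedged NumPy sketch of that Monte-Carlo estimator follows; the norm-proportional sampling distribution is one common choice and the function name is my own.

import numpy as np

def approx_matmul(A, B, s, seed=0):
    """Monte-Carlo approximation of A @ B from s sampled column/row pairs.
    Column k is drawn with probability proportional to |A[:, k]| * |B[k, :]|,
    and each sampled rank-one term is rescaled so the estimator is unbiased."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    P = np.zeros((A.shape[0], B.shape[1]))
    for k in rng.choice(A.shape[1], size=s, replace=True, p=p):
        P += np.outer(A[:, k], B[k, :]) / (s * p[k])   # unbiased rank-one term
    return P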
Abstract of query paper
Cite abstracts
29522
29521
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph @math and a parameter @math , we produce a weighted subgraph @math of @math such that @math and for all vectors @math @math This improves upon the sparsifiers constructed by Spielman and Teng, which had @math edges for some large constant @math , and upon those of Benczúr and Karger, which only satisfied (*) for @math . A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in @math time.
This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the "length" of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation to the "Fiedler vector," widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called "statistical relational learning" framework, could also be used to compute document or word similarities, and, more generally, it could be applied to machine-learning and pattern-recognition tasks involving a relational database In the era of globalization, traditional theories and models of social systems are shifting their focus from isolation and independence to networks and connectedness. Analyzing these new complex social models is a growing, and computationally demanding area of research. In this study, we investigate the integration of genetic algorithms (GAs) with a random-walk-based distance measure to find subgroups in social networks. We test our approach by synthetically generating realistic social network data sets. Our clustering experiments using random-walk-based distances reveal exceptionally accurate results compared with the experiments using Euclidean distances.
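A small NumPy illustration of the commute-time quantities mentioned above, assuming the usual formula c(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij) built from the pseudoinverse of the graph Laplacian; the function name and the dense pinv are illustrative only.

import numpy as np

def commute_times(adjacency):
    """Average commute times from the pseudoinverse of the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                  # graph Laplacian
    Lp = np.linalg.pinv(L)                # Moore-Penrose pseudoinverse
    vol = deg.sum()                       # volume of the graph
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)
    # note: the element-wise square root of this matrix is the Euclidean
    # commute-time distance referred to in the abstract above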
Abstract of query paper
Cite abstracts
29523
29522
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving the replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strengths and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
Our growing reliance on online services accessible on the Internet demands highly available systems that provide correct service without interruptions. Software bugs, operator mistakes, and malicious attacks are a major cause of service interruptions and they can cause arbitrary behavior, that is, Byzantine faults. This article describes a new replication algorithm, BFT, that can be used to build highly available systems that tolerate Byzantine faults. BFT can be used in practice to implement real services: it performs well, it is safe in asynchronous environments such as the Internet, it incorporates mechanisms to defend against Byzantine-faulty clients, and it recovers replicas proactively. The recovery mechanism allows the algorithm to tolerate any number of faults over the lifetime of the system provided fewer than 1/3 of the replicas become faulty within a small window of vulnerability. BFT has been implemented as a generic program library with a simple interface. We used the library to implement the first Byzantine-fault-tolerant NFS file system, BFS. The BFT library and BFS perform well because the library incorporates several important optimizations, the most important of which is the use of symmetric cryptography to authenticate messages. The performance results show that BFS performs 2% faster to 24% slower than production implementations of the NFS protocol that are not replicated. This supports our claim that the BFT library can be used to build practical systems that tolerate Byzantine faults. We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems. We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima.
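As a tiny worked example of the replication costs discussed above: PBFT-style agreement needs n = 3f + 1 replicas and quorums of 2f + 1 to tolerate f Byzantine faults, while the separated architecture described above needs only 2f + 1 execution replicas. The helper below just encodes that arithmetic; it is not part of any of the cited systems.

def bft_replica_counts(f):
    """Replica and quorum sizes to tolerate f Byzantine faults (standard bounds)."""
    agreement_replicas = 3 * f + 1   # needed to order requests
    quorum = 2 * f + 1               # responses that must match
    execution_replicas = 2 * f + 1   # suffices once agreement is separated
    return agreement_replicas, quorum, execution_replicas

# e.g. bft_replica_counts(1) == (4, 3, 3)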
Abstract of query paper
Cite abstracts
29524
29523
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving the replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strengths and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
Byzantine agreement requires a set of parties in a distributed system to agree on a value even if some parties are maliciously misbehaving. A new protocol for Byzantine agreement in a completely asynchronous network is presented that makes use of new cryptographic protocols, specifically protocols for threshold signatures and coin-tossing. These cryptographic protocols have practical and provably secure implementations in the random oracle model. In particular, a coin-tossing protocol based on the Diffie-Hellman problem is presented and analyzed. The resulting asynchronous Byzantine agreement protocol is both practical and theoretically optimal because it tolerates the maximum number of corrupted parties, runs in constant expected rounds, has message and communication complexity close to the optimum, and uses a trusted dealer only once in a setup phase, after which it can process a virtually unlimited number of transactions. The protocol is formulated as a transaction processing service in a cryptographic security model, which differs from the standard information-theoretic formalization and may be of independent interest.
Abstract of query paper
Cite abstracts
29525
29524
In this paper, we describe a novel proactive recovery scheme based on service migration for long-running Byzantine fault tolerant systems. Proactive recovery is an essential method for ensuring long term reliability of fault tolerant systems that are under continuous threats from malicious adversaries. The primary benefit of our proactive recovery scheme is a reduced vulnerability window. This is achieved by removing the time-consuming reboot step from the critical path of proactive recovery. Our migration-based proactive recovery is coordinated among the replicas, therefore, it can automatically adjust to different system loads and avoid the problem of excessive concurrent proactive recoveries that may occur in previous work with fixed watchdog timeouts. Moreover, the fast proactive recovery also significantly improves the system availability in the presence of faults.
We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t ≤ f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes.
Abstract of query paper
Cite abstracts
29526
29525
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Intrusion detection is the problem of identifying unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. The proliferation of heterogeneous computer networks provides additional implications for the intrusion detection problem. Namely, the increased connectivity of computer systems gives greater access to outsiders, and makes it easier for intruders to avoid detection. IDS’s are based on the belief that an intruder’s behavior will be noticeably different from that of a legitimate user. We are designing and implementing a prototype Distributed Intrusion Detection System (DIDS) that combines distributed monitoring and data reduction (through individual host and LAN monitors) with centralized data analysis (through the DIDS director) to monitor a heterogeneous network of computers. This approach is unique among current IDS’s. A main problem considered in this paper is the Network-user Identification problem, which is concerned with tracking a user moving across the network, possibly with a new user-id on each computer. Initial system prototypes have provided quite favorable results on this problem and the detection of attacks on a network. This paper provides an overview of the motivation behind DIDS, the system architecture and capabilities, and a discussion of the early prototype. The paper presents a new approach to representing and detecting computer penetrations in real time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule based expert system for detecting penetrations, called the state transition analysis tool (STAT). The design and implementation of a Unix specific prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools.
Abstract of query paper
Cite abstracts
29527
29526
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The Reliable Software Group at UCSB has developed a new approach to representing computer penetrations. This approach models penetrations as a series of state transitions described in terms of signature actions and state assertions. State transition representations are written to correspond to the states of an actual computer system, and they form the basis of a rule-based expert system for detecting penetrations. The system is called the State Transition Analysis Tool (STAT). On a network filesystem where the files are distributed on many hosts and where each host mounts directories from the others, actions on each host computer need to be audited. A natural extension of the STAT effort is to run the system on audit data collected by multiple hosts. This means an audit mechanism needs to be run on each host. However, running an implementation of STAT on each host would result in inefficient use of computer resources. In addition, the possibility of having cooperative attacks on different hosts would make detection difficult. Therefore, for the distributed version of STAT, called NSTAT, there is a single STAT process with a single, chronological audit trail. We are currently designing a client server approach to the problem. The client side has two threads: a producer that reads and filters the audit trail and a consumer that sends it to the server. The server side merges the filtered information from the various clients and performs the analysis. The paper presents a new approach to representing and detecting computer penetrations in real time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule based expert system for detecting penetrations, called the state transition analysis tool (STAT). The design and implementation of a Unix specific prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools.
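A toy sketch of the state-transition idea described above: a penetration signature is a sequence of predicates over audit events, and the matcher advances one state per satisfied predicate until the compromised state is reached. The event fields and the two-step example signature are invented for illustration and are not STAT's actual rule language.

def make_matcher(signature):
    """signature: list of predicates over audit events, one per transition."""
    state = 0
    def feed(event):
        nonlocal state
        if state < len(signature) and signature[state](event):
            state += 1                       # signature action observed: advance
        return state == len(signature)       # True once the compromised state is reached
    return feed

# Hypothetical two-step signature: copy a shell, then make the copy setuid.
signature = [
    lambda e: e.get("syscall") == "cp" and e.get("dst", "").endswith("sh"),
    lambda e: e.get("syscall") == "chmod" and "setuid" in e.get("mode", ""),
]
matcher = make_matcher(signature)
matcher({"syscall": "cp", "dst": "/tmp/sh"})                    # -> False (state 1)
matcher({"syscall": "chmod", "mode": "setuid", "path": "/tmp/sh"})  # -> True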
Abstract of query paper
Cite abstracts
29528
29527
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
AAFID is a distributed intrusion detection architecture and system, developed in CERIAS at Purdue University. AAFID was the first architecture that proposed the use of autonomous agents for doing intrusion detection. With its prototype implementation, it constitutes a useful framework for the research and testing of intrusion detection algorithms and mechanisms. We describe the AAFID architecture and the existing prototype, as well as some design and implementation experiences and future research issues. The EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment is a distributed scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade of intrusion detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable, surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors contribute to a streamlined event-analysis system that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability that can counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers' interface that enhances its ability to integrate with heterogeneous target hosts and provides a high degree of interoperability with third-party tool suites.
Abstract of query paper
Cite abstracts
29529
29528
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
This paper describes a real-time intrusion-detection expert system (IDES) that observes user behavior on a monitored computer system and adaptively learns what is normal for individual users, groups, remote hosts, and the overall system behavior. Observed behavior is flagged as a potential intrusion if it deviates significantly from the expected behavior or if it triggers a rule in the expert-system rule base. The EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment is a distributed scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade of intrusion detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable, surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors contribute to a streamlined event-analysis system that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability that can counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers' interface that enhances its ability to integrate with heterogeneous target hosts and provides a high degree of interoperability with third-party tool suites.
Abstract of query paper
Cite abstracts
29530
29529
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
AAFID is a distributed intrusion detection architecture and system, developed in CERIAS at Purdue University. AAFID was the first architecture that proposed the use of autonomous agents for doing intrusion detection. With its prototype implementation, it constitutes a useful framework for the research and testing of intrusion detection algorithms and mechanisms. We describe the AAFID architecture and the existing prototype, as well as some design and implementation experiences and future research issues.
Abstract of query paper
Cite abstracts
29531
29530
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The cooperation between the different entities of a decentralized prevention system can be solved efficiently using the publish subscribe communication model. Here, clients can share and correlate alert information about the systems they monitor. In this paper, we present the advantages and convenience in using this communication model for a general decentralized prevention framework. Additionally, we outline the design for a specific architecture, and evaluate our design using a freely available publish subscribe message oriented middleware. Distributed and coordinated attacks can disrupt electronic commerce applications and cause large revenue losses. The prevention of these attacks is not possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to react against the different actions of such an attack. We are currently working on a decentralized attack prevention framework that is targeted at detecting as well as reacting to these attacks. The cooperation between the different entities of this system has been efficiently solved through the use of a publish subscribe model. In this paper we first present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Then, we present the design for our specific approach. Finally, we shortly discuss our implementation based on a freely available publish subscribe message oriented middleware.
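A minimal in-process publish/subscribe sketch of the cooperation pattern described above: detectors publish alerts on topics and correlators subscribe to the topics they care about. It only stands in for the message-oriented middleware mentioned in the abstracts; the class, topic names, and alert fields are invented for illustration.

from collections import defaultdict

class AlertBus:
    """Tiny topic-based publish/subscribe bus for correlating alerts."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
    def publish(self, topic, alert):
        for callback in self.subscribers[topic]:
            callback(alert)

bus = AlertBus()
collected = []
bus.subscribe("ids.alert", collected.append)                      # a correlator
bus.publish("ids.alert", {"host": "web01", "sig": "port-scan"})   # a sensor reporting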
Abstract of query paper
Cite abstracts
29532
29531
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
A longstanding goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning. Such generalization can both reduce planning time and allow us to tackle larger domains than the ones tractable for direct planning. In this paper, we present an approach to the generalization problem based on a new framework of relational Markov Decision Processes (RMDPs). An RMDP can model a set of similar environments by representing objects as instances of different classes. In order to generalize plans to multiple environments, we define an approximate value function specified in terms of classes of objects and, in a multiagent setting, by classes of agents. This class-based approximate value function is optimized relative to a sampled subset of environments, and computed using an efficient linear programming method. We prove that a polynomial number of sampled environments suffices to achieve performance close to the performance achievable when optimizing over the entire space. Our experimental results show that our method generalizes plans successfully to new, significantly larger, environments, with minimal loss of performance relative to environment-specific planning. We demonstrate our approach on a real strategic computer war game. Markov decision processes (MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a decision-tree representation of rewards. Based on this representation, we develop versions of standard dynamic programming algorithms that directly manipulate decision-tree representations of policies and value functions. This generally obviates the need for state-by-state computation, aggregating states at the leaves of these trees and requiring computations only for each aggregate state. The key to these algorithms is a decision-theoretic generalization of classic regression analysis, in which we determine the features relevant to predicting expected value. We demonstrate the method empirically on several planning problems, showing significant savings for certain types of domains. We also identify certain classes of problems for which this technique fails to perform well and suggest extensions and related ideas that may prove useful in such circumstances. We also briefly describe an approximation scheme based on this approach. Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solutions. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming.
We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems. Learning to act optimally in a complex, dynamic and noisy environment is a hard problem. Various threads of research from reinforcement learning, animal conditioning, operations research, machine learning, statistics and optimal control are beginning to come together to offer solutions to this problem. I present a thesis in which novel algorithms are presented for learning the dynamics, learning the value function, and selecting good actions for Markov decision processes. The problems considered have high-dimensional factored state and action spaces, and are either fully or partially observable. The approach I take is to recognize similarities between the problems being solved in the reinforcement learning and graphical models literature, and to use and combine techniques from the two fields in novel ways. In particular I present two new algorithms. First, the DBN algorithm learns a compact representation of the core process of a partially observable MDP. Because inference in the DBN is intractable, I use approximate inference to maintain the belief state. A belief state action-value function is learned using reinforcement learning. I show that this DBN algorithm can solve POMDPs with very large state spaces and useful hidden state. Second, the PoE algorithm learns an approximation to value functions over large factored state-action spaces. The algorithm approximates values as (negative) free energies in a product of experts model. The model parameters can be learned efficiently because inference is tractable in a product of experts. I show that good actions can be found even in large factored action spaces by the use of brief Gibbs sampling. These two new algorithms take techniques from the machine learning community and apply them in new ways to reinforcement learning problems. Simulation results show that these new methods can be used to solve very large problems. The DBN method is used to solve a POMDP with a hidden state space and an observation space of size greater than 2^180. The DBN model of the core process has 2^32 states represented as 32 binary variables. The PoE method is used to find actions in action spaces of size 2^40. Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced-width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called “conditioning search” require only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset.
Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time. Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small state spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods. In the linear programming approach to approximate dynamic programming, one tries to solve a certain linear program--the ALP--that has a relatively small number K of variables but an intractable number M of constraints. In this paper, we study a scheme that samples and imposes a subset of m <
Abstract of query paper
Cite abstracts
29533
29532
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10^40 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.
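A generic NumPy sketch of approximate value iteration with a linear value function fitted on a uniform sample of states, which is the backbone shared by the query paper and the abstract above. It uses the plain least-squares projection, not the max-norm-safe projection or the factored closed-form machinery those papers contribute; all names and shapes are assumptions.

import numpy as np

def sampled_approx_value_iteration(P, R, Phi, gamma=0.95, n_samples=200, iters=100, seed=0):
    """Approximate value iteration with V ~= Phi @ w.
    P[a] is the (S, S) transition matrix of action a, R[a] its (S,) reward
    vector, and Phi an (S, K) basis matrix. Each sweep evaluates the Bellman
    backup on a uniform sample of states and refits w by least squares."""
    rng = np.random.default_rng(seed)
    S, K = Phi.shape
    w = np.zeros(K)
    for _ in range(iters):
        idx = rng.choice(S, size=min(n_samples, S), replace=False)   # sampled states
        V = Phi @ w
        backup = np.max([R[a][idx] + gamma * P[a][idx] @ V for a in range(len(P))], axis=0)
        w, *_ = np.linalg.lstsq(Phi[idx], backup, rcond=None)        # least-squares projection
    return w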
Abstract of query paper
Cite abstracts
29534
29533
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
Policies of Markov Decision Processes (MDPs) tell the next action to execute, given the current state and (possibly) the history of actions executed so far. Factorization is used when the number of states is exponentially large: both the MDP and the policy can be then represented using a compact form, for example employing circuits. We prove that there are MDPs whose optimal policies require exponential space even in factored form.
Abstract of query paper
Cite abstracts
29535
29534
This paper reviews the fully complete hypergames model of system @math , presented a decade ago in the author's thesis. Instantiating type variables is modelled by allowing "games as moves". The uniformity of a quantified type variable @math is modelled by copycat expansion: @math represents an unknown game, a kind of black box, so all the player can do is copy moves between a positive occurrence and a negative occurrence of @math . This presentation is based on slides for a talk entitled "Hypergame semantics: ten years later" given at "Games for Logic and Programming Languages", Seattle, August 2006.
An intensional model for the programming language PCF is described in which the types of PCF are interpreted by games and the terms by certain history-free strategies. This model is shown to capture definability in PCF. More precisely, every compact strategy in the model is definable in a certain simple extension of PCF. We then introduce an intrinsic preorder on strategies and show that it satisfies some striking properties such that the intrinsic preorder on function types coincides with the pointwise preorder. We then obtain an order-extensional fully abstract model of PCF by quotienting the intensional model by the intrinsic preorder. This is the first syntax-independent description of the fully abstract model for PCF. (Hyland and Ong have obtained very similar results by a somewhat different route, independently and at the same time.) We then consider the effective version of our model and prove a universality theorem: every element of the effective extensional model is definable in PCF. Equivalently, every recursive strategy is definable up to observational equivalence. We present a linear realizability technique for building Partial Equivalence Relations (PER) categories over Linear Combinatory Algebras. These PER categories turn out to be linear categories and to form an adjoint model with their co-Kleisli categories. We show that a special linear combinatory algebra of partial involutions, arising from Geometry of Interaction constructions, gives rise to a fully and faithfully complete model for ML polymorphic types of system F. We present a game semantics for Linear Logic, in which formulas denote games and proofs denote winning strategies. We show that our semantics yields a categorical model of Linear Logic and prove full completeness for Multiplicative Linear Logic with the MIX rule: every winning strategy is the denotation of a unique cut-free proof net. A key role is played by the notion of history-free strategy: strong connections are made between history-free strategies and the Geometry of Interaction. Our semantics incorporates a natural notion of polarity, leading to a refined treatment of the additives. We make comparisons with related work by Joyal, Blass, et al.
Abstract of query paper
Cite abstracts
29536
29535
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
A new method of wavelet packet analysis is presented where the wavelet packets are chosen from a manifold rather than a discrete grid. A generalisation of the wavelet transform is defined on this manifold by correlation of the wavelet packets with the signal or image, and a discrete subset of the wavelet packets is then chosen from local maxima in the modulus of this function as a form of signal or image feature extraction. We show that consideration of the geometry of the manifold aids the search for these local maxima. We also show that the resulting wavelet characterisation is the best local approximation to the signal or image and represents signal and image components with the greatest signal to noise ratio, and is thus useful to surveillance and detection.
Abstract of query paper
Cite abstracts
29537
29536
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
We introduce a modified matching pursuit algorithm, called fast ridge pursuit, to approximate N-dimensional signals with M Gaussian chirps at a computational cost O(MN) instead of the expected O(MN^2 log N). At each iteration of the pursuit, the best Gabor atom is first selected, and then, its scale and chirp rate are locally optimized so as to get a "good" chirp atom, i.e., one for which the correlation with the residual is locally maximized. A ridge theorem of the Gaussian chirp dictionary is proved, from which an estimate of the locally optimal scale and chirp is built. The procedure is restricted to a sub-dictionary of local maxima of the Gaussian Gabor dictionary to accelerate the pursuit further. The efficiency and speed of the method is demonstrated on a sound signal.
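For reference, plain matching pursuit over a finite (discretized) dictionary looks like the sketch below: pick the unit-norm atom most correlated with the residual, subtract its contribution, repeat. The local optimization of the selected atom over the continuous parametrization, which is the point of the papers above, is deliberately omitted; the function name is my own.

import numpy as np

def matching_pursuit(signal, dictionary, n_iters):
    """Greedy matching pursuit: dictionary columns are the atoms."""
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)  # unit-norm atoms
    residual = np.asarray(signal, dtype=float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iters):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))     # atom best correlated with the residual
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * D[:, k]        # remove its contribution
    return atoms, coeffs, residual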
Abstract of query paper
Cite abstracts
29538
29537
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement.
Abstract of query paper
Cite abstracts
29539
29538
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Part I. Introduction: 1. Introduction 2. Algorithms and graphs 3. The algebraic computation-tree model Part II. Spanners Based on Simplicial Cones: 4. Spanners based on the Θ-graph 5. Cones in higher dimensional space and Θ-graphs 6. Geometric analysis: the gap property 7. The gap-greedy algorithm 8. Enumerating distances using spanners of bounded degree Part III. The Well Separated Pair Decomposition and its Applications: 9. The well-separated pair decomposition 10. Applications of well-separated pairs 11. The Dumbbell theorem 12. Shortcutting trees and spanners with low spanner diameter 13. Approximating the stretch factor of Euclidean graphs Part IV. The Path Greedy Algorithm: 14. Geometric analysis: the leapfrog property 15. The path-greedy algorithm Part V. Further Results and Applications: 16. The distance range hierarchy 17. Approximating shortest paths in spanners 18. Fault-tolerant spanners 19. Designing approximation algorithms with spanners 20. Further results and open problems. We define the notion of a well-separated pair decomposition of points in d-dimensional space. We then develop efficient sequential and parallel algorithms for computing such a decomposition. We apply the resulting decomposition to the efficient computation of k-nearest neighbors and n-body potential fields.
Abstract of query paper
Cite abstracts
29540
29539
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Unit disk graphs are the intersection graphs of equal sized circles in the plane: they provide a graph-theoretic model for broadcast networks (cellular networks) and for some problems in computational geometry. We show that many standard graph theoretic problems remain NP-complete on unit disk graphs, including coloring, independent set, domination, independent domination, and connected domination; NP-completeness for the domination problem is shown to hold even for grid graphs, a subclass of unit disk graphs. In contrast, we give a polynomial time algorithm for finding cliques when the geometric representation (circles in the plane) is provided. We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds. In a geometric bottleneck shortest path problem, we are given a set S of n points in the plane, and want to answer queries of the following type: given two points p and q of S and a real number L, compute (or approximate) a shortest path between p and q in the subgraph of the complete graph on S consisting of all edges whose lengths are less than or equal to L. We present efficient algorithms for answering several query problems of this type. Our solutions are based on Euclidean minimum spanning trees, spanners, and the Delaunay triangulation. A result of independent interest is the following. For any two points p and q of S, there is a path between p and q in the Delaunay triangulation, whose length is less than or equal to 2π/(3 cos(π/6)) times the Euclidean distance |pq| between p and q, and all of whose edges have length at most |pq|. In this paper we introduce the minimum-order approach to frequency assignment and present a theory which relates this approach to the traditional one. This new approach is potentially more desirable than the traditional one. We model assignment problems as both frequency-distance constrained and frequency constrained optimization problems. The frequency constrained approach should be avoided if distance separation is employed to mitigate interference. A restricted class of graphs, called disk graphs, plays a central role in frequency-distance constrained problems. We introduce two generalizations of chromatic number and show that many frequency assignment problems are equivalent to generalized graph coloring problems. Using these equivalences and recent results concerning the complexity of graph coloring, we classify many frequency assignment problems according to the "execution time efficiency" of algorithms that may be devised for their solution. We discuss applications to important real world problems and identify areas for further work.
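A short sketch of the standard Euclidean Yao-graph construction referred to above: around each point, split the plane into k cones and keep a directed edge to the nearest neighbour in each cone. The query paper adapts this construction to additively weighted points; only the plain unweighted version is shown, with invented function and parameter names.

import math

def yao_graph(points, k=8):
    """Directed Yao graph on a list of (x, y) points with k cones per vertex."""
    edges = []
    for i, (px, py) in enumerate(points):
        nearest = [None] * k                      # (distance, index) per cone
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = min(int(angle / (2 * math.pi / k)), k - 1)
            d = math.hypot(qx - px, qy - py)
            if nearest[cone] is None or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        for entry in nearest:
            if entry is not None:
                edges.append((i, entry[1]))       # directed edge i -> nearest in cone
    return edges

# e.g. yao_graph([(0, 0), (1, 0), (0, 1), (2, 2)], k=6)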
Abstract of query paper
Cite abstracts
29541
29540
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
We address the following problem: Given a complete @math -partite geometric graph @math whose vertex set is a set of @math points in @math , compute a spanner of @math that has a “small” stretch factor and “few” edges. We present two algorithms for this problem. The first algorithm computes a @math -spanner of @math with @math edges in @math time. The second algorithm computes a @math -spanner of @math with @math edges in @math time. The latter result is optimal: We show that for any @math , spanners with @math edges and stretch factor less than 3 do not exist for all complete @math -partite geometric graphs.
Abstract of query paper
Cite abstracts
29542
29541
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Given a graph G, a subgraph G' is a t-spanner of G if, for every u, v ∈ V, the distance from u to v in G' is at most t times longer than the distance in G. In this paper we give a simple algorithm for constructing sparse spanners for arbitrary weighted graphs. We then apply this algorithm to obtain specific results for planar graphs and Euclidean graphs. We discuss the optimality of our results and present several nearly matching lower bounds. Given a connected geometric graph G, we consider the problem of constructing a t-spanner of G having the minimum number of edges. We prove that for every t with @math , there exists a connected geometric graph G with n vertices, such that every t-spanner of G contains Ω(n^{1+1/t}) edges. This bound almost matches the known upper bound, which states that every connected weighted graph with n vertices contains a t-spanner with O(t·n^{1+2/(t+1)}) edges. We also prove that the problem of deciding whether a given geometric graph contains a t-spanner with at most K edges is NP-hard. Previously, this NP-hardness result was only known for non-geometric graphs.
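The t-spanner definition quoted above lends itself to a direct check. The sketch below (our own toy code, with an assumed edge-list representation) computes all-pairs shortest paths in a weighted graph and in a candidate subgraph and reports the worst-case stretch; the subgraph is a t-spanner exactly when the reported value is at most t.

```python
# Illustrative only: Floyd-Warshall on small graphs, then the stretch factor
# max over pairs of d_{G'}(u,v) / d_G(u,v).
import math

def floyd_warshall(n, edges):
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:            # undirected, weighted edges
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def stretch_factor(n, graph_edges, spanner_edges):
    dg = floyd_warshall(n, graph_edges)
    ds = floyd_warshall(n, spanner_edges)
    return max(ds[i][j] / dg[i][j]
               for i in range(n) for j in range(n)
               if i != j and dg[i][j] > 0)

# Example: a 4-cycle with unit weights; dropping one edge leaves a 3-spanner.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(stretch_factor(4, edges, edges[:-1]))   # -> 3.0
```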
Abstract of query paper
Cite abstracts
29543
29542
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
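Gappa has its own input language and proof engine, so the snippet below is not Gappa code; it is only a toy interval-arithmetic sketch of the kind of range and rounding-error propagation the abstract refers to. The bound eps = 2^-53 models one round-to-nearest step in IEEE-754 double precision; everything else (function names, the example expression) is an assumption of ours.

```python
# Toy interval arithmetic: propagate ranges through an expression and enclose
# the effect of a single rounding step.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def round_enclosure(interval, eps=2**-53):
    """Enclose round(x) for x in `interval`, assuming |round(x) - x| <= eps * |x|."""
    lo, hi = interval
    m = max(abs(lo), abs(hi))
    return (lo - eps * m, hi + eps * m)

# x in [1, 2], y in [3, 4]: bound the rounded value of x*y + x.
x, y = (1.0, 2.0), (3.0, 4.0)
print(round_enclosure(iadd(imul(x, y), x)))
```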
The crlibm project aims at developing a portable, proven, correctly rounded, and efficient mathematical library (libm) for double precision. Current libm implementations do not always return the floating-point number that is closest to the exact mathematical result. As a consequence, different libm implementations will return different results for the same input, which prevents full portability of floating-point applications. In addition, few libraries support anything but the round-to-nearest mode of the IEEE754 IEC 60559 standard for floating-point arithmetic (hereafter usually referred to as the IEEE-754 standard). crlibm provides the four rounding modes: to nearest, to +∞, to −∞ and to zero.
Abstract of query paper
Cite abstracts
29544
29543
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
Since they often embody compact but mathematically sophisticated algorithms, operations for computing the common transcendental functions in floating point arithmetic seem good targets for formal verification using a mechanical theorem prover. We discuss some of the general issues that arise in verifications of this class, and then present a machine-checked verification of an algorithm for computing the exponential function in IEEE-754 standard binary floating point arithmetic. We confirm (indeed strengthen) the main result of a previously published error analysis, though we uncover a minor error in the hand proof and are forced to confront several subtle issues that might easily be overlooked informally. We discuss the formal verification of some low-level mathematical software for the Intel® Itanium® architecture. A number of important algorithms have been proven correct using the HOL Light theorem prover. After briefly surveying some of our formal verification work, we discuss in more detail the verification of a square root algorithm, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications. We have formally verified a number of algorithms for evaluating transcendental functions in double-extended precision floating point arithmetic in the Intel® IA-64 architecture. These algorithms are used in the Itanium® processor to provide compatibility with IA-32 (x86) hardware transcendentals, and similar ones are used in mathematical software libraries. In this paper we describe in some depth the formal verification of the sin and cos functions, including the initial range reduction step. This illustrates the different facets of verification in this field, covering both pure mathematics and the detailed analysis of floating point rounding. filib++ is an extension of the interval library filib originally developed at the University of Karlsruhe. The most important aim of filib is the fast computation of guaranteed bounds for interval versions of a comprehensive set of elementary functions. filib++ extends this library in two aspects. First, it adds a second mode, the extended mode, that extends the exception-free computation mode (using special values to represent infinities and NaNs known from the IEEE floating-point standard 754) to intervals. In this mode, the so-called containment sets are computed to enclose the topological closure of a range of a function over an interval. Second, our new design uses templates and traits classes to obtain an efficient, easily extendable, and portable C++ library.
Abstract of query paper
Cite abstracts
29545
29544
A fully-automated algorithm is developed that can show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
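For orientation, the following sketch shows the first-order size-change termination test that the abstracts here extend to the untyped lambda-calculus and to higher-order programs: compose size-change graphs to a closure and require a strictly decreasing self-arc in every idempotent self-loop. The encoding of graphs as arc sets and the Ackermann-style example are our own simplifications, not the authors' implementation.

```python
# A size-change graph is a set of arcs (i, r, j): parameter i of the caller is
# r-related ('>' strictly decreasing, '>=' non-increasing) to parameter j of the callee.
def compose(g1, g2):
    out = set()
    for (i, r1, k1) in g1:
        for (k2, r2, j) in g2:
            if k1 == k2:
                out.add((i, '>' if '>' in (r1, r2) else '>=', j))
    return frozenset(out)

def sct_terminates(call_graphs):
    """call_graphs: dict (caller, callee) -> iterable of size-change graphs."""
    closure = {(f, g, frozenset(G)) for (f, g), Gs in call_graphs.items() for G in Gs}
    changed = True
    while changed:
        changed = False
        for (f, g, G) in list(closure):
            for (g2, h, H) in list(closure):
                if g == g2:
                    item = (f, h, compose(G, H))
                    if item not in closure:
                        closure.add(item)
                        changed = True
    # Every idempotent self-loop graph must contain a strictly decreasing self-arc.
    return all(any(i == j and r == '>' for (i, r, j) in G)
               for (f, g, G) in closure
               if f == g and compose(G, G) == G)

# Ackermann-style calls: a(m, n) calls a(m-1, _) and a(m, n-1).
calls = {('a', 'a'): [{(0, '>', 0)}, {(0, '>=', 0), (1, '>', 1)}]}
print(sct_terminates(calls))   # -> True
```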
Traditional flow analysis techniques, such as the ones typically employed by optimising Fortran compilers, do not work for Scheme-like languages. This paper presents a flow analysis technique --- control flow analysis --- which is applicable to Scheme-like languages. As a demonstration application, the information gathered by control flow analysis is used to perform a traditional flow analysis problem, induction variable elimination. Extensions and limitations are discussed. The techniques presented in this paper are backed up by working code. They are applicable not only to Scheme, but also to related languages, such as Common Lisp and ML. We describe a method to analyze the data and control flow during mechanical evaluation of lambda expressions. The method produces a finite approximate description of the set of all states entered by a call-by-value lambda-calculus interpreter; a similar approach can easily be seen to work for call-by-name. A proof is given that the approximation is "safe", i.e. that it includes descriptions of every intermediate lambda-expression which occurs in the evaluation. From a programming languages point of view the method extends previously developed interprocedural analysis methods to include both local and global variables, call-by-name or call-by-value parameter transmission and the use of procedures both as arguments to other procedures and as the results returned by them.
Abstract of query paper
Cite abstracts
29546
29545
A fully-automated algorithm is developed that can show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
Size-change termination (SCT) automatically identifies termination of first-order functional programs. The SCT principle: a program terminates if every infinite control flow sequence would cause an infinite descent in a well-founded data value (POPL 2001). More recent work (RTA 2004) developed a termination analysis of the pure untyped λ-calculus using a similar approach, but an entirely different notion of size was needed to compare higher-order values. Again this is a powerful analysis, even proving termination of certain λ-expressions containing the fixpoint combinator Y. However the language analysed is tiny, not even containing constants. These techniques are unified and extended significantly, to yield a termination analyser for higher-order, call-by-value programs as in ML’s purely functional core or similar functional languages. Our analyser has been proven correct, and implemented for a substantial subset of OCaml. There are many powerful techniques for automated termination analysis of term rewriting. However, up to now they have hardly been used for real programming languages. We present a new approach which permits the application of existing techniques from term rewriting in order to prove termination of programs in the functional language Haskell. In particular, we show how termination techniques for ordinary rewriting can be used to handle those features of Haskell which are missing in term rewriting (e.g., lazy evaluation, polymorphic types, and higher-order functions). We implemented our results in the termination prover AProVE and successfully evaluated them on existing Haskell-libraries.
Abstract of query paper
Cite abstracts
29547
29546
A fully-automated algorithm is developed that can show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
We present techniques to prove termination and innermost termination of term rewriting systems automatically. In contrast to previous approaches, we do not compare left- and right-hand sides of rewrite rules, but introduce the notion of dependency pairs to compare left-hand sides with special subterms of the right-hand sides. This results in a technique which allows to apply existing methods for automated termination proofs to term rewriting systems where they failed up to now. In particular, there are numerous term rewriting systems where a direct termination proof with simplification orderings is not possible, but in combination with our technique, well-known simplification orderings (such as the recursive path ordering, polynomial orderings, or the Knuth–Bendix ordering) can now be used to prove termination automatically. Unlike previous methods, our technique for proving innermost termination automatically can also be applied to prove innermost termination of term rewriting systems that are not terminating. Moreover, as innermost termination implies termination for certain classes of term rewriting systems, this technique can also be used for termination proofs of such systems. This paper expands the termination proof techniques based on the lexicographic path ordering to term rewriting systems over varyadic terms, in which each function symbol may have more than one arity. By removing the deletion property from the usual notion of the embedding relation, we adapt Kruskal’s tree theorem to the lexicographic comparison over varyadic terms. The result presented is that finite term rewriting systems over varyadic terms are terminating whenever they are compatible with the lexicographic path order. The ordering is simple, but powerful enough to handle most of higher-order rewriting systems without λ-abstraction, expressed as S-expression rewriting systems. The dependency pair technique is a powerful modular method for automated termination proofs of term rewrite systems (TRSs). We present two important extensions of this technique: First, we show how to prove termination of higher-order functions using dependency pairs. To this end, the dependency pair technique is extended to handle (untyped) applicative TRSs. Second, we introduce a method to prove non-termination with dependency pairs, while up to now dependency pairs were only used to verify termination. Our results lead to a framework for combining termination and non-termination techniques for first- and higher-order functions in a very flexible way. We implemented and evaluated our results in the automated termination prover AProVE. There are many powerful techniques for automated termination analysis of term rewriting. However, up to now they have hardly been used for real programming languages. We present a new approach which permits the application of existing techniques from term rewriting in order to prove termination of programs in the functional language Haskell. In particular, we show how termination techniques for ordinary rewriting can be used to handle those features of Haskell which are missing in term rewriting (e.g., lazy evaluation, polymorphic types, and higher-order functions). We implemented our results in the termination prover AProVE and successfully evaluated them on existing Haskell-libraries.
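As a small concrete companion to the ordering-based techniques mentioned above, here is a rough sketch of the standard lexicographic path ordering on fixed-arity first-order terms; the term encoding, precedence, and example rule are ours, and the code ignores refinements (argument permutations, status functions) used by real termination provers.

```python
# Terms are tuples ('f', t1, ..., tn); variables are plain strings.
def occurs(x, s):
    return s == x if isinstance(s, str) else any(occurs(x, si) for si in s[1:])

def lpo_gt(s, t, prec):
    """s >_lpo t under the precedence `prec` (dict: symbol -> int)."""
    if isinstance(s, str):                       # a variable is never greater
        return False
    if isinstance(t, str):                       # s > variable iff it occurs in s
        return occurs(t, s)
    f, ss = s[0], s[1:]
    g, ts = t[0], t[1:]
    if any(si == t or lpo_gt(si, t, prec) for si in ss):   # subterm case
        return True
    if prec[f] > prec[g]:                                   # bigger head symbol
        return all(lpo_gt(s, ti, prec) for ti in ts)
    if f == g:                                              # lexicographic case
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in ts)
        return False
    return False

# ack(s(m), 0) > ack(m, s(0)) with precedence ack > s > 0.
prec = {'ack': 2, 's': 1, '0': 0}
lhs = ('ack', ('s', 'm'), ('0',))
rhs = ('ack', 'm', ('s', ('0',)))
print(lpo_gt(lhs, rhs, prec))   # -> True
```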
Abstract of query paper
Cite abstracts
29548
29547
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
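The multi-grain models described above build on standard LDA; purely for background, the sketch below is a minimal collapsed Gibbs sampler for plain LDA on a toy corpus (it is not MG-LDA and omits the local/global topic machinery of the paper). Hyperparameters and the tiny example corpus are arbitrary choices of ours.

```python
# Minimal collapsed Gibbs sampler for LDA; docs are lists of integer word ids.
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, vocab_size, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]   # topic of each token
    ndk, nkw, nk = defaultdict(int), defaultdict(int), defaultdict(int)
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                weights = [(ndk[d, t] + alpha) * (nkw[t, w] + beta) / (nk[t] + beta * vocab_size)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw, ndk

# Tiny corpus: word ids 0-3 roughly "food", 4-7 roughly "service".
docs = [[0, 1, 2, 3, 0, 1], [4, 5, 6, 7, 4, 5], [0, 2, 4, 6, 1, 5]]
nkw, _ = lda_gibbs(docs, n_topics=2, vocab_size=8)
print({k: [w for (t, w), c in nkw.items() if t == k and c > 0] for k in range(2)})
```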
We introduce the idea of a sentiment summary, a single passage from a document that captures an author's opinion about his or her subject. Using supervised data from the Rotten Tomatoes website, we examine features that appear to be helpful in locating a good summary sentence. These features are used to fit Naive Bayes and regularized logistic regression models for summary extraction.
Abstract of query paper
Cite abstracts
29549
29548
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products. Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds. This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. Our experimental results show that these techniques are highly effective.
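The systems above combine POS tagging, association mining, and relaxation labeling; as a deliberately simplistic stand-in, the sketch below only ranks frequent unigram and bigram candidates from a few toy review sentences. The stopword list, threshold, and example data are ours.

```python
# Frequency-based candidate feature terms from review sentences (toy version).
from collections import Counter
import re

STOP = {"the", "a", "is", "was", "and", "are", "very", "i", "this", "it", "of", "my", "could", "be"}

def candidate_features(reviews, min_count=2):
    counts = Counter()
    for sentence in reviews:
        words = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOP]
        counts.update(words)                                        # unigram candidates
        counts.update(" ".join(p) for p in zip(words, words[1:]))   # bigram candidates
    return [(term, c) for term, c in counts.most_common() if c >= min_count]

reviews = [
    "The battery life is great",
    "Battery life could be better",
    "I love the picture quality",
    "Picture quality and battery life are very good",
]
print(candidate_features(reviews))
```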
Abstract of query paper
Cite abstracts
29550
29549
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. In many decision-making scenarios, people can benefit from knowing what other people's opinions are. As more and more evaluative documents are posted on the Web, summarizing these useful resources becomes a critical task for many organizations and individuals. This paper presents a framework for summarizing a corpus of evaluative documents about a single entity by a natural language summary. We propose two summarizers: an extractive summarizer and an abstractive one. As an additional contribution, we show how our abstractive summarizer can be modified to generate summaries tailored to a model of the user preferences that is solidly grounded in decision theory and can be effectively elicited from users. We have tested our framework in three user studies. In the first one, we compared the two summarizers. They performed equally well relative to each other quantitatively, while significantly outperforming a baseline standard approach to multidocument summarization. Trends in the results as well as qualitative comments from participants suggest that the summarizers have different strengths and weaknesses. After this initial user study, we realized that the diversity of opinions expressed in the corpus (i.e., its controversiality) might play a critical role in comparing abstraction versus extraction. To clearly pinpoint the role of controversiality, we ran a second user study in which we controlled for the degree of controversiality of the corpora that were summarized for the participants. The outcome of this study indicates that for evaluative text abstraction tends to be more effective than extraction, particularly when the corpus is controversial. In the third user study we assessed the effectiveness of our user tailoring strategy. 
The results of this experiment confirm that user tailored summaries are more informative than untailored ones. Capturing knowledge from free-form evaluative texts about an entity is a challenging task. New techniques of feature extraction, polarity determination and strength evaluation have been proposed. Feature extraction is particularly important to the task as it provides the underpinnings of the extracted knowledge. The work in this paper introduces an improved method for feature extraction that draws on an existing unsupervised method. By including user-specific prior knowledge of the evaluated entity, we turn the task of feature extraction into one of term similarity by mapping crude (learned) features into a user-defined taxonomy of the entity's features. Results show promise both in terms of the accuracy of the mapping as well as the reduction in the semantic redundancy of crude features. It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds. This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. Our experimental results show that these techniques are highly effective. We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface.
Abstract of query paper
Cite abstracts
29551
29550
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.
Abstract of query paper
Cite abstracts
29552
29551
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products. Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. Capturing knowledge from free-form evaluative texts about an entity is a challenging task. New techniques of feature extraction, polarity determination and strength evaluation have been proposed. Feature extraction is particularly important to the task as it provides the underpinnings of the extracted knowledge. The work in this paper introduces an improved method for feature extraction that draws on an existing unsupervised method. By including user-specific prior knowledge of the evaluated entity, we turn the task of feature extraction into one of term similarity by mapping crude (learned) features into a user-defined taxonomy of the entity's features. Results show promise both in terms of the accuracy of the mapping as well as the reduction in the semantic redundancy of crude features. It is a common practice that merchants selling products on the Web ask their customers to review the products and associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds.
This makes it difficult for a potential customer to read them in order to make a decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we are only interested in the specific features of the product that customers have opinions on and also whether the opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of the original sentences from the reviews to capture their main points as in the classic text summarization. In this paper, we only focus on mining opinion product features that the reviewers have commented on. A number of techniques are presented to mine such features. Our experimental results show that these techniques are highly effective. With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.
Abstract of query paper
Cite abstracts
29553
29552
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Abstract : Most of the popular topic models (such as Latent Dirichlet Allocation) have an underlying assumption: bag of words. However, text is indeed a sequence of discrete word tokens, and without considering the order of words (in another word, the nearby context where a word is located), the accurate meaning of language cannot be exactly captured by word co-occurrences only. In this sense, collocations of words (phrases) have to be considered. However, like individual words, phrases sometimes show polysemy as well depending on the context. More noticeably, a composition of two (or more) words is a phrase in some contexts, but not in other contexts. In this paper, the authors propose a new probabilistic generative model that automatically determines unigram words and phrases based on context and simultaneously associates them with a mixture of topics. They present very interesting results on large text corpora. We present a method for unsupervised topic modelling which adapts methods used in document classification (, 2003; Griffiths and Steyvers, 2004) to unsegmented multi-party discourse transcripts. We show how Bayesian inference in this generative model can be used to simultaneously address the problems of topic segmentation and topic identification: automatically segmenting multi-party meetings into topically coherent segments with performance which compares well with previous unsupervised segmentation-only methods (, 2003) while simultaneously extracting topics which rate highly when assessed for coherence by human judges. We also show that this method appears robust in the face of off-topic dialogue and speech recognition errors. Algorithms such as Latent Dirichlet Allocation (LDA) have achieved significant progress in modeling word document relationships. These algorithms assume each word in the document was generated by a hidden topic and explicitly model the word distribution of each topic as well as the prior distribution over topics in the document. Given these parameters, the topics of all words in the same document are assumed to be independent. In this paper, we propose modeling the topics of words in the document as a Markov chain. Specifically, we assume that all words in the same sentence have the same topic, and successive sentences are more likely to have the same topics. Since the topics are hidden, this leads to using the well-known tools of Hidden Markov Models for learning and inference. We show that incorporating this dependency allows us to learn better topics and to disambiguate words that can belong to different topics. Quantitatively, we show that we obtain better perplexity in modeling documents with only a modest increase in learning and inference complexity. Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the "bag-of-words" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. 
Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful. Statistical approaches to language learning typically focus on either short-range syntactic dependencies or long-range semantic dependencies between words. We present a generative model that uses both kinds of dependencies, and can be used to simultaneously find syntactic classes and semantic topics despite having no representation of syntax or semantics beyond statistical dependency. This model is competitive on tasks like part-of-speech tagging and document classification with models that exclusively use short- and long-range dependencies respectively. We present a novel probabilistic method for topic segmentation on unstructured text. One previous approach to this problem utilizes the hidden Markov model (HMM) method for probabilistically modeling sequence data [7]. The HMM treats a document as mutually independent sets of words generated by a latent topic variable in a time series. We extend this idea by embedding Hofmann's aspect model for text [5] into the segmenting HMM to form an aspect HMM (AHMM). In doing so, we provide an intuitive topical dependency between words and a cohesive segmentation model. We apply this method to segment unbroken streams of New York Times articles as well as noisy transcripts of radio programs on SpeechBot , an online audio archive indexed by an automatic speech recognition engine. We provide experimental comparisons which show that the AHMM outperforms the HMM for this task.
Abstract of query paper
Cite abstracts
29554
29553
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.
Given a collection F of subsets of S = {1, …, n}, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 - o(1)) ln n), and previous results of Lund and Yannakakis, that showed hardness of approximation within a ratio of (log_2 n)/2 ≈ 0.72 ln n. For max k-cover, we show an approximation threshold of 1 - 1/e (up to low-order terms), under the assumption that P ≠ NP.
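The (1 - o(1)) ln n threshold above is matched by the classical greedy algorithm, and the same greedy rule with a budget of k picks gives the 1 - 1/e guarantee for max k-cover. A short sketch (our own code) of both:

```python
# Greedy set cover / max k-cover: repeatedly pick the subset with the largest
# marginal coverage, stopping when everything is covered or the budget k is spent.
def greedy_set_cover(universe, subsets, k=None):
    uncovered = set(universe)
    chosen = []
    while uncovered and (k is None or len(chosen) < k):
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            break                      # no remaining subset helps (infeasible instance)
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen, uncovered

U = range(1, 10)
S = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}, {1, 5, 9}]
print(greedy_set_cover(U, S))          # full cover
print(greedy_set_cover(U, S, k=2))     # greedy max 2-cover
```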
Abstract of query paper
Cite abstracts
29555
29554
We show that disjointness requires randomized communication Omega(n^{1/(k+1)} / 2^{2^k}) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n)/(k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k = log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
The "Number on the Forehead" model of multi-party communication complexity was first suggested by Chandra, Furst and Lipton. The best known lower bound, for an explicit function (in this model), is a lower bound of ( (n 2^k) ), where n is the size of the input of each player, and k is the number of players (first proved by Babai, Nisan and Szegedy). This lower bound has many applications in complexity theory. Proving a better lower bound, for an explicit function, is a major open problem. Based on the result of BNS, Chung gave a sufficient criterion for a function to have large multi-party communication complexity (up to ( (n 2^k) )). In this paper, we use some of the ideas of BNS and Chung, together with some new ideas, resulting in a new (easier and more modular) proof for the results of BNS and Chung. This gives a simpler way to prove lower bounds for the multi-party communication complexity of a function.
Abstract of query paper
Cite abstracts
29556
29555
We show that disjointness requires randomized communication Omega(n^{1/(k+1)} / 2^{2^k}) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n)/(k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k = log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
We exhibit an explicit function f : {0, 1}^n → {0, 1} that can be computed by a nondeterministic number-on-forehead protocol communicating O(log n) bits, but that requires n^Ω(1) bits of communication for randomized number-on-forehead protocols with k = Δ·log n players, for any fixed Δ. We also show that for any k = A·log log n the above function f is computable by a small circuit whose depth is constant whenever A is a (possibly large) constant. Recent results again give such functions, but only when the number of players is k < log log n. We prove n^Ω(1) lower bounds on the multiparty communication complexity of AC^0 functions in the number-on-forehead (NOF) model for up to Θ(log n) players. These are the first lower bounds for any AC^0 function for ω(log log n) players. In particular we show that there are families of depth 3 read-once AC^0 formulas having k-player randomized multiparty NOF communication complexity n^Ω(1)/2^O(k). We show similar lower bounds for depth 4 read-once AC^0 formulas that have nondeterministic communication complexity O(log^2 n), yielding exponential separations between k-party nondeterministic and randomized communication complexity for AC^0 functions. As a consequence of the latter bound, we obtain an n^Ω(1/k)/2^O(k) lower bound on the k-party NOF communication complexity of set disjointness. This is non-trivial for up to Θ(√(log n)) players, which is significantly larger than the up to Θ(log log n) players allowed in the best previous lower bounds for multiparty set disjointness given by Lee and Shraibman [LS08] and Chattopadhyay and Ada [CA08] (though our complexity bounds themselves are not as strong as those in [LS08, CA08] for o(log log n) players). We derive these results by extending the k-party generalization in [CA08, LS08] of the pattern matrix method of Sherstov [She07, She08]. Using this technique, we derive a new sufficient criterion for strong communication complexity lower bounds based on functions having many diverse subfunctions that do not have good low-degree polynomial approximations. This criterion guarantees that such functions have orthogonalizing distributions that are “max-smooth” as opposed to the “min-smooth” orthogonalizing distributions used by Razborov and Sherstov [RS08] to analyze the sign-rank of AC^0.
Abstract of query paper
Cite abstracts
29557
29556
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+√2 under the L1-norm for 0-1 valued matrices, and of 2 under the L2-norm for real valued matrices.
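A sketch of the overall scheme described above, under our own assumptions: use plain k-means (from scikit-learn and NumPy, assumed available) as the one-way clustering algorithm on rows and on columns, and read each (row-cluster, column-cluster) pair off as a bicluster summarized by the mean of its submatrix. The paper itself does not prescribe which one-way algorithm is used.

```python
# One-way row and column clustering combined into biclusters (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def one_way_biclustering(M, n_row_clusters, n_col_clusters, seed=0):
    rows = KMeans(n_clusters=n_row_clusters, random_state=seed, n_init=10).fit_predict(M)
    cols = KMeans(n_clusters=n_col_clusters, random_state=seed, n_init=10).fit_predict(M.T)
    means = np.zeros((n_row_clusters, n_col_clusters))
    for r in range(n_row_clusters):
        for c in range(n_col_clusters):
            means[r, c] = M[np.ix_(rows == r, cols == c)].mean()
    return rows, cols, means

M = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
print(one_way_biclustering(M, 2, 2)[2])
```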
Abstract Clustering algorithms are now in widespread use for sorting heterogeneous data into homogeneous blocks. If the data consist of a number of variables taking values over a number of cases, these algorithms may be used either to construct clusters of variables (using, say, correlation as a measure of distance between variables) or clusters of cases. This article presents a model, and a technique, for clustering cases and variables simultaneously. The principal advantage in this approach is the direct interpretation of the clusters on the data. A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix has been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred in the literature as coclustering and direct clustering, among others names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering, and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications.
Abstract of query paper
Cite abstracts
29558
29557
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+√2 under the L1-norm for 0-1 valued matrices, and of 2 under the L2-norm for real valued matrices.
We present here 2-approximation algorithms for several node deletion and edge deletion biclique problems and for an edge deletion clique problem. The biclique problem is to find a node induced subgraph that is bipartite and complete. The objective is to minimize the total weight of nodes or edges deleted so that the remaining subgraph is bipartite complete. Several variants of the biclique problem are studied here, where the problem is defined on bipartite graphs or on general graphs with or without the requirement that each side of the bipartition forms an independent set. The maximum clique problem is formulated as maximizing the number (or weight) of edges in the complete subgraph. A 2-approximation algorithm is given for the minimum edge deletion version of this problem. The approximation algorithms given here are derived as a special case of an approximation technique devised for a class of formulations introduced by Hochbaum. All approximation algorithms described (and the polynomial algorithms for two versions of the node biclique problem) involve calls to a minimum cut algorithm. One conclusion of our analysis of the NP-hard problems here is that all of these problems are MAX SNP-hard and at least as difficult to approximate as the vertex cover problem. Another conclusion is that the problem of finding the minimum node cut-set, the removal of which leaves two cliques in the graph, is NP-hard and 2-approximable. In graphs of bounded arboricity, the total complexity of all maximal complete bipartite subgraphs is O(n). We describe a linear time algorithm to list such subgraphs. The arboricity bound is necessary: for any constant k and any n there exists an n-vertex graph with O(n) edges and (n/log n)^k maximal complete bipartite subgraphs K_{k,l}. We describe a new algorithm for generating all maximal bicliques (i.e. complete bipartite, not necessarily induced subgraphs) of a graph. The algorithm is inspired by, and is quite similar to, the consensus method used in propositional logic. We show that some variants of the algorithm are totally polynomial, and even incrementally polynomial. The total complexity of the most efficient variant of the algorithms presented here is polynomial in the input size, and only linear in the output size. Computational experiments demonstrate its high efficiency on randomly generated graphs with up to 2000 vertices and 20,000 edges.
Abstract of query paper
Cite abstracts
29559
29558
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
We introduce Mercury, a new purely declarative logic programming language designed to provide the support that groups of application programmers need when building large programs. Mercury's strong type, mode, and determinism systems improve program reliability by catching many errors at compile time. We present a new and relatively simple execution model that takes advantage of the information these systems provide, yielding very efficient code. The Mercury compiler uses this execution model to generate portable C code. Our benchmarking shows that the code generated by our implementation is significantly faster than the code generated by mature optimizing implementations of other logic programming languages.
Abstract of query paper
Cite abstracts
29560
29559
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization. Information integration systems, also known as mediators, information brokers, or information gathering agents, provide uniform user interfaces to varieties of different information sources. With corporate databases getting connected by intranets, and vast amounts of information becoming available over the Internet, the need for information integration systems is increasing steadily. Our work focuses on query planning in such systems. Query planning is the task of transforming a user query, represented in the user's interface language and vocabulary, into queries that can be executed by the information sources. Every information source might require a different query language and might use different vocabularies. The resulting answers of the information sources need to be translated and combined before the final answer can be reported to the user. We show that query plans with a fixed number of database operations are insufficient to extract all information from the sources, if functional dependencies or limitations on binding patterns are present. Dependencies complicate query planning because they allow query plans that would otherwise be invalid. We present an algorithm that constructs query plans that are guaranteed to extract all available information in these more general cases. This algorithm is also able to handle datalog user queries. We examine further extensions of the languages allowed for user queries and for describing information sources: disjunction, recursion and negation in source descriptions, negation and inequality in user queries. For these more expressive cases, we determine the data complexity required of languages able to represent "best possible" query plans.
Abstract of query paper
Cite abstracts
29561
29560
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
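As a concrete reference point for the message-passing scheme discussed above, here is a minimal sketch of standard BP for proper q-colouring (my own toy illustration on an arbitrary graph, not the analysis of the paper): the message a vertex u sends to a neighbour v is a distribution over colours, updated as m_{u->v}(c) proportional to the product over w in N(u)\{v} of (1 - m_{w->u}(c)).

import numpy as np

def bp_coloring(edges, n, q, iters=200, seed=0):
    """Loopy belief propagation for proper q-colouring of an undirected graph.
    The directed message m[(u, v)] is u's colour distribution as seen by v.
    Returns per-vertex beliefs (approximate marginals over colours)."""
    rng = np.random.default_rng(seed)
    neighbours = {v: set() for v in range(n)}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    # random (normalised) initial messages, one per directed edge
    msgs = {}
    for a, b in edges:
        for u, v in ((a, b), (b, a)):
            m = rng.random(q) + 0.1
            msgs[(u, v)] = m / m.sum()
    for _ in range(iters):
        new = {}
        for (u, v) in msgs:
            prod = np.ones(q)
            for w in neighbours[u] - {v}:
                prod *= 1.0 - msgs[(w, u)]
            if prod.sum() == 0.0:          # contradiction: fall back to uniform
                prod = np.ones(q)
            new[(u, v)] = prod / prod.sum()
        msgs = new
    beliefs = np.ones((n, q))
    for v in range(n):
        for u in neighbours[v]:
            beliefs[v] *= 1.0 - msgs[(u, v)]
        beliefs[v] /= beliefs[v].sum()
    return beliefs

if __name__ == "__main__":
    # a 5-cycle is 3-colourable; it has cycles, so BP here is only a heuristic
    cycle = [(i, (i + 1) % 5) for i in range(5)]
    print(bp_coloring(cycle, n=5, q=3).round(3))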
Let G_{3n,p,3} be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G_{3n,p,3} with high probability, whenever p @math c/n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n)/n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.
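The algorithm sketched in this abstract begins with a spectral step; the following toy sketch (my own construction, omitting the combinatorial clean-up phases the full algorithm relies on) recovers most of a planted 3-colouring by clustering the embedding given by the eigenvectors of the two most negative eigenvalues of the adjacency matrix.

import numpy as np

def planted_3colorable(n_per_class=60, p=0.5, seed=0):
    """Random graph on three equal colour classes; edges appear only between
    different classes, each independently with probability p."""
    rng = np.random.default_rng(seed)
    n = 3 * n_per_class
    planted = np.repeat(np.arange(3), n_per_class)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if planted[i] != planted[j] and rng.random() < p:
                adj[i, j] = adj[j, i] = 1.0
    return adj, planted

def spectral_colour(adj, q=3):
    """Embed each vertex with the eigenvectors of the q-1 most negative
    eigenvalues of the adjacency matrix, then group the embedded vertices
    into q clusters with a tiny k-means."""
    vals, vecs = np.linalg.eigh(adj)       # eigenvalues in ascending order
    embedding = vecs[:, : q - 1]           # the q-1 most negative directions
    rng = np.random.default_rng(1)
    centers = embedding[rng.choice(len(embedding), q, replace=False)]
    labels = np.zeros(len(embedding), dtype=int)
    for _ in range(50):
        dists = ((embedding[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(q):
            if np.any(labels == c):
                centers[c] = embedding[labels == c].mean(axis=0)
    return labels

if __name__ == "__main__":
    adj, planted = planted_3colorable()
    guess = spectral_colour(adj)
    n = len(adj)
    # count monochromatic edges left by the recovered colouring
    bad = sum(adj[i, j] > 0 and guess[i] == guess[j]
              for i in range(n) for j in range(i + 1, n))
    print("monochromatic edges:", int(bad), "of", int(adj.sum() // 2))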
Abstract of query paper
Cite abstracts
29562
29561
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
An instance of a random constraint satisfaction problem defines a random subset 𝒮 (the set of solutions) of a large product space X^N (the set of assignments). We consider two prototypical problem ensembles (random k-satisfiability and q-coloring of random regular graphs) and study the uniform measure with support on 𝒮. As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states (“clusters”) and subsequently condensates over the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp. We determine their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performances of many search/sampling algorithms. Empirical evidence suggests that local Monte Carlo Markov chain strategies are effective up to the clustering phase transition and belief propagation up to the condensation point. Finally, refined message passing techniques (such as survey propagation) may also beat this threshold. We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when α = M/N is close to the experimental threshold α_c separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) α_c. We introduce a new type of message passing algorithm which allows us to find a satisfying assignment of the variables efficiently in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic. Survey Propagation is an algorithm designed for solving typical instances of random constraint satisfiability problems. It has been successfully tested on random 3-SAT and random @math graph 3-coloring, in the hard region of the parameter space. Here we provide a generic formalism which applies to a wide class of discrete Constraint Satisfaction Problems.
Abstract of query paper
Cite abstracts
29563
29562
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results.
Abstract of query paper
Cite abstracts
29564
29563
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
We address the question of convergence in the loopy belief propagation (LBP) algorithm. Specifically, we relate convergence of LBP to the existence of a weak limit for a sequence of Gibbs measures defined on the LBP's associated computation tree. Using tools from the theory of Gibbs measures we develop easily testable sufficient conditions for convergence. The failure of convergence of LBP implies the existence of multiple phases for the associated Gibbs specification. These results give new insight into the mechanics of the algorithm. Survey Propagation is an algorithm designed for solving typical instances of random constraint satisfiability problems. It has been successfully tested on random 3-SAT and random @math graph 3-coloring, in the hard region of the parameter space. Here we provide a generic formalism which applies to a wide class of discrete Constraint Satisfaction Problems.
Abstract of query paper
Cite abstracts
29565
29564
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide a better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95% of the approximately 58,000 AS topology links. The inference becomes stable when using a week's worth of data and as few as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
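For intuition about how a trusted core can anchor the inference, here is a deliberately simplified, Gao-style voting sketch (my own illustration with hypothetical AS names, not the near-deterministic algorithm evaluated in the paper): each path votes on its edges according to whether they lie before or after the first core AS on the path.

from collections import Counter, defaultdict

def infer_relationships(as_paths, core):
    """Toy type-of-relationship inference.
    For each AS path, the first AS belonging to the given core set is treated
    as the 'top' of the path: edges climbing towards it are voted
    customer-to-provider (c2p), edges descending from it provider-to-customer
    (p2c).  Edges receiving conflicting votes are labelled peer-to-peer (p2p)."""
    votes = defaultdict(Counter)
    for path in as_paths:
        tops = [i for i, asn in enumerate(path) if asn in core]
        if not tops:
            continue                      # no core AS on this path: skip it
        top = tops[0]
        for i in range(len(path) - 1):
            edge = (path[i], path[i + 1])
            votes[edge]["c2p" if i < top else "p2c"] += 1
    labels = {}
    for edge, counter in votes.items():
        labels[edge] = "p2p" if len(counter) > 1 else next(iter(counter))
    return labels

if __name__ == "__main__":
    core = {"AS1", "AS2"}                 # hypothetical tier-1 core
    paths = [
        ["AS10", "AS5", "AS1", "AS6", "AS11"],
        ["AS10", "AS5", "AS2", "AS7"],
        ["AS5", "AS6"],                   # no core AS: contributes no votes
    ]
    for edge, rel in sorted(infer_relationships(paths, core).items()):
        print(edge, rel)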
The problem of computing the types of the relationships between Internet autonomous systems is investigated. We refer to the model introduced in (ref.1), (ref.2) that bases the discovery of such relationships on the analysis of the AS paths extracted from the BGP routing tables. We characterize the time complexity of the above problem, showing both NP-completeness results and efficient algorithms for solving specific cases. Motivated by the hardness of the general problem, we propose heuristics based on a novel paradigm and show their effectiveness against publicly available data sets. The experiments show that our heuristics perform significantly better than state-of-the-art heuristics.
Abstract of query paper
Cite abstracts
29566
29565
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide a better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95% of the approximately 58,000 AS topology links. The inference becomes stable when using a week's worth of data and as few as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
Recent techniques for inferring business relationships between ASs [1,2] have yielded maps that have extremely few invalid BGP paths in the terminology of Gao [3]. However, some relationships inferred by these newer algorithms are incorrect, leading to the deduction of unrealistic AS hierarchies. We investigate this problem and discover what causes it. Having obtained such insight, we generalize the problem of AS relationship inference as a multiobjective optimization problem with node-degree-based corrections to the original objective function of minimizing the number of invalid paths. We solve the generalized version of the problem using the semidefinite programming relaxation of the MAX2SAT problem. Keeping the number of invalid paths small, we obtain a more veracious solution than that yielded by recent heuristics. Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5% customer to provider (c2p), 82.8% peer to peer (p2p), and 90.3% sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2% of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors.
Abstract of query paper
Cite abstracts
29567
29566
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - ε. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - ε. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + ε.
The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well-known tree and Christofides’ heuristics for the traveling salesman problem is investigated in the multicriteria case with respect to the two definitions of approximability.
Abstract of query paper
Cite abstracts
29568
29567
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - ε. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - ε. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + ε.
We analyze approximation algorithms for several variants of the traveling salesman problem with multiple objective functions. First, we consider the symmetric TSP (STSP) with γ-triangle inequality. For this problem, we present a deterministic polynomial-time algorithm that achieves an approximation ratio of @math and a randomized approximation algorithm that achieves a ratio of @math. In particular, we obtain a 2+ε approximation for multi-criteria metric STSP. Then we show that multi-criteria cycle cover problems admit fully polynomial-time randomized approximation schemes. Based on these schemes, we present randomized approximation algorithms for STSP with γ-triangle inequality (ratio @math), asymmetric TSP (ATSP) with γ-triangle inequality (ratio @math), STSP with weights one and two (ratio 4/3) and ATSP with weights one and two (ratio 3/2).
Abstract of query paper
Cite abstracts
29569
29568
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - ε. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - ε. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + ε.
We present randomized approximation algorithms for multi-criteria Max-TSP. For Max-STSP with k > 1 objective functions, we obtain an approximation ratio of @math for arbitrarily small @math . For Max-ATSP with k objective functions, we obtain an approximation ratio of @math .
Abstract of query paper
Cite abstracts
29570
29569
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - ε. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - ε. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + ε.
This paper provides a survey of the research in and an annotated bibliography of multiple objective combinatorial optimization, MOCO. We present a general formulation of MOCO problems, describe the main characteristics of MOCO problems, and review the main properties and theoretical results for these problems. The main parts of the paper are a section on the review of the available solution methodology, both exact and heuristic, and a section on the annotation of the existing literature in the field organized problem by problem. We conclude the paper by stating open questions and areas of future research.
Abstract of query paper
Cite abstracts
29571
29570
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
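To make the archiving setting concrete, here is a minimal bounded-quality archive sketch (a plain additive ε-dominance acceptance rule of my own for illustration; the paper's selection criterion is the multilevel-grid rule, which this does not reproduce):

def dominates(a, b):
    """a Pareto-dominates b (minimisation in every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_dominates(a, b, eps):
    """a epsilon-dominates b: a is within eps of being at least as good everywhere."""
    return all(x - eps <= y for x, y in zip(a, b))

class EpsArchive:
    """Online archive of mutually non-eps-dominated objective vectors."""
    def __init__(self, eps):
        self.eps = eps
        self.points = []

    def offer(self, p):
        # reject p if some archived point already eps-dominates it
        if any(eps_dominates(q, p, self.eps) for q in self.points):
            return False
        # otherwise drop every archived point that p dominates, then keep p
        self.points = [q for q in self.points if not dominates(p, q)]
        self.points.append(p)
        return True

if __name__ == "__main__":
    import random
    random.seed(0)
    archive = EpsArchive(eps=0.05)
    for _ in range(1000):
        # random points on or above the toy front f2 = 1 - f1
        f1 = random.random()
        f2 = 1.0 - f1 + random.random() * 0.5
        archive.offer((f1, f2))
    print(len(archive.points), "archived points")
    for p in sorted(archive.points):
        print(round(p[0], 3), round(p[1], 3))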
The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well-known tree and Christofides’ heuristics for the traveling salesman problem is investigated in the multicriteria case with respect to the two definitions of approximability. For multiobjective optimization problems, it is meaningful to compute a set of solutions covering all possible trade-offs between the different objectives. The multiobjective knapsack problem is a generalization of the classical knapsack problem in which each item has several profit values. For this problem, efficient algorithms for computing a provably good approximation to the set of all nondominated feasible solutions, the Pareto frontier, are studied. For the multiobjective one-dimensional knapsack problem, a practical fully polynomial-time approximation scheme (FPTAS) is derived. It is based on a new approach to the single-objective knapsack problem using a partition of the profit space into intervals of exponentially increasing length. For the multiobjective m-dimensional knapsack problem, the first known polynomial-time approximation scheme (PTAS), based on linear programming, is presented.
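For the multiobjective knapsack problem discussed in the last abstract, the object being approximated is the Pareto frontier of profit vectors; a tiny exact dynamic-programming enumeration with dominance pruning (exponential in the worst case, and not the FPTAS/PTAS of the paper) makes that target explicit:

def pareto_knapsack(items, capacity):
    """items: list of (weight, profit1, profit2) triples.
    Returns the nondominated (profit1, profit2) vectors reachable within capacity.
    States are (weight, p1, p2); dominated states are pruned after each item."""
    def dominated(a, b):
        # b is no heavier than a, at least as profitable in both objectives,
        # and differs from a in at least one coordinate
        return b[0] <= a[0] and b[1] >= a[1] and b[2] >= a[2] and b != a

    states = [(0, 0, 0)]
    for w, p1, p2 in items:
        extended = states + [
            (sw + w, sp1 + p1, sp2 + p2)
            for (sw, sp1, sp2) in states
            if sw + w <= capacity
        ]
        # keep only states not dominated by another state (lighter and richer)
        states = [s for s in extended
                  if not any(dominated(s, t) for t in extended)]
    frontier = {(p1, p2) for (_, p1, p2) in states}
    # final filter on profits alone
    return sorted(p for p in frontier
                  if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                             for q in frontier))

if __name__ == "__main__":
    items = [(3, 5, 1), (3, 1, 5), (2, 3, 3)]
    print(pareto_knapsack(items, capacity=5))   # -> [(4, 8), (8, 4)]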
Abstract of query paper
Cite abstracts
29572
29571
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding Pareto-optimal points of a multicriteria optimization problem. It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function and that the selection operator obeys the so-called ‘elite preservation strategy.’ We present four abstract evolutionary algorithms for multi-objective optimization and theoretical results that characterize their convergence behavior. Thanks to these results it is easy to verify whether or not a particular instantiation of these abstract evolutionary algorithms offers the desired limit behavior. Several examples are given. Abstract We consider the usage of evolutionary algorithms for multiobjective programming (MOP), i.e. for decision problems with alternatives taken from a real-valued vector space and evaluated according to a vector-valued objective function. Selection mechanisms, possibilities of temporary fitness deterioration, and problems of unreachable alternatives for such multiobjective evolutionary algorithms (MOEAs) are studied. Theoretical properties of MOEAs such as stochastic convergence with probability 1 are analyzed.
Abstract of query paper
Cite abstracts
29573
29572
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
We introduce a simple evolution scheme for multiobjective optimization problems, called the Pareto Archived Evolution Strategy (PAES). We argue that PAES may represent the simplest possible nontrivial algorithm capable of generating diverse solutions in the Pareto optimal set. The algorithm, in its simplest form, is a (1 + 1) evolution strategy employing local search but using a reference archive of previously found solutions in order to identify the approximate dominance ranking of the current and candidate solution vectors. (1 + 1)-PAES is intended to be a baseline approach against which more involved methods may be compared. It may also serve well in some real-world applications when local search seems superior to or competitive with population-based methods. We introduce (1 + λ) and (μ | λ) variants of PAES as extensions to the basic algorithm. Six variants of PAES are compared to variants of the Niched Pareto Genetic Algorithm and the Nondominated Sorting Genetic Algorithm over a diverse suite of six test functions. Results are analyzed and presented using techniques that reduce the attainment surfaces generated from several optimization runs into a set of univariate distributions. This allows standard statistical analysis to be carried out for comparative purposes. Our results provide strong evidence that PAES performs consistently well on a range of multiobjective optimization tasks. Abstract The hypervolume measure (or S metric) is a frequently applied quality measure for comparing the results of evolutionary multiobjective optimisation algorithms (EMOA). The new idea is to aim explicitly for the maximisation of the dominated hypervolume within the optimisation process. A steady-state EMOA is proposed that features a selection operator based on the hypervolume measure combined with the concept of non-dominated sorting. The algorithm’s population evolves to a well-distributed set of solutions, thereby focussing on interesting regions of the Pareto front. The performance of the devised S metric selection EMOA ( SMS-EMOA ) is compared to state-of-the-art methods on two- and three-objective benchmark suites as well as on aeronautical real-world applications. This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures. Search algorithms for Pareto optimization are designed to obtain multiple solutions, each offering a different trade-off of the problem objectives. To make the different solutions available at the end of an algorithm run, procedures are needed for storing them, one by one, as they are found. In a simple case, this may be achieved by placing each point that is found into an "archive" which maintains only nondominated points and discards all others. 
However, even a set of mutually nondominated points is potentially very large, necessitating a bound on the archive's capacity. But with such a bound in place, it is no longer obvious which points should be maintained and which discarded; we would like the archive to maintain a representative and well-distributed subset of the points generated by the search algorithm, and also that this set converges. To achieve these objectives, we propose an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, which has various useful properties as follows. It maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and we are able to prove a form of convergence. The method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions but not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by "good spread of points" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive.
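Several of the selection and archiving schemes above score a point set by the hypervolume (S metric) it dominates with respect to a reference point. For two objectives to be minimised the indicator reduces to a sum of rectangle areas; a small sketch of my own, for intuition only:

def hypervolume_2d(points, ref):
    """Hypervolume (S metric) dominated by a set of 2-D points, both
    objectives to be minimised, measured against reference point `ref`.
    Dominated points are filtered out first; only the front contributes."""
    front = [p for p in points
             if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]
    front.sort()                      # ascending in f1, hence non-increasing in f2
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        if f1 >= ref[0] or f2 >= ref[1]:
            continue                  # point does not dominate the reference point
        volume += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return volume

if __name__ == "__main__":
    pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]   # (3,3) is dominated
    print(hypervolume_2d(pts, ref=(5.0, 5.0)))               # -> 11.0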
Abstract of query paper
Cite abstracts
29574
29573
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
The importance of reuse is well recognised for electronic document writing. However, it is rarely achieved satisfactorily because of the complexity of the task: integrating different formats, handling updates of information, addressing document author’s need for intuitiveness and simplicity, etc. In this paper, we present a language for information reuse that allows users to write virtual documents, where dynamic information objects can be retrieved from various sources, transformed, and included along with static information in SGML documents. The language uses a tree-like structure for the representation of information objects, and allows querying without a complete knowledge of the structure or the types of information. The data structures and the syntax of the language are presented through an example application. A major strength of our approach is to treat the document as a non-monolithic set of reusable information objects. This paper describes a tool, called Nodose, that we have developed to expedite the creation of robust wrappers. Nodose allows non-programmers to build components that can convert data from the source format to XML or another generic format. Further, the generated code performs a set of statistical checks at runtime that attempt to find extraction errors before they are propagated back to users. The Web has become a major conduit to information repositories of all kinds. Today, more than 80% of information published on the Web is generated by underlying databases (however access is granted through a Web gateway using forms as a query language and HTML as a display vehicle) and this proportion keeps increasing. But Web data sources also consist of standalone HTML pages hand-coded by individuals, that provide very useful information such as reviews, digests, links, etc. As for the information that also exists in underlying databases, the HTML interface is often the only one available for many would-be clients.
Abstract of query paper
Cite abstracts
29575
29574
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task. The usual approach to named-entity detection is to learn extraction rules that rely on linguistic, syntactic, or document format patterns that are consistent across a set of documents. However, when there is no consistency among documents, it may be more effective to learn document-specific extraction rules.This paper presents a knowledge-based approach to learning rules for named-entity extraction. Document-specific extraction rules are created using a generate-and-test paradigm and a database of known named-entities. Experimental results show that this approach is effective on Web documents that are difficult for the usual methods.
Abstract of query paper
Cite abstracts
29576
29575
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
We consider the problem of improving named entity recognition (NER) systems by using external dictionaries---more specifically, the problem of extending state-of-the-art NER systems by incorporating information about the similarity of extracted entities to entities in an external dictionary. This is difficult because most high-performance named entity recognition systems operate by sequentially classifying words as to whether or not they participate in an entity name; however, the most useful similarity measures score entire candidate names. To correct this mismatch we formalize a semi-Markov extraction process, which is based on sequentially classifying segments of several adjacent words, rather than single words. In addition to allowing a natural way of coupling high-performance NER methods and high-performance similarity functions, this formalism also allows the direct use of other useful entity-level features, and provides a more natural formulation of the NER problem than sequential word classification. Experiments in multiple domains show that the new model can substantially improve extraction performance over previous methods for using external dictionaries in NER. The approach towards Semantic Web Information Extraction (IE) presented here is implemented in KIM – a platform for semantic indexing, annotation, and retrieval. It combines IE based on the mature text engineering platform (GATE) with Semantic Web-compliant knowledge representation and management. The cornerstone is automatic generation of named-entity (NE) annotations with class and instance references to a semantic repository. A simplistic upper-level ontology, providing detailed coverage of the most popular entity types (Person, Organization, Location, etc.; more than 250 classes), is designed and used. A knowledge base (KB) with de-facto exhaustive coverage of real-world entities of general importance is maintained, used, and constantly enriched. Extensions of the ontology and KB take care of handling all the lexical resources used for IE; most notably, instead of gazetteer lists, aliases of specific entities are kept together with them in the KB. A Semantic Gazetteer uses the KB to generate lookup annotations. Ontology-aware pattern-matching grammars allow precise class information to be handled via rules at the optimal level of generality. The grammars are used to recognize NE, with class and instance information referring to the KIM ontology and KB. Recognition of identity relations between the entities is used to unify their references to the KB. Based on the recognized NE, template relation construction is performed via grammar rules. As a result of the latter, the KB is being enriched with the recognized relations between entities. At the final phase of the IE process, previously unknown aliases and entities are being added to the KB with their specific types. This article presents an automatic system for the semantic annotation of web pages. Existing automatic annotation systems are essentially syntactic, even when the work aims to produce a semantic annotation. Taking domain semantics into account when annotating an element of a web page from an ontology requires addressing two problems jointly: (1) identifying the syntactic structure that characterizes this element in the web page, and (2) identifying the most specific concept (in terms of subsumption) in the ontology whose instance will be used to annotate this element. Our approach relies on a learning technique originally derived from wrappers, which we combine with reasoning that exploits the formal structure of the ontology.
Abstract of query paper
Cite abstracts
29577
29576
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
Precisely identifying entities in web documents is essential for document indexing, web search and data integration. Entity disambiguation is the challenge of determining the correct entity out of various candidate entities. Our novel method utilizes background knowledge in the form of a populated ontology. Additionally, it does not rely on the existence of any structure in a document or the appearance of data items that can provide strong evidence, such as email addresses, for disambiguating person names. Originality of our method is demonstrated in the way it uses different relationships in a document as well as from the ontology to provide clues in determining the correct entity. We demonstrate the applicability of our method by disambiguating names of researchers appearing in a collection of DBWorld posts using a large scale, real-world ontology extracted from the DBLP bibliography website. The precision and recall measurements provide encouraging results.
Abstract of query paper
Cite abstracts
29578
29577
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
This paper presents a large-scale system for the recognition and semantic disambiguation of named entities based on information extracted from a large encyclopedic collection and Web search results. It describes in detail the disambiguation paradigm employed and the information extraction process from Wikipedia. Through a process of maximizing the agreement between the contextual information extracted from Wikipedia and the context of a document, as well as the agreement among the category tags associated with the candidate entities, the implemented system shows high disambiguation accuracy on both news stories and Wikipedia articles.
Abstract of query paper
Cite abstracts
29579
29578
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
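The prototype described here is best known for its link-analysis ranking (PageRank). As a generic illustration of that idea, and not necessarily the exact formulation used in the system, a power-iteration sketch on a hypothetical toy link graph:

import numpy as np

def pagerank(links, damping=0.85, tol=1e-10, max_iter=200):
    """links: dict mapping page -> iterable of pages it links to.
    Returns stationary scores of the damped random-surfer Markov chain."""
    pages = sorted(set(links) | {q for targets in links.values() for q in targets})
    index = {p: i for i, p in enumerate(pages)}
    n = len(pages)
    # column-stochastic transition matrix; dangling pages jump uniformly
    M = np.zeros((n, n))
    for p, targets in links.items():
        targets = list(targets)
        if targets:
            for q in targets:
                M[index[q], index[p]] = 1.0 / len(targets)
        else:
            M[:, index[p]] = 1.0 / n
    for p in pages:
        if p not in links:
            M[:, index[p]] = 1.0 / n
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * M @ r + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            r = r_next
            break
        r = r_next
    return {p: float(r[index[p]]) for p in pages}

if __name__ == "__main__":
    toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(toy).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 4))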
Abstract of query paper
Cite abstracts
29580
29579
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
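For the Kullback-Leibler divergence between discrete distributions, the two sided centroids mentioned above have closed forms: the minimiser over the second argument is the arithmetic mean and the minimiser over the first argument is the normalised geometric mean (which of these is called "left" or "right" depends on the convention). The sketch below is my own illustration; it approximates the symmetrized centroid by a plain 1-D scan between the two sided centroids rather than the paper's geodesic dichotomic walk.

import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def sided_centroids(dists):
    """Closed-form sided KL centroids of a set of discrete distributions:
    - argmin_c sum_i KL(p_i || c) is the arithmetic mean,
    - argmin_c sum_i KL(c || p_i) is the normalised geometric mean."""
    P = np.asarray(dists, dtype=float)
    arithmetic = P.mean(axis=0)
    geometric = np.exp(np.log(P).mean(axis=0))
    geometric /= geometric.sum()
    return arithmetic, geometric

def symmetrized_centroid(dists, grid=2001):
    """Approximate argmin_c sum_i [KL(p_i||c) + KL(c||p_i)] by a 1-D scan
    along a simple interpolation between the two sided centroids (a stand-in
    for the geodesic walk described in the paper)."""
    a, g = sided_centroids(dists)
    P = np.asarray(dists, dtype=float)
    best, best_cost = None, np.inf
    for lam in np.linspace(0.0, 1.0, grid):
        c = (1 - lam) * a + lam * g
        c = c / c.sum()
        cost = sum(kl(p, c) + kl(c, p) for p in P)
        if cost < best_cost:
            best, best_cost = c, cost
    return best

if __name__ == "__main__":
    dists = [np.array([0.7, 0.2, 0.1]),
             np.array([0.1, 0.6, 0.3]),
             np.array([0.25, 0.25, 0.5])]
    a, g = sided_centroids(dists)
    print("arithmetic mean    :", a.round(4))
    print("geometric mean     :", g.round(4))
    print("symmetrized centre :", symmetrized_centroid(dists).round(4))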
Three measures of divergence between vectors in a convex set of an n-dimensional real vector space are defined in terms of certain types of entropy functions, and their convexity property is studied. Among other results, a classification of the entropies of degree is obtained by the convexity of these measures. These results have applications in information theory and biological studies. A novel class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, their close relationship with the variational distance and the probability of misclassification error is established in terms of bounds. These bounds are crucial in many applications of divergence measures. The measures are also well characterized by the properties of nonnegativity, finiteness, semiboundedness, and boundedness.
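A small numerical illustration of why the Shannon-entropy-based measures above are attractive: the Jensen-Shannon divergence stays finite when the two distributions have different supports (where a Kullback divergence would blow up), and it can be related to the variational (L1) distance. The check at the end uses a deliberately weak form of such a bound; the exact constants are in the cited paper.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence (base 2), with the 0*log(0/.) = 0 convention."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def jensen_shannon(p, q):
    """JS divergence: average KL of p and q to their midpoint; finite without absolute continuity."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])     # supports differ, so KL(p||q) would be infinite
v = np.abs(p - q).sum()            # variational (L1) distance
print(jensen_shannon(p, q))        # finite, here 0.5
print(jensen_shannon(p, q) <= v)   # a (weak) bound of the kind established in the paper
```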
Abstract of query paper
Cite abstracts
29581
29580
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
This paper investigates the use of features based on posterior probabilities of subword units such as phonemes. These features are typically transformed when used as inputs for a hidden Markov model with mixture of Gaussians as emission distribution (HMM GMM). In this work, we introduce a novel acoustic model that avoids the Gaussian assumption and directly uses posterior features without any transformation. This model is described by a finite state machine where each state is characterized by a target distribution and the cost function associated to each state is given by the Kullback-Leibler (KL) divergence between its target distribution and the posterior features. Furthermore, hybrid HMM ANN system can be seen as a particular case of this KL-based model where state target distributions are predefined. A recursive training algorithm to estimate the state target distributions is also presented. This paper discusses the computation of the centroid induced by the symmetrical Kullback-Leibler distance. It is shown that it is the unique zeroing argument of a function which only depends on the arithmetic and the normalized geometric mean of the cluster. An efficient algorithm for its computation is presented. Speech spectra are used as an example. A locking cone chassis and several use methods to eliminate double and triple handling of shipping freight containers. The chassis device comprises two parallel I-beam main rails, a plurality of transverse ribs, a forward cone receiver formed from steel square tubing affixed to a forward end of the main rails, and a similar rear cone receiver affixed to the rear end of the main rails, the only essential difference between the receivers being that the forward cone receiver has an upper flange to guide and to stop a freight container. Each receiver has two box-shaped end portions, which operate like the corner pockets of a freight container-that is, each box-shaped end portion receives a locking cone through an upper surface cone receiving aperture. Each end portion also has access apertures for manually unlocking a cone. Maximum a posteriori (MAP) estimation has been successfully applied to speaker adaptation in speech recognition systems using hidden Markov models. When the amount of data is sufficiently large, MAP estimation yields recognition performance as good as that obtained using maximum-likelihood (ML) estimation. This paper describes a structural maximum a posteriori (SMAP) approach to improve the MAP estimates obtained when the amount of adaptation data is small. A hierarchical structure in the model parameter space is assumed and the probability density functions for model parameters at one level are used as priors for those of the parameters at adjacent levels. Results of supervised adaptation experiments using nonnative speakers' utterances showed that SMAP estimation reduced error rates by 61 when ten utterances were used for adaptation and that it yielded the same accuracy as MAP and ML estimation when the amount of data was sufficiently large. Furthermore, the recognition results obtained in unsupervised adaptation experiments showed that SMAP estimation was effective even when only one utterance from a new speaker was used for adaptation. An effective way to combine rapid supervised adaptation and on-line unsupervised adaptation was also investigated. Concatenative speech synthesis systems attempt to minimize audible signal discontinuities between two successive concatenated units. 
An objective distance measure which is able to predict audible discontinuities is therefore very important, particularly in unit selection synthesis, for which units are selected from among a large inventory at run time. In this paper, we describe a perceptual test to measure the detection rate of concatenation discontinuity by humans, and then we evaluate 13 different objective distance measures based on their ability to predict the human results. Criteria used to classify these distances include the detection rate, the Bhattacharyya measure of separability of two distributions, and receiver operating characteristic (ROC) curves. Results show that the Kullback-Leibler distance on power spectra has the higher detection rate followed by the Euclidean distance on Mel-frequency cepstral coefficients (MFCC). The Voronoi diagram of a finite set of objects is a fundamental geometric structure that subdivides the embedding space into regions, each region consisting of the points that are closer to a given object than to the others. We may define many variants of Voronoi diagrams depending on the class of objects, the distance functions and the embedding space. In this paper, we investigate a framework for defining and building Voronoi diagrams for a broad class of distance functions called Bregman divergences. Bregman divergences include not only the traditional (squared) Euclidean distance but also various divergence measures based on entropic functions. Accordingly, Bregman Voronoi diagrams allow to define information-theoretic Voronoi diagrams in statistical parametric spaces based on the relative entropy of distributions. We define several types of Bregman diagrams, establish correspondences between those diagrams (using the Legendre transformation), and show how to compute them efficiently. We also introduce extensions of these diagrams, e.g. k-order and k-bag Bregman Voronoi diagrams, and introduce Bregman triangulations of a set of points and their connexion with Bregman Voronoi diagrams. We show that these triangulations capture many of the properties of the celebrated Delaunay triangulation. Finally, we give some applications of Bregman Voronoi diagrams which are of interest in the context of computational geometry and machine learning. The directed divergence, which is a measure based on the discrimination information between two signal classes, is investigated. A simplified expression for computing the directed divergence is derived for comparing two Gaussian autoregressive processes such as those found in speech. This expression alleviates both the computational cost (reduced by two thirds) and the numerical problems encountered in computing the directed divergence. In addition, the simplified expression is compared with the Itakura-Saito distance (which asymptotically approaches the directed divergence). Although the expressions for these two distances closely resemble each other, only moderate correlations between the two were found on a set of actual speech data. >
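A sketch of the symmetrical Kullback-Leibler centroid computation alluded to above, for discrete distributions: setting the gradient of the total symmetric KL to zero gives, per coordinate, a closed form in the arithmetic and geometric means via the Lambert W function, with a single multiplier fixed by bisection so that the centroid sums to one. This is an illustrative reconstruction under the assumption of strictly positive distributions, not the papers' exact algorithms.

```python
import numpy as np
from scipy.special import lambertw

def symmetric_kl_centroid(ps, tol=1e-10):
    """Centroid minimizing sum_i KL(p_i||c) + KL(c||p_i) over the probability simplex.

    Per coordinate c_k = a_k / W((a_k / g_k) * exp(beta)), where a and g are the
    arithmetic and geometric means of the cluster and beta is chosen so that sum(c) = 1.
    """
    ps = np.asarray(ps, dtype=float)
    a = ps.mean(axis=0)                       # arithmetic mean
    g = np.exp(np.log(ps).mean(axis=0))       # coordinate-wise geometric mean

    def centroid(beta):
        return a / np.real(lambertw((a / g) * np.exp(beta)))

    lo, hi = -50.0, 50.0                      # sum(centroid(beta)) is decreasing in beta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if centroid(mid).sum() < 1.0 else (mid, hi)
    return centroid(0.5 * (lo + hi))

ps = [[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6]]
c = symmetric_kl_centroid(ps)
print(c.round(4), c.sum())
```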
Abstract of query paper
Cite abstracts
29582
29581
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
A locking cone chassis and several use methods to eliminate double and triple handling of shipping freight containers. The chassis device comprises two parallel I-beam main rails, a plurality of transverse ribs, a forward cone receiver formed from steel square tubing affixed to a forward end of the main rails, and a similar rear cone receiver affixed to the rear end of the main rails, the only essential difference between the receivers being that the forward cone receiver has an upper flange to guide and to stop a freight container. Each receiver has two box-shaped end portions, which operate like the corner pockets of a freight container-that is, each box-shaped end portion receives a locking cone through an upper surface cone receiving aperture. Each end portion also has access apertures for manually unlocking a cone.
Abstract of query paper
Cite abstracts
29583
29582
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
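The path-limiting idea above can be illustrated independently of the reservation algorithms themselves: compute a maximum flow, decompose it greedily into paths, and keep only the k widest. The sketch below (hypothetical topology and capacities, using networkx) shows how a small number of paths can already carry most of the optimal throughput.

```python
import networkx as nx

def k_widest_flow_paths(G, s, t, k):
    """Decompose a max flow into paths (greedy bottleneck decomposition) and keep the k widest."""
    flow_value, flow = nx.maximum_flow(G, s, t)
    residual = nx.DiGraph()
    for u, nbrs in flow.items():
        for v, f in nbrs.items():
            if f > 1e-12:
                residual.add_edge(u, v, flow=f)
    paths = []
    while residual.has_node(s) and residual.has_node(t) and nx.has_path(residual, s, t):
        path = nx.shortest_path(residual, s, t)            # any s-t path still carrying flow
        edges = list(zip(path, path[1:]))
        bottleneck = min(residual[u][v]["flow"] for u, v in edges)
        paths.append((bottleneck, path))
        for u, v in edges:
            residual[u][v]["flow"] -= bottleneck
            if residual[u][v]["flow"] <= 1e-12:
                residual.remove_edge(u, v)
    paths.sort(key=lambda bp: bp[0], reverse=True)          # widest first
    kept = paths[:k]
    print(f"max flow {flow_value}, kept {sum(b for b, _ in kept)} on {len(kept)} paths")
    return kept

G = nx.DiGraph()
G.add_edges_from([("s", "a", {"capacity": 3}), ("s", "b", {"capacity": 2}),
                  ("a", "t", {"capacity": 2}), ("a", "b", {"capacity": 2}),
                  ("b", "t", {"capacity": 3})])
k_widest_flow_paths(G, "s", "t", k=2)
```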
In this paper we consider the online ftp problem. The goal is to service a sequence of file transfer requests given bandwidth constraints of the underlying communication network. The main result of the paper is a technique that leads to algorithms that optimize several natural metrics, such as max-stretch, total flow time, max flow time, and total completion time. In particular, we show how to achieve optimum total flow time and optimum max-stretch if we increase the capacity of the underlying network by a logarithmic factor. We show that the resource augmentation is necessary by proving polynomial lower bounds on the max-stretch and total flow time for the case where online and offline algorithms are using same-capacity edges. Moreover, we also give poly-logarithmic lower bounds on the resource augmentation factor necessary in order to keep the total flow time and max-stretch within a constant factor of optimum.
Abstract of query paper
Cite abstracts
29584
29583
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms.When the paths are known (either given by the adversary or computed as above), our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this article, we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet.Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.
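A minimal sketch of congestion-sensitive online path selection in the spirit of the algorithm above: each request is routed on a min-cost path where an edge's cost grows exponentially with its current utilization, so later requests avoid loaded links. The exponent base, topology, and demands are made-up illustrations.

```python
import networkx as nx

def route_online(G, requests, mu=8.0):
    """Route each request on a min-cost path; an edge's cost grows
    exponentially with its current utilization (load / capacity)."""
    for u, v in G.edges:
        G[u][v]["load"] = 0.0
    for demand, s, t in requests:
        for u, v in G.edges:
            util = G[u][v]["load"] / G[u][v]["capacity"]
            G[u][v]["cost"] = mu ** util          # congestion-sensitive length
        path = nx.shortest_path(G, s, t, weight="cost")
        for u, v in zip(path, path[1:]):
            G[u][v]["load"] += demand
        print(s, "->", t, "via", path)

G = nx.Graph()
G.add_edge("a", "b", capacity=10)
G.add_edge("b", "c", capacity=10)
G.add_edge("a", "d", capacity=10)
G.add_edge("d", "c", capacity=10)
route_online(G, [(6, "a", "c"), (6, "a", "c"), (6, "a", "c")])
```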
Abstract of query paper
Cite abstracts
29585
29584
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
We present the first polylog-competitive online algorithm for the general multicast admission control and routing problem in the throughput model. The ratio of the number of requests accepted by the optimum offline algorithm to the expected number of requests accepted by our algorithm is O((logn + loglogM)(logn + logM)logn), where M is the number of multicast groups and n is the number of nodes in the graph. We show that this is close to optimum by presenting an Ω(lognlogM) lower bound on this ratio for any randomized online algorithm against an oblivious adversary, when M is much larger than the link capacities. Our lower bound applies even in the restricted case where the link capacities are much larger than bandwidth requested by a single multicast. We also present a simple proof showing that it is impossible to be competitive against an adaptive online adversary.As in the previous online routing algorithms, our algorithm uses edge-costs when deciding on which is the best path to use. In contrast to the previous competitive algorithms in the throughput model, our cost is not a direct function of the edge load. The new cost definition allows us to decouple the effects of routing and admission decisions of different multicast groups. In this chapter we have described competitive on-line algorithms for on-line network routing problems. We have concentrated on routing in electrical and optical networks, presented algorithms for load minimization and throughput maximization problems, and mentioned some of the most popular open problems in the area. We study the on-line call admission problem in optical networks. We present a general technique that allows us to reduce the problem of call admission and wavelength selection to the call admission problem. We then give randomized algorithms with logarithmic competitive ratios for specific topologies in switchless and reconfigurable optical networks. We conclude by considering full duplex communications. Classical routing and admission control strategies achieve provably good performance by relying on an assumption that the virtual circuits arrival pattern can be described by some a priori known probabilistic model. A new on-line routing framework, based on the notion of competitive analysis, was proposed. This framework is geared toward design of strategies that have provably good performance even in the case where there are no statistical assumptions on the arrival pattern and parameters of the virtual circuits. The on-line strategies motivated by this framework are quite different from the min-hop and reservation-based strategies. This paper surveys the on-line routing framework, the proposed routing and admission control strategies, and discusses some of the implementation issues. > An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range communicate via intermediary nodes. Routing in ad hoc networks is a challenging problem as a result of highly dynamic topology as well as bandwidth and energy constraints. In addition, security is critical in these networks due to the accessibility of the shared wireless medium and the cooperative nature of ad hoc networks. However, none of the existing routing algorithms can withstand a dynamic proactive adversarial attack. The routing protocol presented in this work attempts to provide throughput-competitive route selection against an adaptive adversary. 
A proof of the convergence time of our algorithm is presented as well as preliminary simulation results. The authors examine routing strategies for fast packet switching networks based on flooding and predefined routes. The concern is to get both efficient routing and an even balanced use of network resources. They present efficient algorithms for assigning weights to edges in a controlled flooding scheme but show that the flooding scheme is not likely to yield a balanced use of the resources. Efficient algorithms are presented for choosing routes along breadth-first search trees and shortest paths. It is shown that in both cases a balanced use of network resources can be guaranteed.
Abstract of query paper
Cite abstracts
29586
29585
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Input Queued (IQ) switches have been very well studied in the recent past. The main problem in the IQ switches concerns scheduling. The main focus of the research has been the fixed length packet-known as cells-case. The scheduling decision becomes relatively easier for cells compared to the variable length packet case as scheduling needs to be done at a regular interval of fixed cell time. In real traffic dividing the variable packets into cells at the input side of the switch and then reassembling these cells into packets on the output side achieve it. The disadvantages of this cell-based approach are the following: (a) bandwidth is lost as division of a packet may generate incomplete cells, and (b) additional overhead of segmentation and reassembling cells into packets. This motivates the packet scheduling: scheduling is done in units of arriving packet sizes and in nonpreemptive fashion. In M.A. (2001) the problem of packet scheduling was first considered. They show that under any admissible Bernoulli i.i.d. arrival traffic a simple modification of maximum weight matching (MWM) algorithm is stable, similar to cell-based MWM. In this paper, we study the stability properties of packet based scheduling algorithm for general admissible arrival traffic pattern. We first show that the result of extends to general regenerative traffic model instead of just admissible traffic, that is, packet based MWM is stable. Next we show that there exists an admissible traffic pattern under which any work-conserving (that is maximal type) scheduling algorithm will be unstable. This suggests that the packet based MWM will be unstable too. To overcome this difficulty we propose a new class of "waiting" algorithms. We show that "waiting"-MWM algorithm is stable for any admissible traffic using fluid limit technique. We compare three optical transport network architectures—optical packet switching (OPS), optical flow switching (OFS), and optical burst switching (OBS)—based on a notion of network capacity as the set of exogenous traffic rates that can be stably supported by a network under its operational constraints. We characterize the capacity regions of the transport architectures, and show that the capacity region of OPS dominates that of OFS, and that the capacity region of OFS dominates that of OBS. We then apply these results to two important network topologies—bidirectional rings and Moore graphs—under uniform all-to-all traffic. Motivated by the incommensurate complexity cost of comparable transport architectures, we also investigate the dependence of the relative capacity performance of the switching architectures on the number of switch ports per fiber at core nodes.
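The core of the schedulers discussed above is the maximum weight matching between input and output ports, with (virtual output) queue lengths as weights. The sketch below shows one such scheduling decision on a toy 3x3 switch using a generic assignment solver; it is the cell-mode building block, not the packet-mode or "waiting" variants analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mwm_schedule(queue_lengths):
    """One scheduling decision of an input-queued switch: pick the input/output matching
    with maximum total weight, using queue occupancies as weights."""
    rows, cols = linear_sum_assignment(queue_lengths, maximize=True)
    return list(zip(rows, cols))

# queue_lengths[i][j] = packets waiting at input i destined to output j (toy 3x3 switch)
Q = np.array([[5, 0, 2],
              [1, 4, 0],
              [0, 3, 6]])
for inp, out in mwm_schedule(Q):
    print(f"serve input {inp} -> output {out} (queue {Q[inp, out]})")
```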
Abstract of query paper
Cite abstracts
29587
29586
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Aggregation of resources is a means to improve performance and efficiency in statistically shared systems in general, and communication networks in particular. One approach to this is traffic dispersion, which means that the traffic from a source is spread over multiple paths and transmitted in parallel through the network. Traffic dispersion may help in utilizing network resources to their full potential, while providing quality-of-service guarantees. It is a topic gaining interest, and much work has been done in the field. The results are, however, difficult to find, since the technique appears under many different labels. This article is therefore an attempt to gather and report on the work done on traffic dispersion in communication networks. It looks at a specific instance of this general method. The processes are communication processes, and the resource is the link capacity in a packet switched network. Traditional multimedia streaming techniques usually assume single-path (unicast) data delivery. But when the aggregate traffic between 2 nodes exceeds the bandwidth capacity of single link path, a feasible solution is to appropriately disperse the aggregate traffic over multiple paths between these 2 nodes. In this paper we propose a set of multi-path streaming models for MPEG video traffic transmission. In addition to the attributes (such as load balancing and security) inherited from conventional data dispersion models, the proposed multimedia dispersion models are designed to achieve high error-free frame rate based on the characteristics of MPEG video structure. Our simulation results show that significant quality improvement can be observed if the proposed streaming models are employed appropriately. Internet service provider faces a daunting challenge in provisioning network efficiently. We introduce a proactive multipath routing scheme that tries to route traffic according to its built-in properties. Based on mathematical analysis, our approach disperses incoming traffic flows onto multiple paths according to path qualities. Long-lived flows are detected and migrated to the shortest path if their QoS could be guaranteed there. Suggesting nondisjoint path set, four types of dispersion policies are analyzed, and flow classification policy which relates flow trigger with link state update period is investigated. Simulation experiments show that our approach outperforms traditional single path routing significantly.
Abstract of query paper
Cite abstracts
29588
29587
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
In this paper we initiate the study of how collusion alters the quality of solutions obtained in competitive games. The price of anarchy aims to measure the cost of the lack of coordination by comparing the quality of a Nash equilibrium to that of a centrally designed optimal solution. This notion assumes that players act not only selfishly, but also independently. We propose a framework for modeling groups of colluding players, in which members of a coalition cooperate so as to selfishly maximize their collective welfare. Clearly, such coalitions can improve the social welfare of the participants, but they can also harm the welfare of those outside the coalition. One might hope that the improvement for the coalition participants outweighs the negative effects on the others. This would imply that increased cooperation can only improved the overall solution quality of stable outcomes. However, increases in coordination can actually lead to significant decreases in total social welfare. In light of this, we propose the price of collusion as a measure of the possible negative effect of collusion, specifying the factor by which solution quality can deteriorate in the presence of coalitions. We give examples to show that the price of collusion can be arbitrarily high even in convex games. Our main results show that in the context of load-balancing games, the price of collusion depends upon the disparity in market power among the game participants. We show that in some symmetric nonatomic games (where all users have access to the same set of strategies) increased cooperation always improves the solution quality, and in the discrete analogs of such games, the price of collusion is bounded by two. The essence of the routing problem in real networks is that the traffic demand from a source to destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete version of the problem is more realistic yet is more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable.In this paper, we assume this more realistic unsplittable model, and investigate the "price of anarchy", or deterioration of network performance measured in total traffic latency under the selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly @math 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic "congestion games", where paths are replaced by general subsets. We also show that for polynomials of degree d edge latency functions the price of anarchy is dδ(d). Our results hold also for mixed strategies.Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4 3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). Note that under the assumption of negligible traffic pure and mixed strategies are equivalent and also splittable and unsplittable models are equivalent. We consider the problem of routing traffic to optimize the performance of a congested network. 
We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times---the total latency---is minimized.In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance.In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4 3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by unregulated selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic. The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions. > We study the problem of traffic routing in noncooperative networks. In such networks, users may follow selfish strategies to optimize their own performance measure and therefore, their behavior does not have to lead to optimal performance of the entire network. In this article we investigate the worst-case coordination ratio, which is a game-theoretic measure aiming to reflect the price of selfish routing. Following a line of previous work, we focus on the most basic networks consisting of parallel links with linear latency functions. Our main result is that the worst-case coordination ratio on m parallel links of possibly different speeds is Θ(log m log log log m). In fact, we are able to give an exact description of the worst-case coordination ratio, depending on the number of links and ratio of speed of the fastest link over the speed of the slowest link. For example, for the special case in which all m parallel links have the same speed, we can prove that the worst-case coordination ratio is Γ(−1) (m) p Θ(1), with Γ denoting the Gamma (factorial) function. Our bounds entirely resolve an open problem posed recently by Koutsoupias and Papadimitriou [1999].
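The 4/3 bound for linear latencies quoted above is tight on Pigou's two-link example, which is easy to check numerically: the selfish (Wardrop) flow puts everything on the congestible link, while the social optimum splits the traffic. The sketch below reproduces the ratio; the grid search stands in for the one-line calculus argument.

```python
import numpy as np

# Pigou's example: one unit of traffic from s to t over two parallel links.
# Link 1 has latency l1(x) = x (congestible); link 2 has constant latency l2(x) = 1.
total_cost = lambda x1: x1 * x1 + (1 - x1) * 1.0   # x1*l1(x1) + (1-x1)*l2(1-x1)

# Selfish (Wardrop) equilibrium: everyone uses link 1, since l1(x) <= 1 for any split.
nash_cost = total_cost(1.0)

# Socially optimal split: minimize total latency over x1 in [0, 1].
xs = np.linspace(0.0, 1.0, 10001)
opt_cost = min(total_cost(x) for x in xs)

print(nash_cost, opt_cost, nash_cost / opt_cost)   # 1.0, 0.75, ~4/3 (the linear-latency bound)
```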
Abstract of query paper
Cite abstracts
29589
29588
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
We study the problem of optimizing the performance of a system shared by selfish, noncooperative users. We consider the concrete setting of scheduling small jobs on a set of shared machines possessing latency functions that specify the amount of time needed to complete a job, given the machine load. We measure system performance by the total latency of the system. Assigning jobs according to the selfish interests of individual users, who wish to minimize only the latency that their own jobs experience, typically results in suboptimal system performance. However, in many systems of this type there is a mixture of "selfishly controlled" and "centrally controlled" jobs. The congestion due to centrally controlled jobs will influence the actions of selfish users, and we thus aspire to contain the degradation in system performance due to selfish behavior by scheduling the centrally controlled jobs in the best possible way. We formulate this goal as an optimization problem via Stackelberg games, games in which one player acts a leader (here, the centralized authority interested in optimizing system performance) and the rest as followers (the selfish users). The problem is then to compute a strategy for the leader (a Stackelberg strategy) that induces the followers to react in a way that (approximately) minimizes the total latency in the system. In this paper, we prove that it is NP-hard to compute an optimal Stackelberg strategy and present simple strategies with provably good performance guarantees. More precisely, we give a simple algorithm that computes a strategy inducing a job assignment with total latency no more than a constant times that of the optimal assignment of all of the jobs; in the absence of centrally controlled jobs and a Stackelberg strategy, no result of this type is possible. We also prove stronger performance guarantees in the special case where every machine latency function is linear in the machine load. We consider network games with atomic players, which indicates that some players control a positive amount of flow. Instead of studying Nash equilibria as previous work has done, we consider that players with considerable market power will make decisions before the others because they can predict the decisions of players without market power. This description fits the framework of Stackelberg games, where those with market power are leaders and the rest are price-taking followers. As Stackelberg equilibria are difficult to characterize, we prove bounds on the inefficiency of the solutions that arise when the leader uses a heuristic that approximate its optimal strategy. It is well known that in a network with arbitrary (convex) latency functions that are a function of edge traffic, the worst-case ratio, over all inputs, of the system delay caused due to selfish behavior versus the system delay of the optimal centralized solution may be unbounded even if the system consists of only two parallel links. This ratio is called the price of anarchy (PoA). In this paper, we investigate ways by which one can reduce the performance degradation due to selfish behavior. We investigate two primary methods (a) Stackelberg routing strategies, where a central authority, e.g., network manager, controls a fixed fraction of the flow, and can route this flow in any desired way so as to influence the flow of selfish users; and (b) network tolls, where tolls are imposed on the edges to modify the latencies of the edges, and thereby influence the induced Nash equilibrium. 
We obtain results demonstrating the effectiveness of both Stackelberg strategies and tolls in controlling the price of anarchy. For Stackelberg strategies, we obtain the first results for nonatomic routing in graphs more general than parallel-link graphs, and strengthen existing results for parallel-link graphs, (i) In series-parallel graphs, we show that Stackelberg routing reduces the PoA to a constant (depending on the fraction of flow controlled). (ii) For general graphs, we obtain latency-class specific bounds on the PoA with Stackelberg routing, which give a continuous trade-off between the fraction of flow controlled and the price of anarchy, (iii) In parallel-link graphs, we show that for any given class L of latency functions, Stackelberg routing reduces the PoA to at most α + (1 - α) · ρ(L), where α is the fraction of flow controlled and ρ(L) is the PoA of class L (when α = 0). For network tolls, motivated by the known strong results for nonatomic games, we consider the more general setting of atomic splittable routing games. We show that tolls inducing an optimal flow always exist, even for general asymmetric games with heterogeneous users, and can be computed efficiently by solving a convex program. Furthermore, we give a complete characterization of flows that can be induced via tolls. These are the first results on the effectiveness of tolls for atomic splittable games.
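A toy instance of the Stackelberg effect described above, on the same two-link Pigou network: if a leader controlling a fraction alpha of the demand parks it on the constant-latency link, the selfish remainder equilibrates on the congestible link and the total latency falls from the anarchic value 1 toward the optimum 0.75. The leader heuristic here is one reasonable choice for this instance, not the specific strategies analyzed in the cited work.

```python
# Two parallel links, total demand 1: l1(x) = x (congestible), l2(x) = 1 (constant).
# A leader controlling a fraction alpha commits first; the remaining (1 - alpha)
# selfish flow then equilibrates on the cheaper link.

def induced_cost(alpha):
    # Leader choice (illustrative): park the controlled flow on the constant-latency link.
    followers_on_link1 = 1.0 - alpha     # l1 = 1 - alpha <= 1 = l2, so followers pick link 1
    x1, x2 = followers_on_link1, alpha
    return x1 * x1 + x2 * 1.0

for alpha in (0.0, 0.25, 0.5):
    print(f"leader fraction {alpha}: total latency {induced_cost(alpha):.3f}")
# alpha = 0 reproduces the selfish cost 1.0; alpha = 0.5 already induces the optimum 0.75.
```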
Abstract of query paper
Cite abstracts
29590
29589
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
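A minimal sketch of the reweighted scheme described above: each round solves a weighted l1 problem as a linear program and then sets the next weights inversely proportional to the current magnitudes (plus a small epsilon). Problem sizes, the number of rounds, and epsilon are illustrative; a generic LP solver stands in for whatever solver one would use in practice.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=6, eps=1e-3):
    """Iteratively reweighted l1 minimization for Ax = b.

    Each round solves min sum_i w_i |x_i| s.t. Ax = b as a linear program
    (x split into positive and negative parts), then updates w_i = 1/(|x_i| + eps).
    """
    m, n = A.shape
    w = np.ones(n)
    A_eq = np.hstack([A, -A])                 # x = xp - xn, with xp, xn >= 0
    for _ in range(iters):
        c = np.concatenate([w, w])
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
n, m, k = 60, 25, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x_true
x_hat = reweighted_l1(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))   # typically near zero at this sparsity
```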
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
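A FOCUSS-style iteration is short enough to sketch directly: start from the minimum-norm solution and repeatedly re-solve a weighted minimum-norm problem with weights taken from the previous iterate, so that energy focuses onto a few coordinates. The exact weighting exponent and stopping rule in the paper differ; the version below is a common simplified form with made-up problem data.

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS-style iteration: x_{k+1} = W_k (A W_k)^+ b, with W_k built
    from the previous iterate so that the solution becomes progressively sparser."""
    x = np.linalg.pinv(A) @ b                     # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)              # weights from the current iterate
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30); x_true[[3, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = focuss(A, b)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))       # ideally the true support {3, 17}
```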
Abstract of query paper
Cite abstracts
29591
29590
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
We present an iterative algorithm for computing sparse solutions (or sparse approximate solutions) to linear inverse problems. The algorithm is intended to supplement the existing arsenal of techniques. It is shown to converge to the local minima of a function of the form used for picking out sparse solutions, and its connection with existing techniques explained. Finally, it is demonstrated on subset selection and deconvolution examples. The fact that the proposed algorithm is sometimes successful when existing greedy algorithms fail is also demonstrated.
Abstract of query paper
Cite abstracts
29592
29591
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
We introduce a generalization of a deterministic relaxation algorithm for edge-preserving regularization in linear inverse problems. This algorithm transforms the original (possibly nonconvex) optimization problem into a sequence of quadratic optimization problems, and has been shown to converge under certain conditions when the original cost functional being minimized is strictly convex. We prove that our more general algorithm is globally convergent (i.e., converges to a local minimum from any initialization) under less restrictive conditions, even when the original cost functional is nonconvex. We apply this algorithm to tomographic reconstruction from limited-angle data by formulating the problem as one of regularized least-squares optimization. The results demonstrate that the constraint of piecewise smoothness, applied through the use of edge-preserving regularization, can provide excellent limited-angle tomographic reconstructions. Two edge-preserving regularizers-one convex, the other nonconvex-are used in numerous simulations to demonstrate the effectiveness of the algorithm under various limited-angle scenarios, and to explore how factors, such as the choice of error norm, angular sampling rate and amount of noise, affect the reconstruction quality and algorithm performance. These simulation results show that for this application, the nonconvex regularizer produces consistently superior results.
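The half-quadratic idea above, in its simplest 1-D form, amounts to iteratively reweighted least squares: each outer step fixes weights from the current edge magnitudes and solves a quadratic problem. The sketch below denoises a step signal with the convex sqrt(t^2 + eps) penalty; it is a toy stand-in for the tomographic setting and regularizers studied in the paper.

```python
import numpy as np

def edge_preserving_denoise(y, lam=2.0, eps=1e-3, iters=50):
    """Half-quadratic / IRLS sketch for a 1-D signal: minimize (up to scaling of lam)
    ||x - y||^2 + lam * sum_i phi(x_{i+1} - x_i) with phi(t) = sqrt(t^2 + eps),
    by alternating a weighted quadratic solve with weights from the current edges."""
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # first-difference operator
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)     # small weight across large jumps -> edges kept
        H = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, y)
    return x

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # a step edge
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = edge_preserving_denoise(noisy)
print(np.round(denoised[[25, 75]], 2))                # roughly 0 and 1; the edge survives
```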
Abstract of query paper
Cite abstracts
29593
29592
Trace semantics has been defined for various kinds of state-based systems, notably with different forms of branching such as non-determinism vs. probability. In this paper we claim to identify one underlying mathematical structure behind these "trace semantics," namely coinduction in a Kleisli category. This claim is based on our technical result that, under a suitably order-enriched setting, a final coalgebra in a Kleisli category is given by an initial algebra in the category Sets. Formerly the theory of coalgebras has been employed mostly in Sets where coinduction yields a finer process semantics of bisimilarity. Therefore this paper extends the application field of coalgebras, providing a new instance of the principle "process semantics via coinduction."
Fusion laws permit to eliminate various of the intermediate data structures that are created in function compositions. The fusion laws associated with the traditional recursive operators on datatypes cannot, in general, be used to transform recursive programs with effects. Motivated by this fact, this paper addresses the definition of two recursive operators on datatypes that capture functional programs with effects. Effects are assumed to be modeled by monads. The main goal is thus the derivation of fusion laws for the new operators. One of the new operators is called monadic unfold. It captures programs (with effects) that generate a data structure in a standard way. The other operator is called monadic hylomorphism, and corresponds to programs formed by the composition of a monadic unfold followed by a function defined by structural induction on the data structure that the monadic unfold generates.
Abstract of query paper
Cite abstracts
29594
29593
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A)), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras.
Abstract of query paper
Cite abstracts
29595
29594
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A)), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras.
Abstract of query paper
Cite abstracts
29596
29595
We present a deterministic channel model which captures several key features of multiuser wireless communication. We consider a model for a wireless network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes. This result is a natural generalization of the max-flow min-cut theorem for wireline networks. Finally to demonstrate the connections between deterministic model and Gaussian model, we look at two examples: the single-relay channel and the diamond network. We show that in each of these two examples, the capacity-achieving scheme in the corresponding deterministic model naturally suggests a scheme in the Gaussian model that is within 1 bit and 2 bit respectively from cut-set upper bound, for all values of the channel gains. This is the first part of a two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut theorem of a class of deterministic networks of which our model is a special case.
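The cut-set characterization above is easy to evaluate in the linear deterministic (bit-shift) model: each point-to-point link with gain n is a shift matrix that delivers the n most significant bit levels, and the value of a cut is the GF(2) rank of the corresponding transfer matrix. The sketch below computes the bound for a single-relay example with made-up gains; it says nothing about achievability.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def shift_matrix(q, n):
    """Linear deterministic link with gain n: delivers the n most significant of q bit levels."""
    S = np.zeros((q, q), dtype=int)
    for i in range(n):
        S[q - n + i, i] = 1
    return S

# Single-relay example with assumed bit-level gains; q = largest gain.
n_sd, n_sr, n_rd = 2, 4, 3
q = max(n_sd, n_sr, n_rd)
cut_source = np.vstack([shift_matrix(q, n_sd), shift_matrix(q, n_sr)])          # source on one side
cut_source_relay = np.hstack([shift_matrix(q, n_sd), shift_matrix(q, n_rd)])    # source and relay on one side
print("cut-set bound:", min(gf2_rank(cut_source), gf2_rank(cut_source_relay)))  # bits per channel use
```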
We consider a finite-field model for the wireless broadcast and additive interference network (WBAIN), both in the presence and absence of fading. We show that the single-source unicast capacity (with extension to multicast) of a WBAIN with or without fading can be upper bounded by the capacity of an equivalent broadcast erasure network. We further present a coding strategy for WBAINs with i.i.d. and uniform fading, based on random linear coding at each node, that achieves a rate differing from the upper bound by no more than O(1/q), where q is the field size. Using these results, we show that channel fading in conjunction with network coding can lead to large gains in the unicast (multicast) capacity as compared to no fading. The multicast capacity is determined for networks that have deterministic channels with broadcasting at the transmitters and no interference at the receivers. The multicast capacity is shown to have a cut-set interpretation. It is further shown that one cannot always layer channel and network coding in such networks. The proof of the latter result partially generalizes to discrete memoryless broadcast channels and is used to bound the common rate for problems where one achieves a cut bound on throughput. A centrifugating device for biological liquids, e.g. blood, in which a rotatable container carries a specially shaped seal that surrounds and bears on a fixed assembly with a minimum area of interface between the fixed and rotating parts. This seal is disposed outside the path of the liquid to be treated. The fixed assembly, in turn, is releasably carried by a bracket, the bracket being selectively longitudinally extensible as well as selectively adjustably swingable about a vertical axis of oscillation eccentric to the centrifuge, thereby to permit exact positioning of the fixed assembly coaxially of the rotatable container. The parts are so simple and inexpensive in construction that at least some of them can be used once and thrown away. Moreover, the fixed assembly is easily insertable in sealed relationship in any of a variety of containers, by the simplest of manual assembly and disassembly operations. The capacity of the class of relay channels with sender x_1, a relay sender x_2, a relay receiver y_1 = f(x_1, x_2), and ultimate receiver y is proved to be C = \max_{p(x_1,x_2)} \min\{ I(X_1,X_2;Y),\; H(Y_1|X_2) + I(X_1;Y|X_2,Y_1) \}.
Abstract of query paper
Cite abstracts
29597
29596
Properly locating sensor nodes is an important building block for a large subset of wireless sensor network (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does not rely on a subset of nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSNs. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most @math faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most @math misbehaving sensors. On the negative side, we prove that no deterministic protocol can identify faking sensors if their number is @math . Thus our scheme is almost optimal with respect to the number of faking sensors. We discuss the application of our technique in the trusted sensor model. More precisely, our results can be used to minimize the number of trusted sensors that are needed to defeat faking ones.
In an adversarial environment, various kinds of security attacks become possible if malicious nodes can claim fake locations that are different from where they are physically located. In this paper, we propose a secure localization mechanism that detects the existence of these nodes, termed phantom nodes, without relying on any trusted entities, an approach significantly different from existing ones. The proposed mechanism enjoys a set of nice features. First, it does not have any central point of attack. All nodes play the role of verifier by generating a local map, i.e., a view constructed from ranging information obtained from their neighbors. Second, this distributed and localized construction yields quite strong results: even when the number of phantom nodes is greater than that of honest nodes, we can filter out most phantom nodes. Our analysis and simulations under realistic noisy settings demonstrate that our scheme is effective in the presence of a large number of phantom nodes.
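The sketch below illustrates the basic distance-consistency test that such detection schemes build on: a verifier compares the distance implied by a node's claimed coordinates with the distance it actually measures (e.g., via ToF). It is a simplified, single-verifier illustration with made-up positions and tolerance, not the distributed protocols of the papers above.

```python
# Flag a node as a possible faking/phantom node when its claimed position is
# inconsistent with the measured range to it (illustrative tolerance of 1.0).
import math

def flag_fakers(verifier_pos, claims, measured, tol=1.0):
    """claims: node -> claimed (x, y); measured: node -> ranged distance."""
    suspects = []
    for node, claimed_pos in claims.items():
        implied = math.dist(verifier_pos, claimed_pos)   # distance the claim implies
        if abs(implied - measured[node]) > tol:
            suspects.append(node)
    return suspects

claims   = {"a": (3.0, 4.0), "b": (6.0, 8.0)}
measured = {"a": 5.1, "b": 3.0}    # "b" claims to be 10 away but ranges at 3
print(flag_fakers((0.0, 0.0), claims, measured))   # -> ['b']
```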
Abstract of query paper
Cite abstracts
29598
29597
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
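A minimal sketch of the convergence-through-commutativity idea, using an add-only replicated set rather than the paper's shared edit buffer; the class and operation names are illustrative assumptions.

```python
# Replicated add-only set: 'add' operations commute (and are idempotent), so
# replicas that receive the same operations in different orders converge.
class AddOnlySetReplica:
    def __init__(self):
        self.elems = set()

    def apply(self, op):
        kind, value = op
        if kind == "add":
            self.elems.add(value)

ops = [("add", "x"), ("add", "y"), ("add", "z")]
r1, r2 = AddOnlySetReplica(), AddOnlySetReplica()
for op in ops:                 # replica 1: original delivery order
    r1.apply(op)
for op in reversed(ops):       # replica 2: a different delivery order
    r2.apply(op)
assert r1.elems == r2.elems == {"x", "y", "z"}
```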
Operational transformation (OT) is an approach that allows building real-time groupware tools. This approach requires correct transformation functions with respect to two conditions called TP1 and TP2. Proving the correctness of these transformation functions is very complex and error-prone. In this paper, we show how a theorem prover can address this serious bottleneck. To validate our approach, we verified the correctness of state-of-the-art transformation functions defined on strings of characters, with surprising results. Counter-examples provided by the theorem prover helped us to design the tombstone transformation functions. These functions satisfy TP1 and TP2, preserve intentions, and ensure multi-effect relationships. Real-time group editors allow a group of users to view and edit the same document at the same time from geographically dispersed sites connected by communication networks. Consistency maintenance is one of the most significant challenges in the design and implementation of these types of systems. Research on real-time group editors in the past decade has invented an innovative technique for consistency maintenance, called operational transformation. This paper presents an integrative review of the evolution of operational transformation techniques, with the goal of identifying the major issues, algorithms, achievements, and remaining challenges. In addition, this paper contributes a new optimized generic operational transformation control algorithm. Keywords: consistency maintenance, operational transformation, convergence, causality preservation, intention preservation, group editors, groupware, distributed computing.
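The sketch below shows what the TP1 (convergence) condition mentioned above means in the simplest case: two sites concurrently insert a character, each transforms the other's operation, and both end up in the same state. These are textbook-style inclusion-transformation functions with site-id tie-breaking, not the tombstone functions contributed by the paper.

```python
# Character-insert operations: (position, character, site_id).
def apply_op(s, op):
    pos, ch, _site = op
    return s[:pos] + ch + s[pos:]

def it_ins_ins(o1, o2):
    """Transform insert o1 against a concurrent insert o2 (site id breaks ties)."""
    p1, c1, s1 = o1
    p2, _c2, s2 = o2
    if p1 < p2 or (p1 == p2 and s1 < s2):
        return (p1, c1, s1)        # o2 lands at/after o1's spot: o1 unchanged
    return (p1 + 1, c1, s1)        # o2 lands before o1: shift o1 right

# TP1: integrating the two concurrent operations in either order converges.
state = "abc"
o1 = (1, "X", 0)   # site 0 inserts 'X' at position 1
o2 = (1, "Y", 1)   # site 1 concurrently inserts 'Y' at position 1
left  = apply_op(apply_op(state, o1), it_ins_ins(o2, o1))
right = apply_op(apply_op(state, o2), it_ins_ins(o1, o2))
assert left == right == "aXYbc"
```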
Abstract of query paper
Cite abstracts
29599
29598
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Psync is an IPC protocol that explicitly preserves the partial order of messages exchanged among a set of processes. A description is given of how Psync can be used to implement replicated objects in the presence of network and host failures. Unlike conventional algorithms that depend on an underlying mechanism that totally orders messages for implementing replicated objects, the authors' approach exploits the partial order provided by Psync to achieve additional concurrency. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely "syntactic," that is, message "semantics" is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast. Consensus has been regarded as the fundamental problem that must be solved to implement a fault-tolerant distributed system. However, only a weaker problem than traditional consensus need be solved. We generalize the consensus problem to include both traditional consensus and this weaker version. A straightforward generalization of the Paxos consensus algorithm implements general consensus. The generalizations of consensus and of the Paxos algorithm require a mathematical detour de force into a type of object called a command-structure set. IceCube is a system for optimistic replication, supporting collaborative work and mobile computing. It lets users write to shared data with no mutual synchronisation; however, replicas diverge and must be reconciled. IceCube is a general-purpose reconciliation engine, parameterised by "constraints" capturing data semantics and user intents. IceCube combines logs of disconnected actions into near-optimal reconciliation schedules that honour the constraints. IceCube features a simple, high-level, systematic API. It seamlessly integrates diverse applications that share various data and are run by concurrent users. This paper focuses on the IceCube API and algorithms. Application experience indicates that IceCube simplifies application design, supports a wide variety of application semantics, and seamlessly integrates diverse applications. On a realistic benchmark, IceCube runs at reasonable speeds and scales to large input sets.
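A sketch of the kind of conflict relation that Generic Broadcast is parameterized by: only messages that conflict need to be ordered relative to each other. The read/write-on-a-key semantics below is an assumed example, not one of the paper's algorithms.

```python
# Two messages conflict only if they touch the same key and at least one writes.
def conflicts(m1, m2):
    op1, key1 = m1
    op2, key2 = m2
    return key1 == key2 and "write" in (op1, op2)

msgs = [("write", "x"), ("read", "y"), ("read", "x"), ("write", "z")]
pairs_needing_order = [
    (a, b)
    for i, a in enumerate(msgs)
    for b in msgs[i + 1:]
    if conflicts(a, b)
]
# Only ("write", "x") vs ("read", "x") must be ordered; all other pairs commute
# and may be delivered in any order.
print(pairs_needing_order)   # -> [(('write', 'x'), ('read', 'x'))]
```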
Abstract of query paper
Cite abstracts
29600
29599
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Two novel concurrency algorithms for abstract data types are presented that ensure serializability of transactions. It is proved that both algorithms ensure a local atomicity property called dynamic atomicity. The algorithms are quite general, permitting operations to be both partial and nondeterministic. The results returned by operations can be used in determining conflicts, thus allowing higher levels of concurrency than otherwise possible. The descriptions and proofs encompass recovery as well as concurrency control. The two algorithms use different recovery methods: one uses intentions lists, and the other uses undo logs. It is shown that conflict relations that work with one recovery method do not necessarily work with the other. A general correctness condition that must be satisfied by the combination of a recovery method and a conflict relation is identified.
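A sketch of the result-dependent commutativity test described above, on a toy bank-account type: two operations are treated as non-conflicting when executing them in either order yields the same results and the same final state. The account semantics and the brute-force both-orders check are illustrative assumptions, not the paper's algorithms.

```python
# Toy bank account: deposits always commute; two withdrawals may not, because
# whether a withdrawal succeeds (its returned result) depends on the order.
def run(balance, ops):
    results = []
    for name, amount in ops:
        if name == "deposit":
            balance += amount
            results.append("ok")
        elif name == "withdraw":
            if balance >= amount:
                balance -= amount
                results.append("ok")
            else:
                results.append("insufficient")
    return balance, results

def commute(balance, op1, op2):
    b12, (r1_first, r2_second) = run(balance, [op1, op2])
    b21, (r2_first, r1_second) = run(balance, [op2, op1])
    return b12 == b21 and r1_first == r1_second and r2_first == r2_second

print(commute(100, ("deposit", 10), ("deposit", 20)))   # True: deposits commute
print(commute(10, ("withdraw", 10), ("withdraw", 5)))   # False: results depend on order
```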
Abstract of query paper
Cite abstracts