Dataset schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
0908.1457
1715823396
This paper considers the problem of efficiently transmitting quantum states through a network. It has been known for some time that without additional assumptions it is impossible to achieve this task perfectly in general -- indeed, it is impossible even for the simple butterfly network. As an additional resource, we allow free classical communication between any pair of network nodes. It is shown that perfect quantum network coding is achievable in this model whenever classical network coding is possible over the same network when replacing all quantum capacities by classical capacities. More precisely, it is proved that perfect quantum network coding using free classical communication is possible over a network with k source-target pairs if there exists a classical linear (or even vector-linear) coding scheme over a finite ring. Our proof is constructive in that we give explicit quantum coding operations for each network node. This paper also gives an upper bound on the amount of classical communication required in terms of k , the maximal fan-in of any network node, and the size of the network.
There are several papers studying quantum network coding on the @math -pair problem in situations different from the most basic setting (perfect transmission of quantum states using only a quantum network of limited capacity). @cite_14 and @cite_9 considered ``approximate'' transmission of qubits in the @math -pair problem, and showed that transmission with fidelity larger than @math is possible for a class of networks. Hayashi @cite_11 showed how to achieve perfect transmission of two qubits on the butterfly network if the two source nodes have prior entanglement, and if, at each edge, we can choose between sending two classical bits and sending one qubit. Leung, Oppenheim and Winter @cite_19 considered various extra resources such as free forward, backward, and two-way classical communication as well as entanglement, and investigated lower and upper bounds on the rate of quantum network coding in their settings. The setting of the present paper is close to their model allowing free two-way classical communication. The difference is that Ref. @cite_19 considered asymptotically perfect transmission while this paper focuses on perfect transmission. Also, Ref. @cite_19 showed optimal rates for a few classes of networks while the present paper gives lower bounds for much wider classes of networks.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_14", "@cite_11" ], "mid": [ "2157846543", "1947282201", "", "2022879413" ], "abstract": [ "We study the problem of k-pair communication (or multiple unicast problem) of quantum information in networks of quantum channels. We consider the asymptotic rates of high fidelity quantum communication between specific sender-receiver pairs. Four scenarios of classical communication assistance (none, forward, backward, and two-way) are considered. (I) We obtain outer and inner bounds of the achievable rate regions in the most general directed networks. (II) For two particular networks (including the butterfly network), routing is proved optimal, and the free assisting classical communication can at best be used to modify the directions of quantum channels in the network. Consequently, the achievable rate regions are given by counting edge avoiding paths, and precise achievable rate regions in all four assisting scenarios can be obtained. (III) Optimality of routing can also be proved in classes of networks. The first class consists of directed unassisted networks in which (1) the receivers are information sinks, (2) the maximum distance from senders to receivers is small, and (3) a certain type of 4-cycles are absent, but without further constraints (such as on the number of communicating and intermediate parties). The second class consists of arbitrary backward-assisted networks with two sender-receiver pairs. (IV) Beyond the k-pair communication problem, observations are made on quantum multicasting and a static version of network communication related to the entanglement of assistance.", "Network coding is often explained by using a small network model called Butterfly. In this network, there are two flow paths, s_1 to t_1 and s_2 to t_2, which share a single bottleneck channel of capacity one. So, if we consider conventional flow (of liquid, for instance), then the total amount of flow must be at most one in total, say 1 2 for each path. However, if we consider information flow, then we can send two bits (one for each path) at the same time by exploiting two side links, which are of no use for the liquid-type flow, and encoding decoding operations at each node. This is known as network coding and has been quite popular since its introduction by Ahlswede, Cai, Li and Yeung in 2000. In QIP 2006, showed that quantum network coding is possible for Butterfly, namely we can send two qubits simultaneously with keeping their fidelity strictly greater than 1 2. In this paper, we show that the result can be extended to a large class of general graphs by using a completely different approach. The underlying technique is a new cloning method called entanglement-free cloning which does not produce any entanglement at all. This seems interesting on its own and to show its possibility is an even more important purpose of this paper. Combining this new cloning with approximation of general quantum states by a small number of fixed ones, we can design a quantum network coding protocol which simulates'' its classical counterpart for the same graph.", "", "We find a protocol transmitting two quantum states crossly in the butterfly network only with prior entanglement between two senders. This protocol requires only one qubit transmission or two classical bits (cbits) transmission in each channel in the butterfly network. It is also proved that it is impossible without prior entanglement. 
More precisely, an upper bound on the average fidelity is given for the butterfly network when prior entanglement is not allowed. The presented result concerns only the butterfly network, but our techniques can be applied to a more general graph." ] }
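The butterfly network recurs throughout the passages above as the canonical example where coding beats routing. The short Python sketch below (the node labels and the particular drawing of the butterfly are illustrative, not taken from any cited paper) shows the classical version of the trick: each sink receives one bit uncoded over a side link and the XOR of both bits over the shared bottleneck edge, so a single XOR recovers its demanded bit.

```python
def butterfly_2pair(b1: int, b2: int) -> tuple[int, int]:
    """Classical network coding on one common drawing of the butterfly.

    Source s1 holds bit b1 (demanded by sink t1) and source s2 holds bit b2
    (demanded by sink t2).  Each sink is reached by a side link from the
    *other* source, and both source-sink paths share a single bottleneck
    edge of capacity one bit per use.
    """
    side_at_t1 = b2            # side link s2 -> t1 (useless for plain routing)
    side_at_t2 = b1            # side link s1 -> t2
    bottleneck = b1 ^ b2       # the shared edge carries the XOR of both bits

    decoded_at_t1 = side_at_t1 ^ bottleneck   # b2 ^ (b1 ^ b2) = b1
    decoded_at_t2 = side_at_t2 ^ bottleneck   # b1 ^ (b1 ^ b2) = b2
    return decoded_at_t1, decoded_at_t2


# Exhaustive check: both sinks recover their demanded bits for every input.
for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly_2pair(b1, b2) == (b1, b2)
print("2-pair butterfly: both demands satisfied with one use of the bottleneck")
```

The quantum results discussed above ask when an analogous gain survives once the bits become qubits and cloning is unavailable.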
0908.1457
1715823396
This paper considers the problem of efficiently transmitting quantum states through a network. It has been known for some time that without additional assumptions it is impossible to achieve this task perfectly in general -- indeed, it is impossible even for the simple butterfly network. As an additional resource, we allow free classical communication between any pair of network nodes. It is shown that perfect quantum network coding is achievable in this model whenever classical network coding is possible over the same network when replacing all quantum capacities by classical capacities. More precisely, it is proved that perfect quantum network coding using free classical communication is possible over a network with k source-target pairs if there exists a classical linear (or even vector-linear) coding scheme over a finite ring. Our proof is constructive in that we give explicit quantum coding operations for each network node. This paper also gives an upper bound on the amount of classical communication required in terms of k , the maximal fan-in of any network node, and the size of the network.
As mentioned in Ref. @cite_19 , free classical communication essentially makes the underlying directed graph of the quantum network undirected, since quantum teleportation enables one to send a quantum message in the reverse direction of a directed edge. In this context, our result gives a lower bound on the rate of quantum network coding that might not be optimal even if its corresponding classical coding is optimal in the directed graph. However, even in the classical case, network coding over an undirected graph is much less well understood than that over a directed one. For multicast in undirected networks, the gap between the rates achievable by network coding and by routing is known to be at most a factor of two @cite_7 , while there is an example for which the min-cut rate bound cannot be achieved by network coding @cite_7 @cite_8 . Also notice that, in the @math -pair problem, it is conjectured that fractional routing achieves the optimal rate for any undirected graph (see for example Ref. @cite_1 ). However, this conjecture has been proved only for very few families of networks, and remains one of the main open problems in the field of network coding.
{ "cite_N": [ "@cite_19", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2157846543", "2121205191", "2160348245", "2011560054" ], "abstract": [ "We study the problem of k-pair communication (or multiple unicast problem) of quantum information in networks of quantum channels. We consider the asymptotic rates of high fidelity quantum communication between specific sender-receiver pairs. Four scenarios of classical communication assistance (none, forward, backward, and two-way) are considered. (I) We obtain outer and inner bounds of the achievable rate regions in the most general directed networks. (II) For two particular networks (including the butterfly network), routing is proved optimal, and the free assisting classical communication can at best be used to modify the directions of quantum channels in the network. Consequently, the achievable rate regions are given by counting edge avoiding paths, and precise achievable rate regions in all four assisting scenarios can be obtained. (III) Optimality of routing can also be proved in classes of networks. The first class consists of directed unassisted networks in which (1) the receivers are information sinks, (2) the maximum distance from senders to receivers is small, and (3) a certain type of 4-cycles are absent, but without further constraints (such as on the number of communicating and intermediate parties). The second class consists of arbitrary backward-assisted networks with two sender-receiver pairs. (IV) Beyond the k-pair communication problem, observations are made on quantum multicasting and a static version of network communication related to the entanglement of assistance.", "An outer bound on the rate region of noise-free information networks is given. This outer bound combines properties of entropy with a strong information inequality derived from the structure of the network. This blend of information theoretic and graph theoretic arguments generates many interesting results. For example, the capacity of directed cycles is characterized. Also, a gap between the sparsity of an undirected graph and its capacity is shown. Extending this result, it is shown that multicommodity flow solutions achieve the capacity in an infinite class of undirected graphs, thereby making progress on a conjecture of Li and Li. This result is in sharp contrast to the situation with directed graphs, where a family of graphs is presented in which the gap between the capacity and the rate achievable using multicommodity flows is linear in the size of the graph.", "Recent research in network coding shows that, joint consideration of both coding and routing strategies may lead to higher information transmission rates than routing only. A fundamental question in the field of network coding is: how large can the throughput improvement due to network coding be? In this paper, we prove that in undirected networks, the ratio of achievable multicast throughput with network coding to that without network coding is bounded by a constant ratio of 2, i.e., network coding can at most double the throughput. This result holds for any undirected network topology, any link capacity configuration, any multicast group size, and any source information rate. 
This constant bound 2 represents the tightest bound that has been proved so far in general undirected settings, and is to be contrasted with the unbounded potential of network coding in improving multicast throughput in directed networks.", "In this work we improve on the bounds presented in Z. Li and B. Li (2004) for network coding gain in the undirected case. A tightened bound for the undirected multicast problem with three terminals is derived. An interesting result shows that with fractional routing, routing throughput can achieve at least 75% of the coding throughput. A tighter bound for the general multicast problem with any number of terminals shows that coding gain is strictly less than 2. Our derived bound depends on the number of terminals in the multicast network and approaches 2 for an arbitrarily large number of terminals." ] }
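To make the remark about free classical communication effectively "undirecting" the network concrete, here is a minimal NumPy simulation of standard quantum teleportation: one pre-shared ebit plus two classical bits move one qubit, so a quantum edge used once to distribute half of an entangled pair can later carry a qubit in the opposite direction. The state-vector bookkeeping below is a generic textbook sketch, not code from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates.
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

# Message qubit |psi> = a|0> + b|1> held by the sender.
a, b = 0.6, 0.8
psi = np.array([a, b])

# Shared Bell pair (|00> + |11>)/sqrt(2): one half at the sender, one at the receiver.
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)

# Three-qubit state, ordered (message, sender's Bell half, receiver's Bell half).
state = np.kron(psi, bell)

# Sender: CNOT from the message qubit onto its Bell half, then H on the message qubit.
state = np.kron(CNOT, I) @ state
state = np.kron(np.kron(H, I), I) @ state

# Sender measures its two qubits; (m0, m1) are the two classical bits to transmit.
probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2).reshape(4)
outcome = rng.choice(4, p=probs)
m0, m1 = outcome >> 1, outcome & 1

# Collapse to the receiver's conditional state and renormalize.
receiver = state.reshape(2, 2, 2)[m0, m1, :]
receiver = receiver / np.linalg.norm(receiver)

# The two bits travel over the free classical channel; the receiver corrects.
if m1:
    receiver = X @ receiver
if m0:
    receiver = Z @ receiver

# Up to a global phase the receiver now holds |psi>.
assert np.allclose(np.abs(np.vdot(receiver, psi)), 1.0)
print("measured bits:", int(m0), int(m1), "-> state recovered at the receiver")
```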
0908.2050
2951505888
When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Without views, Gecode would either require 180 000 rather than 40 000 lines of propagator code, or would lack many efficient propagator variants. Compared to 8 000 lines of code for views, the reduction in code for propagators yields a 1750% return on investment.
On a more historical level, a derived propagator is related to the notion of path consistency. A domain is path-consistent for a set of constraints if, for any subset @math of its variables, @math and @math implies that there is a value @math such that the pair @math satisfies all the (binary) constraints between @math and @math , the pair @math satisfies all the (binary) constraints between @math and @math , and the pair @math satisfies all the (binary) constraints between @math and @math @cite_27 . If @math is domain-complete for @math , then it achieves path consistency for the constraint @math and all the @math in the decomposition model.
{ "cite_N": [ "@cite_27" ], "mid": [ "2135432705" ], "abstract": [ "Artificial intelligence tasks which can be formulated as constraint satisfaction problems, with which this paper is for the most part concerned, are usually by solved backtracking the examining the thrashing behavior that nearly always accompanies backtracking, identifying three of its causes and proposing remedies for them we are led to a class of algorithms whoch can profitably be used to eliminate local (node, arc and path) inconsistencies before any attempt is made to construct a complete solution. A more general paradigm for attacking these tasks is the altenation of constraint manipulation and case analysis producing an OR problem graph which may be searched in any of the usual ways. Many authors, particularly Montanari and Waltz, have contributed to the development of these ideas; a secondary aim of this paper is to trace that history. The primary aim is to provide an accessible, unified framework, within which to present the algorithms including a new path consistency algorithm, to discuss their relationships and the may applications, both realized and potential of network consistency algorithms." ] }
0908.2467
2149950250
The non-uniform demand network coding problem is posed as a single-source and multiple-sink network transmission problem where the sinks may have heterogeneous demands. In contrast with multicast problems, non-uniform demand problems are concerned with the amounts of data received by each sink, rather than the specifics of the received data. In this work, we enumerate non-uniform network demand scenarios under which network coding solutions can be found in polynomial time. This is accomplished by relating the demand problem with the graph coloring problem, and then applying results from the strong perfect graph theorem to identify coloring problems which can be solved in polynomial time. This characterization of efficiently-solvable non-uniform demand problems is an important step in understanding such problems, as it allows us to better understand situations under which the NP-complete problem might be tractable.
A technique which we use is that of transmitting data along paths, or through flows. This approach has been widely used in the network coding literature, and has enabled many significant results. In Jaggi @cite_8 , the polynomial-time algorithms for multicast problems rely on the concept of sending data down [perhaps overlapping] paths. In @cite_16 , Fragouli and Soljanin give a decomposition of networks into flows, in order to model data transmission in a network more simply. Using this decomposition and a graph coloring formulation, alphabet size bounds for any network code are then proven. Although the flow-based and path-based approaches are similar in many ways, the two techniques differ in that the flow-based approach creates a new flow every time a piece of data is transformed by coding, whereas the path-based approach keeps track of each piece of data as it is sent individually down a path, even if transformations are applied to the data. We shall use a path-based approach.
{ "cite_N": [ "@cite_16", "@cite_8" ], "mid": [ "2134859306", "2123562143" ], "abstract": [ "We propose a method to identify structural properties of multicast network configurations, by decomposing networks into regions through which the same information flows. This decomposition allows us to show that very different networks are equivalent from a coding point of view, and offers a means to identify such equivalence classes. It also allows us to divide the network coding problem into two almost independent tasks: one of graph theory and the other of classical channel coding theory. This approach to network coding enables us to derive the smallest code alphabet size sufficient to code any network configuration with two sources as a function of the number of receivers in the network. But perhaps the most significant strength of our approach concerns future network coding practice. Namely, we propose deterministic algorithms to specify the coding operations at network nodes without the knowledge of the overall network topology. Such decentralized designs facilitate the construction of codes that can easily accommodate future changes in the network, e.g., addition of receivers and loss of links", "The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures." ] }
0908.2467
2149950250
The non-uniform demand network coding problem is posed as a single-source and multiple-sink network transmission problem where the sinks may have heterogeneous demands. In contrast with multicast problems, non-uniform demand problems are concerned with the amounts of data received by each sink, rather than the specifics of the received data. In this work, we enumerate non-uniform network demand scenarios under which network coding solutions can be found in polynomial time. This is accomplished by relating the demand problem with the graph coloring problem, and then applying results from the strong perfect graph theorem to identify coloring problems which can be solved in polynomial time. This characterization of efficiently-solvable non-uniform demand problems is an important step in understanding such problems, as it allows us to better understand situations under which the NP-complete problem might be tractable.
We briefly mention some results regarding the multiple multicast connections problem, since achievable solutions for such problems are also achievable for the non-uniform demand problem with the same demanded rates. (Of course, the reverse is not always true.) Many of these results consider the case of two sinks. In @cite_10 , after enumerating all possible scenarios, the authors conclude that in the case of two sinks with differing rates, linear coding is sufficient. The same conclusion is reached in @cite_3 , although the authors use a different approach which considers a path-based enumeration. A characterization of the achievable data rate region using network coding is given for the two-sink case. For more than two sinks, conditions under which solutions exist for the multiple connections problem have not been enumerated.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2152664953", "1668074974" ], "abstract": [ "Network coding shows that the data rate can be increased if information is allowed to be encoded in the network nodes. The recent work of S.-Y.R. (see IEEE Trans. Inform. Theory, vol.49, no.2, p.371-81, 2003) and R. Koetter and M. Medard (see Proc. INFOCOM, 2002) shows that linear network coding is sufficient for a single source multicast network. The restrictiveness of the use of linear code is still an unknown in the general multisource multicast network. We characterize the achievable information rate region for a single source node multi-source multicast network with two sinks. We further show that linear coding is sufficient for achieving the maximum network capacity.", "We consider a communication network with a single source that has a set of messages and two terminals where each terminal is interested in an arbitrary subset of messages at the source. A tight capacity region for this problem is demonstrated. We show by a simple graph-theoretic procedure that any such problem can be solved by performing network coding on the subset of messages that are requested by both the terminals and that routing is sufficient for transferring the remaining messages." ] }
0908.2494
2098961201
We consider the problem of collaborative filtering from a channel coding perspective. We model the underlying rating matrix as a finite alphabet matrix with block constant structure. The observations are obtained from this underlying matrix through a discrete memoryless channel with a noisy part representing noisy user behavior and an erasure part representing missing data. Moreover, the clusters over which the underlying matrix is constant are unknown. We establish a threshold result for this model: if the largest cluster size is smaller than C1 log(mn) (where the rating matrix is of size m × n), then the underlying matrix cannot be recovered with any estimator, but if the smallest cluster size is larger than C2 log(mn), then we show a polynomial time estimator with asymptotically vanishing probability of error. In the case of uniform cluster size, not only the order of the threshold, but also the constant is identified.
In this paper, we take an alternative channel coding viewpoint of the problem. Our results differ from the above works in several aspects outlined below. We consider a finite alphabet for the ratings and a different model for the rating matrix based on row and column clusters. We consider noisy user behavior, and our goal is not to complete the missing entries, but to estimate an underlying "block constant" matrix (in the limit as the matrix size grows). Since we consider a finite alphabet, even in the presence of noise, error-free recovery is asymptotically feasible. Hence, unlike @cite_9 , which considers real-valued matrices, we do not allow any distortion. We next outline our model and results.
{ "cite_N": [ "@cite_9" ], "mid": [ "2047071281" ], "abstract": [ "On the heels of compressed sensing, a new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries. It comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision, to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown matrix of low rank from just about log noisy samples with an error that is proportional to the noise level. We present numerical results that complement our quantitative analysis and show that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
The first article to notice the importance of priorities in information systems is @cite_21 . There, the problem of conflicting updates in (propositional) databases is solved in a manner similar to @math . The considered priorities are transitive, which in our framework is too restrictive. Also, in our framework this restriction does not bring any computational benefits (the reductions can be modified to use only transitive priorities). @cite_0 is another example of @math -like prioritized conflict resolution of first-order theories. The basic framework is defined for priorities which are weak orders. A partial order is handled by considering every extension to a weak order. This approach also assumes the transitivity of the priority.
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "1534361892", "2124299249" ], "abstract": [ "We present a general framework for defining nonmonotonic systems based on the notion of preferred maximal consistent subsets of the premises. This framework subsumes David Poole's THEORIST approach to default reasoning as a particular instance. A disadvantage of THEORIST is that it does not allow to represent priorities between defaults adequately (as distinct from blocking defaults in specific situations). We therefore propose two generalizations of Poole's system: in the first generalization several layers of possible hypotheses representing different degrees of reliability are introduced. In a second further generalization a partial ordering between premises is used to distinguish between more and less reliable formulas. In both approaches a formula is provable from a theory if it is possible to construct a consistent argument for it based on the most reliable hypotheses. This allows for a simple representation of priorities between defaults.", "We suggest here a methodology for updating databases with integrity constraints and rules for deriving inexphcit information. First we consider the problem of updating arbitrary theories by inserting into them or deleting from them arbitrary sentences. The solution involves two key ideas when replacing an old theory by a new one we wish to minimize the change in the theory, and when there are several theories that involve minimal changes, we look for a new theory that reflects that ambiguity. The methodology is also adapted to updating databases, where different facts can carry different priorities, and to updating user views." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
In the context of logic programs, priorities among rules can be used to handle inconsistent logic programs (where rules imply contradictory facts). More preferred rules are satisfied, possibly at the cost of violating less important ones. In a manner analogous to Proposition , @cite_8 lifts a total order on rules to a preference on (extended) answer sets. When computing answers, only maximally preferred answer sets are considered.
{ "cite_N": [ "@cite_8" ], "mid": [ "1565029141" ], "abstract": [ "We extend answer set semantics to deal with inconsistent programs (containing classical negation), by finding a \"best\" answer set. Within the context of inconsistent programs, it is natural to have a partial order on rules, representing a preference for satisfying certain rules, possibly at the cost of violating less important ones. We show that such a rule order induces a natural order on extended answer sets, the minimal elements of which we call preferred answer sets. We characterize the expressiveness of the resulting semantics and show that it can simulate negation as failure as well as disjunction. We illustrate an application of the approach by considering database repairs, where minimal repairs are shown to correspond to preferred answer sets." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
@cite_12 proposes a framework of conditioned active integrity constraints , which allows the user to specify the way some of the conflicts created with a constraint can be resolved. This framework satisfies properties @math and @math but not @math and @math . @cite_12 also describes how to translate conditioned active integrity constraints into a prioritized logic program @cite_7 , whose preferred models correspond to maximally preferred repairs.
{ "cite_N": [ "@cite_7", "@cite_12" ], "mid": [ "1988650943", "2110762258" ], "abstract": [ "Representing and reasoning with priorities are important in commonsense reasoning. This paper introduces a framework of prioritized logic programming (PLP), which has a mechanism of explicit representation of priority information in a program. When a program contains incomplete or indefinite information, PLP is useful for specifying preference to reduce non-determinism in logic programming. Moreover, PLP can realize various forms of commonsense reasoning in AI such as abduction, default reasoning, circumscription, and their prioritized variants. The proposed framework increases the expressive power of logic programming and exploits new applications in knowledge representation.", "This paper introduces active integrity constraints (AICs), an extension of integrity constraints for consistent database maintenance. An active integrity constraint is a special constraint whose body contains a conjunction of literals which must be false and whose head contains a disjunction of update actions representing actions (insertions and deletions of tuples) to be performed if the constraint is not satisfied (that is its body is true). The AICs work in a domino-like manner as the satisfaction of one AIC may trigger the violation and therefore the activation of another one. The paper also introduces founded repairs, which are minimal sets of update actions that make the database consistent, and are specified and ldquosupportedrdquo by active integrity constraints. The paper presents: 1) a formal declarative semantics allowing the computation of founded repairs and 2) a characterization of this semantics obtained by rewriting active integrity constraints into disjunctive logic rules, so that founded repairs can be derived from the answer sets of the derived logic program. Finally, the paper studies the computational complexity of computing founded repairs." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
@cite_19 uses ranking functions on facts to resolve conflicts by taking only the fact with the highest rank and removing the others. This approach constructs a unique repair under the assumption that no two different facts are of equal rank (satisfaction of @math ). If this assumption is not satisfied and the facts contain numeric values, a new value, called the fusion, can be calculated from the conflicting facts (then, however, the constructed instance is not necessarily a repair in the sense of Definition , which means a possible loss of information).
{ "cite_N": [ "@cite_19" ], "mid": [ "2154029507" ], "abstract": [ "A virtual database system is software that provides unified access to multiple information sources. If the sources are overlapping in their contents and independently maintained, then the likelihood of inconsistent answers is high. Solutions are often based on ranking (which sorts the different answers according to recurrence) and on fusion (which synthesizes a new value from the different alternatives according to a specific formula). In this paper we argue that both methods are flawed, and we offer alternative solutions that are based on knowledge about the performance of the source data; including features such as recentness, availability, accuracy and cost. These features are combined in a flexible utility function that expresses the overall value of a data item to the user. Utility allows us to (1) define meaningful ranking on the inconsistent set of answers, and offer the topranked answer as a preferred answer; (2) determine whether a fusion value is indeed better than the initial values, by calculating its utility and comparing it to the utilities of the initial values; and (3) discover the best fusion: the fusion formula that optimizes the utility. The advantages of such performance-based and utility-driven ranking and fusion are considerable." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
A different approach based on ranking is studied in @cite_3 . The authors consider polynomial functions that are used to rank repairs. When computing preferred consistent query answers, only repairs with the highest rank are considered. The properties @math and @math are trivially satisfied, but because this form of preference information does not have natural notions of extensions and maximality, it is hard to discuss postulates @math and @math . Also, the preference among repairs in this method is not based on the way in which the conflicts are resolved.
{ "cite_N": [ "@cite_3" ], "mid": [ "2152959100" ], "abstract": [ "Recently there has been an increasing interest in integrity constraints associated with relational databases and in inconsistent databases, i.e. databases which do not satisfy integrity constraints. In the presence of inconsistencies two main techniques have been proposed: compute repairs, i.e. minimal set of insertion and deletion operations, called database repairs, and compute consistent answers, i.e. identify the sets of atoms which we can assume true, false and undefined without modifying the database. In this paper feasibility conditions and preference criteria are introduced which, associated with integrity constraints, allow to restrict the number of repairs and to increase the power of queries over inconsistent databases. Moreover, it is studied the complexity of computing repairs and the expressive power of relational queries over databases with integrity constraints, feasibility conditions and preference criteria." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
An approach where the user has a certain degree of control over the way the conflicts are resolved is presented in @cite_24 . Using repair constraints, the user can restrict the considered repairs to those in which facts from one relation have been removed only if similar facts have been removed from some other relation. This approach satisfies @math but not @math . A method of weakening the repair constraints is proposed to get @math ; however, this comes at the price of losing @math .
{ "cite_N": [ "@cite_24" ], "mid": [ "1569522367" ], "abstract": [ "Data integration systems represent today a key technological infrastructure for managing the enormous amount of information even more and more distributed over many data sources, often stored in different heterogeneous formats. Several different approaches providing transparent access to the data by means of suitable query answering strategies have been proposed in the literature. These approaches often assume that all the sources have the same level of reliability and that there is no need for preferring values “extracted” from a given source. This is mainly due to the difficulties of properly translating and reformulating source preferences in terms of properties expressed over the global view supplied by the data integration system. Nonetheless preferences are very important auxiliary information that can be profitably exploited for refining the way in which integration is carried out. In this paper we tackle the above difficulties and we propose a formal framework for both specifying and reasoning with preferences among the sources. The semantics of the system is restated in terms of preferred answers to user queries, and the computational complexity of identifying these answers is investigated as well." ] }
0908.0464
2952347785
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
In @cite_18 , the authors extend the framework of consistent query answers with techniques of probabilistic databases. Essentially, only one key dependency per relation is considered, and user preference is expressed by assigning a probability value to each of the mutually conflicting facts. The probability values must sum to @math over every clique in the conflict graphs. This framework generalizes the standard framework of consistent query answers: the repairs correspond to possible worlds and have an associated probability. We also note that no repairs are removed from consideration (unless the probability of the world is @math ). The query is evaluated over all repairs and the probability assigned to an answer is the sum of the probabilities of the worlds in which the answer is present. Although the considered databases are repairs, the use of the associated probability values makes it difficult to compare this framework with ours.
{ "cite_N": [ "@cite_18" ], "mid": [ "2166994031" ], "abstract": [ "The detection of duplicate tuples, corresponding to the same real-world entity, is an important task in data integration and cleaning. While many techniques exist to identify such tuples, the merging or elimination of duplicates can be a difficult task that relies on ad-hoc and often manual solutions. We propose a complementary approach that permits declarative query answering over duplicated data, where each duplicate is associated with a probability of being in the clean database. We rewrite queries over a database containing duplicates to return each answer with the probability that the answer is in the clean database. Our rewritten queries are sensitive to the semantics of duplication and help a user understand which query answers are most likely to be present in the clean database. The semantics that we adopt is independent of the way the probabilities are produced, but is able to effectively exploit them during query answering. In the absence of external knowledge that associates each database tuple with a probability, we offer a technique, based on tuple summaries, that automates this task. We experimentally study the performance of our rewritten queries. Our studies show that the rewriting does not introduce a significant overhead in query execution time. This work is done in the context of the ConQuer project at the University of Toronto, which focuses on the efficient management of inconsistent and dirty databases." ] }
0907.5438
2950078172
Many basic key distribution schemes specifically tuned to wireless sensor networks have been proposed in the literature. Recently, several researchers have proposed schemes in which they have used group-based deployment models and assumed predeployment knowledge of the expected locations of nodes. They have shown that these schemes achieve better performance than the basic schemes, in terms of connectivity, resilience against node capture and storage requirements. But in many situations, the expected locations of nodes are not available. In this paper we propose a solution which uses the basic scheme, but does not use a group-based deployment model or predeployment knowledge of the locations of nodes, and yet performs better than schemes which make the aforementioned assumptions. In our scheme, groups are formed after deployment of sensor nodes, on the basis of their physical locations, and the nodes sample keys from disjoint key pools. Compromise of a node affects secure links with other nodes that are part of its group only. For this reason, our scheme performs better than the basic schemes and the schemes using predeployment knowledge, in terms of connectivity, storage requirement, and security. Moreover, the post-deployment key generation process completes sooner than in schemes like LEAP+.
Various key distribution schemes have been proposed in the literature for wireless sensor networks, keeping in view the resource-constrained devices used in these networks. Eschenauer and Gligor @cite_14 proposed a scheme in which for every node, keys are picked randomly (with replacement) from a key pool and assigned to it before deployment; this scheme is known as the basic or EG scheme. After key discovery, two neighbor nodes that have a common key use that as the key for secure communication. Based on this basic scheme, several schemes with enhanced security features have been suggested in @cite_4 @cite_2 @cite_3 @cite_9 @cite_0 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_3", "@cite_0", "@cite_2" ], "mid": [ "2116269350", "2117984332", "2105514582", "2143536181", "2006530018", "1973225261" ], "abstract": [ "Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.", "Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution's strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.", "A key distribution scheme for dynamic conferences is a method by which initially an (off-line) trusted server distributes private individual pieces of information to a set of users. Later any group of users of a given size (a dynamic conference) is able to compute a common secure key. In this paper we study the theory and applications of such perfectly secure systems. In this setting, any group of t users can compute a common key by each user computing using only his private piece of information and the identities of the other t − 1 group users. Keys are secure against coalitions of up to k users, that is, even if k users pool together their pieces they cannot compute anything about a key of any t-size conference comprised of other users.", "To achieve security in wireless sensor networks, it is important to he able to encrypt messages sent among sensor nodes. Keys for encryption purposes must he agreed upon by communicating nodes. Due to resource constraints, achieving such key agreement in wireless sensor networks is nontrivial. 
Many key agreement schemes used in general networks, such as Diffie-Hellman and public-key based schemes, are not suitable for wireless sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large amount of memory used when the network size is large. Recently, a random key pre-distribution scheme and its improvements have been proposed. A common assumption made by these random key pre-distribution schemes is that no deployment knowledge is available. Noticing that in many practical scenarios, certain deployment knowledge may be available a priori, we propose a novel random key pre-distribution scheme that exploits deployment knowledge and avoids unnecessary key assignments. We show that the performance (including connectivity, memory usage, and network resilience against node capture) of sensor networks can be substantially improved with the use of our proposed scheme. The scheme and its detailed performance evaluation are presented in this paper.", "In this paper we propose an approach for key management in sensor networks which takes the location of sensor nodes into consideration while deciding the keys to be deployed on each node. As a result, this approach not only reduces the number of keys that have to be stored on each sensor node but also provides for the containment of node compromise. Thus compromise of a node in a location affects the communications only around that location. This approach which we call as location dependent key management does not require any knowledge about the deployment of sensor nodes. The proposed scheme starts off with loading a single key on each sensor node prior to deployment. The actual keys are then derived from this single key once the sensor nodes are deployed. The proposed scheme allows for additions of sensor nodes to the network at any point in time. We study the proposed scheme using both analysis and simulations and point out the advantages.", "To achieve security in wireless sensor networks, it is important to be able to encrypt and authenticate messages sent between sensor nodes. Before doing so, keys for performing encryption and authentication must be agreed upon by the communicating parties. Due to resource constraints, however, achieving key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and other public-key based schemes, are not suitable for wireless sensor networks due to the limited computational abilities of the sensor nodes. Predistribution of secret keys for all pairs of nodes is not viable due to the large amount of memory this requires when the network size is large. In this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain." ] }
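A toy simulation of the random key predistribution idea underlying the EG scheme is shown below; the pool size, ring size, and network size are arbitrary, the keys are plain integers standing in for real keys, and sampling is done without replacement for simplicity.

```python
import random

random.seed(0)

POOL_SIZE = 1000      # size of the global key pool
RING_SIZE = 40        # keys stored on each node before deployment
NUM_NODES = 200

key_pool = list(range(POOL_SIZE))

# Predeployment: each node gets a random key ring drawn from the pool.
rings = [set(random.sample(key_pool, RING_SIZE)) for _ in range(NUM_NODES)]

def shared_key(u: int, v: int):
    """Shared-key discovery: return one common key identifier, if any."""
    common = rings[u] & rings[v]
    return min(common) if common else None

# Empirical probability that two nodes can establish a secure link directly.
pairs = [(u, v) for u in range(NUM_NODES) for v in range(u + 1, NUM_NODES)]
connected = sum(1 for u, v in pairs if shared_key(u, v) is not None)
print(f"fraction of node pairs sharing a key: {connected / len(pairs):.3f}")
# For drawing without replacement this is roughly 1 - C(P-k, k) / C(P, k)
# for pool size P and ring size k.
```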
0907.5438
2950078172
Many basic key distribution schemes specifically tuned to wireless sensor networks have been proposed in the literature. Recently, several researchers have proposed schemes in which they have used group-based deployment models and assumed predeployment knowledge of the expected locations of nodes. They have shown that these schemes achieve better performance than the basic schemes, in terms of connectivity, resilience against node capture and storage requirements. But in many situations, the expected locations of nodes are not available. In this paper we propose a solution which uses the basic scheme, but does not use a group-based deployment model or predeployment knowledge of the locations of nodes, and yet performs better than schemes which make the aforementioned assumptions. In our scheme, groups are formed after deployment of sensor nodes, on the basis of their physical locations, and the nodes sample keys from disjoint key pools. Compromise of a node affects secure links with other nodes that are part of its group only. For this reason, our scheme performs better than the basic schemes and the schemes using predeployment knowledge, in terms of connectivity, storage requirement, and security. Moreover, the post-deployment key generation process completes sooner than in schemes like LEAP+.
Du et al. @cite_2 improve upon Blom's scheme by combining it with the random key distribution scheme. Similarly, Liu and Ning @cite_5 improve upon Blundo's scheme @cite_9 by combining it with the random key distribution scheme. Both these schemes perform better than the EG scheme @cite_14 in terms of connectivity and resilience against node capture. But threshold schemes do not scale with the number of nodes in the network. For a fixed resilience against node capture, if the number of nodes is increased, then they require a large amount of memory.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_14", "@cite_2" ], "mid": [ "2122053504", "2105514582", "2116269350", "1973225261" ], "abstract": [ "Pairwise key establishment is a fundamental security service in sensor networks; it enables sensor nodes to communicate securely with each other using cryptographic techniques. However, due to the resource constraints on sensor nodes, it is not feasible to use traditional key management techniques such as public key cryptography and key distribution center (KDC). A number of key predistribution techniques have been proposed for pairwise key establishment in sensor networks recently. To facilitate the study of novel pairwise key predistribution techniques, this paper develops a general framework for establishing pairwise keys between sensor nodes using bivariate polynomials. This paper then proposes two efficient instantiations of the general framework: a random subset assignment key predistribution scheme, and a hypercube-based key predistribution scheme. The analysis shows that both schemes have a number of nice properties, including high probability, or guarantee to establish pairwise keys, tolerance of node captures, and low storage, communication, and computation overhead. To further reduce the computation at sensor nodes, this paper presents an optimization technique for polynomial evaluation, which is used to compute pairwise keys. This paper also reports the implementation and the performance of the proposed schemes on MICA2 motes running TinyOS, an operating system for networked sensors. The results indicate that the proposed techniques can be applied efficiently in resource-constrained sensor networks.", "A key distribution scheme for dynamic conferences is a method by which initially an (off-line) trusted server distributes private individual pieces of information to a set of users. Later any group of users of a given size (a dynamic conference) is able to compute a common secure key. In this paper we study the theory and applications of such perfectly secure systems. In this setting, any group of t users can compute a common key by each user computing using only his private piece of information and the identities of the other t − 1 group users. Keys are secure against coalitions of up to k users, that is, even if k users pool together their pieces they cannot compute anything about a key of any t-size conference comprised of other users.", "Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. 
The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.", "To achieve security in wireless sensor networks, it is important to be able to encrypt and authenticate messages sent between sensor nodes. Before doing so, keys for performing encryption and authentication must be agreed upon by the communicating parties. Due to resource constraints, however, achieving key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and other public-key based schemes, are not suitable for wireless sensor networks due to the limited computational abilities of the sensor nodes. Predistribution of secret keys for all pairs of nodes is not viable due to the large amount of memory this requires when the network size is large.In this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain." ] }
0907.5438
2950078172
Many basic key distribution schemes specifically tuned to wireless sensor networks have been proposed in the literature. Recently, several researchers have proposed schemes that use group-based deployment models and assume predeployment knowledge of the expected locations of nodes. They have shown that these schemes achieve better performance than the basic schemes in terms of connectivity, resilience against node capture, and storage requirements. But in many situations the expected locations of nodes are not available. In this paper we propose a solution which builds on the basic scheme but uses neither a group-based deployment model nor predeployment knowledge of node locations, and yet performs better than schemes which make these assumptions. In our scheme, groups are formed after deployment of sensor nodes, on the basis of their physical locations, and the nodes sample keys from disjoint key pools. Compromise of a node affects only the secure links with other nodes that are part of its group. For this reason, our scheme performs better than the basic schemes and the schemes using predeployment knowledge in terms of connectivity, storage requirement, and security. Moreover, the post-deployment key generation process completes sooner than in schemes such as LEAP+.
Location-based schemes that depend on knowledge of the expected locations of nodes perform well, but they are all prone to estimation errors in the expected positions of the nodes. Consequently, other schemes that do not assume predeployment knowledge of the expected locations of nodes have been proposed. In @cite_1 , Liu et al. proposed a scheme which does not use expected locations of the nodes but still relies on group-based deployment. Their work provides a framework with which any basic scheme, such as random key distribution or a polynomial-based scheme, can be combined. The authors showed that basic schemes used within this framework perform better than when used alone.
{ "cite_N": [ "@cite_1" ], "mid": [ "2156914580" ], "abstract": [ "Many key pre-distribution techniques have been developed recently to establish pairwise keys for wireless sensor networks. To further improve these schemes, researchers have proposed to take advantage of sensors' expected locations to help pre-distributing keying materials. However, it is usually very difficult, and sometimes impossible, to guarantee the knowledge of sensors' expected locations. In order to remove the dependency on expected locations, this paper proposes a practical deployment model, where sensor nodes are deployed in groups, and the nodes in the same group are close to each other after the deployment. Based on this model, the paper develops a novel group-based key pre-distribution framework, which can be combined with any of existing key pre-distribution techniques. A distinguishing property of this framework is that it does not require the knowledge of sensors' expected locations and greatly simplifies the deployment of sensor networks. The analysis also shows that the framework can substantially improve the security as well as the performance of existing key pre-distribution techniques." ] }
Further, Anjum @cite_0 removed both the group-based deployment model and the assumption of knowledge of the expected locations of nodes. He showed that his scheme performs better than the basic scheme, but it requires nodes that can transmit at different power levels: some special nodes generate different random numbers (nonces) and transmit them at different power levels, and nodes receiving the same nonce can communicate, provided they are neighbors. Our scheme is different, since we do not require nodes that can transmit at different power levels. Instead of using different power levels, our scheme uses TTL scoping: after the deployment phase, some nodes transmit a broadcast packet containing a TTL (Time to Live) field, similar to that of IP packets in data networks.
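The following is a minimal sketch of the TTL-scoping idea just described, under the assumption that group membership is simply "every node reached by a hop-limited flood": an initiator broadcasts a packet whose TTL is decremented at each forwarding hop and dropped once it reaches zero. The topology, node names, and TTL value are hypothetical.

```python
from collections import deque

def ttl_scoped_broadcast(adjacency, initiator, ttl):
    """Return the set of nodes reached when `initiator` floods a packet
    with the given TTL: each forwarding hop decrements the TTL, and the
    packet is no longer forwarded once the TTL reaches zero."""
    reached = {initiator}
    queue = deque([(initiator, ttl)])
    while queue:
        node, remaining = queue.popleft()
        if remaining == 0:
            continue
        for neighbor in adjacency[node]:
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append((neighbor, remaining - 1))
    return reached

if __name__ == "__main__":
    # A small illustrative topology: the path A-B-C-D-E.
    topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
                "D": ["C", "E"], "E": ["D"]}
    # With TTL 2, the broadcast from A is scoped to A, B and C.
    print(sorted(ttl_scoped_broadcast(topology, "A", ttl=2)))
```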
{ "cite_N": [ "@cite_0" ], "mid": [ "2006530018" ], "abstract": [ "In this paper we propose an approach for key management in sensor networks which takes the location of sensor nodes into consideration while deciding the keys to be deployed on each node. As a result, this approach not only reduces the number of keys that have to be stored on each sensor node but also provides for the containment of node compromise. Thus compromise of a node in a location affects the communications only around that location. This approach which we call as location dependent key management does not require any knowledge about the deployment of sensor nodes. The proposed scheme starts off with loading a single key on each sensor node prior to deployment. The actual keys are then derived from this single key once the sensor nodes are deployed. The proposed scheme allows for additions of sensor nodes to the network at any point in time. We study the proposed scheme using both analysis and simulations and point out the advantages." ] }
In addition, our scheme differs in the way nodes choose their key rings. In @cite_0 , on receiving the nonce, nodes map it to some different value. In contrast, in our scheme, some nodes transmit their id, and corresponding to every id there is an associated key pool; nodes sample keys from the key pool corresponding to the received id. The main advantage of doing this is improved resilience against node capture. In @cite_0 , all nodes receiving the same nonce use the same key for secure communication, so if any node is compromised, all the secure links formed using the nonce received by this node are compromised. In our scheme, on the other hand, only part of the communication with other nodes is compromised, because nodes receiving the same id sample keys from the key pool instead of using the same key.
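To make the contrast concrete, here is a small hedged sketch of the id-indexed key-pool idea described above: every broadcast id has its own key pool, and each node that hears an id samples a random subset of that pool rather than adopting a single shared key. The pool derivation via SHA-256, the pool and ring sizes, and the function names are illustrative assumptions, not the exact construction of either scheme.

```python
import hashlib
import random

def pool_for_id(group_id: str, pool_size: int):
    """Derive a deterministic key pool for a broadcast id; here each key is
    modelled as a SHA-256 digest purely for illustration."""
    return [hashlib.sha256(f"{group_id}:{i}".encode()).hexdigest()
            for i in range(pool_size)]

def sample_key_ring(group_id: str, pool_size: int, ring_size: int, rng):
    """A node hearing `group_id` samples `ring_size` keys from that id's pool."""
    return set(rng.sample(pool_for_id(group_id, pool_size), ring_size))

if __name__ == "__main__":
    rng = random.Random(0)
    # Two nodes that heard the same id share only the keys they both sampled,
    # so capturing one node does not expose every link in the group.
    ring_a = sample_key_ring("id-7", pool_size=200, ring_size=30, rng=rng)
    ring_b = sample_key_ring("id-7", pool_size=200, ring_size=30, rng=rng)
    print(len(ring_a & ring_b), "shared keys out of", len(ring_a))
```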
{ "cite_N": [ "@cite_0" ], "mid": [ "2006530018" ], "abstract": [ "In this paper we propose an approach for key management in sensor networks which takes the location of sensor nodes into consideration while deciding the keys to be deployed on each node. As a result, this approach not only reduces the number of keys that have to be stored on each sensor node but also provides for the containment of node compromise. Thus compromise of a node in a location affects the communications only around that location. This approach which we call as location dependent key management does not require any knowledge about the deployment of sensor nodes. The proposed scheme starts off with loading a single key on each sensor node prior to deployment. The actual keys are then derived from this single key once the sensor nodes are deployed. The proposed scheme allows for additions of sensor nodes to the network at any point in time. We study the proposed scheme using both analysis and simulations and point out the advantages." ] }
0907.4166
2950265248
In this paper, we present the first approximation algorithms for the problem of designing revenue-optimal Bayesian incentive compatible auctions when there are multiple (heterogeneous) items and when bidders can have arbitrary demand and budget constraints. Our mechanisms are surprisingly simple: We show that a sequential all-pay mechanism is a 4-approximation to the revenue of the optimal ex-interim truthful mechanism with discrete correlated type space for each bidder. We also show that a sequential posted price mechanism is an O(1)-approximation to the revenue of the optimal ex-post truthful mechanism when the type space of each bidder is a product distribution that satisfies the standard hazard rate condition. We further show a logarithmic approximation when the hazard rate condition is removed, and complete the picture by showing that achieving a sub-logarithmic approximation, even for regular distributions and one bidder, requires pricing bundles of items. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles. We believe this approach will be useful in other Bayesian mechanism design contexts.
The Bayesian setting is widely studied in the economics literature @cite_21 @cite_26 @cite_10 @cite_27 @cite_18 @cite_22 @cite_0 @cite_20 @cite_5 . In this setting, the optimal (either BIC or DSIC) mechanism can always be computed by encoding the incentive compatibility constraints in an integer program and maximizing expected revenue. However, the number of variables (and constraints) in this IP is exponential in the number of bidders, since there are allocation and price variables for every profile of revealed types. Therefore, the key difficulty in the Bayesian case is computational: can the optimal (or approximately optimal) auction be efficiently computed and implemented?
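For concreteness, a deliberately schematic form of the program alluded to above is shown below, with one allocation variable x_i(t) and one payment variable p_i(t) per bidder i and per profile t of revealed types; it is this per-profile indexing that makes the program exponential in the number of bidders. The constraints shown are the standard ex-interim incentive-compatibility, individual-rationality, budget, and feasibility conditions (with budgets treated as public for simplicity), not the specific relaxation developed in the paper.

```latex
\begin{align*}
\max_{x,\,p}\quad & \sum_{t}\Pr[t]\,\sum_{i} p_i(t)\\
\text{s.t.}\quad
& \mathbb{E}_{t_{-i}}\!\left[u_i\bigl(t_i,x_i(t_i,t_{-i})\bigr)-p_i(t_i,t_{-i})\right]
\;\ge\;
\mathbb{E}_{t_{-i}}\!\left[u_i\bigl(t_i,x_i(t_i',t_{-i})\bigr)-p_i(t_i',t_{-i})\right]
&& \forall\, i,\ t_i,\ t_i' \quad\text{(BIC)}\\
& \mathbb{E}_{t_{-i}}\!\left[u_i\bigl(t_i,x_i(t_i,t_{-i})\bigr)-p_i(t_i,t_{-i})\right]\;\ge\;0
&& \forall\, i,\ t_i \quad\text{(IR)}\\
& p_i(t)\;\le\;B_i, \qquad x(t)\ \text{a feasible allocation}
&& \forall\, i,\ t \quad\text{(budgets, supply)}
\end{align*}
```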
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_21", "@cite_0", "@cite_27", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "", "1498123810", "", "", "1594643704", "2739947352", "", "2035239570", "2133387873" ], "abstract": [ "", "We study the impact of potentially binding liquidity constraints on the level of competition in a version of the simultaneous ascending bid auctions. We show that the possibility, even if arbitrarily small, of binding budget constraints can reduce competition significantly, because bidders can ‘pretend’ to be constrained, even if they are not. This effect can be significant: for many distributions of the bidders’ values, all unconstrained bidders behave as if they were liquidity constrained, even as the probability that bidders are budget constrained goes to zero. The possibility of budget constraints therefore requires special attention to the details of the auction. JEL classification number: D44 [H]", "", "", "We consider a dynamic auction problem motivated by the traditional single-leg, multi-period revenue management problem. A seller with C units to sell faces potential buyers with unit demand who arrive and depart over the course of T time periods. The time at which a buyer arrives, her value for a unit as well as the time by which she must make the purchase are private information. In this environment, we derive the revenue maximizing Bayesian incentive compatible selling mechanism.", "This paper finds an optimal mechanism for selling an indivisible good to consumers who may be budget-constrained. Unlike the case where buyers are not budget constrained, a single posted price is not typically optimal. An optimal mechanism generally consists of a continuum of lotteries indexed by the probability of comsumption and the entry fee.", "", "We show that all-pay auctions dominate first-price sealed-bid auctions when bidders face budget constraints. This ranking is explained by the fact that budget constraints bind less frequently in the all-pay auctions, which leads to more aggressive bidding in that format.", "Abstract Using a model of substitutable goods I determine generic conditions on tastes which guarantee that fixed prices are not optimal: the fully optimal tariff includes lotteries. That is, a profit maximising seller would employ a haggling strategy. We show that the fully optimal selling strategy in a class of cases requires a seller to not allow themselves to focus on one good but to remain haggling over more than one good . This throws new light on the selling strategies used in diverse industries. These insights are used to provide a counter-example to the no lotteries result of McAfee and McMillan (J. Econ. Theory 46 (1988) 335)." ] }
Much of the literature in economics considers the case where the auctioneer has one item (or multiple copies of the same item). In the absence of budget constraints, Myerson @cite_24 characterizes any BIC mechanism in terms of the expected allocation made to a bidder: this allocation must be monotone in the bidder's revealed valuation, and the expected price is obtained by applying the VCG-style payment calculation to the expected allocation. This yields a linear-time computable revenue-maximizing auction that is both BIC and DSIC. The key issue with budget constraints is that the allocations need to be thresholded in order for the prices to stay below the budgets @cite_10 @cite_18 @cite_0 . Even in this case, however, the optimal BIC auction follows from a polymatroid characterization, which can be solved with the ellipsoid algorithm, together with an all-pay condition @cite_9 @cite_0 . By all-pay, we mean that the bidder pays a fixed amount given his revealed type, regardless of the allocation made. This also yields a DSIC mechanism that is a @math approximation to the optimal BIC revenue @cite_6 , but the result holds only for homogeneous items.
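For reference, the characterization being invoked can be stated as follows for a single-parameter bidder with value v, interim allocation x_i(v), and the usual normalization of payments at the lowest type; this is the textbook form of Myerson's lemma rather than a claim specific to the budgeted setting.

```latex
x_i(v)\ \text{non-decreasing},\qquad
p_i(v)\;=\;v\,x_i(v)\;-\;\int_{0}^{v} x_i(z)\,dz,\qquad
\mathbb{E}\bigl[p_i(v_i)\bigr]\;=\;\mathbb{E}\bigl[\varphi_i(v_i)\,x_i(v_i)\bigr],
\quad
\varphi_i(v)\;=\;v-\frac{1-F_i(v)}{f_i(v)} .
```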
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_6", "@cite_24", "@cite_0", "@cite_10" ], "mid": [ "", "1971667319", "2964216027", "2029050771", "1594643704", "2035239570" ], "abstract": [ "", "This note uses the Theorem of the Alternative to prove new results on the implementability of general, asymmetric auctions, and to provide simpler proofs of known results for symmetric auctions. The tradeoff is that type spaces are taken to be finite.", "In this paper, we consider the problem of designing incentive compatible auctions for multiple (homogeneous) units of a good, when bidders have private valuations and private budget constraints. When only the valuations are private and the budgets are public, [8] show that the adaptive clinching auction is the unique incentive-compatible auction achieving Pareto-optimality. They further show that this auction is not truthful with private budgets, so that there is no deterministic Pareto-optimal auction with private budgets. Our main contribution is to show the following Budget Monotonicity property of this auction: When there is only one infinitely divisible good, a bidder cannot improve her utility by reporting a budget smaller than the truth. This implies that the adaptive clinching auction is incentive compatible when over-reporting the budget is not possible (for instance, when funds must be shown upfront). We can also make reporting larger budgets suboptimal with a small randomized modification to the auction. In either case, this makes the modified auction Pareto-optimal with private budgets. We also show that the Budget Monotonicity property does not hold for auctioning indivisible units of the good, showing a sharp contrast between the divisible and indivisible cases. The Budget Monotonicity property also implies other improved results in this context. For revenue maximization, the same auction improves the best-known competitive ratio due to Abrams [1] by a factor of 4, and asymptotically approaches the performance of the optimal single-price auction. Finally, we consider the problem of revenue maximization (or social welfare) in a Bayesian setting. We allow the bidders have public size constraints (on the amount of good they are willing to buy) in addition to private budget constraints. We show a simple poly-time computable 5.83-approximation to the optimal Bayesian incentive compatible mechanism, that is implementable in dominant strategies. Our technique again crucially needs the ability to prevent bidders from over-reporting budgets via randomization. We show the approximation result via designing a rounding scheme for an LP relaxation of the problem, which may be of independent interest.", "This paper considers the problem faced by a seller who has a single object to sell to one of several possible buyers, when the seller has imperfect information about how much the buyers might be willing to pay for the object. The seller's problem is to design an auction game which has a Nash equilibrium giving him the highest possible expected utility. Optimal auctions are derived in this paper for a wide class of auction design problems.", "We consider a dynamic auction problem motivated by the traditional single-leg, multi-period revenue management problem. A seller with C units to sell faces potential buyers with unit demand who arrive and depart over the course of T time periods. The time at which a buyer arrives, her value for a unit as well as the time by which she must make the purchase are private information. 
In this environment, we derive the revenue maximizing Bayesian incentive compatible selling mechanism.", "We show that all-pay auctions dominate first-price sealed-bid auctions when bidders face budget constraints. This ranking is explained by the fact that budget constraints bind less frequently in the all-pay auctions, which leads to more aggressive bidding in that format." ] }
An alternative line of work deals with the adversarial setting, where no distributional assumption is made on the bidders' private valuations. In this setting, the budget-constrained auction problem is notorious mainly because standard auction concepts such as VCG, efficiency, and competitive equilibria do not directly carry over @cite_2 . Most previous results deal with the case of multiple units of a homogeneous good. Here, based on the random partitioning framework of Goldberg et al. @cite_1 , Borgs et al. @cite_16 presented a truthful auction whose revenue is asymptotically within a constant factor of the optimal revenue (see also @cite_17 ). If the focus is instead on social welfare, no non-trivial truthful mechanism can optimize it @cite_16 . Therefore, attention has turned to notions weaker than efficiency, such as Pareto-optimality, where no pair of agents (including the auctioneer) can simultaneously improve their utilities by trading with each other. Dobzinski et al. @cite_14 present an ascending price auction based on the clinching auction of Ausubel @cite_15 , which they show to be the only Pareto-optimal auction in the public budget setting. This result was extended to the private budget setting by Bhattacharya et al. @cite_6 ; see @cite_12 for a related result.
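To give a flavour of the random-partitioning framework mentioned above, the sketch below implements its simplest, budget-free digital-goods form: bidders are split into two random halves, the revenue-maximizing single price of each half is computed, and that price is offered to the bidders of the other half. Treating the budgeted multi-unit auction of Borgs et al. would require more care; this is only the basic idea, with made-up bid values.

```python
import random

def best_single_price(bids):
    """Return the single posted price maximizing revenue against the given bids."""
    candidates = sorted(set(bids)) or [0.0]
    return max(candidates, key=lambda p: p * sum(1 for b in bids if b >= p))

def random_sampling_auction(bids, rng):
    """Split bidders into two random halves; the optimal single price of each
    half is offered to the bidders of the other half. Truthfulness comes from
    the fact that a bidder's own bid never influences the price she faces."""
    left, right = [], []
    for b in bids:
        (left if rng.random() < 0.5 else right).append(b)
    price_for_right = best_single_price(left) if left else float("inf")
    price_for_left = best_single_price(right) if right else float("inf")
    revenue = sum(price_for_left for b in left if b >= price_for_left)
    revenue += sum(price_for_right for b in right if b >= price_for_right)
    return revenue

if __name__ == "__main__":
    rng = random.Random(1)
    print(random_sampling_auction([1, 3, 3, 4, 7, 9, 10, 10], rng))
```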
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_6", "@cite_2", "@cite_15", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2045432319", "1964005287", "2964216027", "1734031635", "", "", "1484450013", "1986345564" ], "abstract": [ "We study multi-unit auctions where the bidders have a budget constraint, a situation very common in practice that has received very little attention in the auction theory literature. Our main result is an impossibility: there are no incentive-compatible auctions that always produce a Pareto-optimal allocation. We also obtain some surprising positive results for certain special cases.", "We study a class of single round, sealed bid auctions for items in unlimited supply such as digital goods. We focus on auctions that are truthful and competitive. Truthful auctions encourage bidders to bid their utility; competitive auctions yield revenue within a constant factor of the revenue for optimal fixed pricing. We show that for any truthful auction, even a multi-price auction, the expected revenue does not exceed that for optimal fixed pricing. We also give a bound on how far the revenue for optimal fixed pricing can be from the total market utility. We show that several randomized auctions are truthful and competitive under certain assumptions, and that no truthful deterministic auction is competitive. We present simulation results which confirm that our auctions compare favorably to fixed pricing. Some of our results extend to bounded supply markets, for which we also get truthful and competitive auctions.", "In this paper, we consider the problem of designing incentive compatible auctions for multiple (homogeneous) units of a good, when bidders have private valuations and private budget constraints. When only the valuations are private and the budgets are public, [8] show that the adaptive clinching auction is the unique incentive-compatible auction achieving Pareto-optimality. They further show that this auction is not truthful with private budgets, so that there is no deterministic Pareto-optimal auction with private budgets. Our main contribution is to show the following Budget Monotonicity property of this auction: When there is only one infinitely divisible good, a bidder cannot improve her utility by reporting a budget smaller than the truth. This implies that the adaptive clinching auction is incentive compatible when over-reporting the budget is not possible (for instance, when funds must be shown upfront). We can also make reporting larger budgets suboptimal with a small randomized modification to the auction. In either case, this makes the modified auction Pareto-optimal with private budgets. We also show that the Budget Monotonicity property does not hold for auctioning indivisible units of the good, showing a sharp contrast between the divisible and indivisible cases. The Budget Monotonicity property also implies other improved results in this context. For revenue maximization, the same auction improves the best-known competitive ratio due to Abrams [1] by a factor of 4, and asymptotically approaches the performance of the optimal single-price auction. Finally, we consider the problem of revenue maximization (or social welfare) in a Bayesian setting. We allow the bidders have public size constraints (on the amount of good they are willing to buy) in addition to private budget constraints. We show a simple poly-time computable 5.83-approximation to the optimal Bayesian incentive compatible mechanism, that is implementable in dominant strategies. 
Our technique again crucially needs the ability to prevent bidders from over-reporting budgets via randomization. We show the approximation result via designing a rounding scheme for an LP relaxation of the problem, which may be of independent interest.", "This talk describes the auction system used by Google for allocation and pricing of TV ads. It is based on a simultaneous ascending auction, and has been in use since September 2008.", "", "", "Motivated by sponsored search auctions with hard budget constraints given by the advertisers, we study multi-unit auctions of a single item. An important example is a sponsored result slot for a keyword, with many units representing its inventory in a month, say. In this single-item multi-unit auction, each bidder has a private value for each unit, and a private budget which is the total amount of money she can spend in the auction. A recent impossibility result [, FOCS’08] precludes the existence of a truthful mechanism with Paretooptimal allocations in this important setting. We propose Sort-Cut, a mechanism which does the next best thing from the auctioneer’s point of view, that we term semi-truthful. While we are unable to give a complete characterization of equilibria for our mechanism, we prove that some equilibrium of the proposed mechanism optimizes the revenue over all Pareto-optimal mechanisms, and that this equilibrium is the unique one resulting from a natural rational bidding strategy (where every losing bidder bids at least her true value). Perhaps even more significantly, we show that the revenue of every equilibrium of our mechanism differs by at most the budget of one bidder from the optimum revenue (under some mild assumptions).", "We study the problem of maximizing revenue for auctions with multiple units of a good where bidders have hard budget constraints, first considered in [2]. The revenue obtained by an auction is compared with the optimal omniscient auction had the auctioneer known the private information of all the bidders, as in competitive analysis [7]. We show that the revenue of the optimal omniscient auction that sells items at many different prices is within a factor of 2 of the optimal omniscient auction that sells all the items at a single price, implying that our results will carry over to multiple price auctions. We give the first auction for this problem, to the best of our knowledge, that is known to obtain a constant fraction of the optimal revenue when the bidder dominance (the ratio between the maximum contribution of a single bidder in the optimal solution and the revenue of that optimal solution) is large (as high as 1 2). Our auction is also shown to remain truthful if canceled upon not meeting certain criteria. On the negative side, we show that no auction can achieve a guarantee of 1 2-e the revenue of the optimal omniscient multi-price auction. Finally, if the bidder dominance is known in advance and is less than 1 5.828, we give an auction mechanism that raises a large constant fraction of the optimal revenue when the bidder dominance is large and is asymptotically close to the optimal omniscient auction as the bidder dominance decreases. We discuss the relevance of these results for related applications." ] }
Finally, several researchers have considered the behavior of specific types of auctions, for instance, auctions that are sequential by item and second-price within each item @cite_21 @cite_4 , and ascending price auctions @cite_25 @cite_26 . The goal here is to analyze the improvement in revenue (or social welfare) obtained by optimal sequencing, or to study the incentive compatibility of commonly used ascending price mechanisms. However, analyzing the performance of sequential or ascending price auctions is difficult in general, and little is known about optimal (or even approximately optimal) mechanisms in these settings.
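A bare-bones sketch of the sequential format referred to above: items are sold one at a time by a second-price rule, and a unit-demand bidder who wins leaves the later rounds. The assumption that the remaining bidders simply bid their values in every round is a simplification made only for illustration (analyzing actual equilibrium bidding is precisely what makes these auctions hard); the example also shows that revenue can depend on the agenda, i.e. the order in which items are sold.

```python
def sequential_second_price(values, item_order):
    """values[bidder][item] = value; unit-demand bidders leave after winning.
    Each round runs a second-price auction among the remaining bidders,
    assuming (purely for illustration) that they bid their values."""
    active = set(values)
    allocation, revenue = {}, 0.0
    for item in item_order:
        bids = sorted(((values[b][item], b) for b in active), reverse=True)
        if not bids:
            break
        top_value, winner = bids[0]
        second_price = bids[1][0] if len(bids) > 1 else 0.0
        if top_value > 0:
            allocation[item] = winner
            revenue += second_price
            active.remove(winner)
    return allocation, revenue

if __name__ == "__main__":
    values = {"b1": {"x": 9, "y": 4}, "b2": {"x": 7, "y": 6}, "b3": {"x": 3, "y": 5}}
    # Revenue differs depending on the order in which the items are sold.
    print(sequential_second_price(values, ["x", "y"]))
    print(sequential_second_price(values, ["y", "x"]))
```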
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_25", "@cite_26" ], "mid": [ "", "1594189438", "2102636809", "1498123810" ], "abstract": [ "", "We study sequential auctions for private value objects and unit-demand bidders using second-price sealed-bid rules. We analyze this scenario from the seller's perspective and consider several approaches to increasing the total revenue. We derive the equilibrium bidding strategies for each individual auction.We then study the problem of selecting an optimal agenda, i.e., a revenue-maximizing ordering of the auctions. We describe an efficient algorithm that finds an optimal agenda in the important special case when the revenue of each auction is guaranteed to be strictly positive. We also show that the seller can increase his revenue by canceling one or more auctions, even if the number of bidders exceeds the number of objects for sale, and analyze the bidders' behavior and the seller's profit for different cancellation rules.", "March 2002 A family of ascending package auction models is introduced in which bidders may determine their own packages on which to bid. In the proxy auction (revelation game) versions, the outcome is a point in the core of the exchange economy for the reported preferences. When payoffs are linear in money and goods are substitutes, sincere reporting constitutes a Nash equilibrium and the outcome coincides with the Vickrey auction outcome. Even when goods are not substitutes, ascending proxy auction equilibria lie in the core with respect to the true preferences. Compared to the Vickrey auction, the proxy auctions generate higher equilibrium revenues, are less vulnerable to collusion, can handle budget constraints much more robustly, and may provide better ex ante investment incentives. Working Papers Index", "We study the impact of potentially binding liquidity constraints on the level of competition in a version of the simultaneous ascending bid auctions. We show that the possibility, even if arbitrarily small, of binding budget constraints can reduce competition significantly, because bidders can ‘pretend’ to be constrained, even if they are not. This effect can be significant: for many distributions of the bidders’ values, all unconstrained bidders behave as if they were liquidity constrained, even as the probability that bidders are budget constrained goes to zero. The possibility of budget constraints therefore requires special attention to the details of the auction. JEL classification number: D44 [H]" ] }
0907.4385
2952953568
A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core --the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability (CoS) as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games which can model decision-making in political bodies, and cooperation in multiagent settings. Finally, we extend our model and results to games with coalition structures.
The complexity of various solution concepts in coalitional games is a well-studied topic @cite_15 @cite_9 @cite_10 @cite_18 . In particular, @cite_0 analyzes some important computational aspects of stability in WVGs, proving a number of results on the complexity of the least core and the nucleolus. The complexity of the CS-core in WVGs is studied in @cite_13 . The work in @cite_12 is similar to ours in spirit: it considers a setting where an external party intervenes, using monetary payments, in order to achieve a certain outcome. However, @cite_12 deals with the very different domain of noncooperative games. There are also similarities between our work and the recent research on bribery in elections @cite_16 , where an external party pays voters to change their preferences in order to make a given candidate win. A companion paper @cite_3 studies the cost of stability in network flow games.
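As a concrete illustration of the quantity under discussion, the sketch below computes the cost of stability of a small weighted voting game by solving the standard covering LP (minimize the total payoff subject to every coalition receiving at least its value) and subtracting the value of the grand coalition. The weights and quota are made up for the example, and scipy is an assumed dependency; only the definition of the CoS is taken from the paper.

```python
from itertools import combinations
from scipy.optimize import linprog

def wvg_value(coalition, weights, quota):
    """Weighted voting game: a coalition wins (value 1) iff its total weight meets the quota."""
    return 1.0 if sum(weights[i] for i in coalition) >= quota else 0.0

def cost_of_stability(weights, quota):
    n = len(weights)
    players = range(n)
    # Constraints x(S) >= v(S) for every coalition S, written as -x(S) <= -v(S).
    A_ub, b_ub = [], []
    for size in range(1, n + 1):
        for coalition in combinations(players, size):
            A_ub.append([-1.0 if i in coalition else 0.0 for i in players])
            b_ub.append(-wvg_value(coalition, weights, quota))
    res = linprog(c=[1.0] * n, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n, method="highs")
    grand_value = wvg_value(tuple(players), weights, quota)
    return res.fun - grand_value

if __name__ == "__main__":
    # Hypothetical 3-player majority game [quota 2; weights 1, 1, 1]: any two players win.
    # The cheapest stable super-imputation pays 1/2 to each player, so CoS = 1/2.
    print(cost_of_stability(weights=[1, 1, 1], quota=2))
```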
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_9", "@cite_3", "@cite_0", "@cite_15", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "1852468282", "2126233665", "2167073044", "1542485252", "1549171560", "62192591", "14982402", "1975794688", "" ], "abstract": [ "Weighted voting games are a popular model of collaboration in multiagent systems. In such games, each agent has a weight (intuitively corresponding to resources he can contribute), and a coalition of agents wins if its total weight meets or exceeds a given threshold. Even though coalitional stability in such games is important, existing research has nonetheless only considered the stability of the grand coalition. In this paper, we introduce a model for weighted voting games with coalition structures. This is a natural extension in the context of multiagent systems, as several groups of agents may be simultaneously at work, each serving a different task. We then proceed to study stability in this context. First, we define the CS-core, a notion of the core for such settings, discuss its non-emptiness, and relate it to the traditional notion of the core in weighted voting games. We then investigate its computational properties. We show that, in contrast with the traditional setting, it is computationally hard to decide whether a game has a non-empty CS-core, or whether a given outcome is in the CS-core. However, we then provide an efficient algorithm that verifies whether an outcome is in the CS-core if all weights are small (polynomially bounded). Finally, we also suggest heuristic algorithms for checking the non-emptiness of the CS-core.", "Coalition formation is a key aspect of automated negotiation among self-interested agents. In order for coalitions to be stable, a key question that must be answered is how the gains from cooperation are to be distributed. Various solution concepts (such as the Shapley value, core, least core, and nucleolus) have been proposed. In this paper, we demonstrate how these concepts are vulnerable to various kinds of manipulations in open anonymous environments such as the Internet. These manipulations include submitting false names (one acting as many), collusion (many acting as one), and the hiding of skills. To address these threats, we introduce a new solution concept called the anonymity-proof core, which is robust to these manipulations. We show that the anonymity-proof core is characterized by certain simple axiomatic conditions. Furthermore, we show that by relaxing these conditions, we obtain a concept called the least anonymity-proof core, which is guaranteed to be non-empty. We also show that computational hardness of manipulation may provide an alternative barrier to manipulation.", "We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents. This representation scheme captures characteristics of the interactions among the agents in a natural and concise manner. We also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation. The Shapley value can be computed in time linear in the size of the input. 
The emptiness of the core can be determined in time exponential only in the treewidth of a graphical interpretation of our representation.", "", "Weighted threshold games are coalitional games in which each player has a weight (intuitively corresponding to its voting power), and a coalition is successful if the sum of its weights exceeds a given threshold. Key questions in coalitional games include finding coalitions that are stable (in the sense that no member of the coalition has any rational incentive to leave it), and finding a division of payoffs to coalition members (an imputation) that is fair. We investigate the computational complexity of such questions for weighted threshold games. We study the core, the least core, and the nucleolus, distinguishing those problems that are polynomial-time computable from those that are NP-hard, and providing pseudopolynomial and approximation algorithms for the NP-hard problems.", "Coalition formation is a key problem in automated negotiation among self-interested agents. In order for coalition formation to be successful, a key question that must be answered is how the gains from cooperation are to be distributed. Various solution concepts have been proposed, but the computational questions around these solution concepts have received little attention. We study a concise representation of characteristic functions which allows for the agents to be concerned with a number of independent issues that each coalition of agents can address. For example, there may be a set of tasks that the capacity-unconstrained agents could undertake, where accomplishing a task generates a certain amount of value (possibly depending on how well the task is accomplished). Given this representation, we show how to quickly compute the Shapley value--a seminal value division scheme that distributes the gains from cooperation fairly in a certain sense. We then show that in (distributed) marginal-contribution based value division schemes, which are known to be vulnerable to manipulation of the order in which the agents are added to the coalition, this manipulation is NP-complete. Thus, computational complexity serves as a barrier to manipulating the joining order. Finally, we show that given a value division, determining whether some subcoalition has an incentive to break away (in which case we say the division is not in the core) is NP-complete. So, computational complexity serves to increase the stability of the coalition.", "We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous-vs.-nonhomogeneous electorate bribability, bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted voters, and succinct-vs.-nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. 
For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: We find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.", "Coalition formation is a key problem in automated negotiation among self-interested agents, and other multiagent applications. A coalition of agents can sometimes accomplish things that the individual agents cannot, or can accomplish them more efficiently. Motivating the agents to abide by a solution requires careful analysis: only some of the solutions are stable in the sense that no group of agents is motivated to break off and form a new coalition. This constraint has been studied extensively in cooperative game theory: the set of solutions that satisfy it is known as the core. The computational questions around the core have received less attention. When it comes to coalition formation among software agents (that represent real-world parties), these questions become increasingly explicit. In this paper we define a concise, natural, general representation for games in characteristic form that relies on superadditivity. In our representation, individual agents' values are given as well as values for those coalitions that introduce synergies. We show that this representation allows for efficient checking of whether a given outcome is in the core. We then show that determining whether the core is nonempty is NP-complete both with and without transferable utility. We demonstrate that what makes the problem hard in both cases is determining the collaborative possibilities (the set of outcomes possible for the grand coalition); we do so by showing that if these are given, the problem becomes solvable in time polynomial in the size of the representation in both cases. However, we then demonstrate that for a hybrid version of the problem, where utility transfer is possible only within the grand coalition, the problem remains NP-complete even when the collaborative possibilities are given. Finally, we show that for convex characteristic functions, a solution in the core can be computed efficiently (in O(nl^2) time, where n is the number of agents and l is the number of synergies), even when the collaborative possibilities are not given in advance.", "" ] }
0907.4211
2951556821
We consider a random graph on a given degree sequence @math , satisfying certain conditions. We focus on two parameters @math . Molloy and Reed proved that Q=0 is the threshold for the random graph to have a giant component. We prove that if @math then, with high probability, the size of the largest component of the random graph will be of order @math . If @math is asymptotically larger than @math then the size of the largest component is asymptotically smaller or larger than @math . Thus, we establish that the scaling window is @math .
In 2000, Aiello, Chung and Lu @cite_5 applied the results of Molloy and Reed @cite_12 @cite_14 to a model for massive networks. They also extended those results to apply to power-law degree sequences with maximum degree higher than that permitted by @cite_12 @cite_14 . Since then, that work has been used numerous times to analyze massive network models arising in a wide variety of fields such as physics, sociology and biology (see, e.g., @cite_11 ).
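A small sketch of how the Molloy-Reed criterion is typically applied in such analyses: sample a power-law degree sequence, compute Q = sum_i d_i (d_i - 2), and compare the sign of Q with the size of the largest component of a configuration-model graph. The exponent, the degree cutoff, and the use of networkx are illustrative choices, not those of the cited works.

```python
import random
import networkx as nx

def power_law_degree_sequence(n, exponent, d_max, rng):
    """Sample n degrees proportional to d**(-exponent) for 1 <= d <= d_max,
    fixing parity so the total degree is even (as the configuration model needs)."""
    support = list(range(1, d_max + 1))
    weights = [d ** (-exponent) for d in support]
    degrees = rng.choices(support, weights=weights, k=n)
    if sum(degrees) % 2:
        degrees[0] += 1
    return degrees

def molloy_reed_Q(degrees):
    """Q = sum_i d_i (d_i - 2); Q > 0 indicates a giant component w.h.p."""
    return sum(d * (d - 2) for d in degrees)

if __name__ == "__main__":
    rng = random.Random(0)
    degrees = power_law_degree_sequence(n=20_000, exponent=2.5, d_max=100, rng=rng)
    G = nx.configuration_model(degrees, seed=0)
    largest = max(nx.connected_components(G), key=len)
    print("Q =", molloy_reed_Q(degrees), " largest component size:", len(largest))
```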
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_12", "@cite_11" ], "mid": [ "2097147952", "206074197", "2044881936", "2147824439" ], "abstract": [ "We propose a random graph model which is a special case of sparse random graphs with given degree sequences. This model involves only a small number of parameters, called logsize and log-log growth rate. These parameters capture some universal characteristics of massive graphs. Furthermore, from these parameters, various properties of the graph can be derived. For example, for certain ranges of the parameters, we will compute the expected distribution of the sizes of the connected components which almost surely occur with high probability. We will illustrate the consistency of our model with the behavior of some massive graphs derived from data in telecommunications. We will also discuss the threshold function, the giant component, and the evolution of random graphs in this model.", "", "Given a sequence of nonnegative real numbers λ0, λ1… which sum to 1, we consider random graphs having approximately λi n vertices of degree i. Essentially, we show that if Σ i(i - 2)λi > 0, then such graphs almost surely have a giant component, while if Σ i(i -2)λ. < 0, then almost surely all components in such graphs are small. We can apply these results to Gn,p,Gn.M, and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs. © 1995 Wiley Periodicals, Inc.", "A process for finishing keratinous material, especially rendering it shrink-resistant or imparting to it durably pressed effects, comprises 1. TREATING THE MATERIAL WITH A POLYTHIOL ESTER OF THE FORMULA [OH]q(s) ¦ R1-(CO)rO(CO)sR2SH]p ¦ [COOH]q(r) WHERE R1 represents an aliphatic or araliphatic hydrocarbon radical of at least 2 carbon atoms, which may contain not more than one ether oxygen atom, R2 represents a hydrocarbon radical, P IS AN INTEGER OF FROM 2 TO 6, Q IS ZERO OR A POSITIVE INTEGER OF AT MOST 3, SUCH THAT (P + Q) IS AT MOST 6, AND R AND S EACH REPRESENT ZERO OR 1 BUT ARE NOT THE SAME, AND 2. CURING THE POLYTHIOL ESTER ON THE MATERIAL BY MEANS OF A POLYENE CONTAINING, PER AVERAGE MOLECULE, AT LEAST TWO ETHYLENIC DOUBLE BONDS EACH beta TO AN OXYGEN, NITROGEN, OR SULFUR ATOM, THE SUM OF SUCH ETHYLENIC DOUBLE BONDS IN THE POLYENE AND OF THE MERCAPTAN GROUPS IN THE POLYTHIOL ESTER BEING MORE THAN 4 AND THE COMBINED WEIGHT OF THE POLYENE AND THE POLYTHIOL ESTER BEING FROM 0.5 TO 15 BY WEIGHT OF THE KERATINOUS MATERIAL TREATED." ] }
Kang and Seierstad @cite_4 applied generating functions to study the case where @math but lies outside the scaling window. They require a maximum degree of at most @math and that the degree sequences satisfy certain conditions that are stronger than those in @cite_12 ; one of these conditions implies that @math is bounded by a constant. Based on what is known for @math , it was natural to guess that for @math we would have @math . They proved that if @math then @math , and if @math then @math . So, for the case where @math is bounded, this almost confirmed the natural guess, except that they did not cover the range where @math .
{ "cite_N": [ "@cite_4", "@cite_12" ], "mid": [ "2168564667", "2044881936" ], "abstract": [ "We consider random graphs with a fixed degree sequence. Molloy and Reed [11, 12] studied how the size of the giant component changes according to degree conditions. They showed that there is a phase transition and investigated the order of components before and after the critical phase. In this paper we study more closely the order of components at the critical phase, using singularity analysis of a generating function for a branching process which models the random graph with a given degree sequence.", "Given a sequence of nonnegative real numbers λ0, λ1… which sum to 1, we consider random graphs having approximately λi n vertices of degree i. Essentially, we show that if Σ i(i - 2)λi > 0, then such graphs almost surely have a giant component, while if Σ i(i -2)λ. < 0, then almost surely all components in such graphs are small. We can apply these results to Gn,p,Gn.M, and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs. © 1995 Wiley Periodicals, Inc." ] }
0907.4761
1818095014
Kirchhoff's matrix-tree theorem states that the number of spanning trees of a graph G is equal to the determinant of the reduced Laplacian of @math . We outline an efficient bijective proof of this theorem by studying a canonical finite abelian group attached to @math whose order is equal to the value of the same matrix determinant. More specifically, we show how one can efficiently compute a bijection between the group elements and the spanning trees of the graph. The main ingredient for computing the bijection is an efficient algorithm for finding the unique @math -parking function (reduced divisor) in a linear equivalence class defined by a chip-firing game. We also give applications, including a new and completely algebraic algorithm for generating random spanning trees. Other applications include algorithms related to chip-firing games and the sandpile group law, as well as certain algorithmic problems in the Riemann-Roch theory on graphs.
The term "@math -parking functions" was first introduced in @cite_7 . The reason for this terminology is that they can be considered a natural generalization of "parking functions". The theory of parking functions was first considered in connection with hash functions ( @cite_5 ); the original problem was phrased in terms of cars and parking spots. The theory has since been developed, with links to many different areas including priority queues ( @cite_4 ), representation theory ( @cite_32 ), and noncrossing partitions ( @cite_1 ). Although the explicit definition was first given in @cite_7 , the concept had appeared (sometimes in disguise) in much previous work (see, e.g., @cite_16 @cite_17 @cite_3 @cite_6 @cite_26 ). The fact that @math -parking functions provide a canonical representative for each element of the Jacobian appears implicitly in @cite_17 @cite_26 , and explicitly in @cite_7 @cite_24 .
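The determinant computation that underlies these enumerations is easy to state in code. The sketch below counts the spanning trees of a small graph by deleting one row and column of the graph Laplacian and taking the determinant, as in Kirchhoff's theorem stated above; the complete graph on four vertices is used as a check, since Cayley's formula predicts 4^2 = 16 trees. This covers only the counting step, not the bijection or the reduced-divisor algorithm developed in the paper.

```python
import numpy as np

def spanning_tree_count(adjacency):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the Laplacian L = D - A (here: delete row/column 0)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    reduced = L[1:, 1:]                      # reduced Laplacian
    return int(round(np.linalg.det(reduced)))

if __name__ == "__main__":
    # Complete graph K4: Cayley's formula predicts 4**2 = 16 spanning trees.
    K4 = [[0, 1, 1, 1],
          [1, 0, 1, 1],
          [1, 1, 0, 1],
          [1, 1, 1, 0]]
    print(spanning_tree_count(K4))   # -> 16
```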
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_24", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "2116480361", "2019118072", "1597381512", "1526839219", "1634042400", "2111293718", "1521121495", "2083387674", "2087726143", "2037190943", "1981082180" ], "abstract": [ "A polynomial ideal encoding topplings in the abelian sandpile model on a graph is introduced. A Grbner basis of this ideal is interpreted combinatorially in terms of well-connected subgraphs. This gives rise to algorithms to determine the identity and the operation in the group of recurrent configurations.", "Abstract Parking functions on [ n ] = 1, …, n are those functions p : [ n ] → [ n ] satisfying the condition | i : p ( i ) ⩽ r | ⩾ r for each r , and are ( n + 1) n − 1 in number. These are equinumerate with allowable input-output pairs of permutations of [ n ] in a priority queue. We present a new bijection between parking functions and allowable pairs which has many interesting invariance properties. We extend our bijection to allowable pairs of multisets and introduce valet functions as the corresponding extension of parking functions. Using our bijection, we interpret the inversion enumerator for trees in the case of allowable pairs. We end with a comparison of our bijection with other known bijections involving these combinatorial structures, including a new bijection between parking functions and labelled trees.", "For a graph G, we construct two algebras whose dimensions are both equal to the number of spanning trees of G. One of these algebras is the quotient of the polynomial ring modulo certain monomial ideal, while the other is the quotient of the polynomial ring modulo certain powers of linear forms. We describe the set of monomials that forms a linear basis in each of these two algebras. The basis elements correspond to G-parking functions that naturally came up in the abelian sandpile model. These ideals are instances of the general class of monotone monomial ideals and their deformations. We show that the Hilbert series of a monotone monomial ideal is always bounded by the Hilbert series of its deformation. Then we define an even more general class of monomial ideals associated with posets and construct free resolutions for these ideals. In some cases these resolutions coincide with Scarf resolutions. We prove several formulas for Hilbert series of monotone monomial ideals and investigate when they are equal to Hilbert series of deformations. In the appendix we discuss the abelian sandpile model.", "Abstract: We give a bijection between Eulerian planar maps with prescribed vertex degrees, and some plane trees that we call balanced Eulerian trees. To enumerate the latter, we introduce conjugation classes of planted plane trees. In particular, the result answers a question of Bender and Canfield and allows uniform random generation of Eulerian planar maps with restricted vertex degrees. Using a well known correspondence between 4-regular planar maps with n vertices and planar maps with n edges we obtain an algorithm to generate uniformly such maps with complexity O(n). 
Our bijection is also refined to give a combinatorial interpretation of a parameterization of Arques of the generating function of planar maps with respect to vertices and faces.", "We formulate a series of conjectures (and a few theorems) on the quotient of the polynomial ring Q [x_1, , x_n, y_1, , y_n] in two sets of variables by the ideal generated by all Sn invariant polynomials without constant term. The theory of the corresponding ring in a single set of variables X e lx1, …, xnr is classical. Introducing the second set of variables leads to a ring about which little is yet understood, but for which there is strong evidence of deep connections with many fundamental results of enumerative combinatorics, as well as with algebraic geometry and Lie theory.", "This paper encompasses a motley collection of ideas from several areas of mathematics, including, in no particular order, random walks, the Picard group, exchange rate networks, chip-firing games, cohomology, and the conductance of an electrical network. The linking threads are the discrete Laplacian on a graph and the solution of the associated Dirichlet problem . Thirty years ago, this subject was dismissed by many as a trivial specialisation of cohomology theory, but it has now been shown to have hidden depths. Plumbing these depths leads to new theoretical advances, many of which throw light on the diverse applications of the theory.", "The ‘dollar game’ represents a kind of diffusion process on a graph. Under the rules of the game some cofigurations are both stable and recurrent, and these are known as critical cofigurations. The set of critical configurations can be given the structure of an abelian group, and it turns out that the order of the group is the tree-number of the graph. Each critical configuration can be assigned a positive weight, and the generating function that enumerates critical configurations according to weight is a partial evaluation of the Tutte polynomial of the graph. It is shown that the weight enumerator can also be interpreted as a growth function, which leads to the conclusion that the (partial) Tutte polynomial itself is a growth function.", "Abstract It is well known that a finite graph can be viewed, in many respects, as a discrete analogue of a Riemann surface. In this paper, we pursue this analogy further in the context of linear equivalence of divisors. In particular, we formulate and prove a graph-theoretic analogue of the classical Riemann–Roch theorem. We also prove several results, analogous to classical facts about Riemann surfaces, concerning the Abel–Jacobi map from a graph to its Jacobian. As an application of our results, we characterize the existence or non-existence of a winning strategy for a certain chip-firing game played on the vertices of a graph.", "", "We study a general Bak-Tang-Wiesenfeld-type automaton model of self-organized criticality in which the toppling conditions depend on local height, but not on its gradient. We characterize the critical state, and determine its entropy for an arbitrary finite lattice in any dimension. The two-point correlation function is shown to satisfy a linear equation. The spectrum of relaxation times describing the approach to the critical state is also determined exactly.", "We introduce a class of deterministic lattice models of failure, Abelian avalanche (AA) models, with continuous phase variables, similar to discrete Abelian sandpile (ASP) models. 
We investigate analytically the structure of the phase space and statistical properties of avalanches in these models. We show that the distributions of avalanches in AA and ASP models with the same redistribution matrix and loading rate are identical. For an AA model on a graph, statistics of avalanches is linked to Tutte polynomials associated with this graph and its subgraphs. In the general case, statistics of avalanches is linked to an analog of a Tutte polynomial defined for any symmetric matrix." ] }
0907.4761
1818095014
Kirchhoff's matrix-tree theorem states that the number of spanning trees of a graph G is equal to the value of the determinant of the reduced Laplacian of @math . We outline an efficient bijective proof of this theorem by studying a canonical finite abelian group attached to @math whose order is equal to the value of the same matrix determinant. More specifically, we show how one can efficiently compute a bijection between the group elements and the spanning trees of the graph. The main ingredient for computing the bijection is an efficient algorithm for finding the unique @math -parking function (reduced divisor) in a linear equivalence class defined by a chip-firing game. We also give applications, including a new and completely algebraic algorithm for generating random spanning trees. Other applications include algorithms related to chip-firing games and the sandpile group law, as well as certain algorithmic problems about the Riemann-Roch theory on graphs.
The relationship between chip-firing games and the Jacobian group is studied in @cite_3 @cite_28 . Some algorithmic aspects of chip-firing games are studied in @cite_19 @cite_0 @cite_23 ; see the discussion of how @cite_19 relates to our work.
{ "cite_N": [ "@cite_28", "@cite_3", "@cite_0", "@cite_19", "@cite_23" ], "mid": [ "1910083634", "2111293718", "2082513731", "", "2026915189" ], "abstract": [ "A variant of the chip-firing game on a graph is defined. It is shown that the set of configurations that are stable and recurrent for this game can be given the structure of an abelian group, and that the order of the group is equal to the tree number of the graph. In certain cases the game can be used to illuminate the structure of the group.", "This paper encompasses a motley collection of ideas from several areas of mathematics, including, in no particular order, random walks, the Picard group, exchange rate networks, chip-firing games, cohomology, and the conductance of an electrical network. The linking threads are the discrete Laplacian on a graph and the solution of the associated Dirichlet problem . Thirty years ago, this subject was dismissed by many as a trivial specialisation of cohomology theory, but it has now been shown to have hidden depths. Plumbing these depths leads to new theoretical advances, many of which throw light on the diverse applications of the theory.", "We analyse the following (solitaire) game: each node of a graph contains a pile of chips, and a move consists of selecting a node with at least as many chips on it as its degree, and letting it send one chip to each of its neighbors. The game terminates if there is no such node. We show that the finiteness of the game and the terminating configuration are independent of the moves made. If the number of chips is less than the number of edges, the game is always finite. If the number of chips is at least the number of edges, the game can be infinite for an appropriately chosen initial configuration. If the number of chips is more than twice the number of edges minus the number of nodes, then the game is always infinite. The independence of the finiteness and the terminating position follows from simple but powerful ‘exchange properties’ of the sequences of legal moves, and from some general results on ‘antimatroids with repetition’, i.e. languages having these exchange properties. We relate the number of steps in a finite game to the least positive eigenvalue of the Laplace operator of the graph.", "", "Bjorner, Lvasz, and Shor have introduced a chip firing game on graphs. This paper proves a polynomial bound on the length of the game in terms of the number of vertices of the graph provided the length is finite. The obtained bound is best possible within a constant factor." ] }
0907.4761
1818095014
Kirchhoff's matrix-tree theorem states that the number of spanning trees of a graph G is equal to the value of the determinant of the reduced Laplacian of @math . We outline an efficient bijective proof of this theorem by studying a canonical finite abelian group attached to @math whose order is equal to the value of the same matrix determinant. More specifically, we show how one can efficiently compute a bijection between the group elements and the spanning trees of the graph. The main ingredient for computing the bijection is an efficient algorithm for finding the unique @math -parking function (reduced divisor) in a linear equivalence class defined by a chip-firing game. We also give applications, including a new and completely algebraic algorithm for generating random spanning trees. Other applications include algorithms related to chip-firing games and the sandpile group law, as well as certain algorithmic problems about the Riemann-Roch theory on graphs.
The Uniform Spanning Tree (UST) problem has been extensively studied in the literature, and there are two known types of algorithms: determinant-based algorithms (e.g., @cite_2 @cite_11 @cite_22 ) and random-walk-based algorithms (e.g., @cite_20 @cite_12 @cite_33 ).
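To make the second family concrete, here is a hedged sketch in the spirit of the random-walk algorithms cited above: walk until every vertex has been visited and keep, for each vertex, the edge along which it was first entered. The graph representation and names are our own choices, not code from the cited works:

    # Hedged sketch of the random-walk (Aldous/Broder-style) construction of a
    # uniformly random spanning tree of a connected undirected graph.

    import random

    def random_spanning_tree(adj, start=None):
        """adj: dict mapping each vertex to the list of its neighbours."""
        vertices = list(adj)
        current = start if start is not None else random.choice(vertices)
        visited = {current}
        tree_edges = []
        while len(visited) < len(vertices):
            nxt = random.choice(adj[current])   # one step of the simple random walk
            if nxt not in visited:              # first entrance: keep this edge
                visited.add(nxt)
                tree_edges.append((current, nxt))
            current = nxt
        return tree_edges

    # Example: a 4-cycle with a chord.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    print(random_spanning_tree(adj))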
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_2", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "1968057111", "", "2018691712", "2111933629", "1985234177", "2062877152" ], "abstract": [ "Let E be a finite set and B be a given set of subsets of E. Let w(B) be a positive weight of B ∈ B. In this paper we develop an algorithm to generate a random subset U ∈ B such that the P(U = B) is proportioned to w(B). If the complexity of computing sums of the weights of B contained in set-intervals in B is O(f(∥E∥)), then the algorithm developed here has complexity O(∥E∥f(∥E∥)). The algorithm is specialized to the following cases: (i) B = all subsets of E, (ii) B = all subsets of E with k elements, (iii) B = all spanning trees in a network, and (iv) B = all paths in a directed acyclic network. Different types of weight functions are considered. It is shown that this proposed algorithm unifies many existing algorithms.", "", "Abstract Dans cet article, nous proposons un algorithme de complexite polynomiale pour construire un arbre au hasard qui soit un graphe partiel d'un graphe donne. Il consiste essentielleement a construire une arborescence de rang donne sur ce graphe, l'ensemble des arborescences etant ordonne par rapport aux valeurs croissantes de la racine et a racine egale suivant l'ordre lexicographique du codage utilise pour les arborescences.", "The author describes a probabilistic algorithm that, given a connected, undirected graph G with n vertices, produces a spanning tree of G chosen uniformly at random among the spanning trees of G. The expected running time is O(n log n) per generated tree for almost all graphs, and O(n sup 3 ) for the worst graphs. Previously known deterministic algorithms are much more complicated and require O(n sup 3 ) time per generated tree. A Markov chain is called rapidly mixing if it gets close to the limit distribution in time polynomial in the log of the number of states. Starting from the analysis of the above algorithm, it is shown that the Markov chain on the space of all spanning trees of a given graph where the basic step is an edge swap is rapidly mixing. >", "A random walk on a finite graph can be used to construct a uniform random spanning tree. It is shown how random walk techniques can be applied to the study of several properties of the uniform random spanning tree: the proportion of leaves, the distribution of degrees, and the diameter.", "Colbourn, Day, and Nel developed the first algorithm requiring at mostO(n3) arithmetic operations for ranking and unranking spanning trees of a graph (nis the number of vertices of the graph). We present two algorithms for the more general problem of ranking and unranking rooted spanning arborescences of a directed graph. The first is conceptually very simple and requiresO(n3) arithmetic operations. The second approach shows that the number of arithmetic operations can be reduced to the same as that of the best known algorithms for matrix multiplication." ] }
0907.4764
2014090366
Every graph has a canonical finite abelian group attached to it. This group has appeared in the literature under a variety of names including the sandpile group, critical group, Jacobian group, and Picard group. The construction of this group closely mirrors the construction of the Jacobian variety of an algebraic curve. Motivated by this analogy, it was recently suggested by Norman Biggs that the critical group of a finite graph is a good candidate for doing discrete logarithm based cryptography. In this paper, we study a bilinear pairing on this group and show how to compute it. Then we use this pairing to find the discrete logarithm efficiently, thus showing that the associated cryptographic schemes are not secure. Our approach resembles the MOV attack on elliptic curves.
The order of the Jacobian group is the number of spanning trees of the graph ( @cite_5 ). Hence, the order of the group can be computed by the famous matrix-tree formula of Kirchhoff.
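As a small illustration of this computation (a sketch built only on the stated facts, with our own naming), the order can be obtained by deleting one row and column of the graph Laplacian and taking a determinant:

    # Hedged illustration of Kirchhoff's matrix-tree computation: the order of
    # the Jacobian (sandpile) group equals the determinant of the Laplacian
    # with one row and column removed. Plain NumPy; names are ours.

    import numpy as np

    def jacobian_group_order(adj_matrix):
        """adj_matrix: symmetric adjacency (or multiplicity) matrix of a connected graph."""
        A = np.asarray(adj_matrix, dtype=float)
        L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
        reduced = L[1:, 1:]                 # delete the row and column of one vertex
        return int(round(np.linalg.det(reduced)))

    # Complete graph K4: 4^(4-2) = 16 spanning trees, so the group has order 16.
    K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
    print(jacobian_group_order(K4))  # -> 16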
{ "cite_N": [ "@cite_5" ], "mid": [ "2111293718" ], "abstract": [ "This paper encompasses a motley collection of ideas from several areas of mathematics, including, in no particular order, random walks, the Picard group, exchange rate networks, chip-firing games, cohomology, and the conductance of an electrical network. The linking threads are the discrete Laplacian on a graph and the solution of the associated Dirichlet problem . Thirty years ago, this subject was dismissed by many as a trivial specialisation of cohomology theory, but it has now been shown to have hidden depths. Plumbing these depths leads to new theoretical advances, many of which throw light on the diverse applications of the theory." ] }
0907.4919
2115735006
The wireless medium contains domain-specific information that can be used to complement and enhance traditional security mechanisms. In this paper we propose ways to exploit the spatial variability of the radio channel response in a rich scattering environment, as is typical of indoor environments. Specifically, we describe a physical-layer authentication algorithm that utilizes channel probing and hypothesis testing to determine whether current and prior communication attempts are made by the same transmit terminal. In this way, legitimate users can be reliably authenticated and false users can be reliably detected. We analyze the ability of a receiver to discriminate between transmitters (users) according to their channel frequency responses. This work is based on a generalized channel response with both spatial and temporal variability, and considers correlations among the time, frequency and spatial domains. Simulation results, using the ray-tracing tool WiSE to generate the time-averaged response, verify the efficacy of the approach under realistic channel conditions, as well as its capability to work under unknown channel variations.
In commodity networks, such as 802.11 networks, it is easy for a device to alter its MAC address and claim to be another device by simply issuing an ifconfig command. This weakness is a serious threat, and there are numerous attacks, ranging from session hijacking @cite_1 to attacks on access control lists @cite_21 , which are facilitated by the fact that an adversarial device may masquerade as another device. In response, researchers have proposed using physical-layer information to enhance wireless security. For example, spectral analysis has been used to identify the type of wireless network interface card (NIC), and thus to discriminate among users with different NICs @cite_23 . A similar method, radio frequency fingerprinting, discriminates wireless devices according to the transient behavior of their transmitted signals @cite_24 . For more general networks, the clock skew of a device has been used as a remote fingerprint over the Internet @cite_12 . In addition, the inherent variability in the construction of various digital devices has been used to detect intrusions @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_1", "@cite_24", "@cite_23", "@cite_12" ], "mid": [ "2120188593", "19610032", "1894693606", "1510900102", "2161743771", "2104599106" ], "abstract": [ "In this paper, we present a new paradigm for security in conventional networks that has dramatic implications for improving their physical layer network security. We call this paradigm, Detecting Intrusions at Layer ONe (DILON). DILON’s enabling hypothesis is that the inherent variability in the construction of digital devices leads to significant variability in their analog signaling. This is true not only for different device models but even for nearly identical devices of the same manufacturing lot. The idea is that by oversampling digital signals to make analog measurements that constitute “voiceprints” of network devices. These form a profile that can be used for detecting MAC address spoofing, reconfiguration of network topologies, and in the long term possibly predict the failure of network devices. This paper discusses historic references and how digital networks enable new approaches as well as a number of applications.", "Polymer-containing, crosslinked, durable press 100 cotton, high-cotton blend, and other cellulosic fabrics are given a durable pucker without steaming by treatment with a rewetting agent, followed by drying, printing with a caustic printing paste permitting the printed fabric to develop a pucker at room temperature in a substantially tension-free state, and washing. Alternatively, the application of the rewetting agent can be bypassed by adding compatible wetting agents directly to the printing paste. The present process is also compatible with pigment printing processes so that caustic printing and pigment printing can be combined, the cold-puckered fabric being dried and then heated to set the pigment vehicle.", "", "Radio Frequency Fingerprinting (RFF) is a technique, which has been used to identify wireless devices. It essentially involves the detection of the transient signal and the extraction of the fingerprint. The detection phase, in our opinion, is the most challenging yet crucial part of the RFF process. Current approaches, namely Threshold and Bayesian Step Change Detector, which use amplitude characteristics of signals for transient detection, perform poorly with certain types of signals. This paper presents a new algorithm that exploits the phase characteristics for detection purposes. Validation using Bluetooth signals has resulted in a success rate of approximately 85-90 percent. We anticipate that the higher detection rate will result in a higher classification rate and thus support various device authetication schemes in the wireless domain.", "IEEE 802.11 wireless networks are plagued with problems of unauthorized access. Left undetected, unauthorized access is the precursor to additional mischief. Current approaches to detecting intruders are invasive or can be evaded by stealthy attackers. We propose the use of spectral analysis to identify a type of wireless network interface card. This mechanism can be applied to support the detection of unauthorized systems that use wireless network interface cards that are different from that of a legitimate system. The approach is passive and works in the presence of encrypted traffic.", "We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device's known cooperation. 
We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semipassive techniques when the fingerprinted device is behind a NAT or firewall, and. also when the device's system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces." ] }
0907.4919
2115735006
The wireless medium contains domain-specific information that can be used to complement and enhance traditional security mechanisms. In this paper we propose ways to exploit the spatial variability of the radio channel response in a rich scattering environment, as is typical of indoor environments. Specifically, we describe a physical-layer authentication algorithm that utilizes channel probing and hypothesis testing to determine whether current and prior communication attempts are made by the same transmit terminal. In this way, legitimate users can be reliably authenticated and false users can be reliably detected. We analyze the ability of a receiver to discriminate between transmitters (users) according to their channel frequency responses. This work is based on a generalized channel response with both spatial and temporal variability, and considers correlations among the time, frequency and spatial domains. Simulation results, using the ray-tracing tool WiSE to generate the time-averaged response, verify the efficacy of the approach under realistic channel conditions, as well as its capability to work under unknown channel variations.
More recently, the wireless channel has been explored as a new form of fingerprint for wireless security. The reciprocity and rich multipath of the ultrawideband channel have been used as a means to establish encryption keys @cite_15 . In @cite_14 , a practical scheme was proposed that discriminates between transmitters, identifying mobile devices by tracking signal-strength measurements from multiple access points. A similar approach was considered for sensor networks in @cite_4 . Concurrently with these efforts, the present authors have built a significance test that exploits the spatial variability of propagation to enhance authentication in a stationary, time-invariant channel @cite_3 . In this paper, we have significantly expanded the method to cover a more general channel, where there are time variations due to changes in the environment. As in @cite_3 , however, the ends of the link remain stationary, as might be the case for a population of users sitting in a room or airport terminal. We will see how, in some cases, the time variations affect the authentication.
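To illustrate the flavor of such channel-based tests (not the exact statistic derived in the paper), a minimal sketch might compare a freshly probed frequency response against the stored response for the claimed identity and accept only if a normalized distance stays below a threshold; the normalization and threshold below are illustrative placeholders:

    # Hedged sketch of a channel-response hypothesis test: the statistic and
    # threshold are placeholders, not the test developed in the cited work.

    import numpy as np

    def same_transmitter(h_stored, h_probe, noise_power, threshold):
        """h_stored, h_probe: length-M complex frequency-response samples."""
        diff = np.asarray(h_probe) - np.asarray(h_stored)
        statistic = np.sum(np.abs(diff) ** 2) / noise_power  # illustrative normalization
        return bool(statistic < threshold)                   # True -> accept "same terminal"

    h_a     = np.array([1 + 1j, 2.0, 1 - 1j, 0.5j, 1.0])     # stored response of the claimant
    h_again = h_a + 0.01 * np.array([1, -1j, 1j, 1, -1])     # re-probe: small perturbation
    h_other = np.array([-1.0, 0.5j, 2 - 1j, 1.0, -0.5])      # response from another location

    print(same_transmitter(h_a, h_again, noise_power=1e-3, threshold=5.0))  # True
    print(same_transmitter(h_a, h_other, noise_power=1e-3, threshold=5.0))  # False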
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_4", "@cite_3" ], "mid": [ "2149808502", "2155899844", "1551333318", "2146080444" ], "abstract": [ "To establish a secure communications link between any two transceivers, the communicating parties require some shared secret, or key, with which to encrypt the message so that it cannot be understood by an enemy observer. Using the theory of reciprocity for antennas and electromagnetic propagation, a key distribution method is proposed that uses the ultrawideband (UWB) channel pulse response between two transceivers as a source of common randomness that is not available to enemy observers in other locations. The maximum size of a key that can be shared in this way is characterized by the mutual information between the observations of two radios, and an approximation and upper bound on mutual information is found for a general multipath channel and examples given for UWB channel models. The exchange of some information between the parties is necessary to achieve these bounds, and various information-sharing strategies are considered and their performance is simulated. A qualitative assessment of the vulnerability of such a secret sharing system to attack from a radio in a nearby location is also given.", "Wireless networks are vulnerable to many identity-based attacks in which a malicious device uses forged MAC addresses to masquerade as a specific client or to create multiple illegitimate identities. For example, several link-layer services in IEEE 802.11 networks have been shown to be vulnerable to such attacks even when 802.11i 1X and other security mechanisms are deployed. In this paper we show that a transmitting device can be robustly identified by its signalprint, a tuple of signal strength values reported by access points acting as sensors. We show that, different from MAC addresses or other packet contents, attackers do not have as much control regarding the signalprints they produce. Moreover, using measurements in a testbed network, we demonstrate that signalprints are strongly correlated with the physical location of clients, with similar values found mostly in close proximity. By tagging suspicious packets with their corresponding signalprints, the network is able to robustly identify each transmitter independently of packet contents, allowing detection of a large class of identity-based attacks with high probability.", "A sybil node impersonates other nodes by broadcasting messages with multiple node identifiers (ID). In contrast to existing solutions which are based on sharing encryption keys, we present a robust and lightweight solution for sybil attack problem based on received signal strength indicator (RSSI) readings of messages. Our solution is robust since it detects all sybil attack cases with 100 completeness and less than a few percent false positives. Our solution is lightweight in the sense that alongside the receiver we need the collaboration of one other node (i.e., only one message communication) for our protocol. We show through experiments that even though RSSI is time-varying and unreliable in general and radio transmission is non-isotropic, using ratio of RSSIs from multiple receivers it is feasible to overcome these problems.", "The wireless medium contains domain-specific information that can be used to complement and enhance traditional security mechanisms. 
In this paper we propose ways to exploit the fact that, in a typically rich scattering environment, the radio channel response decorrelates quite rapidly in space. Specifically, we describe a physical-layer algorithm that combines channel probing (M complex frequency response samples over a bandwidth W) with hypothesis testing to determine whether current and prior communication attempts are made by the same user (same channel response). In this way, legitimate users can be reliably authenticated and false users can be reliably detected. To evaluate the feasibility of our algorithm, we simulate spatially variable channel responses in real environments using the WiSE ray-tracing tool; and we analyze the ability of a receiver to discriminate between transmitters (users) based on their channel frequency responses in a given office environment. For several rooms in the extremities of the building we considered, we have confirmed the efficacy of our approach under static channel conditions. For example, measuring five frequency response samples over a bandwidth of 100 MHz and using a transmit power of 100 mW, valid users can be verified with 99 confidence while rejecting false users with greater than 95 confidence." ] }
0907.1005
1682180483
Browsing is a way of finding documents in a large amount of data that is complementary to querying and particularly suitable for multimedia documents. Locating particular documents in a very large collection of multimedia documents, such as the ones available in peer-to-peer networks, is a difficult task. However, current peer-to-peer systems do not allow this to be done by browsing. In this report, we show how one can build a peer-to-peer system supporting a kind of browsing. In our proposal, one must extend an existing distributed hash table system with a few features: handling partial hash-keys and providing appropriate routing mechanisms for these hash-keys. We give such an algorithm for the particular case of the Tapestry distributed hash table. This is work in progress, as no proper validation has been done yet.
The work of Loisant @cite_6 @cite_3 is a good representative of the state of the art in multimedia browsing systems. It is based on a clustering of the documents that is performed before the browsing process. This pre-analysis makes it possible to build classes that are more relevant than the ones we use. However, in order to build that clustering, the system needs access to all the descriptors. For this reason, that system cannot be easily used in a peer-to-peer framework. Moreover, when new documents are added to the system, the clustering may have to be modified, which raises new difficulties. Another difficulty in using such a system is the mapping between clusters and logical addresses. In our system, this mapping is canonical. With dynamic or ad hoc clusters, another DHT layer would be necessary to make the translation.
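As a toy illustration of what a canonical mapping from classes to logical addresses could look like (our own hypothetical construction, not the cited system's scheme), one can hash a fixed class label and keep a prefix of the digest as a partial key:

    # Heavily hedged toy construction: hash a canonical class label and keep a
    # prefix of the digest as a partial DHT key. The helper names and the
    # prefix length are hypothetical choices made for illustration only.

    import hashlib

    def class_to_partial_key(class_label, prefix_bits=16):
        """Map a canonical class label to a truncated (partial) hash-key."""
        digest = hashlib.sha1(class_label.encode("utf-8")).hexdigest()
        return digest[: prefix_bits // 4]   # keep prefix_bits worth of hex characters

    print(class_to_partial_key("images/outdoor/mountain"))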
{ "cite_N": [ "@cite_3", "@cite_6" ], "mid": [ "2071867823", "611804136" ], "abstract": [ "We divide image querying paradigms into three categories: formal querying, interactive search, and browsing. The first two paradigms have been largely investigated in the literature, whereas the last one has not been as much studied. We propose a browsing technique based on concept lattices. The result is a kind of hypertext of images that mixes classification and visualisation issues in a high-dimensionnality space.", "Les donnees dites multimedia (images, videos) se distinguent des donnees classique par une densite variable d'information et l'impossibilite de normaliser ces donnees. Du fait de ces particularites, de nouvelles techniques d'indexation et de recherche d'information ont du etre etudiees. Il y a principalement deux problemes a resoudre pour la recherche d'information dans les collections multimedia (ou les bases de donnees multimedia) : (1) la representation des donnees et (2) le processus de recherche du point de vue de l'utilisateur. Dans le cas des bases de donnees, l'indexation est fortement liee a ces deux problemes. Dans le cas particulier des images, on distingue trois grandes classes: – la recherche par requetes formelles, heritee des bases de donnees classiques ; – la recherche avec boucle de retour, ou l'utilisateur fait partie integrante du processus de recherche ; – la navigation ou les images sont organisees en une structure preparee a l'avance, utilisee comme index et comme structure de recherche. C'est sur cette troisieme approche que nos travaux se sont portes ; nous nous sommes en effet interesses au treillis de Galois, une structure de graphe permettant d'organiser les elements d'une relation binaire. Une telle structure de navigation a plusieurs avantages sur une approche classique basee sur des requetes : en particulier, elle permet d'affranchir l'utilisateur d'une phase de redaction de requete." ] }
0907.1357
1701618047
In the verification of C programs by deductive approaches based on automated provers, heuristics of separation analysis have been proposed to handle the most difficult problems. Unfortunately, these heuristics are not sufficient when applied to industrial C programs: some valid verification conditions cannot be automatically discharged by any automated prover, mainly due to their size and a high number of irrelevant hypotheses. This work presents a strategy to reduce program verification conditions by selecting their relevant hypotheses. The relevance of a hypothesis is determined by a combination of separate static dependency analyses based on graph constructions and traversals. The approach is applied to a benchmark drawn from industrial program verification.
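To convey the general idea of relevance-based hypothesis selection (a sketch only; the paper's actual relevance measure combines several dependency analyses), one can link hypotheses through shared symbols and keep those within a bounded graph distance of the goal's symbols. All names below are ours:

    # Hedged sketch of dependency-based hypothesis selection: breadth-first
    # traversal from the goal's symbols, keeping hypotheses reached within a
    # depth limit. This illustrates the idea, not the paper's exact analysis.

    from collections import deque

    def select_hypotheses(goal_symbols, hypotheses, depth_limit=2):
        """hypotheses: dict name -> set of symbols occurring in that hypothesis."""
        reachable = set(goal_symbols)
        selected = set()
        frontier = deque((s, 0) for s in goal_symbols)
        while frontier:
            symbol, depth = frontier.popleft()
            if depth >= depth_limit:
                continue
            for name, symbols in hypotheses.items():
                if name not in selected and symbol in symbols:
                    selected.add(name)
                    for s in symbols - reachable:
                        reachable.add(s)
                        frontier.append((s, depth + 1))
        return selected

    hyps = {"H1": {"x", "y"}, "H2": {"y", "z"}, "H3": {"u", "v"}}
    print(select_hypotheses({"x"}, hyps))   # keeps H1 and H2, drops the unrelated H3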
Strategies to simplify the prover's task have been widely studied for as long as automated provers have existed @cite_23 , mainly with the aim of designing more efficient deductive systems @cite_23 @cite_25 @cite_19 . The KeY deductive system @cite_24 is an extreme case: it is composed of a large collection of special-purpose rules dedicated to JML-annotated JavaCard programs. These rules make an explicit axiomatization of data types, the memory model, and program execution unnecessary. Priorities between deduction rules help guide effective reasoning. Beyond this, choosing rules in that framework requires as much effort as choosing axioms when targeting general-purpose theorem provers.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_25", "@cite_23" ], "mid": [ "1515278398", "1931447968", "2126171211", "1972956465" ], "abstract": [ "We describe a tool called CVC Lite (CVCL), an automated theorem prover for formulas in a union of first-order theories. CVCL supports a set of theories which are useful in verification, including uninterpreted functions, arrays, records and tuples, and linear arithmetic. New features in CVCL (beyond those provided in similar previous systems) include a library API, more support for producing proofs, some heuristics for reasoning about quantifiers, and support for symbolic simulation primitives.", "For more than three and one-half decades, beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems studied range from the trivial to the deep, even including some that corresponded to open questions. Often the paradigm asks for a theorem whose proof is in hand but that cannot be obtained in a fully automated manner by the program in use. The theorem whose hypothesis consists solely of the Meredith single axiom for two-valued sentential (or propositional) calculus and whose conclusion is the Lukasiewicz three-axiom system for that area of formal logic was just such a theorem. Featured in this article is the methodology that enabled the program OTTER to find the first fully automated proof of the cited theorem, a proof with the intriguing property that none of its steps contains a term of the form i>n(i>n(i>t)) for any term i>t. As evidence of the power of the new methodology, the article also discusses OTTER's success in obtaining the first known proof of a theorem concerning a single axiom of Lukasiewicz.", "Experimentation strongly suggests that, for attacking deep questions and hard problems with the assistance of an automated reasoning program, the more effective paradigms rely on the retention of deduced information. A significant obstacle ordinarily presented by such a paradigm is the deduction and retention of one or more needed conclusions whose complexity sharply delays their consideration. To mitigate the severity of the cited obstacle, I formulated and feature in this article the hot list strategy. The hot list strategy asks the researcher to choose, usually from among the input statements characterizing the problem under study, one or more statements that are conjectured to play a key role for assignment completion. The chosen statements – conjectured to merit revisiting, again and again – are placed in an input list of statements, called the hot list. When an automated reasoning program has decided to retain a new conclusion C – before any other statement is chosen to initiate conclusion drawing – the presence of a nonempty hot list (with an appropriate assignment of the input parameter known as heat) causes each inference rule in use to be applied to C together with the appropriate number of members of the hot list. Members of the hot list are used to complete applications of inference rules and not to initiate applications. The use of the hot list strategy thus enables an automated reasoning program to briefly consider a newly retained conclusion whose complexity would otherwise prevent its use for perhaps many CPU-hours. 
To give evidence of the value of the strategy, I focus on four contexts: (1) dramatically reducing the CPU time required to reach a desired goal, (2) finding a proof of a theorem that had previously resisted all but the more inventive automated attempts, (3) discovering a proof that is more elegant than previously known, and (4) answering a question that had steadfastly eluded researchers relying on an automated reasoning program. I also discuss a related strategy, the dynamic hot list strategy (formulated by my colleague W. McCune), that enables the program during a run to augment the contents of the hot list. In the Appendix, I give useful input files and interesting proofs. Because of frequent requests to do so, I include challenge problems to consider, commentary on my approach to experimentation and research, and suggestions to guide one in the use of McCune’s automated reasoning program OTTER.", "" ] }
0907.0726
2950391451
We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n / log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1/2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.
Both ATSPP and the directed latency problem are closely related to the classical Traveling Salesman Problem (TSP), which asks to find the cheapest Hamiltonian cycle in a complete undirected graph with edge costs @cite_11 @cite_4 . In general weighted graphs, TSP cannot be approximated within any constant factor unless P = NP. However, in most practical settings it can be assumed that edge costs satisfy the triangle inequality (i.e. @math ). Though metric TSP is still NP-hard, the well-known algorithm of Christofides @cite_0 has an approximation ratio of 3/2. Later, the analysis in @cite_8 @cite_1 showed that this approximation algorithm actually bounds the integrality gap of a linear programming relaxation for TSP known as the Held-Karp LP. This integrality gap is also known to be at least @math . Furthermore, for all @math , approximating TSP within a factor of @math is NP-hard @cite_28 . Christofides' heuristic was adapted to the problem of finding the cheapest Hamiltonian path in a metric graph, with an approximation guarantee of 3/2 if at most one endpoint is specified or 5/3 if both endpoints are given @cite_3 .
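Since the triangle inequality is the key assumption behind these guarantees, a direct check of it on a cost matrix may be a useful reference point (a trivial sketch with our own naming):

    # Hedged illustration of the metric assumption: a cost matrix is metric if
    # c[u][w] <= c[u][v] + c[v][w] for all triples of nodes.

    from itertools import permutations

    def is_metric(c):
        """c: square list-of-lists of nonnegative costs with zero diagonal."""
        n = len(c)
        return all(c[u][w] <= c[u][v] + c[v][w]
                   for u, v, w in permutations(range(n), 3))

    print(is_metric([[0, 1, 2], [1, 0, 1], [2, 1, 0]]))   # True
    print(is_metric([[0, 1, 5], [1, 0, 1], [5, 1, 0]]))   # False: 5 > 1 + 1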
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_28", "@cite_1", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "", "2063836932", "2059222268", "1714357736", "", "2117226423", "2324108981" ], "abstract": [ "", "Abstract In their 1971 paper on the travelling salesman problem and minimum spanning trees, Held and Karp showed that finding an optimally weighted 1-tree is equivalent to solving a linear program for the traveling salesman problem (TSP) with only node-degree constraints and subtour elimination constraints. In this paper we show that the Held-Karp 1-trees have a certain monotonicity property: given a particular instance of the symmetric TSP with triangle inequality, the cost of the minimum weighted 1-tree is monotonic with respect to the set of nodes included. As a consequence, we obtain an alternate proof of a result of Wolsey and show that linear programs with node-degree and subtour elimination constraints must have a cost at least 2 3 OPT where OPT is the cost of the optimum solution to the TSP instance.", "We show that the traveling salesman problem with triangle inequality cannot be approximated with a ratio better than @math when the edge lengths are allowed to be asymmetric and @math when the edge lengths are symmetric, unless P=NP. The best previous lower bounds were @math and @math respectively. The reduction is from Hastad’s maximum satisfiability of linear equations modulo 2, and is nonconstructive.", "We consider two questions arising in the analysis of heuristic algorithms. (i) Is there a general procedure involved when analysing a particular problem heuristic? (ii) How can heuristic procedures be incorporated into optimising algorithms such as branch and bound?", "", "Abstract : An O(n sup 3) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition. The algorithm involves as substeps the computation of a shortest spanning tree of the graph G defining the TSP, and the finding of a minimum cost perfect matching of a certain induced subgraph of G. A worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3 2. This represents a 50 reduction over the value 2 which was the previously best known such ratio for the performance of other polynomial-growth algorithms for the TSP.", "History (A. Hoffman and P. Wolfe). Motivation and Modeling (R. Garfinkel). Computational Complexity (D. Johnson and C. Papadimitriou). Well-Solved Special Cases (P. Gilmore, et al). Performance Guarantees for Heuristics (D. Johnson and C. Papadimitriou). Probabilistic Analysis of Heuristics (R. Karp and J. Steele). Empirical Analysis of Heuristics (B. Golden and W. Stewart). Polyhedral Theory (M. Grotschel and M. Padberg). Polyhedral Algorithms (M. Padberg and M. Grotschel). Branch and Bound Methods (E. Balas and P. Toth). Hamiltonian Cycles (V. Chvatal). Vehicle Routing (N. Christofides). Bibliography." ] }
0907.0726
2950391451
We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n / log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1/2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.
In contrast to TSP, no constant-factor approximation for its asymmetric version is known. The current best approximation for ATSP is the very recent result of @cite_21 , which gives an @math -approximation algorithm. It also upper-bounds the integrality gap of the asymmetric Held-Karp LP relaxation by the same factor. Previous algorithms guarantee a solution of cost within an @math factor of the optimum @cite_26 @cite_19 @cite_5 @cite_14 . The algorithm of @cite_26 is shown to upper-bound the Held-Karp integrality gap by @math in @cite_12 , and a different proof that bounds the integrality gap of a slightly weaker LP is obtained in @cite_27 . The best known lower bound on the Held-Karp integrality gap is essentially 2 @cite_2 , and tightening these bounds remains an important open problem. ATSP is NP-hard to approximate within @math @cite_28 .
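For reference, the asymmetric Held-Karp relaxation discussed here is usually written as follows (a standard textbook formulation, reproduced for orientation rather than quoted from any particular cited paper):

    \[
    \begin{aligned}
    \min\ & \sum_{a \in A} c_a x_a \\
    \text{s.t.}\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 && \text{for all } v \in V,\\
    & x(\delta^+(S)) \ge 1 && \text{for all } \emptyset \ne S \subsetneq V,\\
    & x_a \ge 0 && \text{for all arcs } a \in A,
    \end{aligned}
    \]

where \delta^+(S) and \delta^-(S) denote the sets of arcs leaving and entering S and x(F) = \sum_{a \in F} x_a ; the integrality gap is then the worst-case ratio between the cost of an optimal tour and the optimal value of this LP.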
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_28", "@cite_21", "@cite_19", "@cite_27", "@cite_2", "@cite_5", "@cite_12" ], "mid": [ "2137717103", "2035610952", "2059222268", "1480355964", "", "1596429033", "2136668000", "2167197687", "1877952604" ], "abstract": [ "In metric asymmetric traveling salesperson problems the input is a complete directed graph in which edge weights satisfy the triangle inequality, and one is required to find a minimum weight walk that visits all vertices. In the asymmetric traveling salesperson problem (ATSP) the walk is required to be cyclic. In asymmetric traveling salesperson path problem (ATSPP), the walk is required to start at vertex sand to end at vertex t. We improve the approximation ratio for ATSP from @math to @math . This improvement is based on a modification of the algorithm of [JACM 05] that achieved the previous best approximation ratio. We also show a reduction from ATSPP to ATSP that loses a factor of at most 2 + i¾?in the approximation ratio, where i¾?> 0 can be chosen to be arbitrarily small, and the running time of the reduction is polynomial for every fixed i¾?. Combined with our improved approximation ratio for ATSP, this establishes an approximation ratio of @math for ATSPP, improving over the previous best ratio of 4log e ni¾? 2.76log 2 nof Chekuri and Pal [Approx 2006].", "We consider the asymmetric traveling salesman problem for which the triangular inequality is satisfied. For various heuristics we construct examples to show that the worst-case ratio of length of tour found to minimum length tour is (n) for n city problems. We also provide a new O([log2n]) heuristic.", "We show that the traveling salesman problem with triangle inequality cannot be approximated with a ratio better than @math when the edge lengths are allowed to be asymmetric and @math when the edge lengths are symmetric, unless P=NP. The best previous lower bounds were @math and @math respectively. The reduction is from Hastad’s maximum satisfiability of linear equations modulo 2, and is nonconstructive.", "", "", "This paper studies vehicle routing problems on asymmetric metrics. Our starting point is the directed k-TSPproblem: given an asymmetric metric (V,d), a root ri¾? Vand a target k≤ |V|, compute the minimum length tour that contains rand at least kother vertices. We present a polynomial time O(log2n·logk)-approximation algorithm for this problem. We use this algorithm for directed k-TSP to obtain an O(log2n)-approximation algorithm for the directed orienteeringproblem. This answers positively, the question of poly-logarithmic approximability of directed orienteering, an open problem from [2]. The previously best known results were quasi-polynomial time algorithms with approximation guarantees of O(log2k) for directed k-TSP, and O(logn) for directed orienteering (Chekuri & Pal [4]). Using the algorithm for directed orienteering within the framework of [2] and [1], we also obtain poly-logarithmic approximation algorithms for the directed versions of discounted-reward TSPand the vehicle routing problem with time-windows.", "The traveling salesman problem comes in two variants. The symmetric version (STSP) assumes that the cost c sub ij of going to city i to city j is equal to c sub ji , while the more general asymmetric version (ATSP) does not make this assumption. In both cases, it is usually assumed that we are in the metric case, i.e., the costs satisfy the triangle inequality: c sub ij + c sub jk spl ges c sub ik for all i, j, k. 
In this assumption, we improve the lower bound on the integrality ratio of the Held-Karp bound for asymmetric TSP (with triangle inequality) from 4 3 to 2.", "A directed multigraph is said to be d-regular if the indegree and outdegree of every vertex is exactly d. By Hall's theorem, one can represent such a multigraph as a combination of at most n2 cycle covers, each taken with an appropriate multiplicity. We prove that if the d-regular multigraph does not contain more than ⌊d 2⌋ copies of any 2-cycle then we can find a similar decomposition into n2 pairs of cycle covers where each 2-cycle occurs in at most one component of each pair. Our proof is constructive and gives a polynomial algorithm to find such a decomposition. Since our applications only need one such a pair of cycle covers whose weight is at least the average weight of all pairs, we also give an alternative, simpler algorithm to extract a single such pair.This combinatorial theorem then comes handy in rounding a fractional solution of an LP relaxation of the maximum Traveling Salesman Problem (TSP) problem. The first stage of the rounding procedure obtains two cycle covers that do not share a 2-cycle with weight at least twice the weight of the optimal solution. Then we show how to extract a tour from the 2 cycle covers, whose weight is at least 2 3 of the weight of the longest tour. This improves upon the previous 5 8 approximation with a simpler algorithm. Utilizing a reduction from maximum TSP to the shortest superstring problem, we obtain a 2.5-approximation algorithm for the latter problem, which is again much simpler than the previous one.For minimum asymmetric TSP, the same technique gives two cycle covers, not sharing a 2-cycle, with weight at most twice the weight of the optimum. Assuming triangle inequality, we then show how to obtain from this pair of cycle covers a tour whose weight is at most 0.842 log 2 n larger than optimal. This improves upon a previous approximation algorithm with approximation guarantee of 0.999 log 2 n. Other applications of the rounding procedure are approximation algorithms for maximum 3-cycle cover (factor 2 3, previously 3 5) and maximum asymmetric TSP with triangle inequality (factor 10 13, previously 3 4).", "The Held-Karp heuristic for the Traveling Salesman Problem (TSP) has in practice provided near-optimal lower bounds on the cost of solutions to the TSP. We analyze the structure of Held-Karp solutions in order to shed light on their quality. In the symmetric case with triangle inequality, we show that a class of instances has planar solutions. We also show that Held-Karp solutions have a certain monotonicity property. This leads to an alternate proof of a result of Wolsey, which shows that the value of Held-Karp heuristic is always at least 2 3 OPT, where OPT is the cost of the optimum TSP tour. Additionally, we show that the value of the Held-Karp heuristic is equal to that of the linear relaxation of the biconnected-graph problem when edge costs are non-negative. In the asymmetric case with triangle inequality, we show that there are many equivalent definitions of the Held-Karp heuristic, which include finding optimally weighted 1-arborescences, 1-antiarborescences, asymmetric 1-trees, and assignment problems. We prove that monotonicity holds in the asymmetric case as well. These theorems imply that the value of the Held-Karp heuristic is no less than OPT and no less than the value of the Balas-Christofides heuristic for the asymmetric TSP. 
For the 1,2-TSP, we show that the Held-Karp heuristic cannot do any better than 9 10 OPT, even as the number of nodes tends to infinity. Portions of this thesis are joint work with David Shmoys." ] }
0907.0726
2950391451
We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n / log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1/2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.
The path version of the problem, ATSPP, has been studied much less than ATSP, but there are some recent results concerning its approximability. An @math -approximation algorithm for it was given by Lam and Newman @cite_23 , which was subsequently improved to @math by Chekuri and Pal @cite_6 . Feige and Singh @cite_14 improved upon this guarantee by a constant factor and also showed that ATSP and ATSPP are approximable to within a constant factor of each other, i.e. an @math -approximation for one implies an @math -approximation for the other. Combined with the result of @cite_21 , this implies an @math -approximation for ATSPP. However, none of these algorithms bound the integrality gap of the LP relaxation for ATSPP. This integrality gap was considered by Nagarajan and Ravi @cite_22 , who showed that it is at most @math . To the best of our knowledge, the asymmetric path version of the @math -person problem has not been studied previously. However, some work has been done on its symmetric version, where the goal is to find @math rooted cycles of minimum total cost (e.g., @cite_15 ).
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_21", "@cite_6", "@cite_23", "@cite_15" ], "mid": [ "2137717103", "1492330513", "1480355964", "330417094", "1976231547", "" ], "abstract": [ "In metric asymmetric traveling salesperson problems the input is a complete directed graph in which edge weights satisfy the triangle inequality, and one is required to find a minimum weight walk that visits all vertices. In the asymmetric traveling salesperson problem (ATSP) the walk is required to be cyclic. In asymmetric traveling salesperson path problem (ATSPP), the walk is required to start at vertex sand to end at vertex t. We improve the approximation ratio for ATSP from @math to @math . This improvement is based on a modification of the algorithm of [JACM 05] that achieved the previous best approximation ratio. We also show a reduction from ATSPP to ATSP that loses a factor of at most 2 + i¾?in the approximation ratio, where i¾?> 0 can be chosen to be arbitrarily small, and the running time of the reduction is polynomial for every fixed i¾?. Combined with our improved approximation ratio for ATSP, this establishes an approximation ratio of @math for ATSPP, improving over the previous best ratio of 4log e ni¾? 2.76log 2 nof Chekuri and Pal [Approx 2006].", "We study the directed minimum latency problem: given an n-vertex asymmetric metric (V,d) with a root vertex ri¾? V, find a spanning path originating at rthat minimizes the sum of latencies at all vertices (the latency of any vertex vi¾? Vis the distance from rto valong the path). This problem has been well-studied on symmetric metrics, and the best known approximation guarantee is 3.59 [3]. For any @math O( n^ ^3 ) @math =O( n )$, which implies (for any fixed i¾?> 0) a polynomial time O(n1 2 + i¾?)-approximation algorithm for directed latency. In the special case of metrics induced by shortest-paths in an unweighted directed graph, we give an O(log2n) approximation algorithm. As a consequence, we also obtain an O(log2n) approximation algorithm for minimizing the weighted completion time in no-wait permutation flowshop scheduling. We note that even in unweighted directed graphs, the directed latency problem is at least as hard to approximate as the well-studied asymmetric traveling salesman problem, for which the best known approximation guarantee is O(logn).", "", "Compounds corresponding to the formula: I wherein Z represents a radical which completes a condensed aromatic ring system; R1 represents an n-valent aliphatic or aromatic radical; R2 represents H, alkyl or aryl, R3 represents one or more radicals to control the diffusion properties and the activation pH; and n represents 1 or 2, are suitable ED precursor compounds for use in color-photographic recording materials. They are preferably used in a combination with reducible dye-releasers. They are also suitable as so-called scavengers.", "In the traveling salesman path problem, we are given a set of cities, traveling costs between city pairs and fixed source and destination cities. The objective is to find a minimum cost path from the source to destination visiting all cities exactly once. In this paper, we study polyhedral and combinatorial properties of a variant we call the traveling salesman walk problem, in which the objective is to find a minimum cost walk from the source to destination visiting all cities at least once. 
We first characterize traveling salesman walk perfect graphs, graphs for which the convex hull of incidence vectors of traveling salesman walks can be described by linear inequalities. We show these graphs have a description by way of forbidden minors and also characterize them constructively. We also address the asymmetric traveling salesman path problem (ATSPP) and give a factor @math -approximation algorithm for this problem.", "" ] }
0907.0726
2950391451
We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n / log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1/2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.
The metric minimum latency problem is NP-hard for both the undirected and directed versions since an exact algorithm for either of these could be used to efficiently solve the Hamiltonian Path problem. The first constant-factor approximation for minimum latency on undirected graphs was developed by @cite_13 . This was subsequently improved in a series of papers from 144 to 21.55 @cite_9 , then to 7.18 @cite_18 , and ultimately to 3.59 @cite_20 . @cite_13 also observed that there is some constant @math such that there is no @math -approximation for minimum latency unless P = NP. For directed graphs, Nagarajan and Ravi @cite_22 gave an @math approximation algorithm that runs in time @math , where @math is the integrality gap of an LP relaxation for ATSPP. Using their @math upper bound on @math , they obtained a guarantee of @math , which was the best approximation ratio known for this problem prior to our present results.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_9", "@cite_13", "@cite_20" ], "mid": [ "2009279347", "1492330513", "2093376706", "2951393205", "2115556222" ], "abstract": [ "We give a 7.18-approximation algorithm for the minimum latency problem that uses only @math calls to the prize-collecting Steiner tree (PCST) subroutine of Goemans and Williamson. This improves the previous best algorithms in both performance guarantee and running time. A previous algorithm of Goemans and Kleinberg for the minimum latency problem requires an approximation algorithm for the @math -minimum spanning tree ( @math -MST) problem which is called as a black box for each value of @math . Their algorithm can achieve an approximation factor of 10.77 while making @math PCST calls, a factor of 8.98 using @math PCST calls, or a factor of @math using @math PCST calls, via the @math -MST algorithms of Garg, Arya and Ramesh, and Arora and Karakostas, respectively. Here @math denotes the number of nodes in the instance, and @math is the largest edge cost in the input. In all cases, the running time is dominated by the PCST calls. Since the PCST subroutine can be implemented to run in @math time, the overall running time of our algorithm is @math . We also give a faster randomized version of our algorithm that achieves the same approximation guarantee in expectation, but uses only @math PCST calls, and derandomize it to obtain a deterministic algorithm with factor @math , using @math PCST calls. The basic idea for our improvement is that we do not treat the @math -MST algorithm as a black box. This allows us to take advantage of some special situations in which the PCST subroutine delivers a 2-approximate @math -MST. We are able to obtain the same approximation ratio that would be given by Goemans and Kleinberg if we had access to 2-approximate @math -MSTs for all values of @math , even though we have them only for some values of @math that we are not able to specify in advance. We also extend our algorithm to a weighted version of the minimum latency problem.", "We study the directed minimum latency problem: given an n-vertex asymmetric metric (V,d) with a root vertex ri¾? V, find a spanning path originating at rthat minimizes the sum of latencies at all vertices (the latency of any vertex vi¾? Vis the distance from rto valong the path). This problem has been well-studied on symmetric metrics, and the best known approximation guarantee is 3.59 [3]. For any @math O( n^ ^3 ) @math =O( n )$, which implies (for any fixed i¾?> 0) a polynomial time O(n1 2 + i¾?)-approximation algorithm for directed latency. In the special case of metrics induced by shortest-paths in an unweighted directed graph, we give an O(log2n) approximation algorithm. As a consequence, we also obtain an O(log2n) approximation algorithm for minimizing the weighted completion time in no-wait permutation flowshop scheduling. We note that even in unweighted directed graphs, the directed latency problem is at least as hard to approximate as the well-studied asymmetric traveling salesman problem, for which the best known approximation guarantee is O(logn).", "Given a tour visiting n points in a metric space, the latency of one of these points p is the distance traveled in the tour before reaching p. The minimum latency problem (MLP) asks for a tour passing through n given points for which the total latency of the n points is minimum; in effect, we are seeking the tour with minimum average arrival time. 
This problem has been studied in the operations research literature, where it has also been termed the delivery-man problem and the traveling repairman problem. The approximability of the MLP was first considered by Sahni and Gonzalez in 1976; however, unlike the classical traveling salesman problem (TSP), it is not easy to give any constant-factor approximation algorithm for the MLP. Recently, (A. Blum, P. Chalasani, D. Coppersmith, W. Pulleyblank, P. Raghavan, M. Sudan, Proceedings of the 26th ACM Symposium on the Theory of Computing, 1994, pp. 163-171) gave the first such algorithm, obtaining an approximation ratio of 144. In this work, we develop an algorithm which improves this ratio to 21.55; moreover, combining our algorithm with a recent result of Garg (N. Garg, Proceedings of the 37th IEEE Symposium on Foundations of Computer Science, 1996, pp. 302-309) provides an approximation ratio of 10.78. The development of our algorithm involves a number of techniques that seem to be of interest from the perspective of the TSP and its variants more generally.", "We are given a set of points @math and a symmetric distance matrix @math giving the distance between @math and @math . We wish to construct a tour that minimizes @math , where @math is the latency of @math , defined to be the distance traveled before first visiting @math . This problem is also known in the literature as the deliveryman problem or the traveling repairman problem . It arises in a number of applications including disk-head scheduling, and turns out to be surprisingly different from the traveling salesman problem in character. We give exact and approximate solutions to a number of cases, including a constant-factor approximation algorithm whenever the distance matrix satisfies the triangle inequality.", "We give improved approximation algorithms for a variety of latency minimization problems. In particular, we give a 3.59-approximation to the minimum latency problem, improving on previous algorithms by a multiplicative factor of 2. Our techniques also give similar improvements for related problems like k-traveling repairmen and its multiple depot variant. We also observe that standard techniques can be used to speed up the previous and this algorithm by a factor of Õ(n)." ] }
0906.5485
1615988585
Many sorts of structured data are commonly stored in a multi-relational format of interrelated tables. Under this relational model, exploratory data analysis can be done by using relational queries. As an example, in the Internet Movie Database (IMDb), a query can be used to check whether the average rank of action movies is higher than the average rank of drama movies. We consider the problem of assessing whether the results returned by such a query are statistically significant or just a random artifact of the structure in the data. Our approach is based on randomizing the tables occurring in the queries and repeating the original query on the randomized tables. It turns out that there is no unique way of randomizing in multi-relational data. We propose several randomization techniques, study their properties, and show how to find out which queries or hypotheses about our data result in statistically significant information. We give results on real and generated data and show how the significance of some queries varies between different randomizations.
Obviously, there is a large body of statistical literature on hypothesis testing @cite_4 @cite_1 . For the particular case of data mining, many papers study the significance of association rules and other patterns @cite_3 @cite_7 . In recent years, the framework of randomization has been introduced to the data mining community to test the significance of patterns: the papers @cite_2 @cite_6 deal with randomizations on binary data, and the work in @cite_11 studies randomizations on real-valued data. For another type of approach to measuring @math -values for patterns, see @cite_9 . A related work that studies permutations on networks and how these affect the significance of patterns is @cite_15 . Resampling methods such as bootstrapping @cite_5 use randomization to study the properties of the underlying distribution instead of testing the data against some null model. Finally, database theory mainly studies query processing and optimization over various kinds of complex data @cite_14 @cite_10 . To the best of our knowledge, there is no work that directly addresses the problem presented in this paper. (A toy example of such a randomization test is sketched after this record.)
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2087475328", "", "2066277072", "", "", "1511277043", "1978036582", "2048570414", "2117897510", "2124533460", "", "184371133" ], "abstract": [ "Certain variants of object-oriented Datalog can be compiled to Datalog with negation. We seek to apply optimisations akin to virtual method resolution (a well-known technique in compiling Java and other OO languages) to improve efficiency of the resulting Datalog programs. The effectiveness of such optimisations strongly depends on the precision of the underlying type inference algorithm. Previous work on type inference for Datalog has focussed on Cartesian abstractions, where the type of each field is computed separately. Such Cartesian type inference is inherently imprecise in the presence of field equalities. We propose a type system where equalities are tracked, and present a type inference algorithm. The algorithm is proved sound. We also prove that it is optimal for Datalog without negation, in the sense that the inferred type is as tight as possible. Extensive experiments with our type-based optimisations, in a commercial implementation of object-oriented Datalog, confirm the benefits of this non-Cartesian type inference algorithm.", "", "Many techniques for association rule mining and feature selection require a suitable metric to capture the dependencies among variables in a data set. For example, metrics such as support, confidence, lift, correlation, and collective strength are often used to determine the interestingness of association patterns. However, many such measures provide conflicting information about the interestingness of a pattern, and the best metric to use for a given application domain is rarely known. In this paper, we present an overview of various measures proposed in the statistics, machine learning and data mining literature. We describe several key properties one should examine in order to select the right measure for a given application domain. A comparative study of these properties is made using twenty one of the existing measures. We show that each measure has different properties which make them useful for some application domains, but not for others. We also present two scenarios in which most of the existing measures agree with each other, namely, support-based pruning and table standardization. Finally, we present an algorithm to select a small set of tables such that an expert can select a desirable measure by looking at just this small set of tables.", "", "", "One of the more well-studied problems in data mining is the search for association rules in market basket data. Association rules are intended to identify patterns of the type: “A customer purchasing item A often also purchases item B.” Motivated partly by the goal of generalizing beyond market basket data and partly by the goal of ironing out some problems in the definition of association rules, we develop the notion of dependence rules that identify statistical dependence in both the presence and absence of items in itemsets. We propose measuring significance of dependence via the chi-squared test for independence from classical statistics. This leads to a measure that is upward-closed in the itemset lattice, enabling us to reduce the mining problem to the search for a border between dependent and independent itemsets in the lattice. 
We develop pruning strategies based on the closure property and thereby devise an efficient algorithm for discovering dependence rules. We demonstrate our algorithm‘s effectiveness by testing it on census data, text data (wherein we seek term dependence), and synthetic data.", "The problem of assessing the significance of data mining results on high-dimensional 0--1 datasets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by standard statistical tests such as chi-square, or other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are difficult to apply to sets of patterns or other complex results of data mining algorithms. In this article, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins as the given dataset, computing the results of interest on the randomized instances and comparing them to the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and spectral analysis. To generate random datasets with given margins, we use variations of a Markov chain approach which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is expected, given the row and column margins of the datasets, while for other datasets the discovered structure conveys information that is not captured by the margin counts.", "Act 1. In 1975 Jared Diamond, who has since become famous as the Pulitzer Prizewinning author of Guns, Germs, and Steel, published a paper based mainly on a big matrix of Os and is [4]. Each row of Diamond's matrix' corresponded to a bird species; each column corresponded to an island in the New Hebrides (now Vanuatu). The Is and Os recorded the presence or absence of the species on the islands. In his paper, Diamond proposed a set of seven \"community assembly rules\" that he had inferred from the patterns in his co-occurrence matrix. These rules, said Diamond, govern the way species organize themselves into communities.", "We discuss the following problem given a random sample X = (X 1, X 2,…, X n) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F), on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case R(X, F) = ( ( F ) - ( F ) ), θ some parameter of interest.) A general method, called the “bootstrap”, is introduced, and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.", "Summary: Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns that occur much more frequently than in randomized networks with the same degree sequence. 
Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively enumerating all subgraphs with a given number of nodes in the network. The runtime of such algorithms increases strongly with network size. Here, we present a novel algorithm that allows estimation of subgraph concentrations and detection of network motifs at a runtime that is asymptotically independent of the network size. This algorithm is based on random sampling of subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. Our method can be applied to estimate the concentrations of larger subgraphs in larger networks than was previously possible with exhaustive enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions. Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 1.1) and further information is available at http: www.weizmann.ac.il mcb UriAlon", "", "Randomization is an important technique for assessing the significance of data mining results. Given an input data set, a randomization method samples at random from some class of datasets that share certain characteristics with the original data. The measure of interest on the original data is then compared to the measure on the samples to assess its significance. For certain types of data, e.g., gene expression matrices, it is useful to be able to sample datasets that share row and column means and variances. Testing whether the results of a data mining algorithm on such randomized datasets differ from the results on the true dataset tells us whether the results on the true data were an artifact of the row and column means and variances, or due to some more interesting phenomena in the data. In this paper, we study the problem of generating such randomized datasets. We describe three alternative algorithms based on local transformations and Metropolis sampling, and show that the methods are efficient and usable in practice. We evaluate the performance of the methods both on real and generated data. The results indicate that the methods work efficiently and solve the defined problem." ] }
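The record above assesses query results by randomizing tables and re-running the query. The following is a minimal, self-contained sketch of that general idea under simplifying assumptions, not one of the paper's actual multi-relational randomization schemes: the null model here simply permutes the rank column within a single toy table, and all names (movies, statistic, randomize) are illustrative.

```python
import random

# Toy single-table version of the IMDb example: (genre, rank) rows.
movies = [("action", r) for r in [7.1, 6.8, 7.9, 6.5]] + \
         [("drama", r) for r in [6.9, 6.2, 7.0, 6.4]]

def statistic(table):
    # Query of interest: average rank of action minus average rank of drama.
    action = [r for g, r in table if g == "action"]
    drama = [r for g, r in table if g == "drama"]
    return sum(action) / len(action) - sum(drama) / len(drama)

def randomize(table, rng):
    # Simplistic null model (an assumption of this sketch): permute the rank
    # column while keeping the genre column fixed.
    genres = [g for g, _ in table]
    ranks = [r for _, r in table]
    rng.shuffle(ranks)
    return list(zip(genres, ranks))

def empirical_p_value(table, n_samples=10000, seed=0):
    rng = random.Random(seed)
    observed = statistic(table)
    # One-sided p-value: how often a randomized table looks at least as extreme.
    hits = sum(statistic(randomize(table, rng)) >= observed
               for _ in range(n_samples))
    return (hits + 1) / (n_samples + 1)

print("observed difference:", statistic(movies))
print("empirical p-value:  ", empirical_p_value(movies))
```

In the multi-relational setting of the paper, the central question is precisely which tables to randomize and which structure to preserve; this sketch fixes one arbitrary single-table choice.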
0906.3736
1966790627
We design the weights in consensus algorithms for spatially correlated random topologies. These arise with 1) networks with spatially correlated random link failures and 2) networks with randomized averaging protocols. We show that the weight optimization problem is convex for both symmetric and asymmetric random graphs. With symmetric random networks, we choose the consensus mean-square error (MSE) convergence rate as the optimization criterion and explicitly express this rate as a function of the link formation probabilities, the link formation spatial correlations, and the consensus weights. We prove that the MSE convergence rate is a convex, nonsmooth function of the weights, enabling global optimization of the weights for arbitrary link formation probabilities and link correlation structures. We extend our results to the case of asymmetric random links. We adopt as optimization criterion the mean-square deviation (MSdev) of the nodes' states from the current average state. We prove that MSdev is a convex function of the weights. Simulations show that significant performance gain is achieved with our weight design method when compared with other methods available in the literature.
We provide comprehensive simulation experiments to demonstrate the effectiveness of our approach. We consider two different models of random networks with correlated link failures; in addition, we study the broadcast gossip algorithm @cite_28 , as an example of a randomized protocol with asymmetric links. In all cases, simulations confirm that our method shows a significant gain compared with the methods available in the literature. Also, we show that the gain increases with the network size. (A toy simulation of broadcast gossip is sketched after this record.)
{ "cite_N": [ "@cite_28" ], "mid": [ "2161305486" ], "abstract": [ "Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and computing in an arbitrarily connected network of nodes. Specifically, we study a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithm converges almost surely to a consensus. We prove that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation. We provide theoretical and numerical results on the mean square error performance, on the convergence rate and study the effect of the ldquomixing parameterrdquo on the convergence rate of the broadcast gossip algorithm. The results indicate that the mean squared error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithm to achieve a given distance to consensus through theoretical and numerical results." ] }
0906.3736
1966790627
We design the weights in consensus algorithms for spatially correlated random topologies. These arise with 1) networks with spatially correlated random link failures and 2) networks with randomized averaging protocols. We show that the weight optimization problem is convex for both symmetric and asymmetric random graphs. With symmetric random networks, we choose the consensus mean-square error (MSE) convergence rate as the optimization criterion and explicitly express this rate as a function of the link formation probabilities, the link formation spatial correlations, and the consensus weights. We prove that the MSE convergence rate is a convex, nonsmooth function of the weights, enabling global optimization of the weights for arbitrary link formation probabilities and link correlation structures. We extend our results to the case of asymmetric random links. We adopt as optimization criterion the mean-square deviation (MSdev) of the nodes' states from the current average state. We prove that MSdev is a convex function of the weights. Simulations show that significant performance gain is achieved with our weight design method when compared with other methods available in the literature.
Related work. Weight optimization for consensus with switching topologies has not received much attention in the literature. Reference @cite_25 studies the tradeoff between the convergence rate and the amount of communication that takes place in the network. This reference is mainly concerned with the design of the network topology, i.e., the design of the probabilities of reliable communication @math and the weight @math (assuming all nonzero weights are equal), assuming a communication cost @math per link and an overall network communication budget. Reference @cite_28 proposes the broadcast gossip algorithm, where at each time step a single node, selected at random, unidirectionally broadcasts its state to all the neighbors within its wireless range. We detail the broadcast gossip algorithm in subsection . This reference optimizes the weight for the broadcast gossip algorithm, assuming equal weights for all links.
{ "cite_N": [ "@cite_28", "@cite_25" ], "mid": [ "2161305486", "2145706050" ], "abstract": [ "Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and computing in an arbitrarily connected network of nodes. Specifically, we study a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithm converges almost surely to a consensus. We prove that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation. We provide theoretical and numerical results on the mean square error performance, on the convergence rate and study the effect of the ldquomixing parameterrdquo on the convergence rate of the broadcast gossip algorithm. The results indicate that the mean squared error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithm to achieve a given distance to consensus through theoretical and numerical results.", "In a sensor network, in practice, the communication among sensors is subject to: 1) errors that can cause failures of links among sensors at random times; 2) costs; and 3) constraints, such as power, data rate, or communication, since sensors and networks operate under scarce resources. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. We model the network as a Bernoulli random topology and establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail. In particular, a necessary and sufficient condition is for the algebraic connectivity of the mean graph topology to be strictly positive. With these results, we show that the topology design with random link failures, link communication costs, and a communication cost constraint is a constrained convex optimization problem that can be efficiently solved for large networks by semidefinite programming techniques. Simulations demonstrate that the optimal design improves significantly the convergence speed of the consensus algorithm and can achieve the performance of a non-random network at a fraction of the communication cost." ] }
0906.3736
1966790627
We design the weights in consensus algorithms for spatially correlated random topologies. These arise with 1) networks with spatially correlated random link failures and 2) networks with randomized averaging protocols. We show that the weight optimization problem is convex for both symmetric and asymmetric random graphs. With symmetric random networks, we choose the consensus mean-square error (MSE) convergence rate as the optimization criterion and explicitly express this rate as a function of the link formation probabilities, the link formation spatial correlations, and the consensus weights. We prove that the MSE convergence rate is a convex, nonsmooth function of the weights, enabling global optimization of the weights for arbitrary link formation probabilities and link correlation structures. We extend our results to the case of asymmetric random links. We adopt as optimization criterion the mean-square deviation (MSdev) of the nodes' states from the current average state. We prove that MSdev is a convex function of the weights. Simulations show that significant performance gain is achieved with our weight design method when compared with other methods available in the literature.
The problem of optimizing the weights for consensus under a random topology, when the weights for different links may be different, has not received much attention in the literature. Authors have proposed weight choices for random or time-varying networks @cite_27 @cite_33 , but no claims to optimality are made. Reference @cite_33 proposes the Metropolis weights (MW), based on the Metropolis-Hastings algorithm for simulating a Markov chain with uniform equilibrium distribution @cite_23 ; a minimal construction of these weights is sketched after this record. The weight choice in @cite_27 is based on the fastest mixing Markov chain problem studied in @cite_29 and uses information about the underlying supergraph. We refer to this weight choice as the supergraph-based weights (SGBW).
{ "cite_N": [ "@cite_27", "@cite_29", "@cite_33", "@cite_23" ], "mid": [ "2096188283", "2134711723", "2113542196", "2138309709" ], "abstract": [ "Average consensus and gossip algorithms have recently received significant attention, mainly because they constitute simple and robust algorithms for distributed information processing over networks. Inspired by heat diffusion, they compute the average of sensor networks measurements by iterating local averages until a desired level of convergence. Confronted with the diversity of these algorithms, the engineer may be puzzled in his choice for one of them. As an answer to his her need, we develop precise mathematical metrics, easy to use in practice, to characterize the convergence speed and the cost (time, message passing, energy...) of each of the algorithms. In contrast to other works focusing on time-invariant scenarios, we evaluate these metrics for ergodic time-varying networks. Our study is based on Oseledec's theorem, which gives an almost- sure description of the convergence speed of the algorithms of interest. We further provide upper bounds on the convergence speed. Finally, we use these tools to make some experimental observations illustrating the behavior of the convergence speed with respect to network topology and reliability in both average consensus and gossip algorithms.", "We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest eigenvalue modulus (SLEM) of the transition probability matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the SLEM, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, @math ) using standard numerical methods for SDPs. Larger problems can be solved by exploiting various types of symmetry and structure in the problem, and far larger problems (say, 100,000 edges) can be solved using a subgradient method we describe. We compare the fastest mixing Markov chain to those obtained using two commonly used heuristics: the maximum-degree method, and the Metropolis--Hastings algorithm. For many of the examples considered, the fastest mixing Markov chain is substantially faster than those obtained using these heuristic methods. We derive the Lagrange dual of the fastest mixing Markov chain problem, which gives a sophisticated method for obtaining (arbitrarily good) bounds on the optimal mixing rate, as well as the optimality conditions. Finally, we describe various extensions of the method, including a solution of the problem of finding the fastest mixing reversible Markov chain, on a fixed graph, with a given equilibrium distribution.", "Given a network of processes where each node has an initial scalar value, we consider the problem of computing their average asymptotically using a distributed, linear iterative algorithm. At each iteration, each node replaces its own value with a weighted average of its previous value and the values of its neighbors. 
We introduce the Metropolis weights, a simple choice for the averaging weights used in each step. We show that with these weights, the values at every node converge to the average, provided the innitely occurring communication graphs are jointly connected.", "SUMMARY A generalization of the sampling method introduced by (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed. For numerical problems in a large number of dimensions, Monte Carlo methods are often more efficient than conventional numerical methods. However, implementation of the Monte Carlo methods requires sampling from high dimensional probability distributions and this may be very difficult and expensive in analysis and computer time. General methods for sampling from, or estimating expectations with respect to, such distributions are as follows. (i) If possible, factorize the distribution into the product of one-dimensional conditional distributions from which samples may be obtained. (ii) Use importance sampling, which may also be used for variance reduction. That is, in order to evaluate the integral J = X) p(x)dx = Ev(f), where p(x) is a probability density function, instead of obtaining independent samples XI, ..., Xv from p(x) and using the estimate J, = Zf(xi) N, we instead obtain the sample from a distribution with density q(x) and use the estimate J2 = Y f(xj)p(x1) q(xj)N . This may be advantageous if it is easier to sample from q(x) thanp(x), but it is a difficult method to use in a large number of dimensions, since the values of the weights w(xi) = p(x1) q(xj) for reasonable values of N may all be extremely small, or a few may be extremely large. In estimating the probability of an event A, however, these difficulties may not be as serious since the only values of w(x) which are important are those for which x -A. Since the methods proposed by Trotter & Tukey (1956) for the estimation of conditional expectations require the use of importance sampling, the same difficulties may be encountered in their use. (iii) Use a simulation technique; that is, if it is difficult to sample directly from p(x) or if p(x) is unknown, sample from some distribution q(y) and obtain the sample x values as some function of the corresponding y values. If we want samples from the conditional dis" ] }
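The Metropolis weights mentioned in the related-work paragraph above have a one-line construction: for an edge {i, j}, W_ij = 1/(1 + max(deg_i, deg_j)), and each diagonal entry absorbs the remaining mass so that every row sums to one. The sketch below builds these weights for a fixed undirected graph and runs the iteration x <- W x; it covers only the fixed-topology case, so it illustrates the weight rule itself rather than the random-topology weight design problem studied in the record above.

```python
def metropolis_weights(n, edges):
    """Metropolis weights: W[i][j] = 1/(1 + max(deg_i, deg_j)) on edges,
    with W[i][i] chosen so that every row sums to one."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i][j] = W[j][i] = w
    for i in range(n):
        W[i][i] = 1.0 - sum(W[i])
    return W

def consensus(x, W, iters=200):
    # Deterministic averaging iteration x <- W x on a fixed topology.
    for _ in range(iters):
        x = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small connected example graph
x0 = [1.0, 4.0, 2.0, 7.0]
print("average:", sum(x0) / len(x0))
print("state after 200 iterations:", consensus(x0, metropolis_weights(4, edges)))
```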
0906.4258
2952883129
The most accurate predictions are typically obtained by learning machines with complex feature spaces (as induced, e.g., by kernels). Unfortunately, such decision rules are hardly accessible to humans and cannot easily be used to gain insights about the application domain. Therefore, one often resorts to linear models in combination with variable selection, thereby sacrificing some predictive power for presumptive interpretability. Here, we introduce the Feature Importance Ranking Measure (FIRM), which by retrospective analysis of arbitrary learning machines allows one to achieve both excellent predictive performance and superior interpretation. In contrast to standard raw feature weighting, FIRM takes the underlying correlation structure of the features into account. Thereby, it is able to discover the most relevant features, even if their appearance in the training data is entirely prevented by noise. The desirable properties of FIRM are investigated analytically and illustrated in simulations.
Another approach is to measure the importance of a feature in terms of a sensitivity analysis @cite_3 . This is both "universal" and "objective". However, it clearly does not take the indirect effects into account: for example, the change of @math may imply a change of some @math (e.g., due to correlation), which may also impact @math and thereby augment or diminish the net effect. (A toy numerical comparison of sensitivity-based and correlation-aware importance is sketched after this record.)
{ "cite_N": [ "@cite_3" ], "mid": [ "1678356000" ], "abstract": [ "Function estimation approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such TreeBoost models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Shapire and Friedman, Hastie and Tibshirani are discussed." ] }
0906.4258
2952883129
The most accurate predictions are typically obtained by learning machines with complex feature spaces (as induced, e.g., by kernels). Unfortunately, such decision rules are hardly accessible to humans and cannot easily be used to gain insights about the application domain. Therefore, one often resorts to linear models in combination with variable selection, thereby sacrificing some predictive power for presumptive interpretability. Here, we introduce the Feature Importance Ranking Measure (FIRM), which by retrospective analysis of arbitrary learning machines allows one to achieve both excellent predictive performance and superior interpretation. In contrast to standard raw feature weighting, FIRM takes the underlying correlation structure of the features into account. Thereby, it is able to discover the most relevant features, even if their appearance in the training data is entirely prevented by noise. The desirable properties of FIRM are investigated analytically and illustrated in simulations.
Here we follow the related but more "intelligent" idea of @cite_5 : to assess the importance of a feature by estimating its total impact on the score of a trained predictor. While @cite_5 proposes this for binary features that arise in the context of sequence analysis, the purpose of this paper is to generalize it to real-valued features and to theoretically investigate some properties of this approach. It turns out (proof in ) that under normality assumptions on the input features, FIRM generalizes , as the latter is a first-order approximation of FIRM, and because FIRM also takes the correlation structure into account.
{ "cite_N": [ "@cite_5" ], "mid": [ "2027582332" ], "abstract": [ "Motivation: At the heart of many important bioinformatics problems, such as gene finding and function prediction, is the classification of biological sequences. Frequently the most accurate classifiers are obtained by training support vector machines (SVMs) with complex sequence kernels. However, a cumbersome shortcoming of SVMs is that their learned decision rules are very hard to understand for humans and cannot easily be related to biological facts. Results: To make SVM-based sequence classifiers more accessible and profitable, we introduce the concept of positional oligomer importance matrices (POIMs) and propose an efficient algorithm for their computation. In contrast to the raw SVM feature weighting, POIMs take the underlying correlation structure of k-mer features induced by overlaps of related k-mers into account. POIMs can be seen as a powerful generalization of sequence logos: they allow to capture and visualize sequence patterns that are relevant for the investigated biological phenomena. Availability: All source code, datasets, tables and figures are available at http: www.fml.tuebingen.mpg.de raetsch projects POIM. Contact: ed.refohnuarf.tsrif@grubnennoS.nereoS Supplementary information: Supplementary data are available at Bioinformatics online." ] }
0906.3643
1679155438
The Banzhaf index, Shapley-Shubik index and other voting power indices measure the importance of a player in a coalitional game. We consider a simple coalitional game called the spanning connectivity game (SCG) based on an undirected, unweighted multigraph, where edges are players. We examine the computational complexity of computing the voting power indices of edges in the SCG. It is shown that computing Banzhaf values and Shapley-Shubik indices is #P-complete for SCGs. Interestingly, Holler indices and Deegan-Packel indices can be computed in polynomial time. Among other results, it is proved that Banzhaf indices can be computed in polynomial time for graphs with bounded treewidth. It is also shown that for any reasonable representation of a simple game, a polynomial time algorithm to compute the Shapley-Shubik indices implies a polynomial time algorithm to compute the Banzhaf indices. As a corollary, computing the Shapley value is #P-complete for simple games represented by the set of minimal winning coalitions, Threshold Network Flow Games, Vertex Connectivity Games and Coalitional Skill Games.
Power indices such as the Banzhaf and Shapley-Shubik indices have been extensively used to gauge the power of a player in different coalitional games such as weighted voting games @cite_15 and corporate networks @cite_23 . These indices have recently been used in network flow games @cite_3 , where the edges in the graph have capacities and the power index of an edge signifies the influence that an edge has in enabling a flow from the source to the sink. Voting power indices have also been examined in vertex connectivity games @cite_9 on undirected, unweighted graphs; there the players are nodes, which are partitioned into primary, standard, and backbone classes.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_3", "@cite_23" ], "mid": [ "2143039680", "1570872913", "2101522887", "1977013508" ], "abstract": [ "We consider computational aspects of a game theoretic approach to network reliability. Consider a network where failure of one node may disrupt communication between two other nodes. We model this network as a simple coalitional game, called the vertex Connectivity Game (CG). In this game, each agent owns a vertex, and controls all the edges going to and from that vertex. A coalition of agents wins if it fully connects a certain subset of vertices in the graph, called the primary vertices. We show that power indices, which express an agent's ability to affect the outcome of the vertex connectivity game, can be used to identify significant possible points of failure in the communication network, and can thus be used to increase network reliability. We show that in general graphs, calculating the Banzhaf power index is #P-complete, but suggest a polynomial algorithm for calculating this index in trees. We also show a polynomial algorithm for computing the core of a CG, which allows a stable division of payments to coalition agents.", "We study the complexity of the following problem: Given two weighted voting games G? and G?? that each contain a player p, in which of these games is p's power index value higher? We study this problem with respect to both the Shapley-Shubik power index [16] and the Banzhaf power index [3,6]. Our main result is that for both of these power indices the problem is complete for probabilistic polynomial time (i.e., is PP-complete). We apply our results to partially resolve some recently proposed problems regarding the complexity of weighted voting games. We also show that, unlike the Banzhaf power index, the Shapley-Shubik power index is not #P-parsimonious-complete. This finding sets a hard limit on the possible strengthenings of a result of Deng and Papadimitriou [5], who showed that the Shapley-Shubik power index is #P-metric-complete.", "Preference aggregation is used in a variety of multiagent applications, and as a result, voting theory has become an important topic in multiagent system research. However, power indices (which reflect how much \"real power\" a voter has in a weighted voting system) have received relatively little attention, although they have long been studied in political science and economics. The Banzhaf power index is one of the most popular; it is also well-defined for any simple coalitional game. In this paper, we examine the computational complexity of calculating the Banzhaf power index within a particular multiagent domain, a network flow game. Agents control the edges of a graph; a coalition wins if it can send a flow of a given size from a source vertex to a target vertex. The relative power of each edge agent reflects its significance in enabling such a flow, and in real-world networks could be used, for example, to allocate resources for maintaining parts of the network. We show that calculating the Banzhaf power index of each agent in this network flow domain is #P-complete. We also show that for some restricted network flow domains there exists a polynomial algorithm to calculate agents' Banzhaf power indices.", "This paper proposes to rely on power indices to measure the amount of control held by individual shareholders in corporate networks. 
The value of the indices is determined by a complex voting game viewed as the composition of interlocked weighted majority games; the compound game reflects the structure of shareholdings. The paper describes an integrated algorithmic approach which allows to deal efficiently with the complexity of computing power indices in shareholding networks, irrespective of their size or structure. In particular, the approach explicitly accounts for the presence of float and of cyclic shareholding relationships. It has been successfully applied to the analysis of real-world financial networks." ] }
0906.3643
1679155438
The Banzhaf index, Shapley-Shubik index and other voting power indices measure the importance of a player in a coalitional game. We consider a simple coalitional game called the spanning connectivity game (SCG) based on an undirected, unweighted multigraph, where edges are players. We examine the computational complexity of computing the voting power indices of edges in the SCG. It is shown that computing Banzhaf values and Shapley-Shubik indices is #P-complete for SCGs. Interestingly, Holler indices and Deegan-Packel indices can be computed in polynomial time. Among other results, it is proved that Banzhaf indices can be computed in polynomial time for graphs with bounded treewidth. It is also shown that for any reasonable representation of a simple game, a polynomial time algorithm to compute the Shapley-Shubik indices implies a polynomial time algorithm to compute the Banzhaf indices. As a corollary, computing the Shapley value is #P-complete for simple games represented by the set of minimal winning coalitions, Threshold Network Flow Games, Vertex Connectivity Games and Coalitional Skill Games.
The study of cooperative games in combinatorial domains is widespread in operations research @cite_22 @cite_14 . Spanning network games have been examined previously @cite_20 @cite_1 , but they are treated differently, with weighted graphs and nodes as players (not edges, as here). The SCG is related to the all-terminal reliability model, a non-game-theoretic model that is relevant in broadcasting @cite_7 @cite_8 . Whereas the reliability of a network concerns the overall probability of a network being connected, this paper concentrates on resource allocation to the edges. A game-theoretic approach can provide fair and stable outcomes in a strategic setting. (A brute-force computation of Banzhaf values on a toy multigraph is sketched after this record.)
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_20" ], "mid": [ "1562095019", "1999587457", "2115826669", "1991536882", "1969364062", "" ], "abstract": [ "Preface. 1. Cooperative Games and Solution Concepts. 2. Linear Programming Games. 3. Assignment Games and Permutation Games. 4. Sequencing Games and Generalizations. 5. Travelling Salesman Games and Routing Games. 6. Minimum Cost Spanning Tree Games. 7. Location Games. References. Index.", "This paper surveys the research area of cooperative games associated with several types of operations research problems in which various decision makers (players) are involved.Cooperating players not only face a joint optimisation problem in trying, e.g., to minimise total joint costs, but also face an additional allocation problem in how to distribute these joint costs back to the individual players.This interplay between optimisation and allocation is the main subject of the area of operations research games.It is surveyed on the basis of a distinction between the nature of the underlying optimisation problem: connection, routing, scheduling, production and inventory.", "The class of @math -complete problems is a class of computationally eqivalent counting problems (defined by the author in a previous paper) that are at least as difficult as the @math -complete problems. Here we show, for a large number of natural counting problems for which there was no previous indication of intractability, that they belong to this class. The technique used is that of polynomial time reduction with oracles via translations that are of algebraic or arithmetic nature.", "This paper presents an overview of results related to the computational complexity of network reliability analysis problems. Network reliability analysis problems deal with the determination of reliability measures for stochastic networks. We show how these problems are related to the more familiar computational network problems of recognizing certain subnetworks, finding optimal subnetworks, and counting certain subnetworks. We use these relationships to show that the k-terminal, the 2-terminal, and the all-terminal network reliability analysis problems are at least as hard as the renowned set of computationally difficult problems, NP-Complete. Finally, we discuss the impact of these results on how one should approach problem solving in this area.", "Spanning network games, which are a generalization of minimum cost spanning tree games, were introduced by Granot and Maschler (1991), who showed that these games are always monotonic. In this paper a subclass of spanning network games is introduced, namely simplex games, and it is shown that every monotonic game is a simplex game. Hence, the class of spanning network games coincides with the class of monotonic games.", "" ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non-1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space-efficient and faster (in query evaluation) than the plain relational representation.
One of the first attempts to provide information retrieval functionality, such as keyword and proximity searches, by using user-defined operators is described in @cite_1 . Some years later, the first IR system over a DBMS was presented @cite_5 . Relevance ranking queries were implemented using unchanged SQL on an AT&T DBC-1012 parallel machine for TREC-3. They found that the DBMS overhead was somewhat high, but tolerable for a large-scale machine, emphasizing that using a DBMS can spread the workload across large numbers of processors. Recently, several approaches to merging DB structured data management with IR unstructured text search facilities have been proposed. According to @cite_4 , they can be classified into four different categories:
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_4" ], "mid": [ "1492184023", "29603796", "2141834653" ], "abstract": [ "In this our first year of TREC participation, we implemented an IR system using an AT&T DBC-1012 Model 4 parallel relational database machine. We started with the premise that a relational system could be used to implement an IR system. After implementing a prototype to verify that premise, we then began to investigate the performance of a parallel relational database system for this application. We only used the category B data, but our initial results are encouraging as processing load was balanced across the processors for a variety of different queries. We also tested the effect of query reduction on accuracy and found that queries can be reduced prior to their implementation without incurring a significant loss in precision recall. This reduction also serves to improve run-time performance. Finally, in a separate set of work, we implemented Damashek's n-gram algorithm for n=3 and were able to show similar results as found when n=5.", "", "While information retrieval(IR) and databases(DB) have been developed independently, there have been emerging requirements that both data management and efficient text retrieval should be supported simultaneously in an information system such as health care systems, bulletin boards, XML data management, and digital libraries. Recently DB-IR integration issue has been budded in the research field. The great divide between DB and IR has caused different manners in index maintenance for newly arriving documents. While DB has extended its SQL layer to cope with text fields due to lack of intact mechanism to build IR-like index, IR usually treats a block of new documents as a logical unit of index maintenance since it has no concept of integrity constraint. However, towards DB-IR integration, a transaction on adding or updating a document should include maintenance of the postings lists accompanied by the document - hence per-document basis transactional index maintenance. In this paper, performance of a few strategies for per-document basis transaction for inserting documents -- direct index update, stand-alone auxiliary index and pulsing auxiliary index - will be evaluated. The result tested on the KRISTAL-IRMS shows that the pulsing auxiliary strategy, where long postings lists in the auxiliary index are in-place updated to the main index whereas short lists are directly updated in the auxiliary index, can be a challenging candidate for text field indexing in DB-IR integration." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non-1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space-efficient and faster (in query evaluation) than the plain relational representation.
This approach integrates DB and IR engines at the application level @cite_22 . Query evaluation and indexing are provided by the IR engine, while the DBMS manages the documents and other metadata. According to @cite_4 , the basic drawback of this approach is the difficulty of keeping the document contents stored in the DBMS synchronized with the IR engine's index.
{ "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2141834653", "2294626234" ], "abstract": [ "While information retrieval(IR) and databases(DB) have been developed independently, there have been emerging requirements that both data management and efficient text retrieval should be supported simultaneously in an information system such as health care systems, bulletin boards, XML data management, and digital libraries. Recently DB-IR integration issue has been budded in the research field. The great divide between DB and IR has caused different manners in index maintenance for newly arriving documents. While DB has extended its SQL layer to cope with text fields due to lack of intact mechanism to build IR-like index, IR usually treats a block of new documents as a logical unit of index maintenance since it has no concept of integrity constraint. However, towards DB-IR integration, a transaction on adding or updating a document should include maintenance of the postings lists accompanied by the document - hence per-document basis transactional index maintenance. In this paper, performance of a few strategies for per-document basis transaction for inserting documents -- direct index update, stand-alone auxiliary index and pulsing auxiliary index - will be evaluated. The result tested on the KRISTAL-IRMS shows that the pulsing auxiliary strategy, where long postings lists in the auxiliary index are in-place updated to the main index whereas short lists are directly updated in the auxiliary index, can be a challenging candidate for text field indexing in DB-IR integration.", "Databases (DB) and information retrieval (IR) have evolved as separate fields. However, modern applications such as customer support, health care, and digital libraries require capabilities for both data and text management. In such settings, traditional DB queries, in SQL or XQuery, are not flexible enough to handle applicationspecific scoring and ranking. IR systems, on the other hand, lack efficient support for handling structured parts of the data and metadata, and do not give the application developer adequate control over the ranking function. This paper analyzes the requirements of advanced text- and data-rich applications for an integrated platform. The core functionality must be manageable, and the API should be easy to program against. A particularly important issue that we highlight is how to reconcile flexibility in scoring and ranking models with optimizability, in order to accommodate a wide variety of target applications efficiently. We discuss whether such a system needs to be designed from scratch, or can be incrementally built on top of existing architectures. The results of our analyses are cast into a series of challenges to the DB and IR communities." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non 1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space efficient and faster (in query evaluation) than the plain relational representation.
Most DBMSs offer extensible architectures with a high-level interface, which can be used to integrate IR functionalities. Although such extensions can be implemented easily, according to @cite_9 this approach is not recommended when high performance is desired. Systems based on this approach include @cite_25 (a scalable IR system for frequently changing data sets), @cite_26 (a collaborative customer support application, where a DBMS holds all the data and an external server maintains the index), @cite_7 (an Oracle-based engine for XML and plain text data with top-k retrieval) and @cite_18 (a hypermedia retrieval engine using probabilistic Datalog).
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_9", "@cite_25" ], "mid": [ "2151155666", "2141788816", "2137523786", "2166756008", "2025578095" ], "abstract": [ "HySpirit is a retrieval engine for hypermedia retrieval integrating concepts from information retrieval (IR) and deductive databases. The logical view on IR models retrieval as uncertain inference, for which we use probabilistic reasoning. Since the expressiveness of classical IR models is not sufficient for hypermedia retrieval, HySpirit is based on a probabilistic version of Datalog. In hypermedia retrieval, different nodes may contain contradictory information; thus, we introduce probabilistic four-valued Datalog. In order to support fact queries as well as content-based retrieval, HySpirit is based on an open world assumption, but allows for predicate-specific closed world assumptions. For performing efficient retrieval on large databases, our system provides access to external data. We demonstrate the application of HySpirit by giving examples for retrieval on images, structured documents and large databases.", "For applications that involve rapidly changing textual data and also require traditional DBMS capabilities, current systems are unsatisfactory. We describe a hybrid IR-DB system that serves as the basis for the QUIQ-Connect product, a collaborative customer support application. We present a novel query paradigm and system architecture, along with performance results.", "This paper presents a novel engine, coined TopX, for efficient ranked retrieval of XML documents over semistructured but nonschematic data collections. The algorithm follows the paradigm of threshold algorithms for top-k query processing with a focus on inexpensive sequential accesses to index lists and only a few judiciously scheduled random accesses. The difficulties in applying the existing top-k algorithms to XML data lie in 1) the need to consider scores for XML elements while aggregating them at the document level, 2) the combination of vague content conditions with XML path conditions, 3) the need to relax query conditions if too few results satisfy all conditions, and 4) the selectivity estimation for both content and structure conditions and their impact on evaluation strategies. TopX addresses these issues by precomputing score and path information in an appropriately designed index structure, by largely avoiding or postponing the evaluation of expensive path conditions so as to preserve the sequential access pattern on index lists, and by selectively scheduling random accesses when they are cost-beneficial. In addition, TopX can compute approximate top-k results using probabilistic score estimators, thus speeding up queries with a small and controllable loss in retrieval precision.", "We propose the notion of tight-coupling [K. , (1999)] to add new data types into the DBMS engine. In this paper, we introduce the Odysseus ORDBMS and present its tightly-coupled IR features (US patented). We demonstrate a Web search engine capable of managing 20 million Web pages in a non-parallel configuration using Odysseus.", "Our current concern is a scalable infrastructure for information retrieval (IR) with up-to-date retrieval results in the presence of frequent, continuous updates. Timely processing of updates is important with novel application domains, e.g., e-commerce. We want to use off-the-self hardware and software as much as possible. These issues are challenging, given the additional requirement that the resulting system must scale well. 
We have built PowerDB-IR, a system that has the characteristics sought. This paper describes its design, implementation, and evaluation. PowerDB-IR is a coordination layer for a database cluster. The rationale behind a database cluster is to 'scale-out', i.e., to add further cluster nodes, whenever necessary for better performance. We build on IR-to-database mappings and service decomposition to support high-level parallelism. We follow a three-tier architecture with the database cluster as the bottom layer for storage management. The middle tier provides IR-specific processing and update services. PowerDB-IR has the following features: It allows to insert and retrieve documents concurrently, and it ensures freshness with almost no overhead. Alternative physical data organization schemes provide adequate performance for different workloads. Query processing techniques for the different data organizations efficiently integrate the ranked retrieval results from the cluster nodes. We have run extensive experiments with our prototype using commercial database systems and middleware software products. The main result is that PowerDB-IR shows surprisingly ideal scalability and low response times." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non 1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space efficient and faster (in query evaluation) than the plain relational representation.
In this approach, new data types and functionality for IR features are integrated into the core of the DBMS engine, or the reverse (IRMS: Information Retrieval & Management System) @cite_4 . Tightly coupled systems include @cite_9 , an engine built over an ORDBMS, and @cite_3 , a system based on column-oriented storage management.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_3" ], "mid": [ "2166756008", "2141834653", "2134912727" ], "abstract": [ "We propose the notion of tight-coupling [K. , (1999)] to add new data types into the DBMS engine. In this paper, we introduce the Odysseus ORDBMS and present its tightly-coupled IR features (US patented). We demonstrate a Web search engine capable of managing 20 million Web pages in a non-parallel configuration using Odysseus.", "While information retrieval(IR) and databases(DB) have been developed independently, there have been emerging requirements that both data management and efficient text retrieval should be supported simultaneously in an information system such as health care systems, bulletin boards, XML data management, and digital libraries. Recently DB-IR integration issue has been budded in the research field. The great divide between DB and IR has caused different manners in index maintenance for newly arriving documents. While DB has extended its SQL layer to cope with text fields due to lack of intact mechanism to build IR-like index, IR usually treats a block of new documents as a logical unit of index maintenance since it has no concept of integrity constraint. However, towards DB-IR integration, a transaction on adding or updating a document should include maintenance of the postings lists accompanied by the document - hence per-document basis transactional index maintenance. In this paper, performance of a few strategies for per-document basis transaction for inserting documents -- direct index update, stand-alone auxiliary index and pulsing auxiliary index - will be evaluated. The result tested on the KRISTAL-IRMS shows that the pulsing auxiliary strategy, where long postings lists in the auxiliary index are in-place updated to the main index whereas short lists are directly updated in the auxiliary index, can be a challenging candidate for text field indexing in DB-IR integration.", "The Matrix Framework is a recent proposal by Information Retrieval (IR) researchers to flexibly represent information retrieval models and concepts in a single multi-dimensional array framework. We provide computational support for exactly this framework with the array database system SRAM (Sparse Relational Array Mapping), that works on top of a DBMS. Information retrieval models can be specified in its comprehension-based array query language, in a way that directly corresponds to the underlying mathematical formulas. SRAM efficiently stores sparse arrays in (compressed) relational tables and translates and optimizes array queries into relational queries. In this work, we describe a number of array query optimization rules. To demonstrate their effect on text retrieval, we apply them in the TREC TeraByte track (TREC-TB) efficiency task, using the Okapi BM25 model as our example. It turns out that these optimization rules enable SRAM to automatically translate the BM25 array queries into the relational equivalent of inverted list processing including compression, score materialization and quantization, such as employed by custom-built IR systems. The use of the high-performance MonetDB X100 relational backend, that provides transparent database compression, allows the system to achieve very fast response times with good precision and low resource usage." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non 1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space efficient and faster (in query evaluation) than the plain relational representation.
This approach suggests developing new DB-IR architectures from scratch @cite_15 @cite_22 , aiming at structural data independence, generalized scoring, and flexible and powerful query languages.
{ "cite_N": [ "@cite_15", "@cite_22" ], "mid": [ "2155228754", "2294626234" ], "abstract": [ "This paper summarizes the salient aspects of the SIGMOD 2005 panel on \"Databases and Information Retrieval: Rethinking the Great Divide\". The goal of the panel was to discuss whether we should rethink data management systems architectures to truly merge Database (DB) and Information Retrieval (IR) technologies. The panel had very high attendance and generated lively discussions.", "Databases (DB) and information retrieval (IR) have evolved as separate fields. However, modern applications such as customer support, health care, and digital libraries require capabilities for both data and text management. In such settings, traditional DB queries, in SQL or XQuery, are not flexible enough to handle applicationspecific scoring and ranking. IR systems, on the other hand, lack efficient support for handling structured parts of the data and metadata, and do not give the application developer adequate control over the ranking function. This paper analyzes the requirements of advanced text- and data-rich applications for an integrated platform. The core functionality must be manageable, and the API should be easy to program against. A particularly important issue that we highlight is how to reconcile flexibility in scoring and ranking models with optimizability, in order to accommodate a wide variety of target applications efficiently. We discuss whether such a system needs to be designed from scratch, or can be incrementally built on top of existing architectures. The results of our analyses are cast into a series of challenges to the DB and IR communities." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non 1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space efficient and faster (in query evaluation) than the plain relational representation.
The approach that we investigate in this paper falls mostly under the loose coupling approach. No special data types are introduced, and the retrieval models are implemented on top (in a separate API that connects to the DBMS through JDBC). However, we do exploit the SQL:1999 ARRAY type, which allows storing a collection of values directly in a column of a table, and the (8.2 and above) data type, which is useful for storing semi-structured data and fields that are variable in number. To the best of our knowledge, the only related work is that of @cite_9 and @cite_3 . The difference with our work is that these systems either adopt a tight-coupling approach, where the DBMS is extended with new data types, or implement an inverted file-like data structure at the physical layer. Specifically, Odysseus adds a B-tree to the posting list of each term in order to speed up the lookup of document identifiers and the evaluation of multi-word queries. However, detailed experimental results regarding the space overhead and the speedup of this approach are not reported.
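As an illustration of the ARRAY-based representation described above, the following sketch assumes PostgreSQL with the psycopg2 driver and a hypothetical inverted_index table; it shows the idea of storing a whole posting list in one column, not the exact schema used in this paper.

    import psycopg2

    # Hypothetical non-1NF schema: one row per term, the posting list kept
    # as an integer array in a single column.
    conn = psycopg2.connect("dbname=ir_demo")   # assumed connection string
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE inverted_index (
            term     TEXT PRIMARY KEY,
            postings INTEGER[]          -- document identifiers
        )
    """)
    # psycopg2 adapts Python lists to SQL arrays automatically.
    cur.execute("INSERT INTO inverted_index VALUES (%s, %s)",
                ("database", [1, 4, 9, 27]))
    conn.commit()

    # Looking up the posting list of a term is a single-row access; the whole
    # list comes back as one Python list instead of many 1NF rows.
    cur.execute("SELECT postings FROM inverted_index WHERE term = %s",
                ("database",))
    print(cur.fetchone()[0])            # [1, 4, 9, 27]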
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2166756008", "2134912727" ], "abstract": [ "We propose the notion of tight-coupling [K. , (1999)] to add new data types into the DBMS engine. In this paper, we introduce the Odysseus ORDBMS and present its tightly-coupled IR features (US patented). We demonstrate a Web search engine capable of managing 20 million Web pages in a non-parallel configuration using Odysseus.", "The Matrix Framework is a recent proposal by Information Retrieval (IR) researchers to flexibly represent information retrieval models and concepts in a single multi-dimensional array framework. We provide computational support for exactly this framework with the array database system SRAM (Sparse Relational Array Mapping), that works on top of a DBMS. Information retrieval models can be specified in its comprehension-based array query language, in a way that directly corresponds to the underlying mathematical formulas. SRAM efficiently stores sparse arrays in (compressed) relational tables and translates and optimizes array queries into relational queries. In this work, we describe a number of array query optimization rules. To demonstrate their effect on text retrieval, we apply them in the TREC TeraByte track (TREC-TB) efficiency task, using the Okapi BM25 model as our example. It turns out that these optimization rules enable SRAM to automatically translate the BM25 array queries into the relational equivalent of inverted list processing including compression, score materialization and quantization, such as employed by custom-built IR systems. The use of the high-performance MonetDB X100 relational backend, that provides transparent database compression, allows the system to achieve very fast response times with good precision and low resource usage." ] }
0906.3112
1516174723
One of the distinctive features of Information Retrieval systems compared to Database Management systems is that they offer better compression for posting lists, resulting in better I/O performance and thus faster query evaluation. In this paper, we introduce database representations of the index that reduce the size (and thus the disk I/Os) of the posting lists. This is not achieved by redesigning the DBMS, but by exploiting the non 1NF features that existing Object-Relational DBM systems (ORDBMS) already offer. Specifically, four different database representations are described and detailed experimental results for one million pages are reported. Three of these representations are one order of magnitude more space efficient and faster (in query evaluation) than the plain relational representation.
In comparison to @cite_27 , this paper (a) contains a detailed discussion of all related work, (b) introduces and investigates an additional database representation (which yields smaller tables), (c) reports experimental results over a corpus that is one order of magnitude bigger, and (d) reports experimental results for document-based access tasks.
{ "cite_N": [ "@cite_27" ], "mid": [ "2156811961" ], "abstract": [ "Engineering a Web search engine offering effective and efficient information retrieval is a challenging task. Mitos is a recently developed search engine that offers a wide spectrum of functionalities. A rather unusual design choice is that its index is based on an object-relational database system, instead of the classical inverted file. This paper discusses the benefits and the drawbacks of this choice (compared to inverted files), proposes three different database representations, and reports comparative experimental results. Two of these representations are one order of magnitude more space economical and two orders of magnitude faster in query evaluation, than the plain relational representation." ] }
0906.2135
2146686806
Aggregations of Web resources are increasingly important in scholarship as it adopts new methods that are data-centric, collaborative, and networked-based. The same notion of aggregations of resources is common to the mashed-up, socially networked information environment of Web 2.0. We present a mechanism to identify and describe aggregations of Web resources that has resulted from the Open Archives Initiative - Object Reuse and Exchange (OAI-ORE) project. The OAI-ORE specifications are based on the principles of the Architecture of the World Wide Web, the Semantic Web, and the Linked Data effort. Therefore, their incorporation into the cyberinfrastructure that supports eScholarship will ensure the integration of the products of scholarly research into the Data Web.
Some Web navigator approaches work in the opposite granular direction, supporting the decomposition of a single Web resource (i.e., an HTML page) into multiple resources. This can be done automatically, such as for segmented display on limited devices such as PDAs @cite_40 or for recovering structured records from Web pages @cite_2 . Decomposition can also be done manually, such as for reuse and sharing of parts of a Web page (e.g., ClipMarks http://clipmarks.com ). All these approaches, whether manual or automatic, can be thought of as adding (or inferring) HTML anchors where none exist. They assign identities to the newly created resources (fragments of the original resource), but they provide no way to describe the original resource as an aggregation of these new resources, nor do they allow expressing relationships among them.
{ "cite_N": [ "@cite_40", "@cite_2" ], "mid": [ "2020254000", "2166407869" ], "abstract": [ "We consider the problem of segmenting a webpage into visually and semantically cohesive pieces. Our approach is based on formulating an appropriate optimization problem on weighted graphs, where the weights capture if two nodes in the DOM tree should be placed together or apart in the segmentation; we present a learning framework to learn these weights from manually labeled data in a principled manner. Our work is a significant departure from previous heuristic and rule-based solutions to the segmentation problem. The results of our empirical analysis bring out interesting aspects of our framework, including variants of the optimization problem and the role of learning.", "Extraction of information from unstructured or semistructured Web documents often requires a recognition and delimitation of records. (By “record” we mean a group of information relevant to some entity.) Without first chunking documents that contain multiple records according to record boundaries, extraction of record information will not likely succeed. In this paper we describe a heuristic approach to discovering record boundaries in Web documents. In our approach, we capture the structure of a document as a tree of nested HTML tags, locate the subtree containing the records of interest, identify candidate separator tags within the subtree using five independent heuristics, and select a consensus separator tag based on a combined heuristic. Our approach is fast (runs linearly for practical cases within the context of the larger data-extraction problem) and accurate (100 in the experiments we conducted)." ] }
0906.2135
2146686806
Aggregations of Web resources are increasingly important in scholarship as it adopts new methods that are data-centric, collaborative, and networked-based. The same notion of aggregations of resources is common to the mashed-up, socially networked information environment of Web 2.0. We present a mechanism to identify and describe aggregations of Web resources that has resulted from the Open Archives Initiative - Object Reuse and Exchange (OAI-ORE) project. The OAI-ORE specifications are based on the principles of the Architecture of the World Wide Web, the Semantic Web, and the Linked Data effort. Therefore, their incorporation into the cyberinfrastructure that supports eScholarship will ensure the integration of the products of scholarly research into the Data Web.
In the course of the OAI-ORE effort, we also attempted to model aggregations as Atom feeds, not entries @cite_6 . We ultimately decided that was the wrong granularity, especially since common Web 2.0 reuse scenarios, including use with the Atom Publishing Protocol, work at the level of Atom entries. The Atom Syndication Format was preferred over the various RSS formats in anticipation of using the Atom Publishing Protocol @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_6" ], "mid": [ "151954861", "1664332877" ], "abstract": [ "The Atom Publishing Protocol (AtomPub) is an application-level protocol for publishing and editing Web resources. The protocol is based on HTTP transfer of Atom-formatted representations. The Atom format is documented in the Atom Syndication Format. [STANDARDS-TRACK]", "The OAI Object Reuse and Exchange (OAI-ORE) framework recasts the repository-centric notion of digital object to a bounded aggregation of Web resources. In this manner, digital library content is more integrated with the Web architecture, and thereby more accessible to Web applications and clients. This generalized notion of an aggregation that is independent of repository containment conforms more closely with notions in eScience and eScholarship, where content is distributed across multiple services and databases. We provide a motivation for the OAI-ORE project, review previous interoperability efforts, describe draft ORE specifications and report on promising results from early experimentation that illustrate improved interoperability and reuse of digital objects." ] }
0906.2274
2162318391
Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
The 2D histogram based on intensity and gradient magnitude was introduced in a seminal paper by Kindlmann and Durkin @cite_5 , and extended to multi-dimensional transfer functions in @cite_16 . Lundström et al. @cite_2 introduced local histograms, which utilize a priori knowledge about spatial relationships to automatically differentiate between different tissue types. Šereda et al. @cite_7 introduced the LH histogram to classify material boundaries.
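For reference, the intensity/gradient-magnitude histogram that these works build on can be computed with a few lines of numpy; the synthetic volume and the bin counts below are arbitrary placeholders, not values taken from the cited papers.

    import numpy as np

    # A random volume stands in for a CT/MRI scan.
    vol = np.random.rand(64, 64, 64).astype(np.float32)

    # Central-difference gradient and its magnitude per voxel.
    gx, gy, gz = np.gradient(vol)
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)

    # 2D histogram over (intensity, gradient magnitude); in real data,
    # arch-shaped ridges in this histogram indicate material boundaries.
    hist, xedges, yedges = np.histogram2d(vol.ravel(), gmag.ravel(),
                                          bins=(256, 256))
    print(hist.shape)   # (256, 256)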
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_7", "@cite_2" ], "mid": [ "2161955943", "2104451730", "2127534511", "1498299056" ], "abstract": [ "Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process makes generating an informative rendering challenging. In particular, the specification of the transfer function-the mapping from data values to renderable optical properties-is frequently a time consuming and unintuitive task. Ideally, the data being visualized should itself suggest an appropriate transfer function that brings out the features of interest without obscuring them with elements of little importance. We demonstrate that this is possible for a large class of scalar volume data, namely that where the regions of interest are the boundaries between different materials. A transfer function which makes boundaries readily visible can be generated from the relationship between three quantities: the data value and its first and second directional derivatives along the gradient direction. A data structure we term the histogram volume captures the relationship between these quantities throughout the volume in a position independent, computationally efficient fashion. We describe the theoretical importance of the quantities measured by the histogram volume, the implementation issues in its calculation, and a method for semiautomatic transfer function generation through its analysis. We conclude with results of the method on both idealized synthetic data as well as real world datasets.", "Most direct volume renderings produced today employ one-dimensional transfer functions, which assign color and opacity to the volume based solely on the single scalar quantity which comprises the dataset. Though they have not received widespread attention, multi-dimensional transfer functions are a very effective way to extract specific material boundaries and convey subtle surface properties. However, identifying good transfer functions is difficult enough in one dimension, let alone two or three dimensions. This paper demonstrates an important class of three-dimensional transfer functions for scalar data (based on data value, gradient magnitude, and a second directional derivative), and describes a set of direct manipulation widgets which make specifying such transfer functions intuitive and convenient. We also describe how to use modern graphics hardware to interactively render with multi-dimensional transfer functions. The transfer functions, widgets, and hardware combine to form a powerful system for interactive volume exploration.", "A crucial step in volume rendering is the design of transfer functions that highlights those aspects of the volume data that are of interest to the user. For many applications, boundaries carry most of the relevant information. Reliable detection of boundaries is often hampered by limitations of the imaging process, such as blurring and noise. We present a method to identify the materials that form the boundaries. These materials are then used in a new domain that facilitates interactive and semiautomatic design of appropriate transfer functions. We also show how the obtained boundary information can be used in region-growing-based segmentation.", "Direct Volume Rendering (DVR) is known to be of diagnostic value in the analysis of medical data sets. However, its deployment in everyday clinical use has so far been limited. 
Two major challenges are that the current methods for Transfer Function (TF) construction are too complex and that the tissue separation abilities of the TF need to be extended. In this paper we propose the use of histogram analysis in local neighborhoods to address both these conflicting problems. To reduce TF construction difficulty, we introduce Partial Range Histograms in an automatic tissue detection scheme, which in connection with Adaptive Trapezoids enable efficient TF design. To separate tissues with overlapping intensity ranges, we propose a fuzzy classification based on local histograms as a second TF dimension. This increases the power of the TF, while retaining intuitive presentation and interaction." ] }
0906.2274
2162318391
Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
Tzeng et al. @cite_10 suggest an interactive visualization system which allows the user to mark regions of interest by roughly painting the boundaries on a few slice images. During painting, the marked regions are used to train a neural network for multi-dimensional classification. Del Rio et al. @cite_6 adapt this approach to specify transfer functions in an augmented reality environment for medical applications. @cite_15 apply general regression neural networks to classify each point of a dataset into a certain class; this information is later used for assigning optical properties (e.g., color). Similarly, @cite_14 use different methods to classify each point of a dataset and then use this classification to assign optical properties to voxels. While these approaches utilize neural networks to assign optical properties, the method presented here aims at classifying datasets into categories. The category information is subsequently used as prior knowledge for visualizing the dataset.
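A minimal sketch of the dataset-level classification contrasted here could look as follows; the flattened 2D histograms, the category labels and the scikit-learn MLP are illustrative assumptions and do not reproduce the authors' network or training setup.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Each training example is a flattened (intensity, gradient-magnitude)
    # histogram of one dataset; labels are dataset categories. Random data
    # stands in for real histograms in this sketch.
    rng = np.random.default_rng(0)
    X_train = rng.random((12, 64 * 64))     # 12 datasets, 64x64 histograms
    y_train = ["head", "abdomen", "angio", "rest"] * 3

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)

    X_new = rng.random((1, 64 * 64))        # histogram of an unseen dataset
    print(clf.predict(X_new))               # predicted category label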
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_10", "@cite_6" ], "mid": [ "2106321956", "2467542519", "1551898434", "2073616632" ], "abstract": [ "Classification of three-dimensional MRI images plays an important role in the volume reconstruction for a variety of medical image analysis, computer-aided diagnosis, three-dimensional reconstruction and visualization applications. By using density, gradient magnitude, and curvature as input parameters, and by defining a set of objective enter ions for the classification evaluation, a novel automatic classification method for volume reconstruction of MRI images by using GRNN is presented in this paper, experiment results demonstrate that the proposed method can not only efficient classify MRI images, but also alleviate people from the time-consuming parameter tune process of previous classification method.", "This paper analyzes how to introduce machine learning algorithms into the process of direct volume rendering. A conceptual framework for the optical property function elicitation process is proposed and particularized for the use of attribute-value classifiers. The process is evaluated in terms of accuracy and speed using four different off-the-shelf classifiers (J48, Naive Bayes, Simple Logistic and ECOC-Adaboost). The empirical results confirm the classification of biomedical datasets as a tough problem where an opportunity for further research emerges.", "In the traditional volume visualization paradigm, the user specifies a transfer function that assigns each scalar value to a color and opacity by defining an opacity and a color map function. The transfer function has two limitations. First, the user must define curves based on histogram and value rather than seeing and working with the volume itself. Second, the transfer function is inflexible in classifying regions of interest, where values at a voxel such as intensity and gradient are used to differentiate material, not talking into account additional properties such as texture and position. We describe an intuitive user interface for specifying the classification functions that consists of the users painting directly on sample slices of the volume. These painted regions are used to automatically define high-dimensional classification functions that can be implemented in hardware for interactive rendering. The classification of the volume is iteratively improved as the user paints samples, allowing intuitive and efficient viewing of materials of interest.", "In the visualization of 3D medical data, the appropriateness of the achieved result is highly dependent on the application. Therefore, an intuitive interaction with the user is of utter importance in order to determine the particular aim of the visualization. In this paper, we present a novel approach for the visualization of 3D medical data with volume rendering combined with AR-based user interaction. The utilization of augmented reality (AR), with the assistance of a set of simple tools, allows the direct manipulation in 3D of the rendered data. The proposed method takes into account regions of interest defined by the user and employs this information to automatically generate an adequate transfer function. Machine learning techniques are utilized for the automatic creation of transfer functions, which are to be used during the classification stage of the rendering pipeline. The validity of the proposed approach for medical applications is illustrated." ] }
0906.2274
2162318391
Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
@cite_3 classify CT scans of the brain into pathological classes (normal, blood, stroke) using a method firmly rooted in Bayes decision theory.
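For reference, the decision rule behind such a Bayesian classifier is the standard one below; the choice of features x and of the class-conditional densities is specific to @cite_3 and not reproduced here.

    \hat{\omega}(x) = \arg\max_{\omega_i} P(\omega_i \mid x)
                    = \arg\max_{\omega_i} p(x \mid \omega_i)\, P(\omega_i)

where x is the feature vector extracted from a scan and \omega_i ranges over the classes (normal, blood, stroke).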
{ "cite_N": [ "@cite_3" ], "mid": [ "2135649554" ], "abstract": [ "We present a principled method of obtaining a weighted similarity metric for 3D image retrieval, firmly rooted in Bayes decision theory. The basic idea is to determine a set of most discriminative features by evaluating how well they perform on the task of classifying images according to predefined semantic categories. We propose this indirect method as a rigorous way to solve the difficult feature selection problem that comes up in most content based image retrieval tasks. The method is applied to normal and pathological neuroradiological CT images, where we take advantage of the fact that normal human brains present an approximate bilateral symmetry which is often absent in pathological brains. The quantitative evaluation of the retrieval system shows promising results." ] }
0906.2274
2162318391
Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
@cite_4 also describe a 3D classification method, but their work focuses on material fractions rather than on the whole dataset. They fit the arch model to the LH histogram, parameterizing a single arch function by the expected pure-material intensities at the two sides of the edge (L, H) and a scale parameter. Since each peak in the LH histogram represents one type of transition, cluster membership is used to classify edge voxels into transition types.
{ "cite_N": [ "@cite_4" ], "mid": [ "2108703859" ], "abstract": [ "A fully automated method is presented to classify 3-D CT data into material fractions. An analytical scale-invariant description relating the data value to derivatives around Gaussian blurred step edges - arch model - is applied to uniquely combine robustness to noise, global signal fluctuations, anisotropic scale, noncubic voxels, and ease of use via a straightforward segmentation of 3-D CT images through material fractions. Projection of noisy data value and derivatives onto the arch yields a robust alternative to the standard computed Gaussian derivatives. This results in a superior precision of the method. The arch-model parameters are derived from a small, but over-determined, set of measurements (data values and derivatives) along a path following the gradient uphill and downhill starting at an edge voxel. The model is first used to identify the expected values of the two pure materials (named and ) and thereby classify the boundary. Second, the model is used to approximate the underlying noise-free material fractions for each noisy measurement. An iso-surface of constant material fraction accurately delineates the material boundary in the presence of noise and global signal fluctuations. This approach enables straightforward segmentation of 3-D CT images into objects of interest for computer-aided diagnosis and offers an easy tool for the design of otherwise complicated transfer functions in high-quality visualizations. The method is applied to segment a tooth volume for visualization and digital cleansing for virtual colonoscopy." ] }
0906.2274
2162318391
Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
@cite_9 perform classification using quadratic form distance functions on a special type of histogram (the shell and sector model) that captures the physical shape of the objects.
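The quadratic form distance used there has the standard form below, where h and h' are the two shape histograms and A is an application-specific bin-similarity matrix (its entries are not reproduced here):

    d_A(h, h') = \sqrt{(h - h')^{\top} A \, (h - h')}

The off-diagonal entries of A encode the similarity between different histogram bins, which makes the distance tolerant to small displacements and rotations of shapes.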
{ "cite_N": [ "@cite_9" ], "mid": [ "1593151701" ], "abstract": [ "Classification is one of the basic tasks of data mining in modern database applications including molecular biology, astronomy, mechanical engineering, medical imaging or meteorology. The underlying models have to consider spatial properties such as shape or extension as well as thematic attributes. We introduce 3D shape histograms as an intuitive and powerful similarity model for 3D objects. Particular flexibility is provided by using quadratic form distance functions in order to account for errors of measurement, sampling, and numerical rounding that all may result in small displacements and rotations of shapes. For query processing, a general filter-refinement architecture is employed that efficiently supports similarity search based on quadratic forms. An experimental evaluation in the context of molecular biology demonstrates both, the high classification accuracy of more than 90 and the good performance of the approach." ] }
0906.2212
2154109810
Heterogeneous networks play a key role in the evolution of communities and the decisions individuals make. These networks link different types of entities, for example, people and the events they attend. Network analysis algorithms usually project such networks onto simple graphs composed of entities of a single type. In the process, they conflate relations between entities of different types and lose important structural information. We develop a mathematical framework that can be used to compactly represent and analyze heterogeneous networks that combine multiple entity and link types. We generalize Bonacich centrality, which measures connectivity between nodes by the number of paths between them, to heterogeneous networks and use this measure to study network structure. Specifically, we extend the popular modularity-maximization method for community detection to use this centrality metric. We also rank nodes based on their connectivity to other nodes. One advantage of this centrality metric is that it has a tunable parameter we can use to set the length scale of interactions. Studying how rankings change with this parameter allows us to identify important nodes in the network. We apply the proposed method to analyze the structure of several heterogeneous networks. We show that exploiting additional sources of evidence corresponding to links between, as well as among, different entity types yields new insights into network structure.
Liben-Nowell and Kleinberg @cite_19 have shown that the Katz measure is the most effective measure for the link prediction task, outperforming hitting time, PageRank @cite_27 , and its variants. Unlike the Katz score, Bonacich centrality @cite_26 has remained relatively unknown in the computer science community. It parametrizes the Katz score with @math , a parameter that controls the weight of distant links and also sets the scale of the centrality measure. We have shown the benefit of using this parameter in the analysis of network structure.
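For concreteness, the two measures can be written as follows (standard textbook forms with our own notation: A is the adjacency matrix, \beta the attenuation parameter and \alpha a scaling factor; the cited papers may use different symbols):

    Katz(i, j) = \sum_{k=1}^{\infty} \beta^{k} (A^{k})_{ij}
    c(\alpha, \beta) = \alpha (I - \beta A)^{-1} A \mathbf{1}

The series converges for \beta smaller than the reciprocal of the largest eigenvalue of A; values of \beta close to that limit give weight to long paths, while small values restrict attention to short ones.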
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_26" ], "mid": [ "2148847267", "1854214752", "2087194317" ], "abstract": [ "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc.", "The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.", "Although network centrality is generally assumed to produce power, recent research shows that this is not the case in exchange networks. This paper proposes a generalization of the concept of centrality that accounts for both the usual positive relationship between power and centrality and 's recent exceptional results." ] }
0906.2212
2154109810
Heterogeneous networks play a key role in the evolution of communities and the decisions individuals make. These networks link different types of entities, for example, people and the events they attend. Network analysis algorithms usually project such networks onto simple graphs composed of entities of a single type. In the process, they conflate relations between entities of different types and lose important structural information. We develop a mathematical framework that can be used to compactly represent and analyze heterogeneous networks that combine multiple entity and link types. We generalize Bonacich centrality, which measures connectivity between nodes by the number of paths between them, to heterogeneous networks and use this measure to study network structure. Specifically, we extend the popular modularity-maximization method for community detection to use this centrality metric. We also rank nodes based on their connectivity to other nodes. One advantage of this centrality metric is that it has a tunable parameter we can use to set the length scale of interactions. Studying how rankings change with this parameter allows us to identify important nodes in the network. We apply the proposed method to analyze the structure of several heterogeneous networks. We show that exploiting additional sources of evidence corresponding to links between, as well as among, different entity types yields new insights into network structure.
There has been some work on motif-based communities in complex networks @cite_17 , which, like our work, extends the traditional notion of modularity introduced by Girvan and Newman @cite_20 . The underlying motivation for motif-based community detection is that "the high density of edges within a community determines correlations between nodes going beyond nearest neighbors", which is also our motivation for applying centrality-based modularity to community detection. Although the motivation of that method is to capture correlations between nodes beyond nearest neighbors, it still imposes a limit on the proximity of the neighbors taken into consideration, a limit that depends on the size of the motifs. The method we propose, on the other hand, imposes no such limit on proximity; on the contrary, it considers the correlation between nodes in a more global sense. The measure of global correlation evaluated using the b-centrality metric is equal to a weighted average of the correlations obtained when motifs of different sizes are considered. B-centrality enables us to calculate this complex term quickly and efficiently.
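For reference, the Girvan-Newman modularity that both approaches generalize is the standard expression below (A is the adjacency matrix, k_i the degree of node i, m the number of edges, and c_i the community of node i):

    Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)

Motif-based and centrality-based variants replace the edge term A_{ij} with a quantity that also accounts for longer-range connections between i and j.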
{ "cite_N": [ "@cite_20", "@cite_17" ], "mid": [ "1971421925", "2157762631" ], "abstract": [ "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "Community definitions usually focus on edges, inside and between the communities. However, the high density of edges within a community determines correlations between nodes going beyond nearest neighbors, and which are indicated by the presence of motifs. We show how motifs can be used to define general classes of nodes, including communities, by extending the mathematical expression of Newman?Girvan modularity. We construct then a general framework and apply it to some synthetic and real networks." ] }
0906.2311
1945336742
In this paper we study the connectivity problem for wireless networks under the Signal to Interference plus Noise Ratio (SINR) model. Given a set of radio transmitters distributed in some area, we seek to build a directed strongly connected communication graph, and compute an edge coloring of this graph such that the transmitter-receiver pairs in each color class can communicate simultaneously. Depending on the interference model, more or less colors, corresponding to the number of frequencies or time slots, are necessary. We consider the SINR model that compares the received power of a signal at a receiver to the sum of the strength of other signals plus ambient noise. The strength of a signal is assumed to fade polynomially with the distance from the sender, depending on the so-called path-loss exponent α. We show that, when all transmitters use the same power, the number of colors needed is constant in one-dimensional grids if α > 1 as well as in two-dimensional grids if α > 2. For smaller path-loss exponents and two-dimensional grids we prove upper and lower bounds in the order of @math and Ω(log n / log log n) for α = 2 and Θ(n^(2/α - 1)) for α < 2 respectively. If nodes are distributed uniformly at random on the interval [0,1], a regular coloring of @math colors guarantees connectivity, while Ω(log log n) colors are required for any coloring.
The seminal work of Gupta and Kumar @cite_7 initiated the study of the capacity of wireless networks. The authors bounded the best-case throughput capacity (i.e., under optimal configurations) for both the protocol and the physical model for @math .
{ "cite_N": [ "@cite_7" ], "mid": [ "2137775453" ], "abstract": [ "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance." ] }
0906.2311
1945336742
In this paper we study the connectivity problem for wireless networks under the Signal to Interference plus Noise Ratio (SINR) model. Given a set of radio transmitters distributed in some area, we seek to build a directed strongly connected communication graph, and compute an edge coloring of this graph such that the transmitter-receiver pairs in each color class can communicate simultaneously. Depending on the interference model, more or less colors, corresponding to the number of frequencies or time slots, are necessary. We consider the SINR model that compares the received power of a signal at a receiver to the sum of the strength of other signals plus ambient noise. The strength of a signal is assumed to fade polynomially with the distance from the sender, depending on the so-called path-loss exponent α. We show that, when all transmitters use the same power, the number of colors needed is constant in one-dimensional grids if α > 1 as well as in two-dimensional grids if α > 2. For smaller path-loss exponents and two-dimensional grids we prove upper and lower bounds in the order of @math and Ω(log n / log log n) for α = 2 and Θ(n^(2/α - 1)) for α < 2 respectively. If nodes are distributed uniformly at random on the interval [0,1], a regular coloring of @math colors guarantees connectivity, while Ω(log log n) colors are required for any coloring.
Non-uniform power assignments can clearly outperform a uniform assignment @cite_0 @cite_4 and increase the capacity of the network; therefore, the majority of the work on capacity and scheduling has addressed non-uniform power. Recent work @cite_13 compares the uniform power assignment with power control when the area where nodes are distributed is bounded, whereas @cite_11 @cite_16 give upper and lower bounds for power-controlled oblivious scheduling. As mentioned in the introduction, @cite_15 were the first to raise the question of the complexity of connectivity in the SINR model. While their work applies to networks with devices that can adjust their transmission power, we address networks composed of devices that all transmit with the same power.
{ "cite_N": [ "@cite_4", "@cite_0", "@cite_15", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2098480450", "2586251408", "", "1750839748", "2396986403", "2091755831" ], "abstract": [ "We define and study the scheduling complexity in wireless networks, which expresses the theoretically achievable efficiency of MAC layer protocols. Given a set of communication requests in arbitrary networks, the scheduling complexity describes the amount of time required to successfully schedule all requests. The most basic and important network structure in wireless networks being connectivity, we study the scheduling complexity of connectivity, i.e., the minimal amount of time required until a connected structure can be scheduled. In this paper, we prove that the scheduling complexity of connectivity grows only polylogarithmically in the number of nodes. Specifically, we present a novel scheduling algorithm that successfully schedules a strongly connected set of links in time O(logn) even in arbitrary worst-case networks. On the other hand, we prove that standard MAC layer or scheduling protocols can perform much worse. Particularly, any protocol that either employs uniform or linear (a node’s transmit power is proportional to the minimum power required to reach its intended receiver) power assignment has a Ω(n) scheduling complexity in the worst case, even for simple communication requests. In contrast, our polylogarithmic scheduling algorithm allows many concurrent transmission by using an explicitly formulated non-linear power assignment scheme. Our results show that even in large-scale worst-case networks, there is no theoretical scalability problem when it comes to scheduling transmission requests, thus giving an interesting complement to the more pessimistic bounds for the capacity in wireless networks. All results are based on the physical model of communication, which takes into account that the signal-tonoise plus interference ratio (SINR) at a receiver must be above a certain threshold if the transmission is to be received correctly.", "In this paper we shed new light on the fundamental gap between graph-based models used by protocol designers and fading channel models used by communication theorists in wireless networks. We experimentally demonstrate that graph-based models capture real-world phenomena inadequately. Consequentially, we advocate studying models beyond graphs even for protocol-design. In the main part of the paper we present an archetypal multi-hop situation. We show that the theoretical limits of any protocol which obeys the laws of graph-based models can be broken by a protocol explicitly defined for the physical model. Finally, we discuss possible applications, from data gathering to media access control.", "", "We consider the scheduling of arbitrary wireless links in the physical model of interference to minimize the time for satisfying all requests. We study here the combined problem of scheduling and power control, where we seek both an assignment of power settings and a partition of the links so that each set satisfies the signal-to-interference-plus-noise (SINR) constraints. We give an algorithm that attains an approximation ratio of O(log n ċ log log Δ), where n is the number of links and Δ is the ratio between the longest and the shortest link length. Under the natural assumption that lengths are represented in binary, this gives the first approximation ratio that is polylogarithmic in the size of the input. 
The algorithm has the desirable property of using an oblivious power assignment, where the power assigned to a sender depends only on the length of the link. We give evidence that this dependence on Δ is unavoidable, showing that any reasonably behaving oblivious power assignment results in a Ω(log log Δ)-approximation. These results hold also for the (weighted) capacity problem of finding a maximum (weighted) subset of links that can be scheduled in a single time slot. In addition, we obtain improved approximation for a bidirectional variant of the scheduling problem, give partial answers to questions about the utility of graphs for modeling physical interference, and generalize the setting from the standard 2-dimensional Euclidean plane to doubling metrics. Finally, we explore the utility of graph models in capturing wireless interference.", "The throughput capacity of arbitrary wireless networks under the physical Signal to Interference Plus Noise Ratio (SINR) model has received much attention in recent years. In this paper, we investigate the question of how much the worst-case performance of uniform and non-uniform power assignments differ under constraints such as a bound on the area where nodes are distributed or restrictions on the maximum power available. We determine the maximum factor by which a non-uniform power assignment can outperform the uniform case in the SINR model. More precisely, we prove that in one-dimensional settings the capacity of a non-uniform assignment exceeds a uniform assignment by at most a factor of (O( L_ ) ) when the length of the network is (L_ ). In two-dimensional settings, the uniform assignment is at most a factor of (O( P_ ) ) worse than the non-uniform assignment if the maximum power is (P_ ). We provide algorithms that reach this capacity in both cases. These bounds are tight in the sense that previous work gave examples of networks where the lack of power control causes a performance loss in the order of these factors. To complement our theoretical results and to evaluate our algorithms with concrete input networks, we carry out simulations on random wireless networks. The results demonstrate that the link sets generated by the algorithms contain around 20–35 of all links. As a consequence, engineers and researchers may prefer the uniform model due to its simplicity if this degree of performance deterioration is acceptable.", "In the interference scheduling problem, one is given a set of n communication requests described by pairs of points from a metric space. The points correspond to devices in a wireless network. In the directed version of the problem, each pair of points consists of a dedicated sending and a dedicated receiving device. In the bidirectional version the devices within a pair shall be able to exchange signals in both directions. In both versions, each pair must be assigned a power level and a color such that the pairs in each color class (representing pairs communicating in the same time slot) can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference Plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. This is commonly referred to as the \"physical model\" and is the established way of modelling interference in the engineering community. 
The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests. We study oblivious power assignments in which the power value of a pair only depends on the distance between the points of this pair. We prove that oblivious power assignments cannot yield approximation ratios better than Ω(n) for the directed version of the problem, which is the worst possible performance guarantee as there is a straightforward algorithm that achieves an O(n)-approximation. For the bidirectional version, however, we can show the existence of a universally good oblivious power assignment: For any set of n bidirectional communication requests, the so-called \"square root assignment\" admits a coloring with at most polylog(n) times the minimal number of colors. The proof for the existence of this coloring is non-constructive. We complement it by an approximation algorithm for the coloring problem under the square root assignment. This way, we obtain the first polynomial time algorithm with approximation ratio polylog(n) for interference scheduling in the physical model." ] }
0906.0684
2031742812
Consider a dataset of n(d) points generated independently from R^d according to a common p.d.f. f_d with support(f_d) = [0,1]^d and sup f_d([0,1]^d) growing sub-exponentially in d. We prove that: (i) if n(d) grows sub-exponentially in d, then, for any query point q^d in [0,1]^d and any epsilon>0, the ratio of the distance between any two dataset points and q^d is less that 1+epsilon with probability -->1 as d-->infinity; (ii) if n(d)>[4(1+epsilon)]^d for large d, then for all q^d in [0,1]^d (except a small subset) and any epsilon>0, the distance ratio is less than 1+epsilon with limiting probability strictly bounded away from one. Moreover, we provide preliminary results along the lines of (i) when f_d=N(mu_d,Sigma_d).
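The concentration-of-distances phenomenon described in this abstract is easy to observe empirically. The sketch below is only an illustration under assumed parameters (uniform data on [0,1]^d, a few hundred points, Euclidean distance); it is not code from the paper:

```python
import numpy as np

def distance_ratio(d, n=500, seed=0):
    """Ratio of farthest to nearest Euclidean distance from a random
    query point to n points drawn uniformly from [0, 1]^d."""
    rng = np.random.default_rng(seed)
    data = rng.random((n, d))
    query = rng.random(d)
    dists = np.linalg.norm(data - query, axis=1)
    return dists.max() / dists.min()

for d in (2, 10, 100, 1000):
    print(d, round(distance_ratio(d), 3))
# The ratio drifts toward 1 as d grows, which is the regime where
# "nearest neighbor" loses its discriminative meaning.
```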
A vast literature exists on the development of data structures and algorithms for nearest neighbor search; for brevity, see the discussion and citations in @cite_10 .
{ "cite_N": [ "@cite_10" ], "mid": [ "2118522967" ], "abstract": [ "Effective distance functions in high dimensional data space are very important in solutions for many data mining problems. Recent research has shown that if the Pearson variation of the distance distribution converges to zero with increasing dimensionality, the distance function will become unstable (or meaningless) in high dimensional space, even with the commonly used Lp metric in the Euclidean space. This result has spawned many studies the along the same lines. However, the necessary condition for unstability of a distance function, which is required for function design, remains unknown. In this paper, we shall prove that several important conditions are in fact equivalent to unstability. Based on these theoretical results, we employ some effective and valid indices for testing the stability of a distance function. In addition, this theoretical analysis inspires us that unstable phenomena are rooted in variation of the distance distribution. To demonstrate the theoretical results, we design a meaningful distance function, called the shrinkage-divergence proximity (SDP), based on a given distance function. It is shown empirically that the SDP significantly outperforms other measures in terms of stability in high dimensional data space, and is thus more suitable for distance-based clustering applications." ] }
0906.0252
2052030301
In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks increases due to the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration is put forth as a practical approach. Contrasted with the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys devices of higher-capability in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, the study on query processing in hierarchical sensor networks has been inadequate. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries. There is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by as the basis. In addition, the method considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimalwith respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between the storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between the storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by
@cite_7 apply data-centric storage to continuous single-query processing. Query processing using data-centric storage runs as follows. For storing data, each sensor node sends the collected data to other sensor nodes, where the target sensor nodes are determined by the value of the data element. For processing queries, the server sends a query only to those sensor nodes that hold the result data of the query. In the same work, the authors study an index structure using an order-preserving hash function for distributing data; that is, nodes that are physically adjacent store adjacent value ranges of the data. As a result, the method reduces the query processing cost by reducing the average number of hops needed to send queries and query results. @cite_8 consider storing data in local sensors (unlike the data-centric approach) and propose building an R-tree-like index (called ) based on the range of sensing values. Both of these works focus on single query processing. Hence, they are not applicable to recent query processing environments that register many queries and process them concurrently.
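To make the order-preserving idea concrete, here is a minimal sketch of a value-to-node mapping in the spirit described above; the function name, the 1-D value domain, and the node count are illustrative assumptions, not details from @cite_7:

```python
def order_preserving_node(value, lo, hi, num_nodes):
    """Map a sensed value in [lo, hi] to a node index such that
    adjacent value ranges land on adjacent (hypothetical) nodes."""
    if not lo <= value <= hi:
        raise ValueError("value outside the sensed domain")
    width = (hi - lo) / num_nodes
    return min(int((value - lo) / width), num_nodes - 1)

# Example: temperatures in [0, 100] spread over 16 storage nodes.
assert order_preserving_node(12.5, 0, 100, 16) == 2
assert order_preserving_node(13.0, 0, 100, 16) == 2   # nearby values, same node
assert order_preserving_node(99.9, 0, 100, 16) == 15
```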
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "1498366873", "2154721480" ], "abstract": [ "In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multi-dimensional range query. An example is: List all events whose temperature lies between 50° and 60°, and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs.", "We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices." ] }
0906.0252
2052030301
In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks increases due to the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration is put forth as a practical approach. Contrasted with the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys devices of higher-capability in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, the study on query processing in hierarchical sensor networks has been inadequate. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries. There is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by as the basis. In addition, the method considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimalwith respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between the storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between the storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by
In the partitioning method, the server partitions the individual query regions into overlapping and non-overlapping regions. It then sends the partitioned regions and the original queries to the sensor nodes, which store them. Query processing is done for each partitioned region, and the query results are merged at the server or at sensor nodes. @cite_11 and @cite_5 use this method to process range queries over the information held by sensor nodes. This method has the advantage that merging the results of processing each partition yields exactly the result of processing the original queries; therefore, no ``false alarm'' can happen. It has the disadvantage, however, that if there are a large number of overlapping query conditions, the number of partitions to be stored on certain sensor nodes increases and, thus, so does the necessary storage.
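As an illustration of the partitioning idea (not the algorithm of @cite_11 or @cite_5), the sketch below splits a set of possibly overlapping 1-D range queries into disjoint elementary pieces, so that each piece is evaluated once and its result attributed to every query covering it; the 1-D setting and the interval representation are simplifying assumptions:

```python
def partition_ranges(queries):
    """Split overlapping 1-D ranges [lo, hi) into disjoint pieces,
    each tagged with the ids of the queries that cover it."""
    points = sorted({p for lo, hi in queries.values() for p in (lo, hi)})
    pieces = []
    for lo, hi in zip(points, points[1:]):
        covering = [qid for qid, (qlo, qhi) in queries.items()
                    if qlo <= lo and hi <= qhi]
        if covering:
            pieces.append(((lo, hi), covering))
    return pieces

# Example: two overlapping temperature-range queries.
queries = {"q1": (10, 30), "q2": (20, 40)}
for piece, owners in partition_ranges(queries):
    print(piece, owners)
# (10, 20) ['q1']
# (20, 30) ['q1', 'q2']
# (30, 40) ['q2']
```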
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2536443012", "2130203819" ], "abstract": [ "Providing efficient data services is one of the fundamental requirements for wireless sensor networks. The data service paradigm requires that the application submit its requests as queries and the sensor network transmits the requested data to the application. While most existing work in this area focuses on data aggregation, not much attention has been paid to query aggregation. For many applications, especially ones with high query rates, query aggregation is very important. We study a query aggregation-based approach for providing efficient data services. In particular: (1) we propose a multi-layered overlay-based framework consisting of a query manager and access points (nodes), where the former provides the query aggregation plan and the latter executes the plan; (2) we design an effective query aggregation algorithm to reduce the number of duplicate overlapping queries and save overall energy consumption in the sensor network Our performance evaluations show that by applying our query aggregation algorithm, the overall energy consumption can be significantly reduced and the sensor network lifetime can be prolonged correspondingly.", "The widespread dissemination of small-scale sensor nodes has sparked interest in a powerful new database abstraction for sensor networks: Clients “program” the sensors through queries in a high-level declarative language permitting the system to perform the low-level optimizations necessary for energy-efficient query processing. In this paper we consider multi-query optimization for aggregate queries on sensor networks. We develop a set of distributed algorithms for processing multiple queries that incur minimum communication while observing the computational limitations of the sensor nodes. Our algorithms support incremental changes to the set of active queries and allow for local repairs to routes in response to node failures. A thorough experimental analysis shows that our approach results in significant energy savings, compared to previous work." ] }
0906.0252
2052030301
In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks increases due to the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration is put forth as a practical approach. Contrasted with the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys devices of higher-capability in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, the study on query processing in hierarchical sensor networks has been inadequate. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries. There is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by as the basis. In addition, the method considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimalwith respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between the storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between the storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by
As the scale of sensor networks increases, the hierarchical structure is being adopted in real applications more often than the flat structure, in which all sensor nodes have the same capability @cite_16 .
{ "cite_N": [ "@cite_16" ], "mid": [ "2123033018" ], "abstract": [ "The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations." ] }
0906.0252
2052030301
In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks increases due to the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration is put forth as a practical approach. Contrasted with the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys devices of higher-capability in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, the study on query processing in hierarchical sensor networks has been inadequate. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries. There is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by as the basis. In addition, the method considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimalwith respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between the storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between the storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by
Representative examples of such hierarchical wireless sensor networks are PASTA (Power Aware Sensing, Tracking and Analysis), mentioned in COSMOS @cite_17 , and SOHAN @cite_10 . PASTA is used in military applications for enemy movement surveillance and is configured with a server and about 400 intermediate-tier nodes, each clustering about 20 sensor nodes. SOHAN is used in traffic congestion monitoring applications to measure the traffic volume using roadside sensor nodes and is configured with a server and about 50 intermediate-tier nodes, each clustering about 200 sensor nodes.
{ "cite_N": [ "@cite_10", "@cite_17" ], "mid": [ "2147365937", "1877537717" ], "abstract": [ "This paper describes the design and implementation of a novel 802.11-based self-organizing hierarchical ad-hoc wireless network (SOHAN), and presents some initial experimental results obtained from a proof-of-concept prototype. The proposed network has a three-tier hierarchy consisting of low-power mobile nodes (MNs) at the lowest layer, forwarding nodes (FNs) with higher power and multi-hop routing capability at the middle layer, and wired access points (APs) without power constraints at the highest layer. Specifics of new protocols used for bootstrapping, node discovery, and multi-hop routing are presented, and the overall operation of the complete hierarchical ad-hoc network is explained. A prototype implementation of the SOHAN network is outlined in terms of major hardware and software components, and initial experimental results are given.", "Clustering is an important characteristic of most sensor applications. In this paper we define COSMOS, the Cluster-based Heterogeneous Model for Sensor networks. The model assumes a hierarchical network architecture comprising of a large number of low cost sensors with limited computation capability, and fewer number of powerful clusterheads, uniformly distributed in a two dimensional terrain. The sensors are organized into single hop clusters, each managed by a clusterhead. The clusterheads are organized in a mesh-like topology. All sensors in a cluster are time synchronized, whereas the clusterheads communicate asynchronously. The sensors are assumed to have multiple power states and a wake-up mechanism to facilitate power management. To illustrate algorithm design using our model, we discuss implementation of algorithms for sorting and summing in sensor networks." ] }
0906.0252
2052030301
In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks increases due to the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration is put forth as a practical approach. Contrasted with the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys devices of higher-capability in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, the study on query processing in hierarchical sensor networks has been inadequate. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries. There is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by as the basis. In addition, the method considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimalwith respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between the storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between the storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by
We expect that hierarchical sensor networks will be used increasingly in the future as the scale and the requirements of applications grow. However, there has been no research on processing multiple queries that takes advantage of the fact that sensor nodes at different tiers have different capabilities. @cite_6 investigated how, and on which node, to process each operation during query processing in a hierarchical sensor network. That research, however, mainly deals with single query processing and is thus difficult to apply to multiple query processing. In this paper, we propose a method for processing multiple queries effectively by exploiting the characteristics of hierarchical sensor networks, i.e., the multi-tier structure made of sensor nodes with different resources and computing power.
{ "cite_N": [ "@cite_6" ], "mid": [ "2140701158" ], "abstract": [ "In sensor networks, data acquisition frequently takes place at low-capability devices. The acquired data is then transmitted through a hierarchy of nodes having progressively increasing network band-width and computational power. We consider the problem of executing queries over these data streams, posed at the root of the hierarchy. To minimize data transmission, it is desirable to perform \"in-network\" query processing: do some part of the work at intermediate nodes as the data travels to the root. Most previous work on in-network query processing has focused on aggregation and inexpensive filters. In this paper, we address in-network processing for queries involving possibly expensive conjunctive filters, and joins. We consider the problem of placing operators along the nodes of the hierarchy so that the overall cost of computation and data transmission is minimized. We show that the problem is tractable, give an optimal algorithm, and demonstrate that a simpler greedy operator placement algorithm can fail to find the optimal solution. Finally we define a number of interesting variations of the basic operator placement problem and demonstrate their hardness." ] }
0906.1019
2952207628
We compare the expected efficiency of revenue maximizing (or optimal ) mechanisms with that of efficiency maximizing ones. We show that the efficiency of the revenue maximizing mechanism for selling a single item with k + log_ e (e-1) k + 1 bidders is at least as much as the efficiency of the efficiency maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Theta(log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency maximizing mechanism with k+1 bidders is no less than the revenue of the revenue maximizing mechanism with k bidders. We extend our result for the case of selling t identical items and show that 2.2 log k + t Theta(log log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism. In order to prove our results, we do a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions, such that for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
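Two standard auction-theory definitions that this abstract presupposes are recalled below as a reading aid; the notation is the conventional one and is not quoted from the paper. A distribution F with density f has a monotone hazard rate (MHR) when the hazard rate h is non-decreasing, and Myerson's revenue-optimal auction allocates according to the virtual values φ:

\[
h(v) \;=\; \frac{f(v)}{1 - F(v)} \ \text{non-decreasing}, \qquad
\phi(v) \;=\; v - \frac{1 - F(v)}{f(v)}.
\]

The revenue-maximizing mechanism awards the item to the bidder with the highest non-negative virtual value, which is exactly why it may withhold or misallocate the item relative to the efficiency-maximizing (VCG) mechanism.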
Bulow and Klemperer @cite_5 characterized the revenue sub-optimality of @math . They showed that @math (with one extra bidder) has at least as much expected revenue as @math . Their result can be interpreted in a bi-criteria sense; VCG auctions with one extra bidder simultaneously maximize both revenue and efficiency. For the case of @math identical items, they show that @math additional bidders are needed for the result to hold.
{ "cite_N": [ "@cite_5" ], "mid": [ "2160747366" ], "abstract": [ "Which is the more profitable way to sell a company: an auction with no reserve price or an optimally structured negotiation with one less bidder? The authors show, under reasonable assumptions, that the auction is always preferable when bidders' signals are independent. For affiliated signals, the result holds under certain restrictions on the seller's choice of negotiating mechanism. The result suggests that the value of negotiating skill is small relative to the value of additional competition. The paper also shows how the analogies between monopoly theory and auction theory can help derive new results in auction theory. Copyright 1996 by American Economic Association." ] }
0906.1019
2952207628
We compare the expected efficiency of revenue maximizing (or optimal ) mechanisms with that of efficiency maximizing ones. We show that the efficiency of the revenue maximizing mechanism for selling a single item with k + log_ e (e-1) k + 1 bidders is at least as much as the efficiency of the efficiency maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Theta(log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency maximizing mechanism with k+1 bidders is no less than the revenue of the revenue maximizing mechanism with k bidders. We extend our result for the case of selling t identical items and show that 2.2 log k + t Theta(log log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism. In order to prove our results, we do a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions, such that for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
In @cite_0 , Roughgarden and Sundararajan characterized the fraction of the optimal revenue that is obtained by @math . They show that, for @math identical items and @math bidders with unit demand, the revenue of @math is at least @math times the revenue of @math . Neeman @cite_7 also studied the fraction of the revenue of @math that @math achieves in the single-item case. @cite_7 used a numerical-analysis approach and allowed the distribution @math to be an arbitrary distribution (not restricted to regular or MHR as in @cite_0 ), albeit with bounded support.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2127723069", "2066957841" ], "abstract": [ "We study the simultaneous optimization of efficiency and revenue in pay-per-click keyword auctions in a Bayesian setting. Our main result is that the efficient keyword auction yields near-optimal revenue even under modest competition. In the process, we build on classical results in auction theory to prove that increasing the number of bidders by the number of slots outweighs the benefit of employing the optimal reserve price.", "Abstract We study the performance of the English auction under different assumptions about the seller's degree of “Bayesian sophistication.” We define the effectiveness of an auction as the ratio between the expected revenue it generates for the seller and the expected valuation of the object to the bidder with the highest valuation (total surplus). We identify tight lower bounds on the effectiveness of the English auction for general private-values environments, and for private-values environments where bidders' valuations are non-negatively correlated. For example, when the seller faces 12 bidders who the seller believes have non-negatively correlated valuations whose expectations are at least as high as 60 of the maximal possible valuation, an English auction with no reserve price generates an expected price that is more than 80 of the value of the object to the bidder with the highest valuation." ] }
0906.1019
2952207628
We compare the expected efficiency of revenue maximizing (or optimal ) mechanisms with that of efficiency maximizing ones. We show that the efficiency of the revenue maximizing mechanism for selling a single item with k + log_ e (e-1) k + 1 bidders is at least as much as the efficiency of the efficiency maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Theta(log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency maximizing mechanism with k+1 bidders is no less than the revenue of the revenue maximizing mechanism with k bidders. We extend our result for the case of selling t identical items and show that 2.2 log k + t Theta(log log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism. In order to prove our results, we do a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions, such that for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
In another related work on the simultaneous optimization of revenue and efficiency, Likhodedov and Sandholm @cite_2 gave a mechanism that maximizes efficiency subject to a lower-bound constraint on the total revenue.
{ "cite_N": [ "@cite_2" ], "mid": [ "1964975266" ], "abstract": [ "We study selling one indivisible object to multiple potential buyers. Depending on the objective of the seller, different selling mechanisms are desirable. The Vickrey auction with a truthful reserve price is optimal when the objective is efficiency (i.e., allocating the object to the party who values it the most). The Myerson auction is optimal when the objective is the seller's expected utility. These two objectives are generally in conflict, and cannot be maximized with one mechanism. In many real-world settings---such as privatization and competing electronic marketplaces---it is not clear that the objective should be either efficiency or seller's expected utility. Typically, one of these objectives should weigh more than the other, but both are important. We account for importance of both objectives by designing a new deterministic auction mechanism that maximizes expected social welfare subject to a minimum constraint on the seller's expected utility. This way the seller can expect to do well enough for himself, while maintaining the attractive properties of the mechanism." ] }
0905.3946
1570801079
The focus of the tool FTOS is to alleviate designers' burden by offering code generation for non-functional aspects including fault-tolerance mechanisms. One crucial aspect in this context is to ensure that user-selected mechanisms for the system model are sufficient to resist faults as specified in the underlying fault hypothesis. In this paper, formal approaches in verification are proposed to assist the claim. We first raise the precision of FTOS into pure mathematical constructs, and formulate the deterministic assumption, which is necessary as an extension of Giotto-like systems (e.g., FTOS) to equip with fault-tolerance abilities. We show that local properties of a system with the deterministic assumption will be preserved in a modified synchronous system used as the verification model. This enables the use of techniques known from hardware verification. As for implementation, we develop a prototype tool called FTOS-Verify, deploy it as an Eclipse add-on for FTOS, and conduct several case studies.
Existing work on the design or verification of fault-tolerance mechanisms falls into two categories. In the first category, researchers focus on verifying the applicability of a single fault-tolerance mechanism with respect to a concrete fault model. In the second category, researchers offer languages or methodologies for verification, e.g., @cite_7 @cite_11 . Nevertheless, the above work does not focus on automatic generation of verification models. Since the system model and the fault model influence the applicability of a mechanism, the corresponding verification models must be reconstructed whenever they are modified. Constructing and modifying these models manually can be time consuming and error prone; in FTOS-Verify, the verification model is regenerated automatically as the model changes.
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2081756744", "2113486906" ], "abstract": [ "This paper presents an approach for the specification and the verification of the correctness of fault tolerant system designs achieved by the application of fault tolerant techniques. The approach is based on process algebras, equivalence theory and temporal logic. The behaviour of the system in the absence of faults is formally specified and faults are assumed as random events which interfere with the system by modifying its behaviour. The fault tolerant technique is formalized by a context that specifies how replicas of the system cooperate to deal with faults. The system design is proved to behave correctly under a given fault hypothesis by proving the observational equivalence between the system design specification and the fault-free system specification. Additionally, model checking of a temporal logic formula which gives an abstract notion of correct behaviour can be applied to verify the correctness of the design. The opportunities given by the expression of the fault hypothesis using temporal logic are discussed. The actual usability of the approach in real case studies is supported by the availability of automatic tools for equivalence checking and for proving the temporal logic properties by model checking.", "PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model \"reliable computing platform\" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. We outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications. >" ] }
0905.3946
1570801079
The focus of the tool FTOS is to alleviate designers' burden by offering code generation for non-functional aspects including fault-tolerance mechanisms. One crucial aspect in this context is to ensure that user-selected mechanisms for the system model are sufficient to resist faults as specified in the underlying fault hypothesis. In this paper, formal approaches in verification are proposed to assist the claim. We first raise the precision of FTOS into pure mathematical constructs, and formulate the deterministic assumption, which is necessary as an extension of Giotto-like systems (e.g., FTOS) to equip with fault-tolerance abilities. We show that local properties of a system with the deterministic assumption will be preserved in a modified synchronous system used as the verification model. This enables the use of techniques known from hardware verification. As for implementation, we develop a prototype tool called FTOS-Verify, deploy it as an Eclipse add-on for FTOS, and conduct several case studies.
There are other model-based tools for embedded control with integrated verification capabilities, e.g., @cite_14 , but verifying fault-tolerance mechanisms is not their primary focus. Moreover, in most cases the deployment from the model is synchronous, whereas FTOS targets deployment on either synchronous or asynchronous systems.
{ "cite_N": [ "@cite_14" ], "mid": [ "2167700052" ], "abstract": [ "Formal verification of adaptive systems allows rigorously proving critical requirements. However, design-level models are in general too complex to be handled by verification tools directly. To counter this problem, we propose to reduce model complexity on design-model level in order to facilitate model-based verification. First, we transfer existing compositional reasoning techniques for foundational models used in verification tools to design-level models. Second, we develop new compositional strategies exploiting the special features of adaptive models. Based on these results, we establish a framework for modular model-based verification of adaptive systems by model checking." ] }
0905.2657
1740288049
Increasingly, business projects are ephemeral. New Business Intelligence tools must support ad-lib data sources and quick perusal. Meanwhile, tag clouds are a popular community-driven visualization technique. Hence, we investigate tag-cloud views with support for OLAP operations such as roll-ups, slices, dices, clustering, and drill-downs. As a case study, we implemented an application where users can upload data and immediately navigate through its ad hoc dimensions. To support social networking, views can be easily shared and embedded in other Web sites. Algorithmically, our tag-cloud views are approximate range top-k queries over spontaneous data cubes. We present experimental evidence that iceberg cuboids provide adequate online approximations. We benchmark several browser-oblivious tag-cloud layout optimizations.
It has been argued that it is difficult to navigate an OLAP schema without help, and a keyword-driven OLAP model has been proposed to address this @cite_14 . There are several OLAP visualization techniques, including the Cube Presentation Model (CPM) @cite_17 , Multiple Correspondence Analysis (MCA) @cite_26 , and other interactive systems @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_26", "@cite_17" ], "mid": [ "", "1518375897", "2159814723", "1974674182" ], "abstract": [ "", "Business data collection is growing exponentially in recent years. A variety of industries and businesses have adopted new technologies of data storages such as data warehouses. On Line Analytical Processing (OLAP) has become an important tool for executives, managers, and analysts to explore, analyze, and extract interesting patterns from enormous amount of data stored in data warehouses and multidimensional databases. However, it is difficult for human analysts to interpret and extract meaningful information from large amount of data if the data is presented in textual form as relational tables. Visualization and interactive tools employ graphical display formats that help analysts to understand and extract useful information fast from huge data sets. This paper presents a new visual interactive exploration technique for an analysis of multidimensional databases. Users can gain both overviews and refine views on any particular region of interest of data cubes through the combination of interactive tools and navigational functions such as drilling down, rolling up, and slicing. Our technique allows users who are not experts in OLAP technology to explore and analyze OLAP data cubes and data warehouses without generating sophisticated queries. Furthermore, the visualization in our technique displays the exploration path enhancing the user's understanding of the exploration.", "In the On Line Analytical Processing (OLAP) context, exploration of huge and sparse data cubes is a tedious task which does not always lead to efficient results. In this paper, we couple OLAP with the Multiple Correspondence Analysis (MCA) in order to enhance visual representations of data cubes and thus, facilitate their interpretations and analysis. We also provide a quality criterion to measure the relevance of obtained representations. The criterion is based on a geometric neighborhood concept and a similarity metric between cells of a data cube. Experimental results on real data proved the interest and the efficiency of our approach.", "Data visualization is one of the major issues of database research. OLAP a decision support technology, is clearly in the center of this effort. Thus far, visualization has not been incorporated in the abstraction levels of DBMS architecture (conceptual, logical, physical); neither has it been formally treated in this context. In this paper we start by reconsidering the separation of the aforementioned abstraction levels to take visualization into consideration. Then, we present the Cube Presentation Model (CPM), a novel presentational model for OLAP screens. The proposal lies on the fundamental idea of separating the logical part of a data cube computation from the presentational part of the client tool. Then, CPM can be naturally mapped on the Table Lens, which is an advanced visualization technique from the Human-Computer Interaction area, particularly tailored for cross-tab reports. Based on the particularities of Table Lens, we propose automated proactive support to the user for the interaction with an OLAP screen. Finally, we discuss implementation and usage issues in the context of an academic prototype system (CubeView) that we have implemented." ] }
0905.2657
1740288049
Increasingly, business projects are ephemeral. New Business Intelligence tools must support ad-lib data sources and quick perusal. Meanwhile, tag clouds are a popular community-driven visualization technique. Hence, we investigate tag-cloud views with support for OLAP operations such as roll-ups, slices, dices, clustering, and drill-downs. As a case study, we implemented an application where users can upload data and immediately navigate through its ad hoc dimensions. To support social networking, views can be easily shared and embedded in other Web sites. Algorithmically, our tag-cloud views are approximate range top-k queries over spontaneous data cubes. We present experimental evidence that iceberg cuboids provide adequate online approximations. We benchmark several browser-oblivious tag-cloud layout optimizations.
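To make the tag-cloud view construction concrete, the sketch below aggregates one ad hoc dimension of uploaded records, keeps the top-k values, and scales font sizes linearly. The helper name `top_k_tag_cloud`, the toy rows, and the linear font scaling are illustrative assumptions, not the system described in the abstract (which answers approximate range top-k queries over iceberg cuboids).

```python
from collections import Counter

def top_k_tag_cloud(records, dimension, measure=None, k=10,
                    min_font=10, max_font=36):
    """Aggregate one ad hoc dimension of the uploaded records and return the
    top-k values with font sizes scaled between min_font and max_font.
    A naive stand-in for the approximate range top-k queries of the paper."""
    counts = Counter()
    for rec in records:
        counts[rec[dimension]] += rec.get(measure, 1) if measure else 1
    top = counts.most_common(k)
    if not top:
        return []
    lo, hi = top[-1][1], top[0][1]
    span = max(hi - lo, 1)
    return [(tag, c, min_font + (max_font - min_font) * (c - lo) // span)
            for tag, c in top]

# Toy usage: rows with an ad hoc dimension "country" and a "sales" measure.
rows = [{"country": "CA", "sales": 3}, {"country": "FR", "sales": 7},
        {"country": "CA", "sales": 5}, {"country": "US", "sales": 2}]
print(top_k_tag_cloud(rows, "country", measure="sales", k=3))
```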
Tag clouds were popularized by the Web site Flickr, launched in 2004. Several optimization opportunities exist: similar tags can be clustered together @cite_6 , tags can be pruned automatically @cite_28 or by user intervention @cite_25 , tags can be indexed @cite_25 , and so on. Tag clouds can also be adapted to spatio-temporal data @cite_22 @cite_9 .
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_9", "@cite_6", "@cite_25" ], "mid": [ "2035953875", "204790106", "1983936280", "2115035636", "2030684081" ], "abstract": [ "Cloudalicious is an online visualization tool that has been designed to give insight into how ?tag clouds?, or folksonomies, develop over time. A folksonomy is an organic system of text labels attributed to an object by the users of that object. The most common object so far to be the subject of this tagging has been the online bookmark. Stabilization of a URL's tag cloud over time is the clearest result of this type of visualization. Any diagonal movement on the graphs, indicative of a change in the tags being used to describe a URL, should garner further discussion.", "Tagging-based systems enable users to categorize web resources by means of tags (freely chosen keywords), in order to refinding these resources later. Tagging is implicitly also a social indexing process, since users share their tags and resources, constructing a social tag index, so-called folksonomy. At the same time of tagging-based system, has been popularised an interface model for visual information retrieval known as Tag-Cloud. In this model, the most frequently used tags are displayed in alphabetical order. This paper presents a novel approach to Tag-Cloud’s tags selection, and proposes the use of clustering algorithms for visual layout, with the aim of improve browsing experience. The results suggest that presented approach reduces the semantic density of tag set, and improves the visual consistency of Tag-Cloud layout.", "We describe a framework for automatically selecting a summary set of photos from a large collection of geo-referenced photographs. Such large collections are inherently difficult to browse, and become excessively so as they grow in size, making summaries an important tool in rendering these collections accessible. Our summary algorithm is based on spa-tial patterns in photo sets, as well as textual-topical patterns and user (photographer) identity cues. The algorithm can be expanded to support social, temporal, and other factors. The summary can thus be biased by the content of the query, the user making the query, and the context in which the query is made.A modified version of our summarization algorithm serves as a basis for a new map-based visualization of large collections of geo-referenced photos, called Tag Maps. Tag Maps visualize the data by placing highly representative textual tags on relevant map locations in the viewed region, effectively providing a sense of the important concepts embodied in the collection.An initial evaluation of our implementation on a set of geo-referenced photos shows that our algorithm and visualization perform well, producing summaries and views that are highly rated by users.", "Tag clouds provide an aggregate of tag-usage statistics. They are typically sent as in-line HTML to browsers. However, display mechanisms suited for ordinary text are not ideal for tags, because font sizes may vary widely on a line. As well, the typical layout does not account for relationships that may be known between tags. This paper presents models and algorithms to improve the display of tag clouds that con- sist of in-line HTML, as well as algorithms that use nested tables to achieve a more general 2-dimensional layout in which tag relationships are considered. 
The first algorithms leverage prior work in typesetting and rectangle packing, whereas the second group of algorithms leverage prior work in Electronic Design Automation. Experiments show our algorithms can be efficiently implemented and perform well.", "We describe a social bookmarking service designed for a large enterprise. We discuss design principles addressing online identity, privacy, information discovery (including search and pivot browsing), and service extensibility based on a web-friendly architectural style. In addition we describe the key design features of our implementation. We provide the results of an eight week field trial of this enterprise social bookmarking service, including a description of user activities, based on log file analysis. We share the results of a user survey focused on the benefits of the service. The feedback from the user trial, comprising survey results, log file analysis and informal communications, is quite positive and suggests several promising enhancements to the service. Finally, we discuss potential extension and integration of social bookmarking services with other corporate collaborative applications." ] }
0905.3755
2949900368
We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver.
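As a rough illustration of the kind of arithmetic (counting) reasoning such a decomposition can express, the sketch below performs naive Hall-interval pruning on interval domains for ALL-DIFFERENT. It is not the authors' decomposition and makes no claim about the propagation strength or complexity achieved in the paper.

```python
def bound_consistent_all_different(bounds):
    """Naive Hall-interval pruning for ALL-DIFFERENT over interval domains.

    bounds: list of [lo, hi] pairs, mutated in place; returns False on failure.
    Only illustrates the counting reasoning that simple arithmetic constraints
    can express; it is not the paper's decomposition and is not efficient.
    """
    changed = True
    while changed:
        changed = False
        values = sorted({v for lo, hi in bounds for v in (lo, hi)})
        for l in values:
            for u in values:
                if u < l:
                    continue
                inside = [b for b in bounds if l <= b[0] and b[1] <= u]
                if len(inside) > u - l + 1:     # pigeonhole: too many variables
                    return False
                if len(inside) == u - l + 1:    # Hall interval: [l, u] is used up
                    for b in bounds:
                        if b in inside:
                            continue
                        if l <= b[0] <= u:
                            b[0], changed = u + 1, True
                        if l <= b[1] <= u:
                            b[1], changed = l - 1, True
                        if b[0] > b[1]:
                            return False
    return True

doms = [[1, 2], [1, 2], [1, 5], [2, 4]]
print(bound_consistent_all_different(doms), doms)  # True [[1, 2], [1, 2], [3, 5], [3, 4]]
```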
Many decompositions have been given for a wide range of global constraints. However, decomposition in general tends to hinder propagation. For instance, @cite_14 shows that the decomposition of constraints into binary inequalities hinders propagation. On the other hand, there are global constraints for which decompositions have been given that do not hinder propagation. For example, Beldiceanu et al. identify conditions under which global constraints specified as automata can be decomposed into signature and transition constraints without hindering propagation @cite_18 . As a second example, many global constraints can be decomposed using the Range and Roots constraints, which can themselves often be propagated effectively using simple decompositions @cite_15 @cite_12 @cite_24 . As a third example, decompositions of the Regular and Grammar constraints have been given that do not hinder propagation @cite_30 @cite_32 @cite_1 @cite_0 @cite_25 . As a fourth example, decompositions of the Sequence constraint have been shown to be effective @cite_16 . Finally, the value precedence constraint can be decomposed into ternary constraints without hindering propagation @cite_26 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_26", "@cite_1", "@cite_32", "@cite_24", "@cite_0", "@cite_15", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "2610909219", "", "", "1511616386", "2124803405", "2137811792", "2610525591", "2963402450", "1567850663", "", "1779773102", "1546090803" ], "abstract": [ "", "", "", "We present a comprehensive study of the use of value precedence constraints to break value symmetry. We first give a simple encoding of value precedence into ternary constraints that is both efficient and effective at breaking symmetry. We then extend value precedence to deal with a number of generalizations like wreath value and partial interchangeability. We also show that value precedence is closely related to lexicographical ordering. Finally, we consider the interaction between value precedence and symmetry breaking constraints for variable symmetries.", "A wide range of constraints can be compactly specified using automata or formal languages. In a sequence of recent papers, we have shown that an effective means to reason with such specifications is to decompose them into primitive constraints (Quimper & Walsh 2006; 2007). We can then, for instance, use state of the art SAT solvers and profit from their advanced features like fast unit propagation, clause learning, and conflict-based search heuristics. This approach holds promise for solving combinatorial problems in scheduling, rostering, and configuration, as well as problems in more diverse areas like bioinformatics, software testing and natural language processing. In addition, decomposition may be an effective method to propagate other global constraints.", "A wide range of constraints can be specified using automata or formal languages. The GRAMMAR constraint restricts the values taken by a sequence of variables to be a string from a given context-free language. Based on an AND OR decomposition, we show that this constraint can be converted into clauses in conjunctive normal form without hindering propagation. Using this decomposition, we can propagate the GRAMMAR constraint in O(n3) time. The decomposition also provides an efficient incremental propagator. Down a branch of the search tree of length k, we can enforce GAC k times in the same O(n3) time. On specialized languages, running time can be even better. For example, propagation of the decomposition requires just O(n|δ|) time for regular languages where |δ| is the size of the transition table of the automaton recognizing the regular language. Experiments on a shift scheduling problem with a constraint solver and a state of the art SAT solver show that we can solve problems using this decomposition that defeat existing constraint solvers.", "A wide range of counting and occurrence constraints can be specified with just two global primitives: the RANGE constraint, which computes the range of values used by a sequence of variables, and the ROOTS constraint, which computes the variables mapping onto a set of values. We focus here on the ROOTS constraint. We show that propagating the ROOTS constraint completely is intractable. We therefore propose a decomposition which can be used to propagate the constraint in linear time. Interestingly, for all uses of the ROOTS constraint we have met. this decomposition does not destroy the global nature of the constraint as we still prune all possible values. 
In addition, even when the ROOTS constraint is intractable to propagate completely, we can enforce bound consistency in linear time simply by enforcing bound consistency on the decomposition. Finally, we show that specifying counting and occurrence constraints using ROOTS is effective and efficient in practice on two benchmark problems from CSPLib.", "We study the CARDPATH constraint. This ensures a given constraint holds a number of times down a sequence of variables. We show that SLIDE, a special case of CARDPATH where the slid constraint must hold always, can be used to encode a wide range of sliding sequence constraints including CARDPATH itself. We consider how to propagate SLIDE and provide a complete propagator for CARDPATH. Since propagation is NP-hard in general, we identify special cases where propagation takes polynomial time. Our experiments demonstrate that using SLIDE to encode global constraints can be as efficient and effective as specialised propagators.", "We propose a simple declarative language for specifying a wide range of counting and occurrence constraints. This specification language is executable since it immediately provides a polynomial propagation algorithm. To illustrate the capabilities of this language, we specify a dozen global constraints taken from the literature. We observe one of three outcomes: we achieve generalized arc-consistency; we do not achieve generalized arc-consistency, but achieving generalized arc-consistency is NP-hard; we do not achieve generalized arc-consistency, but specialized propagation algorithms can do so in polynomial time. Experiments demonstrate that this specification language is both efficient and effective in practice.", "", "We introduce the weighted CFG constraint and propose a propagation algorithm that enforces domain consistency in O(n3|G|) time. We show that this algorithm can be decomposed into a set of primitive arithmetic constraints without hindering propagation.", "We recently proposed a simple declarative language for specifying a wide range of counting and occurrence constraints. The language uses just two global primitives: the Range constraint, which computes the range of values used by a set of variables, and the Roots constraint, which computes the variables mapping onto particular values. In order for this specification language to be executable, propagation algorithms for the Range and Roots constraints should be developed. In this paper, we focus on the study of the Range constraint. We propose an efficient algorithm for propagating the Range constraint. We also show that decomposing global counting and occurrence constraints using Range is effective and efficient in practice." ] }
0905.3107
2952674952
It is well-known that, given a probability distribution over @math characters, in the worst case it takes Θ(n log n) bits to store a prefix code with minimum expected codeword length. However, in this paper we first show that, for any @math n^ 1 c n @math c @math 1 $ time.
A technique we will use to obtain our result is the wavelet tree of @cite_3 , and more precisely the multiary variant due to @cite_11 . The latter represents a sequence @math over an alphabet @math of size @math such that the following operations can be carried out in @math time on the RAM model with a computer word of length @math : (1) Given @math , retrieve @math ; (2) given @math and @math , compute @math , the number of occurrences of @math in @math ; (3) given @math and @math , compute @math , the position in @math of the @math -th occurrence of @math . The wavelet tree requires @math bits of space, where @math is the empirical zero-order entropy of @math , defined as @math , where @math is the number of occurrences of @math in @math . Thus @math is a lower bound to the output size of any zero-order compressor applied to @math . It will be useful to write @math .
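For concreteness, here is a naive (non-succinct) illustration of the three operations and of the empirical zero-order entropy; a real wavelet tree supports access, rank and select in the stated time within roughly nH0(S) bits, which this toy code does not attempt. Indexing is 0-based here, whereas the text uses 1-based positions.

```python
import math
from collections import Counter

def h0(s):
    """Empirical zero-order entropy: sum over characters a of
    (n_a / n) * log2(n / n_a), where n_a counts occurrences of a in s."""
    n = len(s)
    return sum((na / n) * math.log2(n / na) for na in Counter(s).values())

def access(s, i):     # S[i]
    return s[i]

def rank(s, a, i):    # number of occurrences of a in S[0..i]
    return s[:i + 1].count(a)

def select(s, a, j):  # position of the j-th occurrence of a (j >= 1)
    seen = 0
    for pos, x in enumerate(s):
        if x == a:
            seen += 1
            if seen == j:
                return pos
    raise ValueError("fewer than j occurrences of a in s")

S = "abracadabra"
print(round(h0(S), 3), access(S, 4), rank(S, "a", 6), select(S, "a", 3))
```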
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2134696992", "2107082304" ], "abstract": [ "We present a novel implementation of compressed suffix arrays exhibiting new tradeoffs between search time and space occupancy for a given text (or sequence) of n symbols over an alphabet σ, where each symbol is encoded by lgvσv bits. We show that compressed suffix arrays use just nH h + σ bits, while retaining full text indexing functionalities, such as searching any pattern sequence of length m in O(m lg vσv + polylog(n)) time. The term H h ≤ lg vσv denotes the hth-order empirical entropy of the text, which means that our index is nearly optimal in space apart from lower-order terms, achieving asymptotically the empirical entropy of the text (with a multiplicative constant 1). If the text is highly compressible so that H n = o(1) and the alphabet size is small, we obtain a text index with o(m) search time that requires only o(n) bits. Further results and tradeoffs are reported in the paper.", "Given a sequence S e s1s2…sn of integers smaller than r e O(polylog(n)), we show how S can be represented using nH0(S) p o(n) bits, so that we can know any sq, as well as answer rank and select queries on S, in constant time. H0(S) is the zero-order empirical entropy of S and nH0(S) provides an information-theoretic lower bound to the bit storage of any sequence S via a fixed encoding of its symbols. This extends previous results on binary sequences, and improves previous results on general sequences where those queries are answered in O(log r) time. For larger r, we can still represent S in nH0(S) p o(n log r) bits and answer queries in O(log r log log n) time. Another contribution of this article is to show how to combine our compressed representation of integer sequences with a compression boosting technique to design compressed full-text indexes that scale well with the size of the input alphabet Σ. Specifically, we design a variant of the FM-index that indexes a string T[1, n] within nHk(T) p o(n) bits of storage, where Hk(T) is the kth-order empirical entropy of T. This space bound holds simultaneously for all k ≤ α logvΣv n, constant 0 Compared to all previous works, our index is the first that removes the alphabet-size dependance from all query times, in particular, counting time is linear in the pattern length. Still, our index uses essentially the same space of the kth-order entropy of the text T, which is the best space obtained in previous work. We can also handle larger alphabets of size vΣv e O(nβ), for any 0" ] }
0905.3348
2952664570
An important aspect of mechanism design in social choice protocols and multiagent systems is to discourage insincere and manipulative behaviour. We examine the computational complexity of false-name manipulation in weighted voting games which are an important class of coalitional voting games. Weighted voting games have received increased interest in the multiagent community due to their compact representation and ability to model coalitional formation scenarios. Bachrach and Elkind in their AAMAS 2008 paper examined divide and conquer false-name manipulation in weighted voting games from the point of view of Shapley-Shubik index. We analyse the corresponding case of the Banzhaf index and check how much the Banzhaf index of a player increases or decreases if it splits up into sub-players. A pseudo-polynomial algorithm to find the optimal split is also provided. Bachrach and Elkind also mentioned manipulation via merging as an open problem. In the paper, we examine the cases where a player annexes other players or merges with them to increase their Banzhaf index or Shapley-Shubik index payoff. We characterize the computational complexity of such manipulations and provide limits to the manipulation. The annexation non-monotonicity paradox is also discovered in the case of the Banzhaf index. The results give insight into coalition formation and manipulation.
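A brute-force computation of the (normalized) Banzhaf index makes the splitting question concrete: compare a player's index in the original game with the summed indices of its parts after a hypothetical split. The game [q=6; 4, 3, 2], the split of the first player into 2 + 2, and the normalization choice are illustrative assumptions; the paper's pseudo-polynomial algorithm avoids this exponential enumeration.

```python
from itertools import combinations

def banzhaf_indices(weights, quota):
    """Normalized Banzhaf indices of the weighted voting game [quota; weights],
    by brute-force enumeration of coalitions (exponential; toy sizes only)."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                w = sum(weights[j] for j in coal)
                if w < quota <= w + weights[i]:   # i is critical for this coalition
                    swings[i] += 1
    total = sum(swings)
    return [s / total if total else 0.0 for s in swings]

print(banzhaf_indices([4, 3, 2], 6))      # original game: [0.6, 0.2, 0.2]
print(banzhaf_indices([2, 2, 3, 2], 6))   # player 0 hypothetically split into 2 + 2
```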
Weighted voting games date back at least to John von Neumann and Oskar Morgenstern, who developed their theory in their monumental book @cite_35 . WVGs and voting power indices have been analyzed extensively in the game theory literature, for instance in @cite_20 @cite_39 . They have been applied to various economic and political bodies such as the EU Council of Ministers and the IMF @cite_26 . Power indices such as the Banzhaf index and the Shapley-Shubik index originated in such settings in order to gauge the decision-making ability of players. These indices have since been utilized in other domains such as networks @cite_16 . Simple games and weighted voting games are known under different names in other literatures and communities; for example, there is considerable work on similar models in threshold logic @cite_17 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_39", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2144846366", "2125150859", "1576108793", "2143039680", "1977911709", "1609370892" ], "abstract": [ "This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published \"Theory of Games and Economic Behavior.\" In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences.", "In general in an organisation whose system of governance involves weighted voting, a member's weight in terms of the number of votes and the formal power it represents differ. Power indices provide a means of analysing this difference. The paper uses new algorithms for computing power indices for large games. Three analyses are carried out: (1) the distribution of Banzhaf voting power among members in 1999; the results show that the United States has considerably more power over ordinary decisions than its weight of 17 but that the use of special supermajorities limits its power; (2) the effect of varying the majority requirement on the power of the IMF to act and the powers of members to prevent and initiate action (Coleman indices); the results show the effect of supermajorities severely limits the power to act and therefore renders the voting system ineffective in democratic terms, also the sovereignty of the United States within the IMF is effectively limited to just the power of veto; (3) the paper proposes the determination of the weights instrumentally by means of an iterative algorithm to give the required power distribution; this would be a useful procedure for determining appropriate changes in weights consequent on changes to individual countries' quotas; this is applied to the 1999 data. Policy recommendations are, first, that the IMF use only simple majority voting, and discontinue using special supermajorities, and, second, allocate voting weight instrumentally using power indices.", "Coalition formation is an important capability for automated negotiation among self-interested agents. In order for coalitions to be stable, a key question that must be answered is how the gains from cooperation are to be distributed. Coalitional game theory provides a number of solution concepts for this. However, recent research has revealed that these traditional solution concepts are vulnerable to various manipulations in open anonymous environments such as the Internet. To address this, previous work has developed a solution concept called the anonymity-proof core, which is robust against such manipulations. That work also developed a method for compactly representing the anonymity-proof core. However, the required computational and representational costs are still huge. In this paper, we develop a new solution concept which we call the anonymity-proof Shapley value. 
We show that the anonymity-proof Shapley value is characterized by certain simple axiomatic conditions, always exists, and is uniquely determined. The computational and representational costs of the anonymity-proof Shapley value are drastically smaller than those of existing anonymity-proof solution concepts.", "We consider computational aspects of a game theoretic approach to network reliability. Consider a network where failure of one node may disrupt communication between two other nodes. We model this network as a simple coalitional game, called the vertex Connectivity Game (CG). In this game, each agent owns a vertex, and controls all the edges going to and from that vertex. A coalition of agents wins if it fully connects a certain subset of vertices in the graph, called the primary vertices. We show that power indices, which express an agent's ability to affect the outcome of the vertex connectivity game, can be used to identify significant possible points of failure in the communication network, and can thus be used to increase network reliability. We show that in general graphs, calculating the Banzhaf power index is #P-complete, but suggest a polynomial algorithm for calculating this index in trees. We also show a polynomial algorithm for computing the core of a CG, which allows a stable division of payments to coalition agents.", "The Banzhaf index of power in a voting situation depends on the number of ways in which each voter can effect a “swing” in the outcome. It is comparable---but not actually equivalent---to the better-known Shapley-Shubik index, which depends on the number of alignments or “orders of support” in which each voter is pivotal. This paper investigates some properties of the Banzhaf index, the main topics being its derivation from axioms and its behavior in weighted-voting models when the number of small voters tends to infinity. These matters have previously been studied from the Shapley-Shubik viewpoint, but the present work reveals some striking differences between the two indices. The paper also attempts to promote better communication and less duplication of mathematical effort by identifying and describing several other theories, formally equivalent to Banzhaf’s, that are found in fields ranging from sociology to electrical engineering. An extensive bibliography is provided.", "" ] }
0905.1995
1522187131
The existence of incentive-compatible computationally-efficient protocols for combinatorial auctions with decent approximation ratios is the paradigmatic problem in computational mechanism design. It is believed that in many cases good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. However, to date, researchers lack the machinery to prove such results. In this paper, we present a new approach that we believe holds great promise for making progress on this important problem. We take the first steps towards the development of new technologies for lower bounding the VC-dimension of k-tuples of disjoint sets. We apply this machinery to prove the first computational-complexity inapproximability results for incentive-compatible mechanisms for combinatorial auctions. These results hold for the important class of VCG-based mechanisms, and are based on the complexity assumption that NP has no polynomial-size circuits.
Combinatorial auctions have been extensively studied in both the economics and the computer science literature @cite_44 @cite_18 @cite_34 . It is known that if the preferences of the bidders are unrestricted then no constant approximation ratios are achievable (in polynomial time) @cite_36 @cite_46 . Hence, much research has been devoted to the exploration of restrictions on bidders' preferences that allow for good approximations; e.g., for submodular, XOS, and subadditive preferences, constant approximation ratios have been obtained @cite_16 @cite_2 @cite_12 @cite_35 @cite_15 @cite_33 . In contrast, the known truthful approximation algorithms for these classes achieve considerably worse approximation ratios @cite_29 @cite_16 @cite_5 . It is believed that this gap may be due to the computational burden imposed by the truthfulness requirement. However, to date, this belief remains unproven. In particular, no lower bounds for truthful mechanisms for combinatorial auctions are known.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_15", "@cite_36", "@cite_29", "@cite_16", "@cite_44", "@cite_2", "@cite_5", "@cite_46", "@cite_34", "@cite_12" ], "mid": [ "2126307323", "2110383956", "1989453388", "", "2157377553", "1580387990", "2126085282", "", "2035337032", "1965161364", "2161272173", "", "1995275762" ], "abstract": [ "Combinatorial allocation problems require allocating items to players in a way that maximizes the total utility. Two such problems received attention recently, and were addressed using the same linear programming (LP) relaxation. In the Maximum Submodular Welfare (SMW) problem, utility functions of players are submodular, and for this case Dobzinski and Schapira [SODA 2006] showed an approximation ratio of 1 - 1 e. In the Generalized Assignment Problem (GAP) utility functions are linear but players also have capacity constraints. GAP admits a (1 - 1 e)- approximation as well, as shown by Fleischer, Goemans, Mirrokni and Sviridenko [SODA 2006]. In both cases, the approximation ratio was in fact shown for a more general version of the problem, for which improving 1 - 1 e is NPhard. In this paper, we show how to improve the 1 - 1 e approximation ratio, both for SMW and for GAP. A common theme in both improvements is the use of a new and optimal Fair Contention Resolution technique. However, each of the improvements involves a different rounding procedure for the above mentioned LP. In addition, we prove APX-hardness results for SMW (such results were known for GAP). An important feature of our hardness results is that they apply even in very restricted settings, e.g. when every player has nonzero utility only for a constant number of items.", "Many auctions involve the sale of a variety of distinct assets. Examples are airport time slots, delivery routes, network routing, and furniture. Because of complementarities or substitution effects between the different assets, bidders have preferences not just for particular items but for sets of items. For this reason, economic efficiency is enhanced if bidders are allowed to bid on bundles or combinations of different assets. This paper surveys the state of knowledge about the design of combinatorial auctions and presents some new insights. Periodic updates of portions of this survey will be posted to this journal's Online Supplements web page at http: joc.pubs.informs.org OnlineSupplements.html.", "In the Submodular Welfare Problem, m items are to be distributed among n players with utility functions wi: 2[m] → R+. The utility functions are assumed to be monotone and submodular. Assuming that player i receives a set of items Si, we wish to maximize the total utility ∑i=1n wi(Si). In this paper, we work in the value oracle model where the only access to the utility functions is through a black box returning wi(S) for a given set S. Submodular Welfare is in fact a special case of the more general problem of submodular maximization subject to a matroid constraint: max f(S): S ∈ I , where f is monotone submodular and I is the collection of independent sets in some matroid. For both problems, a greedy algorithm is known to yield a 1 2-approximation [21, 16]. In special cases where the matroid is uniform (I = S: |S| ≤ k) [20] or the submodular function is of a special type [4, 2], a (1-1 e)-approximation has been achieved and this is optimal for these problems in the value oracle model [22, 6, 15]. 
A (1-1 e)-approximation for the general Submodular Welfare Problem has been known only in a stronger demand oracle model [4], where in fact 1-1 e can be improved [9]. In this paper, we develop a randomized continuous greedy algorithm which achieves a (1-1 e)-approximation for the Submodular Welfare Problem in the value oracle model. We also show that the special case of n equal players is approximation resistant, in the sense that the optimal (1-1 e)-approximation is achieved by a uniformly random solution. Using the pipage rounding technique [1, 2], we obtain a (1-1 e)-approximation for submodular maximization subject to any matroid constraint. The continuous greedy algorithm has a potential of wider applicability, which we demonstrate on the examples of the Generalized Assignment Problem and the AdWords Assignment Problem.", "", "Some important classical mechanisms considered in Microeconomics and Game Theory require the solution of a difficult optimization problem. This is true of mechanisms for combinatorial auctions, which have in recent years assumed practical importance, and in particular of the gold standard for combinatorial auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these mechanisms - in particular, their truth revelation properties - assumes that the optimization problems are solved precisely. In reality, these optimization problems can usually be solved only in an approximate fashion. We investigate the impact on such mechanisms of replacing exact solutions by approximate ones. Specifically, we look at a particular greedy optimization method, which has empirically been shown to perform well. We show that the GVA payment scheme does not provide for a truth revealing mechanism. We introduce another scheme that does guarantee truthfulness for a restricted class of players. We demonstrate the latter property by identifying sufficient conditions for a combinatorial auction to be truth-revealing, conditions which have applicability beyond the specific auction studied here.", "This paper discusses two advancements in the theory of designing truthful randomized mechanisms. Our first contribution is a new framework for developing truthful randomized mechanisms. The framework enables the construction of mechanisms with polynomially small failure probability. This is in contrast to previous mechanisms that fail with constant probability. Another appealing feature of the new framework is that bidding truthfully is a stronglydominant strategy. The power of the framework is demonstrated by an @math -mechanism for combinatorial auctions that succeeds with probability @math . The other major result of this paper is an O(logmloglogm) randomized truthful mechanism for combinatorial auction with subadditivebidders. The best previously-known truthful mechanism for this setting guaranteed an approximation ratio of @math . En route, the new mechanism also provides the best approximation ratio for combinatorial auctions with submodularbidders currently achieved by truthful mechanisms.", "We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement free bidders. The running time of these algorithms is polynomial in the number of items @math and in the number of bidders n, even though the \"input size\" is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√ m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. 
The third algorithm provides an improved 2-approximation for the more restricted case of \"XOS bidders\", a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems.", "", "We explore the allocation problem in combinatorial auctions with submodular bidders. We provide an e e-1 approximation algorithm for this problem. Moreover, our algorithm applies to the more general class of XOS bidders. By presenting a matching unconditional lower bound in the communication model, we prove that the upper bound is tight for the XOS class.Our algorithm improves upon the previously known 2-approximation algorithm. In fact, we also exhibit another algorithm which obtains an approximation ratio better than 2 for submodular bidders, even in the value queries model.Throughout the paper we highlight interesting connections between combinatorial auctions with XOS and submodular bidders and various other combinatorial optimization problems. In particular, we discuss coverage problems and online problems.", "We present a new framework for the design of computationally-efficient and incentive-compatible mechanisms for combinatorial auctions. The mechanisms obtained via this framework are randomized, and obtain incentive compatibility in the universal sense (in contrast to the substantially weaker notion of incentive compatibility in expectation). We demonstrate the usefulness of our techniques by exhibiting two mechanisms for combinatorial auctions with general bidder preferences. The first mechanism obtains an optimal O(m)-approximation to the optimal social welfare for arbitrary bidder valuations. The second mechanism obtains an O(log^2m)-approximation for a class of bidder valuations that contains the important class of submodular bidders. These approximation ratios greatly improve over the best (known) deterministic incentive-compatible mechanisms for these classes.", "We show that any communication finding a value-maximizing allocation in a private-information economy must also discover supporting prices (in general personalized and nonlinear). In particular, to allocate L indivisible items between two agents, a price must be revealed for each of the 2L-1 bundles. We prove that all monotonic prices for an agent must be used, hence exponential communication in L is needed. Furthermore, exponential communication is needed just to ensure a higher share of surplus than that realized by auctioning all items as a bundle, or even a higher expected surplus (for some probability distribution over valuations). When the utilities are submodular, efficiency still requires exponential communication (and fully polynomial approximation is impossible). When the items are identical, arbitrarily good approximation is obtained with exponentially less communication than exact efficiency.", "", "We consider the problem of maximizing welfare when allocating m items to n players with subadditive utility functions. Our main result is a way of rounding any fractional solution to a linear programming relaxation to this problem so as to give a feasible solution of welfare at least half that of the value of the fractional solution. This approximation ratio of 1 2 improves over an Ω(1 log m) ratio of Dobzinski, Nisan and Schapira [STOC 2005]. We also show an approximation ratio of 1 - 1 e when utility functions are fractionally subadditive. 
A result similar to this last result was previously obtained by Dobzinski and Schapira [Soda 2006], but via a different rounding technique that requires the use of a so called \"XOS oracle\".The randomized rounding techniques that we use are oblivious in the sense that they only use the primal solution to the linear program relaxation, but have no access to the actual utility functions of the players. This allows us to suggest new incentive compatible mechanisms for combinatorial auctions, extending previous work of Lavi and Swamy [FOCS 2005]." ] }
0905.1995
1522187131
The existence of incentive-compatible computationally-efficient protocols for combinatorial auctions with decent approximation ratios is the paradigmatic problem in computational mechanism design. It is believed that in many cases good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. However, to date, researchers lack the machinery to prove such results. In this paper, we present a new approach that we believe holds great promise for making progress on this important problem. We take the first steps towards the development of new technologies for lower bounding the VC-dimension of k-tuples of disjoint sets. We apply this machinery to prove the first computational-complexity inapproximability results for incentive-compatible mechanisms for combinatorial auctions. These results hold for the important class of VCG-based mechanisms, and are based on the complexity assumption that NP has no polynomial-size circuits.
Vickrey-Clarke-Groves (VCG) mechanisms @cite_6 @cite_28 @cite_20 , named after their three inventors, are the fundamental technique in mechanism design for inducing truthful behaviour of strategic agents. Nisan and Ronen @cite_23 @cite_39 were the first to consider the computational issues associated with the VCG technique. In particular, @cite_23 defines the notion of VCG-Based mechanisms. VCG-based mechanisms have proven to be useful in designing approximation algorithms for combinatorial auctions @cite_16 @cite_37 . In fact, the best known (deterministic) truthful approximation ratios for combinatorial auctions were obtained via VCG-based mechanisms @cite_16 @cite_37 (with the notable exception of an algorithm in @cite_8 for the case that many duplicates of each item exist). Moreover, Lavi, Mu'alem and Nisan @cite_25 have shown that in certain interesting cases VCG-based mechanisms are essentially the only truthful mechanisms (see also @cite_42 ).
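As a reminder of how the VCG technique works, the sketch below computes a welfare-maximizing allocation by brute force and charges each bidder its Clarke payment (the externality it imposes on the others). The toy single-minded valuations are invented for illustration, and VCG-based mechanisms in the sense of @cite_23 replace the exact maximization by an approximation algorithm, which this sketch does not do.

```python
from itertools import product

def best_allocation(bidders, items):
    """Brute-force welfare-maximizing allocation.  Each item goes to one
    bidder or stays unallocated (-1); bidders[i] is a valuation over
    frozensets of items.  Exponential -- for illustration only."""
    best, best_val = [frozenset()] * len(bidders), 0
    for assign in product(range(-1, len(bidders)), repeat=len(items)):
        bundles = [frozenset(it for it, b in zip(items, assign) if b == i)
                   for i in range(len(bidders))]
        val = sum(v(bundle) for v, bundle in zip(bidders, bundles))
        if val > best_val:
            best, best_val = bundles, val
    return best, best_val

def vcg(bidders, items):
    """VCG outcome: optimal allocation plus Clarke payments, i.e. the welfare
    the others could get without bidder i minus what they get with i present."""
    alloc, _ = best_allocation(bidders, items)
    payments = []
    for i in range(len(bidders)):
        others = bidders[:i] + bidders[i + 1:]
        _, welfare_without_i = best_allocation(others, items)
        welfare_of_others = sum(v(b) for j, (v, b) in enumerate(zip(bidders, alloc))
                                if j != i)
        payments.append(welfare_without_i - welfare_of_others)
    return alloc, payments

# Hypothetical single-minded bidders over two items.
b1 = lambda s: 3 if {"x"} <= s else 0          # wants item x, worth 3
b2 = lambda s: 4 if {"x", "y"} <= s else 0     # wants the pair, worth 4
b3 = lambda s: 2 if {"y"} <= s else 0          # wants item y, worth 2
print(vcg([b1, b2, b3], ["x", "y"]))           # b1 and b3 win; payments [2, 0, 1]
```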
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_28", "@cite_42", "@cite_6", "@cite_39", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2134540007", "2087770658", "", "2026375929", "2015007620", "2012634103", "7806799", "2126085282", "2110882703", "" ], "abstract": [ "This paper analyzes ex post equilibria in the VCG combinatorial auctions. If Σ is a family of bundles of goods, the organizer may restrict the bundles on which the participants submit bids, and the bundles allocated to them, to be in Σ .T heΣ -VCG combinatorial auctions obtained in this way are known to be truth-telling mechanisms. In contrast, this paper deals with non-restricted VCG auctions, in which the buyers choose strategies that involve bidding only on bundles in Σ , and these strategies form an equilibrium. We fully characterize those Σ that induce an equilibrium in every VCG auction, and we refer to the associated equilibrium as a bundling equilibrium. The main motivation for studying all these equilibria, and not just the domination equilibrium, is that they afford a reduction of the communication complexity. We analyze the tradeoff between communication complexity and economic efficiency of bundling equilibrium.", "This paper deals with multi-unit combinatorial auctions where there are n types of goods for sale, and for each good there is some fixed number of units. We focus on the case where each bidder desires a relatively small number of units of each good. In particular, this includes the case where each good has exactly k units, and each bidder desires no more than a single unit of each good. We provide incentive compatible mechanisms for combinatorial auctions for the general case where bidders are not limited to single minded valuations. The mechanisms we give have approximation ratios close to the best possible for both on-line and off-line scenarios. This is the first result where non-VCG mechanisms are derived for non-single minded bidders for a natural model of combinatorial auctions.", "", "We characterize truthful mechanisms in two multi-parameter domains. The first characterization shows that every mechanism for combinatorial auctions with two subadditive bidders that always allocates all items is an affine maximizer. The second result shows that every truthful machine scheduling mechanism for 2 unrelated machines that yields a finite approximation of the minimum makespan, must be task independent. That is, the mechanism must determine the allocation of each job separately. The characterizations improve our understanding of these multi-parameter settings and have new implications regarding the approximability of central problems in algorithmic mechanism design.", "", "We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice. Journal of Economic Literature Classification Numbers: C60, C72, D61, D70, D80.", "", "We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement free bidders. 
The running time of these algorithms is polynomial in the number of items @math and in the number of bidders n, even though the \"input size\" is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√ m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. The third algorithm provides an improved 2-approximation for the more restricted case of \"XOS bidders\", a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems.", "This paper analyzes incentive compatible (truthful) mechanisms over restricted domains of preferences, the leading example being combinatorial auctions. Our work generalizes the characterization of Roberts (1979) who showed that truthful mechanisms over unrestricted domains with at least 3 possible outcomes must be \"affine maximizers\". We show that truthful mechanisms for combinatorial auctions (and related restricted domains) must be \"almost affine maximizers\" if they also satisfy an additional requirement of \"independence of irrelevant alternatives\". This requirement is without loss of generality for unrestricted domains as well as for auctions between two players where all goods must be allocated. This implies unconditional results for these cases, including a new proof of Roberts' theorem. The computational implications of this characterization are severe, as reasonable \"almost affine maximizers\" are shown to be as computationally hard as exact optimization. This implies the near-helplessness of such truthful polynomial-time auctions in all cases where exact optimization is computationally intractable.", "" ] }
0905.1995
1522187131
The existence of incentive-compatible computationally-efficient protocols for combinatorial auctions with decent approximation ratios is the paradigmatic problem in computational mechanism design. It is believed that in many cases good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. However, to date, researchers lack the machinery to prove such results. In this paper, we present a new approach that we believe holds great promise for making progress on this important problem. We take the first steps towards the development of new technologies for lower bounding the VC-dimension of k-tuples of disjoint sets. We apply this machinery to prove the first computational-complexity inapproximability results for incentive-compatible mechanisms for combinatorial auctions. These results hold for the important class of VCG-based mechanisms, and are based on the complexity assumption that NP has no polynomial-size circuits.
Dobzinski and Nisan @cite_14 tackled the problem of proving inapproximability results for VCG-based mechanisms by taking a communication-complexity @cite_17 @cite_40 approach. Hence, in the settings considered in @cite_14 , it is assumed that each bidder has a preference whose description is exponentially large (in the number of items). However, real-life considerations render problematic the assumption that bidders' preferences are exponential in size. Our intractability results deal with bidder preferences that are succinctly described, and therefore relate to computational rather than communication complexity. Thus, our techniques enable us to prove lower bounds even for the important case in which bidders' preferences can be concisely represented.
{ "cite_N": [ "@cite_40", "@cite_14", "@cite_17" ], "mid": [ "", "2118827435", "2002501531" ], "abstract": [ "", "We consider computationally-efficient incentive-compatiblemechanisms that use the VCG payment scheme, and study how well theycan approximate the social welfare in auction settings. We present anovel technique for setting lower bounds on the approximation ratioof this type of mechanisms. Specifically, for combinatorial auctionsamong submodular (and thus also subadditive) bidders we prove an Ω(m1 6) lower bound, which is close to the knownupper bound of O(m1 2), and qualitatively higher than theconstant factor approximation possible from a purely computationalpoint of view.", "Let M e 0, 1, 2, ..., m —1 , N e 0, 1, 2,..., n —1 , and f:M × N → 0, 1 a Boolean-valued function. We will be interested in the following problem and its related questions. Let i e M , j e N be integers known only to two persons P 1 and P 2 , respectively. For P 1 and P 2 to determine cooperatively the value f ( i, j ), they send information to each other alternately, one bit at a time, according to some algorithm. The quantity of interest, which measures the information exchange necessary for computing f , is the minimum number of bits exchanged in any algorithm. For example, if f ( i, j ) e ( i + j ) mod 2. then 1 bit of information (conveying whether i is odd) sent from P 1 to P 2 will enable P 2 to determine f ( i, j ), and this is clearly the best possible. The above problem is a variation of a model of Abelson [1] concerning information transfer in distributive computions." ] }
0905.1995
1522187131
The existence of incentive-compatible computationally-efficient protocols for combinatorial auctions with decent approximation ratios is the paradigmatic problem in computational mechanism design. It is believed that in many cases good approximations for combinatorial auctions may be unattainable due to an inherent clash between truthfulness and computational efficiency. However, to date, researchers lack the machinery to prove such results. In this paper, we present a new approach that we believe holds great promise for making progress on this important problem. We take the first steps towards the development of new technologies for lower bounding the VC-dimension of k-tuples of disjoint sets. We apply this machinery to prove the first computational-complexity inapproximability results for incentive-compatible mechanisms for combinatorial auctions. These results hold for the important class of VCG-based mechanisms, and are based on the complexity assumption that NP has no polynomial-size circuits.
The VC framework has received much attention in past decades (see, e.g., @cite_10 @cite_26 @cite_22 and references therein), and many generalizations of the VC dimension have been proposed and studied (e.g., @cite_30 ). To the best of our knowledge, none of these generalizations captures the case of @math -tuples of disjoint subsets of a universe considered in this paper. In addition, no connection was previously made between the VC dimension and the approximability of combinatorial auctions.
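For readers less familiar with the VC framework, the brute-force sketch below computes the standard VC dimension of a finite set system, i.e., the size of the largest shattered subset. The k-tuples-of-disjoint-sets generalization studied in the paper is not implemented here, and the example family is made up.

```python
from itertools import combinations

def shatters(family, subset):
    """True iff every one of the 2^|subset| trace patterns subset ∩ S
    is realized by some set S in the family."""
    subset = frozenset(subset)
    traces = {frozenset(S) & subset for S in family}
    return len(traces) == 2 ** len(subset)

def vc_dimension(family, universe):
    """Largest d such that some d-element subset of the universe is shattered
    (brute force, exponential in |universe|)."""
    for d in range(len(universe), -1, -1):
        if any(shatters(family, c) for c in combinations(universe, d)):
            return d
    return 0

universe = [0, 1, 2, 3]
family = [set(), {0}, {1}, {0, 1}, {0, 1, 2}]
print(vc_dimension(family, universe))   # {0, 1} is shattered, no 3-set is -> 2
```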
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_10", "@cite_22" ], "mid": [ "2059958442", "", "2017753243", "2097261271" ], "abstract": [ "Answering a question of Erdos, Sauer [4] and independently Perles and Shelah [5] found the maximal cardinality of a collection F of subsets of a set: N of cardinality n such that for every subset M @? N of cardinality m | C @? M: C @e F | < 2^m. Karpovsky and Milman [3] generalised this result. Here we give a short proof of these results and further extensions.", "", "Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Gine´, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a Gine´, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or “agnostic”) framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.", "In this article we introduce a new combinatorial parameter which generalizes the VC dimension and the fat-shattering dimension, and extends beyond the function-class setup. Using this parameter we establish entropy bounds for subsets of the n-dimensional unit cube, and in particular, we present new bounds on the empirical covering numbers and gaussian averages associated with classes of functions in terms of the fat-shattering dimension." ] }
0905.2367
2952853720
Guidelines and consistency rules of UML are used to control the degrees of freedom provided by the language to prevent faults. Guidelines are used in specific domains (e.g., avionics) to recommend the proper use of technologies. Consistency rules are used to deal with inconsistencies in models. However, guidelines and consistency rules use informal restrictions on the uses of languages, which makes checking difficult. In this paper, we consider these problems from a language-theoretic view. We propose the formalism of C-Systems, short for "formal language control systems". A C-System consists of a controlled grammar and a controlling grammar. Guidelines and consistency rules are formalized as controlling grammars that control the uses of UML, i.e. the derivations using the grammar of UML. This approach can be implemented as a parser, which can automatically verify the rules on a UML user model in XMI format. A comparison to related work shows our contribution: a generic top-down and syntax-based approach that checks language level constraints at compile-time.
Most checking tools rely on the specific semantics of UML diagrams and have the flavor of model checking; examples include Egyed's UML Analyzer @cite_27 @cite_5 and OCL (Object Constraint Language) @cite_2 . First, developers design UML diagrams as a model. Then, the consistency rules are specified as OCL or similar expressions. Finally, checking algorithms are executed to detect counterexamples that violate the rules @cite_6 . Note that these techniques do not distinguish between rules at the model level and those concerning language-level features.
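The workflow just described (a model, a declaratively stated rule, and an automatic search for counterexamples) can be mimicked on a toy scale. In the sketch below, the dictionary-encoded model and the single OCL-style invariant are invented for illustration and are far simpler than real UML/XMI models and OCL tooling.

```python
# Toy UML-like model encoded as plain dictionaries (stand-in for XMI input).
model = {
    "classes": {"Order", "Customer"},
    "associations": [
        {"name": "places", "ends": ("Customer", "Order")},
        {"name": "ships",  "ends": ("Order", "Warehouse")},  # Warehouse is undeclared
    ],
}

# An OCL-style invariant, written here directly as a Python predicate:
# "every association end must refer to a declared class".
def violations(m):
    return [(assoc["name"], end)
            for assoc in m["associations"]
            for end in assoc["ends"]
            if end not in m["classes"]]

print(violations(model))   # [('ships', 'Warehouse')] -- a counterexample to the rule
```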
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_6", "@cite_2" ], "mid": [ "2143320215", "2162686404", "2030037781", "" ], "abstract": [ "Large design models contain thousands of model elements. Designers easily get overwhelmed maintaining the consistency of such design models over time. Not only is it hard to detect new inconsistencies while the model changes but it also hard to keep track of known inconsistencies. The UML Analyzer tool identifies inconsistencies instantly with design changes and it keeps track of all inconsistencies over time. It does not require consistency rules with special annotations. Instead, it treats consistency rules as black-box entities and observes their behavior during their evaluation. The UML Analyzer tool is integrated with the UML modeling tool IBM Rational Rose^TM for broad applicability and usability. It is highly scalable and was evaluated on dozens of design models.", "Changes are inevitable during software development and so are their unintentional side effects. The focus of this paper is on UML design models, where unintentional side effects lead to inconsistencies. We demonstrate that a tool can assist the designer in discovering unintentional side effects, locating choices for fixing inconsistencies, and then in changing the design model. Our techniques are \"on-line, \" applied as the designer works, and non-intrusive, without overwhelming the designer. This is a significant improvement over the state-of-the-art. Our tool is fully integrated with the design tool IBM Rational Rosetrade. It was empirically evaluated on 48 case studies.", "The topic of UML model consistency is becoming increasingly important. Having a tool that checks the consistency of UML models is very useful. Using the XMI standard, the consistent models can be transferred from the checker tool to any other UML tool. By means of practical examples, this paper shows that using a framework based on OCL is a valuable approach when checking UML models. The results obtained in the examples highlight some shortcomings in the UML denition and prove that OCL oers the support needed in managing tool peculiarities.", "" ] }
0904.4058
2953067865
In this paper, we question the common practice of assigning security impact ratings to OS updates. Specifically, we present evidence that ranking updates by their perceived security importance, in order to defer applying some updates, exposes systems to significant risk. We argue that OS vendors and security groups should not focus on security updates to the detriment of other updates, but should instead seek update technologies that make it feasible to distribute updates for all disclosed OS bugs in a timely manner.
Security researchers have surveyed known vulnerabilities, computing statistics involving various dates, such as dates of first disclosure and of exploit availability. Rescorla @cite_5 analyzed vulnerability disclosure rates to suggest that popular software contains many more vulnerabilities than have been discovered so far.
{ "cite_N": [ "@cite_5" ], "mid": [ "2114712239" ], "abstract": [ "Despite the large amount of effort that goes toward finding and patching security holes, the available data does not show a clear improvement in software quality as a result. This article aims to measure the effect of vulnerability finding. Any attempt to measure this kind of effect is inherently rough, depending as it does on imperfect data and several simplifying assumptions. Because I'm looking for evidence of usefulness, where possible, I bias such assumptions in favor of a positive result - explicitly calling out those assumptions biased in the opposite direction. Thus, the analysis in this article represents the best-case scenario, consistent with the data and my ability to analyze it, for the vulnerability finding's usefulness" ] }
0904.4708
1790353435
Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The "successfulness" of projects is defined in terms of the classifier confidence with which it predicts that they could be ported in popular OSS projects (such as FreeBSD, Gentoo Portage).
The most popular model for measuring information systems' (IS) success is the one proposed by DeLone and McLean @cite_4 . They introduce six interrelated factors of success: 1) system quality, 2) information quality, 3) use, 4) user satisfaction, 5) individual impact, and 6) organizational impact. Based on this approach, Seddon @cite_7 reexamined the factors that can measure success and concluded that the relevant factors are system quality, information quality, perceived usefulness, user satisfaction and IS use. Building on those approaches, a number of measures that can be used to assess success in FLOSS are presented in @cite_10 . These measures are defined based on the results of a statistical analysis applied to a subset of FLOSS project data; specifically, the empirical study was based on a subset of SourceForge projects. In this paper we propose another such measure, which can be added to the factors of the proposed models of success.
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_7" ], "mid": [ "2101825147", "2057012437", "" ], "abstract": [ "Information systems success is one of the most widely used dependent variables in information systems (IS) research, but research on free libre and open source software (FLOSS) often fails to appropriately conceptualize this important concept. In this article, we reconsider what success means within a FLOSS context. We first review existing models of IS success and success variables used in FLOSS research and assess them for their usefulness, practicality and fit to the FLOSS context. Then, drawing on a theoretical model of group effectiveness in the FLOSS development process, as well as an on-line discussion with developers, we present additional concepts that are central to an appropriate understanding of success for FLOSS. In order to examine the practicality and validity of this conceptual scheme, the second half of our article presents an empirical study that demonstrates operationalizations of the chosen measures and assesses their internal validity. We use data from SourceForge to measure the project's effectiveness in team building, the speed of the project at responding to bug reports and the project's popularity. We conclude by discussing the implications of this study for our proposed extension of IS success in the context of FLOSS development and highlight future directions for research. Copyright © 2006 John Wiley & Sons, Ltd.", "A large number of studies have been conducted during the last decade and a half attempting to identify those factors that contribute to information systems success. However, the dependent variable in these studies-I S success-has been an elusive one to define. Different researchers have addressed different aspects of success, making comparisons difficult and the prospect of building a cumulative tradition for I S research similarly elusive. To organize this diverse research, as well as to present a more integrated view of the concept of I S success, a comprehensive taxonomy is introduced. This taxonomy posits six major dimensions or categories of I S success-SYSTEM QUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT. Using these dimensions, both conceptual and empirical studies are then reviewed a total of 180 articles are cited and organized according to the dimensions of the taxonomy. Finally, the many aspects of I S success are drawn together into a descriptive model and its implications for future I S research are discussed.", "" ] }
0904.3528
1870880458
Finite objects and more specifically finite games are formalized using induction, whereas infinite objects are formalized using coinduction. In this article, after an introduction to the concept of coinduction, we revisit on infinite (discrete) extensive games the basic notions of game theory. Among others, we introduce a definition of Nash equilibrium and a notion of subgame perfect equilibrium for infinite games. We use those concepts to analyze well-known infinite games, like the dollar auction game and the centipede game, and we show that human behaviors that are often considered illogical are perfectly rational, if one admits that human agents reason coinductively.
Extensive games are not used in work on proving properties of automata and protocols @cite_10 @cite_19 . This work has only a loose connection with @cite_34 @cite_1 , which are more interested in the complexity of algorithms, especially those that compute equilibria, than in their correctness, and which do not deal with infinite games.
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_34", "@cite_10" ], "mid": [ "2044667552", "1577115042", "", "1516458431" ], "abstract": [ "How long does it take until economic agents converge to an equilibrium? By studying the complexity of the problem of computing a mixed Nash equilibrium in a game, we provide evidence that there are games in which convergence to such an equilibrium takes prohibitively long. Traditionally, computational problems fall into two classes: those that have a polynomial-time algorithm and those that are NP-hard. However, the concept of NP-hardness cannot be applied to the rare problems where \"every instance has a solution\"---for example, in the case of games Nash's theorem asserts that every game has a mixed equilibrium (now known as the Nash equilibrium, in honor of that result). We show that finding a Nash equilibrium is complete for a class of problems called PPAD, containing several other known hard problems; all problems in PPAD share the same style of proof that every instance has a solution.", "Game-playing is an approach to write security proofs that are easy to verify. In this approach, security definitions and intractable problems are written as programs called games and reductionist security proofs are sequences of game transformations. This bias towards programming languages suggests the implementation of a tool based on compiler techniques (syntactic program transformations) to build security proofs, but it also raises the question of the soundness of such a tool. In this paper, we advocate the formalization of game-playing in a proof assistant as a tool to build security proofs. In a proof assistant, starting from just the formal definition of a probabilistic programming language, all the properties required in game-based security proofs can be proved internally as lemmas whose soundness is ensured by proof theory. Concretely, we show how to formalize the game-playing framework of Bellare and Rogaway in the Coq proof assistant, how to prove formally reusable lemmas such as the fundamental lemma of game-playing, and how to use them to formally prove the PRP PRF Switching Lemma.", "", "Thomas has presented a novel proof of the closure of ω-regular languages under complementation, using weak alternating automata. This note describes a formalization of this proof in the theorem prover Isabelle HOL. As an application we have developed a certified translation procedure for PTL formulas to weak alternating automata inside the theorem prover." ] }
0904.3736
2099610044
Multiferroics, materials where spontaneous long-range magnetic and dipolar orders coexist, represent an attractive class of compounds, which combine rich and fascinating fundamental physics with a technologically appealing potential for applications in the general area of spintronics. Ab initio calculations have significantly contributed to recent progress in this area, by elucidating different mechanisms for multiferroicity and providing essential information on various compounds where these effects are manifestly at play. In particular, here we present examples of density-functional theory investigations for two main classes of materials: (a) multiferroics where ferroelectricity is driven by hybridization or purely structural effects, with BiFeO3 as the prototype material, and (b) multiferroics where ferroelectricity is driven by correlation effects and is strongly linked to electronic degrees of freedom such as spin-, charge-, or orbital-ordering, with rare-earth manganites as prototypes. As for the first class of multiferroics, first principles calculations are shown to provide an accurate qualitative and quantitative description of the physics in BiFeO3, ranging from the prediction of large ferroelectric polarization and weak ferromagnetism, over the effect of epitaxial strain, to the identification of possible scenarios for coupling between ferroelectric and magnetic order. For the second class of multiferroics, ab initio calculations have shown that, in those cases where spin-ordering breaks inversion symmetry (e.g. in antiferromagnetic E-type HoMnO3), the magnetically induced ferroelectric polarization can be as large as a few µC cm−2. The examples presented point the way to several possible avenues for future research: on the technological side, first principles simulations can contribute to a rational materials design, aimed at identifying spintronic materials that exhibit ferromagnetism and ferroelectricity at or above room temperature. On the fundamental side, ab initio approaches can be used to explore new mechanisms for ferroelectricity by exploiting electronic correlations that are at play in transition metal oxides, and by suggesting ways to maximize the strength of these effects as well as the corresponding ordering temperatures.
BiFeO @math (BFO) is one of the most studied (probably the most studied) multiferroic materials. BFO has been known to be multiferroic (or, more precisely, AFM and ferroelectric) since the early 1960s @cite_36 . However, for a long time it was not considered a very promising material for applications, since the electric polarization was believed to be rather small @cite_97 and the AFM order does not lead to a net magnetization @cite_65 @cite_85 .
{ "cite_N": [ "@cite_36", "@cite_97", "@cite_85", "@cite_65" ], "mid": [ "151224569", "2031750418", "1993550580", "2065166753" ], "abstract": [ "", "Single crystals of BiFeO3 of perovskite structure were grown with dimensions of greater than 1 mm from a Bi2O3 flux. Dielectric hysteresis loops were measured on these crystals. While the loops were not fully saturated, they confirm that BiFeO3 is ferroelectric.", "New information on the magnetic ordering of the iron ions in bismuth ferrite BiFeO3 was obtained by a study with a high-resolution time-of-flight neutron diffractometer. The observed splitting of magnetic diffraction maxima could be interpreted in terms of a magnetic cycloidal spiral with a long period of 620+or-20 AA, which is unusual for perovskites.", "The temperature dependence of structural and magnetic order parameters of ferroelectric-antiferromagnetic BiFeO3 with rhombohedral perovskite structure is investigated by means of neutron diffraction on a powder sample. The tilt angle of oxygen octahedra decreases from 12.5 degrees at 4.2K to 11.4 degrees at 878K. Associated with ferroelectricity the cation displacements diminish essentially with increasing temperature. Changes of cation shifts, tilt angle, distortion and strain of octahedra are discussed with respect to phase transitions." ] }
This changed drastically following a publication in Science in 2003 (Ref. @cite_58 ), which has to a great extent triggered the intensive experimental, theoretical, and computational research on BFO during the last 5--6 years. In this study, a large spontaneous electric polarization in combination with a substantial magnetization was observed above room temperature in thin films of BFO grown epitaxially on SrTiO @math substrates. The presence of both magnetism and ferroelectricity above room temperature, together with potential coupling between the two order parameters, makes BFO the prime candidate for device applications based on multiferroic materials.
{ "cite_N": [ "@cite_58" ], "mid": [ "2130156865" ], "abstract": [ "Enhancement of polarization and related properties in heteroepitaxially constrained thin films of the ferroelectromagnet, BiFeO 3 , is reported. Structure analysis indicates that the crystal structure of film is monoclinic in contrast to bulk, which is rhombohedral. The films display a room-temperature spontaneous polarization (50 to 60 microcoulombs per square centimeter) almost an order of magnitude higher than that of the bulk (6.1 microcoulombs per square centimeter). The observed enhancement is corroborated by first-principles calculations and found to originate from a high sensitivity of the polarization to small changes in lattice parameters. The films also exhibit enhanced thickness-dependent magnetism compared with the bulk. These enhanced and combined functional responses in thin film form present an opportunity to create and implement thin film devices that actively couple the magnetic and ferroelectric order parameters." ] }
Whereas the large electric polarization was later confirmed independently, and explained by first principles calculations, the origin of the strong magnetization reported in @cite_58 is still unclear and, to the best of our knowledge, it has never been reproduced in an independent study. It is generally assumed that the magnetization reported in Ref. @cite_58 is related to extrinsic effects such as defects or small amounts of impurity phases.
{ "cite_N": [ "@cite_58" ], "mid": [ "2130156865" ], "abstract": [ "Enhancement of polarization and related properties in heteroepitaxially constrained thin films of the ferroelectromagnet, BiFeO 3 , is reported. Structure analysis indicates that the crystal structure of film is monoclinic in contrast to bulk, which is rhombohedral. The films display a room-temperature spontaneous polarization (50 to 60 microcoulombs per square centimeter) almost an order of magnitude higher than that of the bulk (6.1 microcoulombs per square centimeter). The observed enhancement is corroborated by first-principles calculations and found to originate from a high sensitivity of the polarization to small changes in lattice parameters. The films also exhibit enhanced thickness-dependent magnetism compared with the bulk. These enhanced and combined functional responses in thin film form present an opportunity to create and implement thin film devices that actively couple the magnetic and ferroelectric order parameters." ] }
The large electric polarization, which appeared to be at odds with bulk single crystal measurements from 1970 @cite_97 , was originally assumed to be due to epitaxial strain, which results from the lattice constant mismatch between BFO and the substrate material SrTiO @math . It is known that epitaxial strain can have drastic effects on the properties of thin film ferroelectrics. For example, it can lead to a substantial enhancement of electric polarization and can even induce ferroelectricity at room temperature in otherwise non-ferroelectric SrTiO @math @cite_30 @cite_50 .
{ "cite_N": [ "@cite_30", "@cite_97", "@cite_50" ], "mid": [ "2125713092", "2031750418", "2032018678" ], "abstract": [ "Biaxial compressive strain has been used to markedly enhance the ferroelectric properties of BaTiO 3 thin films. This strain, imposed by coherent epitaxy, can result in a ferroelectric transition temperature nearly 500°C higher and a remanent polarization at least 250 higher than bulk BaTiO 3 single crystals. This work demonstrates a route to a lead-free ferroelectric for nonvolatile memories and electro-optic devices.", "Single crystals of BiFeO3 of perovskite structure were grown with dimensions of greater than 1 mm from a Bi2O3 flux. Dielectric hysteresis loops were measured on these crystals. While the loops were not fully saturated, they confirm that BiFeO3 is ferroelectric.", "Systems with a ferroelectric to paraelectric transition in the vicinity of room temperature are useful for devices. Adjusting the ferroelectric transition temperature (Tc) is traditionally accomplished by chemical substitution—as in BaxSr1-xTiO3, the material widely investigated for microwave devices in which the dielectric constant (er) at GHz frequencies is tuned by applying a quasi-static electric field1,2. Heterogeneity associated with chemical substitution in such films, however, can broaden this phase transition by hundreds of degrees3, which is detrimental to tunability and microwave device performance. An alternative way to adjust Tc in ferroelectric films is strain4,5,6,7,8. Here we show that epitaxial strain from a newly developed substrate can be harnessed to increase Tc by hundreds of degrees and produce room-temperature ferroelectricity in strontium titanate, a material that is not normally ferroelectric at any temperature. This strain-induced enhancement in Tc is the largest ever reported. Spatially resolved images of the local polarization state reveal a uniformity that far exceeds films tailored by chemical substitution. The high er at room temperature in these films (nearly 7,000 at 10 GHz) and its sharp dependence on electric field are promising for device applications1,2." ] }
According to the so-called "modern theory of polarization", the electric polarization of a bulk periodic system is defined via the Berry phase of the corresponding wavefunctions @cite_72 @cite_76 . Since this geometrical phase is only well defined modulo @math , the polarization is only well defined modulo so-called "polarization quanta", given by @math , where @math is the electronic charge, @math a primitive lattice vector ( @math ), @math the unit cell volume, and @math a spin degeneracy factor ( @math for a non-spin-polarized system, @math for a spin-polarized system). If the expression for the polarization is recast as a sum over "Wannier centers" @cite_72 , a translation of one of the occupied Wannier states from one unit cell to the next corresponds to a change in polarization by exactly one "quantum". The multivaluedness thus reflects the arbitrary choice of basis vectors when describing an infinite periodic structure.
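For reference, the relation described in words above can be written out explicitly; this is the standard textbook form of the Berry-phase formalism (with e the electronic charge, R_i a primitive lattice vector, Omega the unit-cell volume, f the spin degeneracy factor, and w_n the occupied Wannier functions), not a result specific to the works cited here:

\Delta \mathbf{P}_i = \frac{f e}{\Omega}\,\mathbf{R}_i
\qquad \text{(one polarization quantum per lattice vector } \mathbf{R}_i\text{)},

\mathbf{P}_{\mathrm{el}} = -\frac{f e}{\Omega}\sum_{n}^{\mathrm{occ}} \bar{\mathbf{r}}_n ,
\qquad \bar{\mathbf{r}}_n = \langle w_n|\,\hat{\mathbf{r}}\,|w_n\rangle ,

so that translating a single occupied Wannier center by R_i changes the electronic polarization by exactly one quantum.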
{ "cite_N": [ "@cite_76", "@cite_72" ], "mid": [ "2151895170", "2074950413" ], "abstract": [ "Macroscopic electric polarization is a fundamental concept in the physics of matter, upon which the phenomenological description of dielectrics is based (Landau and Lifshitz, 1984). Notwithstanding, this concept has long evaded even a precise microscopic definition. A typical incorrect statement — often found in textbooks — is that the macroscopic polarization of a solid is the dipole of a unit cell. It is easy to realize that such a quantity is neither measurable nor model-independent: the dipole of a periodic charge distribution is in fact ill defined (Martin, 1974), except in the extreme Clausius-Mossotti model, in which the total charge is unambiguously decomposed into an assembly of ocalized and neutral charge distributions. One can adopt an alternative viewpoint by considering a macroscopic and finite piece of matter and defining its polarization P as the dipole per unit volume: 1", "We consider the change in polarization P which occurs upon making an adiabatic change in the Kohn-Sham Hamiltonian of the solid. A simple expression for P is derived in terms of the valence-band wave functions of the initial and final Hamiltonians. We show that physically P can be interpreted as a displacement of the center of charge of the Wannier functions. The formulation is successfully applied to compute the piezoelectric tensor of GaAs in a first-principles pseudopotential calculation." ] }
In spite of this multivaluedness of the bare polarization for a specific atomic configuration, differences in polarization are well defined quantities, provided the corresponding configurations can be transformed into each other in a continuous way and the system remains insulating along the entire "transformation path" @cite_76 .
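As a minimal illustration of how such a polarization difference is extracted in practice, the sketch below unwraps bare Berry-phase values computed at a few points along a (hypothetical) transformation path onto a common branch; all numbers, including the quantum, are invented for illustration.

# Branch-matching of Berry-phase polarizations along an adiabatic path.
# p_raw: bare polarizations (one Cartesian component, arbitrary branch) at
# successive path points lambda = 0 ... 1; p_quantum: the polarization quantum.
p_quantum = 1.84                             # hypothetical e*R/Omega, in C/m^2
p_raw = [0.05, 0.35, 0.71, -0.92, -0.55]     # note the jump by roughly one quantum

def unwrap(p_values, quantum):
    """Shift each value by an integer number of quanta so the curve is continuous."""
    smooth = [p_values[0]]
    for p in p_values[1:]:
        n = round((p - smooth[-1]) / quantum)   # integer branch offset
        smooth.append(p - n * quantum)
    return smooth

p_smooth = unwrap(p_raw, p_quantum)
delta_p = p_smooth[-1] - p_smooth[0]            # well-defined polarization difference
print(p_smooth)
print("spontaneous polarization:", delta_p)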
{ "cite_N": [ "@cite_76" ], "mid": [ "2151895170" ], "abstract": [ "Macroscopic electric polarization is a fundamental concept in the physics of matter, upon which the phenomenological description of dielectrics is based (Landau and Lifshitz, 1984). Notwithstanding, this concept has long evaded even a precise microscopic definition. A typical incorrect statement — often found in textbooks — is that the macroscopic polarization of a solid is the dipole of a unit cell. It is easy to realize that such a quantity is neither measurable nor model-independent: the dipole of a periodic charge distribution is in fact ill defined (Martin, 1974), except in the extreme Clausius-Mossotti model, in which the total charge is unambiguously decomposed into an assembly of ocalized and neutral charge distributions. One can adopt an alternative viewpoint by considering a macroscopic and finite piece of matter and defining its polarization P as the dipole per unit volume: 1" ] }
These problems have been overcome in Ref. @cite_18 by using the LSDA+ @math method @cite_11 @cite_34 to calculate the electronic structure of BFO in various configurations along the transformation path from the fully distorted @math structure to the centrosymmetric cubic perovskite ( @math ) structure. Within the LSDA+ @math method the local @math - @math exchange splitting is enhanced by the Hubbard @math and BFO stays insulating even in the undistorted cubic perovskite structure (for @math values @math 2--4 eV @cite_18 ).
{ "cite_N": [ "@cite_18", "@cite_34", "@cite_11" ], "mid": [ "2112232168", "1977700257", "1966156734" ], "abstract": [ "The ground-state structural and electronic properties of ferroelectric BiFeO 3 are calculated using density functional theory within the local spin-density approximation sLSDAd and the LSDA+U method. The crystal structure is computed to be rhombohedral with space group R3c, and the electronic structure is found to be insulating and antiferromagnetic, both in excellent agreement with available experiments. A large ferroelectric polarization of 90‐ 100 m C c m 2 is predicted, consistent with the large atomic displacements in the ferroelectric phase and with recent experimental reports, but differing by an order of magnitude from early experiments. One possible explanation is that the latter may have suffered from large leakage currents. However, both past and contemporary measurements are shown to be consistent with the modern theory of polarization, suggesting that the range of reported polarizations may instead correspond to distinct switching paths in structural space. Modern measurements on well-characterized bulk samples are required to confirm this interpretation.", "The generalization of the Local Density Approximation (LDA) method for the systems with strong Coulomb correlations is presented which gives a correct description of the Mott insulators. The LDA+U method is based on the model hamiltonian approach and allows to take into account the non-sphericity of the Coulomb and exchange interactions. parameters. Orbital-dependent LDA+U potential gives correct orbital polarization and corresponding Jahn-Teller distortion. To calculate the spectra of the strongly correlated systems the impurity Anderson model should be solved with a many-electron trial wave function. All parameters of the many-electron hamiltonian are taken from LDA+U calculations. The method was applied to NiO and has shown good agreement with experimental photoemission spectra and with the oxygen Kα X-ray emission spectrum.", "We propose a form for the exchange-correlation potential in local-density band theory, appropriate for Mott insulators. The idea is to use the constrained-local-density-approximation'' Hubbard parameter U as the quantity relating the single-particle potentials to the magnetic- (and orbital-) order parameters. Our energy functional is that of the local-density approximation plus the mean-field approximation to the remaining part of the U term. We argue that such a method should make sense, if one accepts the Hubbard model and the success of constrained-local-density-approximation parameter calculations. Using this ab initio scheme, we find that all late-3d-transition-met al monoxides, as well as the parent compounds of the high- @math compounds, are large-gap magnetic insulators of the charge-transfer type. Further, the method predicts that @math is a low-spin ferromagnet and NiS a local-moment p-type met al. The present version of the scheme fails for the early-3d-transition-met al monoxides and for the late 3d transition met als." ] }
From these calculations a spontaneous polarization of bulk BFO of @math @math C cm @math has been obtained. This is an order of magnitude larger than what was previously believed to be the case, based on the measurements in Ref. @cite_97 , and even exceeds the polarization of typical prototype ferroelectrics such as BaTiO @math , PbTiO @math , or PbZr @math Ti @math O @math (PZT). Variation of @math within reasonable limits changes the calculated value of the electric polarization by only @math @math C cm @math , i.e. the large value of the polarization is rather independent of the precise value of the Hubbard parameter. This is consistent with the assumption that the transition metal @math states do not play an active role in the ferroelectric instability of BFO. The calculated large spontaneous polarization of bulk BFO is also consistent with the large ionic displacements in the experimentally observed @math structure of BFO (see Fig. c), compared to an appropriate centrosymmetric reference configuration. Recently, the large polarization of @math @math C cm @math along (111) for bulk BFO has also been confirmed experimentally by new measurements on high-quality single crystals @cite_41 .
{ "cite_N": [ "@cite_41", "@cite_97" ], "mid": [ "2030444122", "2031750418" ], "abstract": [ "Electric polarization loops are measured at room temperature on highly pure BiFeO3 single crystals synthesized by a flux growth method. Because the crystals have a high electrical resistivity, the resulting low leakage currents allow the authors to measure a large spontaneous polarization in excess of 100μCcm−2, a value never reported in the bulk. During electric cycling, the slow degradation of the material leads to an evolution of the hysteresis curves eventually preventing full saturation of the crystals.", "Single crystals of BiFeO3 of perovskite structure were grown with dimensions of greater than 1 mm from a Bi2O3 flux. Dielectric hysteresis loops were measured on these crystals. While the loops were not fully saturated, they confirm that BiFeO3 is ferroelectric." ] }
Effects of epitaxial strain can be assessed from first principles by performing bulk calculations for a strained unit cell, where the lattice constant within a certain lattice plane (corresponding to the orientation of the substrate surface) is constrained, whereas the lattice constant in the perpendicular direction as well as all internal structural parameters are allowed to relax. Such calculations have been performed for BFO corresponding to a (111) orientation of the substrate @cite_52 . In this case the @math symmetry of the bulk structure is conserved and the epitaxial constraint is applied in the lattice plane perpendicular to the polarization direction. It was found that the sensitivity of the electric polarization to strain is surprisingly weak in BFO, much weaker than in other well-known ferroelectrics @cite_52 (see Fig. ). A systematic comparison of the strain dependence in various ferroelectrics, including BFO in both the @math and a hypothetical tetragonal phase with @math symmetry, has been performed in Ref. @cite_45 (see Fig. ). It was shown that the effect of epitaxial strain for all investigated systems can be understood in terms of the usual bulk linear response functions and that both strong and weak strain dependence can occur.
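The constrained-cell setup described above can be sketched schematically as follows; for simplicity the sketch uses a (001)-type geometry rather than the (111) case, the lattice constants are placeholders, and relax_out_of_plane merely stands in for the DFT relaxation one would actually perform.

import numpy as np

# Sketch of an epitaxial-strain constraint: the two in-plane lattice vectors are
# clamped to the substrate, while the out-of-plane vector and the internal atomic
# positions would be relaxed by a DFT code (represented here only by a placeholder).
# All numerical values are hypothetical.
a_film = 3.96        # assumed bulk pseudo-cubic lattice constant of the film (Angstrom)
a_substrate = 3.905  # assumed substrate lattice constant (Angstrom)

misfit = (a_substrate - a_film) / a_film             # in-plane (biaxial) strain
cell = np.diag([a_substrate, a_substrate, a_film])   # start from clamped in-plane axes

def relax_out_of_plane(cell_matrix):
    """Placeholder for a relaxation that only changes the third lattice vector;
    here c is simply estimated from an assumed Poisson-ratio-like factor so the
    sketch stays self-contained."""
    poisson_like = 0.3
    relaxed = cell_matrix.copy()
    relaxed[2, 2] = cell_matrix[2, 2] * (1.0 - 2.0 * poisson_like * misfit)
    return relaxed

print(f"misfit strain = {misfit:.3%}")
print(relax_out_of_plane(cell))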
{ "cite_N": [ "@cite_45", "@cite_52" ], "mid": [ "1993552027", "2091794635" ], "abstract": [ "We investigate the variation of the spontaneous ferroelectric polarization with epitaxial strain for BaTiO @math , PbTiO @math , LiNbO @math , and BiFeO @math using first principles calculations. We find that while the strain dependence of the polarization is very strong in the simple perovskite systems BaTiO @math and PbTiO @math it is only weak in LiNbO @math and BiFeO @math . We show that this different behavior can be understood purely in terms of the piezoelectric and elastic constants of the unstrained bulk material, and we discuss several factors that determine the strain behavior of a certain material.", "The dependencies on strain and oxygen vacancies of the ferroelectric polarization and the weak ferromagnetic magnetization in the multiferroic material bismuth ferrite, @math , are investigated using first principles density functional theory calculations. The electric polarization is found to be rather independent of strain, in striking contrast to most conventional perovskite ferroelectrics. It is also not significantly affected by oxygen vacancies, or by the combined presence of strain and oxygen vacancies. The magnetization is also unaffected by strain, however, the incorporation of oxygen vacancies can alter the magnetization slightly, and also leads to the formation of @math . These results are discussed in light of recent experiments on epitaxial films of @math , which reported a strong thickness dependence of both magnetization and polarization." ] }
Finally, it should be noted that Ref. @cite_58 also contains results of first principles calculations for the electric polarization of two structural variants of BFO: the rhombohedral bulk structure with @math space group, and a hypothetical tetragonal structure with @math symmetry, based on the lattice parameters found in the thin film samples. At that time it was assumed that such a tetragonal phase was stabilized in epitaxial thin films and that the difference in polarization observed in thin films compared to bulk BFO was due to a large difference in polarization between the two different structural modifications. However, the DFT results presented in Ref. @cite_58 were not conclusive, since only the bare polarization for the two different structures was reported, and not the spontaneous polarization that is measured in the corresponding "current-voltage" switching experiments.
{ "cite_N": [ "@cite_58" ], "mid": [ "2130156865" ], "abstract": [ "Enhancement of polarization and related properties in heteroepitaxially constrained thin films of the ferroelectromagnet, BiFeO 3 , is reported. Structure analysis indicates that the crystal structure of film is monoclinic in contrast to bulk, which is rhombohedral. The films display a room-temperature spontaneous polarization (50 to 60 microcoulombs per square centimeter) almost an order of magnitude higher than that of the bulk (6.1 microcoulombs per square centimeter). The observed enhancement is corroborated by first-principles calculations and found to originate from a high sensitivity of the polarization to small changes in lattice parameters. The films also exhibit enhanced thickness-dependent magnetism compared with the bulk. These enhanced and combined functional responses in thin film form present an opportunity to create and implement thin film devices that actively couple the magnetic and ferroelectric order parameters." ] }
In addition to these structural studies, DFT calculations have also been used to investigate the magnetic properties of BFO, in particular the possible origin of the significant magnetization reported in Ref. @cite_58 . Bulk BFO is known to exhibit "G-type" AFM ordering @cite_65 , i.e. the magnetic moment of each Fe cation is antiparallel to that of its nearest neighbors. Superimposed on this G-type magnetic order, a long-period cycloidal modulation is observed, where the AFM order parameter @math , defined as the difference between the two sublattice magnetizations @math , rotates within the (110) plane with a wavelength of @math @cite_85 .
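The magnetic structure described above can be made concrete with a small sketch: a G-type sign alternation on a pseudo-cubic Fe sublattice, with the staggered moment slowly rotating as one moves along the cycloid propagation direction. The lattice constant, moment size, propagation direction and rotation-plane axes are illustrative assumptions; only the period of roughly 620 Angstrom is taken from the text.

import numpy as np

# Sketch of G-type antiferromagnetic order with a superimposed long-period cycloid.
a = 3.96            # pseudo-cubic lattice constant (Angstrom, assumed)
moment = 4.0        # Fe moment magnitude (Bohr magnetons, assumed)
wavelength = 620.0  # cycloid period (Angstrom, value quoted in the text)

q_dir = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # assumed propagation, lies in (110) plane
q = 2.0 * np.pi / wavelength * q_dir
e1 = np.array([0.0, 0.0, 1.0])                       # rotation plane spanned by e1 and e2;
e2 = q_dir                                           # q lies in the plane, i.e. a cycloid

def spin(nx, ny, nz):
    """Moment on the Fe site with integer cell indices (nx, ny, nz)."""
    r = a * np.array([nx, ny, nz], dtype=float)
    g_sign = (-1) ** (nx + ny + nz)      # G-type: nearest neighbours antiparallel
    phi = q @ r                          # slow rotation of the staggered moment
    return moment * g_sign * (np.cos(phi) * e1 + np.sin(phi) * e2)

print(spin(0, 0, 0), spin(1, 0, 0))   # nearest neighbours: (almost) antiparallel
print(spin(28, -28, 0))               # roughly a quarter period away: rotated by ~90 degrees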
{ "cite_N": [ "@cite_85", "@cite_58", "@cite_65" ], "mid": [ "1993550580", "2130156865", "2065166753" ], "abstract": [ "New information on the magnetic ordering of the iron ions in bismuth ferrite BiFeO3 was obtained by a study with a high-resolution time-of-flight neutron diffractometer. The observed splitting of magnetic diffraction maxima could be interpreted in terms of a magnetic cycloidal spiral with a long period of 620+or-20 AA, which is unusual for perovskites.", "Enhancement of polarization and related properties in heteroepitaxially constrained thin films of the ferroelectromagnet, BiFeO 3 , is reported. Structure analysis indicates that the crystal structure of film is monoclinic in contrast to bulk, which is rhombohedral. The films display a room-temperature spontaneous polarization (50 to 60 microcoulombs per square centimeter) almost an order of magnitude higher than that of the bulk (6.1 microcoulombs per square centimeter). The observed enhancement is corroborated by first-principles calculations and found to originate from a high sensitivity of the polarization to small changes in lattice parameters. The films also exhibit enhanced thickness-dependent magnetism compared with the bulk. These enhanced and combined functional responses in thin film form present an opportunity to create and implement thin film devices that actively couple the magnetic and ferroelectric order parameters.", "The temperature dependence of structural and magnetic order parameters of ferroelectric-antiferromagnetic BiFeO3 with rhombohedral perovskite structure is investigated by means of neutron diffraction on a powder sample. The tilt angle of oxygen octahedra decreases from 12.5 degrees at 4.2K to 11.4 degrees at 878K. Associated with ferroelectricity the cation displacements diminish essentially with increasing temperature. Changes of cation shifts, tilt angle, distortion and strain of octahedra are discussed with respect to phase transitions." ] }
0904.3736
2099610044
Multiferroics, materials where spontaneous long-range magnetic and dipolar orders coexist, represent an attractive class of compounds, which combine rich and fascinating fundamental physics with a technologically appealing potential for applications in the general area of spintronics. Ab initio calculations have significantly contributed to recent progress in this area, by elucidating different mechanisms for multiferroicity and providing essential information on various compounds where these effects are manifestly at play. In particular, here we present examples of density-functional theory investigations for two main classes of materials: (a) multiferroics where ferroelectricity is driven by hybridization or purely structural effects, with BiFeO3 as the prototype material, and (b) multiferroics where ferroelectricity is driven by correlation effects and is strongly linked to electronic degrees of freedom such as spin-, charge-, or orbital-ordering, with rare-earth manganites as prototypes. As for the first class of multiferroics, first principles calculations are shown to provide an accurate qualitative and quantitative description of the physics in BiFeO3, ranging from the prediction of large ferroelectric polarization and weak ferromagnetism, over the effect of epitaxial strain, to the identification of possible scenarios for coupling between ferroelectric and magnetic order. For the second class of multiferroics, ab initio calculations have shown that, in those cases where spin-ordering breaks inversion symmetry (e.g. in antiferromagnetic E-type HoMnO3), the magnetically induced ferroelectric polarization can be as large as a few µC cm^-2. The examples presented point the way to several possible avenues for future research: on the technological side, first principles simulations can contribute to a rational materials design, aimed at identifying spintronic materials that exhibit ferromagnetism and ferroelectricity at or above room temperature. On the fundamental side, ab initio approaches can be used to explore new mechanisms for ferroelectricity by exploiting electronic correlations that are at play in transition metal oxides, and by suggesting ways to maximize the strength of these effects as well as the corresponding ordering temperatures.
Furthermore, first principles studies addressing the effect of epitaxial strain and the presence of oxygen vacancies did not find a significant increase in magnetization @cite_52 , and it is therefore likely that the large magnetization reported in @cite_58 is due to other defects or small amounts of impurity phases.
{ "cite_N": [ "@cite_58", "@cite_52" ], "mid": [ "2130156865", "2091794635" ], "abstract": [ "Enhancement of polarization and related properties in heteroepitaxially constrained thin films of the ferroelectromagnet, BiFeO 3 , is reported. Structure analysis indicates that the crystal structure of film is monoclinic in contrast to bulk, which is rhombohedral. The films display a room-temperature spontaneous polarization (50 to 60 microcoulombs per square centimeter) almost an order of magnitude higher than that of the bulk (6.1 microcoulombs per square centimeter). The observed enhancement is corroborated by first-principles calculations and found to originate from a high sensitivity of the polarization to small changes in lattice parameters. The films also exhibit enhanced thickness-dependent magnetism compared with the bulk. These enhanced and combined functional responses in thin film form present an opportunity to create and implement thin film devices that actively couple the magnetic and ferroelectric order parameters.", "The dependencies on strain and oxygen vacancies of the ferroelectric polarization and the weak ferromagnetic magnetization in the multiferroic material bismuth ferrite, @math , are investigated using first principles density functional theory calculations. The electric polarization is found to be rather independent of strain, in striking contrast to most conventional perovskite ferroelectrics. It is also not significantly affected by oxygen vacancies, or by the combined presence of strain and oxygen vacancies. The magnetization is also unaffected by strain, however, the incorporation of oxygen vacancies can alter the magnetization slightly, and also leads to the formation of @math . These results are discussed in light of recent experiments on epitaxial films of @math , which reported a strong thickness dependence of both magnetization and polarization." ] }
0904.1615
1537453950
The sequence a_1,...,a_m is a common subsequence in the set of permutations S = {p_1,...,p_k} on [n] if it is a subsequence of p_i(1),...,p_i(n) and p_j(1),...,p_j(n) for some distinct p_i, p_j in S. Recently, Beame and Huynh-Ngoc (2008) showed that when k>=3, every set of k permutations on [n] has a common subsequence of length at least n^{1/3}. We show that, surprisingly, this lower bound is asymptotically optimal for all constant values of k. Specifically, we show that for any k>=3 and n>=k^2 there exists a set of k permutations on [n] in which the longest common subsequence has length at most 32(kn)^{1/3}. The proof of the upper bound is constructive, and uses elementary algebraic techniques.
There has been extensive research on error-correcting codes built over a metric space defined by a deletion distance @cite_12 @cite_9 @cite_6 @cite_7 , and on codes built over the symmetric group @math @cite_1 @cite_5 @cite_10 @cite_8 . As far as we know, however, our result is the first to explicitly provide bounds on the capabilities of error-correcting codes built over @math .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2109386785", "2041807026", "1647671624", "2040651417", "2029448413", "2066218472", "", "1880538223" ], "abstract": [ "This paper gives a brief survey of binary single-deletion-correcting codes. The Varshamov-Tenengolts codes appear to be optimal, but many interesting unsolved problems remain. The connections with shift-register sequences also remain somewhat mysterious.", "Let us denote by R(k, ⩾ λ)[R(k, ⩽ λ)] the maximal number M such that there exist M different permutations of the set 1,…, k such that any two of them have at least λ (at most λ, respectively) common positions. We prove the inequalities R(k, ⩽ λ) ⩽ kR(k − 1, ⩽ λ − 1), R(k, ⩾ λ) ⩾ R(k, ⩽ λ − 1) ⩽ k!, R(k, ⩾ λ) ⩽ kR(k − 1, ⩾ λ − 1). We show: R(k, ⩾ k − 2) = 2, R(k, ⩾ 1) = (k − 1)!, R(pm, ⩾ 2) = (pm − 2)!, R(pm + 1, ⩾ 3) = (pm − 2)!, R(k, ⩽ k − 3) = k!2, R(k, ⩽ 0) = k, R(pm, ⩽ 1) = pm(pm − 1), R(pm + 1, ⩽ 2) = (pm + 1)pm(pm − 1). The exact value of R(k, ⩾ λ) is determined whenever k ⩾ k0(k − λ); we conjecture that R(k, ⩾ λ) = (k − λ)! for k ⩾ k0(λ). Bounds for the general case are given and are used to determine that the minimum of |R(k, ⩾ λ) − R(k, ⩽ λ)| is attained for λ = (k2) + O(klog k).", "", "", "We present simple, polynomial time encodable and decodable codes which are asymptotically good for channels allowing insertions, deletions, and transpositions. As a corollary, they achieve exponential error probability in a stochastic model of insertion-deletion.", "A permutation array (or code) of length n and distance d is a set Γ of permutations from some fixed set of n symbols such that the Hamming distance between each distinct x, y ∈ Γ is at least d. One motivation for coding with permutations is powerline communication. After summarizing known results, it is shown here that certain families of polynomials over finite fields give rise to permutation arrays. Additionally, several new computational constructions are given, often making use of automorphism groups. Finally, a recursive construction for permutation arrays is presented, using and motivating the more general notion of codes with constant weight composition.", "", "An (n,c,l,r) erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of l-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r l packets. We describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n." ] }