Dataset columns: aid — string (length 9 to 15); mid — string (length 7 to 10); abstract — string (length 78 to 2.56k); related_work — string (length 92 to 1.77k); ref_abstract — dict.
0904.2061
2127353230
In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we are mainly concerned with is how much the lack of central coordination harms social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several new equilibrium concepts. For each equilibrium concept, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.
Low-order polynomial-time algorithms with performance guarantees @math and @math are explored by Assmann ( @cite_7 ), and in ( @cite_24 ) and ( @cite_14 ). ( @cite_1 ) give the first APTAS (Asymptotic Polynomial Time Approximation Scheme), and Jansen and Solis-Oba ( @cite_25 ) derive an AFPTAS. There are also several results on average-case performance ( @cite_2 , @cite_1 ). Woeginger and Zhang ( @cite_10 ) also consider the variant with variable-sized bins.
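To give a flavour of the simplest kind of heuristic discussed here, below is a minimal sketch of a greedy bin covering procedure (fill the current bin until its content reaches the demand, then open a new one). It is only an illustration of the problem, not a reimplementation of any of the cited algorithms; the instance and the unit bin demand are made up for the example.

```python
def greedy_bin_cover(sizes, demand=1.0):
    """Greedy heuristic for bin covering: keep adding items to the current
    bin until its total size reaches the demand, then count it as covered
    and open a new bin.  Returns the number of covered bins."""
    covered, current = 0, 0.0
    for s in sorted(sizes, reverse=True):   # consider large items first
        current += s
        if current >= demand:
            covered += 1
            current = 0.0
    return covered

items = [0.9, 0.8, 0.3, 0.3, 0.25, 0.2, 0.15, 0.1]
print(greedy_bin_cover(items))
# Prints 2, while the covering {0.9, 0.1}, {0.8, 0.2}, {0.3, 0.3, 0.25, 0.15}
# shows that 3 covered bins are possible, illustrating why guarantees matter.
```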
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_24", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "1482409676", "2962924833", "1975625792", "2057764247", "2172277188", "2102006745", "2024624524" ], "abstract": [ "We define two simple algorithms for the bin covering problem and give their asymptotic performance.", "", "Bin covering takes as input a list of items with sizes in (0, 1) and places them into bins of unit demand so as to maximize the number of bins whose demand is satisfied. This is in a sense a dual problem to the classical one-dimensional bin packing problem, but has for many years lagged behind the latter in terms of the quality of the best approximation algorithms. We design algorithms for this problem that close the gap, both in terms of worst- and average-case results. We present (1) the first asymptotic approximation scheme for the offline version, (2) algorithms that have bounded worst-case behavior for instances with discrete item sizes and expected behavior that is asymptotically optimal for all discrete “perfect-packing distributions” (ones for which optimal packings have sublinear expected waste), and (3) a learning algorithm that has asymptotically optimal expected behavior for all discrete distributions. The algorithms of (2) and (3) are based on the recently-developed online Sum-of-Squares algorithm for bin packing. We also present experimental analysis comparing the algorithms of (2) and suggesting that one of them, the Sum-of-Squares-with-Threshold algorithm, performs quite well even for discrete distributions that do not have the perfect-packing property.", "The NP-hard problem of packing items from a given set into bins so as to maximize the number of bins used, subject to the constraint that each bin be filled to at least a given threshold, is considered. Approximation algorithms are presented that provide guarantees of 12, 23, and 34 the optimal number, at running time costs of O(n), O(nlogn), and O(nlog2n), respectively, and the average case behavior of these algorithms is explored via empirical tests on randomly generated sets of items.", "In the dual bin packing problem, the objective is to assign items of given size to the largest possible number of bins, subject to the constraint that the total size of the items assigned to any bin is at least equal to 1. We carry out a probabilistic analysis of this problem under the assumption that the items are drawn independently from the uniform distribution on [0, 1] and reveal the connection between this problem and the classical bin packing problem as well as to renewal theory.", "We deal with the variable-sized bin covering problem: Given a list L of items in (0,1] and a finite collection B of feasible bin sizes, the goal is to select a set of bins with sizes in B and to cover them with the items in L such that the total size of the covered bins is maximized. In the on-line version of this problem, the items must be assigned to bins one by one without previewing future items. This note presents a complete solution to the on-line problem: For every collection B of bin sizes, we give an on-line approximation algorithm with a worst-case ratio r(B), and we prove that no on-line algorithm can perform better in the worst case. 
The value r(B) mainly depends on the largest gap between consecutive bin sizes.", "In the bin covering problem there is a group L=(a1,?,an) of items with sizes s?(ai)?(0,1), and the goal is to find a packing of the items into bins to maximize the number of bins that receive items of total size at least 1. This is a dual problem to the classical bin packing problem. In this paper we present the first asymptotic fully polynomial-time approximation scheme for the problem." ] }
0904.2061
2127353230
In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we are mainly concerned with is how much the lack of central coordination harms social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several new equilibrium concepts. For each equilibrium concept, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.
The characteristic function is defined as: @math iff @math ; @math otherwise. An interesting phenomenon in WMGs is that the bargaining powers of the players are usually not proportional to their weights. As a simple example, suppose there are 3 players with weights 3, 4 and 5, respectively, and the quota is 7; then all three players have the same bargaining power! A central topic in WMGs is therefore how to measure the bargaining powers of the players. There are four main recognized measures: the Shapley-Shubik index ( @cite_19 ), the Banzhaf index ( @cite_0 ), the Holler-Packel index ( @cite_6 @cite_20 ), and the Deegan-Packel index ( @cite_12 ).
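The equal-power phenomenon in the 3-player example above is easy to verify by brute force. The sketch below computes the Shapley-Shubik index by enumerating all orderings and counting pivotal positions; the player names and the code are purely illustrative and not taken from the cited works.

```python
from itertools import permutations
from math import factorial

weights = {"A": 3, "B": 4, "C": 5}   # the 3-player example from the text
quota = 7

def wins(coalition):
    return sum(weights[p] for p in coalition) >= quota

def shapley_shubik(player):
    """Fraction of orderings in which `player` turns the losing coalition of
    its predecessors into a winning one (i.e. is pivotal)."""
    pivotal = 0
    for order in permutations(weights):
        seen = []
        for p in order:
            losing_before = not wins(seen)
            seen.append(p)
            if losing_before and wins(seen):
                pivotal += (p == player)
    return pivotal / factorial(len(weights))

print({p: shapley_shubik(p) for p in weights})
# Every player is pivotal in 2 of the 6 orderings, so each index is 1/3,
# even though the weights 3, 4, 5 differ.
```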
{ "cite_N": [ "@cite_6", "@cite_0", "@cite_19", "@cite_12", "@cite_20" ], "mid": [ "1985941874", "29082603", "2002616065", "2076786454", "2037337166" ], "abstract": [ "", "A quick attach apparatus for end loaders or the like such as tractor loaders or the like. The tractor loader includes a pair of booms operatively pivotally secured at one end thereof to the tractor. A hydraulic cylinder is operatively pivotally secured to each of the booms and is positioned over the other end thereof. A hook-up bracket is pivotally secured to the said other end of each boom and the hydraulic cylinder and includes a channel-shaped portion extending forwardly therefrom. Each of the various attachments for the loader such as buckets, forks, blades, etc. have a pair of channel-shaped pockets secured to the rearward end thereof which are adapted to receive a hook-up bracket therein. A locking apparatus is provided on each of the brackets to detachably maintain the hook-up brackets in their respective pockets. The locking apparatus includes means for yieldably maintaining the locking apparatus in an unlocked condition and means for automatically locking the locking apparatus after the hook-up bracket has been properly received within its respective pockets.", "In the following paper we offer a method for the a priori evaluation of the division of power among the various bodies and members of a legislature or committee system. The method is based on a technique of the mathematical theory of games, applied to what are known there as “simple games” and “weighted majority games.” We apply it here to a number of illustrative cases, including the United States Congress, and discuss some of its formal properties. The designing of the size and type of a legislative body is a process that may continue for many years, with frequent revisions and modifications aimed at reflecting changes in the social structure of the country; we may cite the role of the House of Lords in England as an example. The effect of a revision usually cannot be gauged in advance except in the roughest terms; it can easily happen that the mathematical structure of a voting system conceals a bias in power distribution unsuspected and unintended by the authors of the revision. How, for example, is one to predict the degree of protection which a proposed system affords to minority interests? Can a consistent criterion for “fair representation” be found? It is difficult even to describe the net effect of a double representation system such as is found in the U. S. Congress (i.e., by states and by population), without attempting to deduce it a priori . The method of measuring “power” which we present in this paper is intended as a first step in the attack on these problems.", "Measures of (a priori) power play a useful role in assessing the character of interpersonal interaction found in collective decision making bodies. We propose and axiomatically characterize an alternative power index to the familiarShapley Shubik andBanzhaf indices which can be used for such purposes. The index presented is shown to be unique for the class of simplen-person games. By subsequent generalization of the index and its axioms to the class ofn-person games in characteristic function form we obtain an analog to theShapley value.", "We have pointed out the theoretical drawbacks of the traditional indices for measuring a priori voting power inasmuch as they are implied in considering the coalition value a private good. 
This criticism caused us to view the coalition outcome as a public good. From this aspect and additional considerations with respect to power, luck, and decisiveness, we obtained a “story” describing the characteristics of an adequate measure of a priori voting power. These characteristics were found to be fulfilled by an index presented by Holler (1978). Through the above analysis this index has received its theoretical justification. An independent view of this index was then provided by means of an axiomatic characterization. This characterization makes possible abstract comparison of the index with previously established “private good” indices." ] }
0904.2061
2127353230
In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we are mainly concerned with is how much the lack of central coordination harms social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several new equilibrium concepts. For each equilibrium concept, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.
Matsui and Matsui ( @cite_4 @cite_9 ) prove that all the problems of computing the Shapley-Shubik indices, the Banzhaf indices and the Deegan-Packel indices in WMGs are NP-hard, and there are pseudo-polynomial time dynamic programming algorithms for them. Cao and Yang ( @cite_11 ) show that computing the Holler-Packel index is also NP-hard. Deng and Papadimitriou ( @cite_5 ) prove that it is #P-complete to compute the Shapley-Shubik index. Matsui and Matsui ( @cite_9 ) observe that Deng and Papadimitriou's proof can be easily carried over to the problem of computing the Banzhaf index.
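As an illustration of the kind of pseudo-polynomial dynamic programming mentioned above, the sketch below counts the coalitions for which a given player is a swing (the raw Banzhaf count) with a subset-sum style DP over total weight. It is a generic textbook-style sketch, not the algorithm of the cited papers.

```python
def banzhaf_swings(weights, quota, i):
    """Count coalitions S (not containing player i) with
    quota - weights[i] <= w(S) <= quota - 1, i.e. coalitions that player i
    turns from losing into winning.  Runs in O(n * total_weight) time,
    which is pseudo-polynomial in the encoding of the weights."""
    others = [w for j, w in enumerate(weights) if j != i]
    total = sum(others)
    count = [0] * (total + 1)
    count[0] = 1                       # the empty coalition
    for w in others:                   # standard subset-sum counting DP
        for t in range(total, w - 1, -1):
            count[t] += count[t - w]
    lo, hi = max(0, quota - weights[i]), min(total, quota - 1)
    return sum(count[lo:hi + 1]) if lo <= hi else 0

weights, quota = [3, 4, 5], 7
print([banzhaf_swings(weights, quota, i) for i in range(3)])   # [2, 2, 2]
```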
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_4", "@cite_11" ], "mid": [ "2046116913", "2038127015", "2144872768", "2005524240" ], "abstract": [ "We study from a complexity theoretic standpoint the various solution concepts arising in cooperative game theory. We use as a vehicle for this study a game in which the players are nodes of a graph with weights on the edges, and the value of a coalition is determined by the total weight of the edges contained in it. The Shapley value is always easy to compute. The core is easy to characterize when the game is convex, and is intractable (NP-complete) otherwise. Similar results are shown for the kernel, the nucleolus, the e-core, and the bargaining set. As for the von Neumann-Morgenstern solution, we point out that its existence may not even be decidable. Many of these results generalize to the case in which the game is presented by a hypergraph with edges of size k > 2.", "In this paper, we prove that both problems for calculating the Banzhaf power index and the Shapley-Shubik power index for weighted majority games are NP-complete.", "For measuring an individual's voting power of a voting game, some power indices are proposed. In this paper, we discuss the problems for calculating the Shapley-Shubik index, the Banzhaf index and the Deegan-Packel index of weighted majority games.", "In this paper, we introduce a simple coalition formation game in the environment of bidding, which is a special case of the weighted majority game (WMG), and is named the weighted simple-majority game (WSMG). In WSMG, payoff is allocated to the winners proportional to the players’ powers, which can be measured in various ways. We define a new kind of stability: the counteraction-stability (C-stability), where any potential deviating players will confront counteractions of the other players. We show that C-stable coalition structures in WSMG always contains a minimal winning coalition of minimum total power. For the variant where powers are measured directly by their weights, we show that it is NP-hard to find a C-stable coalition structure and design a pseudo-polynomial time algorithm. Sensitivity analysis for this variant, which shows many interesting properties, is also done. We also prove that it is NP-hard to compute the Holler-Packel indices in WSMGs, and hence in WMGs as well." ] }
0904.2061
2127353230
In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we are mainly concerned with is how much the lack of central coordination harms social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several new equilibrium concepts. For each equilibrium concept, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.
Nash stability, i.e. the NE mentioned in the last section, requires that no player can benefit by leaving the coalition to which he belongs and joining another one. In this concept, there is no restriction on the players' migrations. Requiring in addition that a player's migration into a coalition must not harm the members already in that coalition gives the concept of individual stability (IE for short). Further requiring that the migration must not harm the members of the coalition he leaves gives contractual individual stability (CIE for short). It is obvious that NE implies IE, and IE further implies CIE (for more details, please refer to @cite_8 ). Since we will show that @math and @math , the latter two concepts will be omitted in our discussion.
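To make the three stability notions concrete, here is a small generic checker for unilateral deviations in a hedonic setting. The utility function and the admissibility flags are illustrative assumptions, not the formal definitions of the cited work; NE corresponds to no flags, IE to checking the joined coalition, and CIE to checking both coalitions.

```python
def stable(partition, players, utility, check_new=False, check_old=False):
    """Return True if no player has an admissible beneficial unilateral move.
    check_new / check_old restrict which moves are admissible:
      - neither flag:          Nash stability (any move allowed)
      - check_new:             individual stability (members of the joined
                               coalition must not be harmed)
      - check_new + check_old: contractual individual stability (members of
                               the abandoned coalition must not be harmed either)
    utility(p, coalition) is assumed to give p's hedonic payoff."""
    partition = [frozenset(c) for c in partition]
    for p in players:
        home = next(c for c in partition if p in c)
        for target in [c for c in partition if p not in c] + [frozenset()]:
            if utility(p, target | {p}) <= utility(p, home):
                continue                      # the move is not beneficial for p
            if check_new and any(utility(q, target | {p}) < utility(q, target)
                                 for q in target):
                continue                      # someone in the new coalition objects
            if check_old and any(utility(q, home - {p}) < utility(q, home)
                                 for q in home - {p}):
                continue                      # someone left behind objects
            return False                      # admissible beneficial deviation found
    return True

# Toy check: players simply prefer larger coalitions (purely illustrative).
players, u = {"a", "b", "c"}, (lambda p, c: len(c))
part = [{"a", "b"}, {"c"}]
print(stable(part, players, u), stable(part, players, u, check_new=True))
# Both print False: player "c" gains by joining {"a", "b"}, and nobody there objects.
```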
{ "cite_N": [ "@cite_8" ], "mid": [ "2134660637" ], "abstract": [ "We consider the partitioning of a society into coalitions in purely hedonic settings, i.e., where each player's payoff is completely determined by the identity of other members of her coalition. We first discuss how hedonic and nonhedonic settings differ and some sufficient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can benefit from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties. Journal of Economic Literature Classification Numbers: C71, A14, D20." ] }
0904.2061
2127353230
In this paper, we address the selfish bin covering problem, which is greatly related both to the bin covering problem, and to the weighted majority game. What we are mainly concerned with is how much the lack of central coordination harms social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several new equilibrium concepts. For each equilibrium concept, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.
Selfish bin packing (SBP for short), which brings the idea of decentralization into the classic bin packing problem, is closely analogous to SBC. In SBP, there are also @math items, each of which has a size and is controlled by a selfish agent, and sufficiently many bins of identical capacity @math . The difference is that the total size of the items packed into a bin must not exceed @math , and every nonempty bin incurs a cost of 1, which is shared among its members in proportion to their sizes. This model was introduced by Bilò ( @cite_3 ). The exact @math is still unknown; the current best lower and upper bounds are 1.6416 (by Epstein and Kleiman ( @cite_18 ) and Yu and Zhang ( @cite_21 ), independently) and 1.6428 (by Epstein and Kleiman ( @cite_18 )), respectively, leaving only a narrow gap. Epstein and Kleiman also show that @math and @math . Yu and Zhang also show that computing an NE can be done in @math time.
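Because the cost-sharing rule of SBP is fully specified above (each nonempty bin costs 1, split in proportion to sizes), it is easy to simulate best-response dynamics for it. The sketch below does so starting from the all-singletons packing; the instance and the move order are arbitrary choices, and the code only illustrates the model, not the cited algorithms or bounds.

```python
def sbp_best_response(sizes, capacity=1.0):
    """Best-response dynamics for selfish bin packing with proportional cost
    sharing: an item's cost is size / (load of its bin), so a move to bin t
    is beneficial exactly when the item fits and load(t) + size exceeds the
    load of its current bin.  Iterates until no improving move exists."""
    bins = [[i] for i in range(len(sizes))]        # start: one item per bin
    load = lambda b: sum(sizes[i] for i in b)
    improved = True
    while improved:
        improved = False
        for b in bins:
            for i in list(b):
                for target in bins:
                    if target is not b and load(target) + sizes[i] <= capacity \
                            and load(target) + sizes[i] > load(b):
                        b.remove(i)
                        target.append(i)
                        improved = True
                        break
                if improved:
                    break
            if improved:
                break
        bins = [b for b in bins if b]              # drop emptied bins
    return bins

print(sbp_best_response([0.5, 0.4, 0.3, 0.3, 0.2]))   # ends in a pure NE using 2 bins
```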
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_3" ], "mid": [ "", "1584124593", "2121115143" ], "abstract": [ "", "We study a bin packing game in which any item to be packed is handled by a selfish agent. Each agent aims at minimizing his sharing cost with the other items staying in the same bin, where the social cost is the number of bins used. We first show that computing a pure Nash equilibrium can be done in polynomial time. We then prove that the price of anarchy for the game is in between 1.6416 and 1.6575, improving the previous bounds.", "In the non cooperative version of the classical minimum bin packing problem, an item is charged a cost according to the percentage of the used bin space it requires. We study the game induced by the selfish behavior of the items which are interested in being packed in one of the bins so as to minimize their cost. We prove that such a game always converges to a pure Nash equilibrium starting from any initial packing of the items, estimate the number of steps needed to reach one such equilibrium, prove the hardness of computing good equilibria and give an upper and a lower bound for the price of anarchy of the game. Then, we consider a multidimensional extension of the problem in which each item can require to be packed in more than just one bin. Unfortunately, we show that in such a case the induced game may not admit a pure Nash equilibrium even under particular restrictions. The study of these games finds applications in the analysis of the bandwidth cost sharing problem in non cooperative networks." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
Description Logics (DLs) @cite_6 are a family of simple logic languages used for knowledge representation. DLs are used for describing various kinds of knowledge, both of a specific field and of a general nature. The description logic approach uses concepts to represent sets of objects, and roles to describe binary relations between concepts. Objects are the instances occurring in the modelled application field, and thus are also called individuals or instances.
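For readers less familiar with the notation, a minimal, made-up example of how concepts, roles and individuals typically appear in a DL knowledge base (the names are invented for illustration):

```latex
% TBox: the concept Parent is defined from the concept Person and the role hasChild.
Parent \equiv Person \sqcap \exists hasChild.Person
% ABox: assertions about the individuals alice and bob.
Person(alice), \quad hasChild(alice, bob), \quad Person(bob)
```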
{ "cite_N": [ "@cite_6" ], "mid": [ "1555563750" ], "abstract": [ "Description logics are embodied in several knowledge-based systems and are used to develop various real-life applications. Now in paperback, The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers, to those with more practically oriented interests who need a sound and modern understanding of knowledge representation systems based on description logics. As well as general revision throughout the book, this new edition presents a new chapter on ontology languages for the semantic web, an area of great importance for the future development of the web. In sum, the book will serve as a unique resource for the subject, and can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
To make tableau-based reasoning more efficient on large data sets, several techniques have been developed in recent years, see e.g. @cite_16 . These are used by state-of-the-art description logic reasoners, such as RacerPro @cite_22 or Pellet @cite_0 , the two tableau reasoners used in our performance evaluation.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_22" ], "mid": [ "2154829072", "2169606199", "56893218" ], "abstract": [ "In this paper, we present a brief overview of Pellet: a complete OWL-DL reasoner with acceptable to very good performance, extensive middleware, and a number of unique features. Pellet is the first sound and complete OWL-DL reasoner with extensive support for reasoning with individuals (including nominal support and conjunctive query), user-defined datatypes, and debugging support for ontologies. It implements several extensions to OWL-DL including a combination formalism for OWL-DL ontologies, a non-monotonic operator, and preliminary support for OWL Rule hybrid reasoning. Pellet is written in Java and is open source.", "Practical description logic systems play an evergrowing role for knowledge representation and reasoning research even in distributed environments. In particular, the often-discussed semantic web initiative is based on description logics (DLs) and defines important challenges for current system implementations. Recently, several standards for representation languages have been proposed (RDF, OWL). By introducing optimization techniques for inference algorithms we demonstrate that sound and complete query engines for semantic web representation languages can be built for practically significant query classes. The paper introduces and evaluates optimization techniques for the instance retrieval problem w.r.t. the description logic SHIQ(Dn)-, which covers large parts of OWL. The paper discusses practical experiments with the description logic system RACER.", "This paper reports on a pragmatic query language for Racer. The abstract syntax and semantics of this query language is defined. Next, the practical relevance of this query language is shown, applying the query answering algorithms to the problem of consistency maintenance between object-oriented design models." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
@cite_10 discuss how a first-order theorem prover, such as Vampire, can be modified and optimised for reasoning over description logic knowledge bases. This work, however, mostly focuses on TBox reasoning.
{ "cite_N": [ "@cite_10" ], "mid": [ "1537460435" ], "abstract": [ "It is claimed in [45] that first-order theorem provers are not efficient for reasoning with ontologies based on description logics compared to specialised description logic reasoners. However, the development of more expressive ontology languages requires the use of theorem provers able to reason with full first-order logic and even its extensions. So far, theorem provers have extensively been used for running experiments over TPTP containing mainly problems with relatively small axiomatisations. A question arises whether such theorem provers can be used to reason in real time with large axiomatisations used in expressive ontologies such as SUMO. In this paper we answer this question affirmatively by showing that a carefully engineered theorem prover can answer queries to ontologies having over 15,000 first-order axioms with equality. Ontologies used in our experiments are based on the language KIF, whose expressive power goes far beyond the description logic based languages currently used in the Semantic Web." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
The paper @cite_24 describes a resolution-based inference algorithm which is not as sensitive to the increase of the ABox size as the tableau-based methods. The system KAON2 @cite_12 is an implementation of this approach, providing reasoning services over the description logic language @math . We use KAON2 as one of the systems with which we compare the performance of DLog.
{ "cite_N": [ "@cite_24", "@cite_12" ], "mid": [ "2396306223", "2118699275" ], "abstract": [ "We present several algorithms for reasoning with description logics closely related to SHIQ. Firstly, we present an algorithm for deciding satisfiability of SHIQ knowledge bases. Then, to enable representing concrete data such as strings or integers, we devise a general approach for reasoning with concrete domains in the framework of resolution, and apply it to obtain a procedure for deciding SHIQ(D). For unary coding of numbers, this procedure is worst-case optimal, i.e. it runs in exponential time. Motivated by the prospects of reusing optimization techniques from deductive databases, such as magic sets, we devise an algorithm for reducing SHIQ(D) knowledge bases to disjunctive datalog programs. Furthermore, we show that so-called DL-safe rules can be combined with disjunctive programs obtained by our transformation to increase the expressivity of the logic, without affecting decidability. We show that our algorithms can easily be extended to handle answering conjunctive queries over SHIQ(D) knowledge bases. Finally, we extend our algorithms to support metamodeling. Since SHIQ(D) is closely related to OWL-DL, our algorithms provide alternative mechanisms for reasoning in the Semantic Web.", "Color evaluation of a color sample is effected by producing in a simultaneous field of vision a plurality of separate color comparison regions each differing in color slightly from one another and juxtaposed with a multiplicity of images of a single small area of a color sample. The color of the sample images can be compared simultaneously with the colors of the comparison regions and a color attribute of the comparison regions varied until visual correspondence is obtained between the sample color and one of the comparison regions." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
The basic idea of KAON2 is to first transform a @math knowledge base into a skolemized first-order clausal form. However, instead of using direct clausification, first a structural transformation @cite_2 is applied to the @math axioms. This transformation eliminates the nested concept descriptions by introducing new concepts; the resulting set of first-order clauses is denoted by @math . In the next step, basic superposition @cite_18 , a refinement of first-order resolution, is applied to saturate @math . The resulting set of clauses is denoted by @math . Clauses @math are then transformed into a disjunctive datalog program @cite_32 entailing the same set of ground facts as the initial DL knowledge base. This program is executed using a disjunctive datalog engine written specifically for KAON2. In this approach, the saturated clauses may still contain (non-nested) function symbols which are eliminated by introducing a new constant @math , standing for @math , for each individual @math in the ABox. This effectively means that KAON2 has to read the whole of the ABox before attempting to answer any queries.
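As a rough, made-up illustration of the structural transformation step (glossing over polarity details), a nested concept under an existential restriction is replaced by a fresh concept name, which is then defined separately:

```latex
C \sqsubseteq \exists R.(D \sqcap \exists S.E)
  \quad\rightsquigarrow\quad
C \sqsubseteq \exists R.Q, \qquad Q \sqsubseteq D \sqcap \exists S.E
```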
{ "cite_N": [ "@cite_18", "@cite_32", "@cite_2" ], "mid": [ "2082976889", "", "1999997889" ], "abstract": [ "Abstract Deduction methods for first-order constrained clauses with equality are described within an abstract framework: constraint strategies , consisting of an inference system, a constraint inheritance strategy and redundancy criteria for clauses and inferences. We give simple conditions for such a constraint strategy to be complete (refutationally and in the sense of Knuth-Bendix-like completion). This allows to prove in a uniform way the completeness of several instantiations of the framework with concrete strategies. For example, strategies in which equality constraints are inherited are basic : no inferences are needed on subterms introduced by unifiers of previous inferences. Ordering constraints reduce the search space by inheriting the ordering restrictions of previous inferences and increase the expressive power of the logic.", "", "Most resolution theorem provers convert a theorem into clause form before attempting to find a proof. The conventional translation of a first-order formula into clause form often obscures the structure of the formula, and may increase the length of the formula by an exponential amount in the worst case. We present a non-standard clause form translation that preserves more of the structure of the formula than the conventional translation. This new translation also avoids the exponential increase in size which may occur with the standard translation. We show how this idea may be combined with the idea of replacing predicates by their definitions before converting to clause form. We give a method of lock resolution which is appropriate for the non-standard clause form translation, and which has yielded a spectacular reduction in search space and time for one example. These techniques should increase the attractiveness of resolution theorem provers for program verification applications, since the theorems that arise in program verification are often simple but tedious for humans to prove." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
@cite_27 introduces the term Description Logic Programming (DLP), advocating a direct transformation of @math description logic concepts into Horn-clauses. It poses some restrictions on the form of the knowledge base in order to disallow axioms requiring disjunctive reasoning. As an extension, @cite_21 introduces a fragment of the @math language which can be transformed into Horn-clauses. This work, however, still poses restrictions on the use of disjunctions. In @cite_31 and @cite_44 the authors present a semantic search engine that works at web scale and builds on an extension of the DLP idea. Further important work on Description Logic Programming includes @cite_34 and @cite_15 .
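The flavour of the DLP-style translation can be seen on two small, invented axioms; in each case the DL inclusion becomes a single Horn clause with implicitly universally quantified variables (this is only an illustration of the general idea, not the exact fragment of the cited papers):

```latex
\exists hasChild.Doctor \sqsubseteq ProudParent
  \;\rightsquigarrow\; ProudParent(x) \leftarrow hasChild(x, y) \wedge Doctor(y)
Parent \sqsubseteq \forall hasChild.Loved
  \;\rightsquigarrow\; Loved(y) \leftarrow Parent(x) \wedge hasChild(x, y)
```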
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_44", "@cite_27", "@cite_31", "@cite_34" ], "mid": [ "11793264", "1492864441", "", "2952727465", "2141113026", "2071778829" ], "abstract": [ "Integrating description logics (DL) and logic programming (LP) would produce a very powerful and useful formalism. However, DLs and LP are based on quite different principles, so achieving a seamless integration is not trivial. In this paper, we introduce hybrid MKNF knowledge bases that faithfully integrate DLs with LP using the logic of Minimal Knowledge and Negation as Failure (MKNF) [Lifschitz, 1991]. We also give reasoning algorithms and tight data complexity bounds for several interesting fragments of our logic.", "Data complexity of reasoning in description logics (DLs) estimates the performance of reasoning algorithms measured in the size of the ABox only. We show that, even for the very expressive DL SHIQ, satisfiability checking is data complete for NP. For applications with large ABoxes, this can be a more accurate estimate than the usually considered combined complexity, which is EXPTIME-complete. Furthermore, we identify an expressive fragment, Horn-SHIQ, which is data complete for P, thus being very appealing for practical usage.", "", "", "In this paper we discuss the challenges of performing reasoning on large scale RDF datasets from the Web. We discuss issues and practical solutions relating to reasoning over web data using a rule-based approach to forward-chaining; in particular, we identify the problem of ontology hijacking: new ontologies published on the Web re-defining the semantics of existing concepts resident in other ontologies. Our solution introduces consideration of authoritative sources. Our system is designed to scale, comprising of file-scans and selected lightweight on-disk indices. We evaluate our methods on a dataset in the order of a hundred million statements collected from real-world Web sources.", "We are researching the interaction between the rule and the ontology layers of the Semantic Web, by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules, by translating all of them into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes on the fly, at run time. This work addresses several of the concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented techniques called extensionalization, avoiding reanalysis, and code minimization." ] }
0904.0578
2953023217
This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).
Another approach to utilising Logic Programming in DL reasoning was proposed by the research group of the authors of the present paper. Earlier results of this work have been published in several conference papers. The first step of our research resulted in a resolution-based transformation of ABox reasoning problems to Prolog for the DL language @math ( @cite_11 ). As the second step, we examined how ABox reasoning services can be provided with respect to a TBox: we extended our approach to allow ABox inference involving @math TBox axioms of a restricted form @cite_43 . In @cite_8 we presented a system doing almost full @math reasoning, which uses an interpreter based on PTTP techniques (see below).
{ "cite_N": [ "@cite_43", "@cite_8", "@cite_11" ], "mid": [ "1517066375", "", "1502939043" ], "abstract": [ "The goal of this paper is to present how the Prolog Technology Theorem Proving (PTTP) approach can be used for ABox-reasoning. This work presents an inference algorithm over the language ALC, and evaluates its performance highlighting the advantages and drawbacks of this method.", "", "In this paper we present a novel approach for determining the instances of description logic concepts when huge amounts of underlying data are expected. In such cases, traditional description logic theorem proving techniques cannot be used due to performance problems. Our idea is to transform a concept description into a Prolog program which represents a query-plan. This transformation is done without any knowledge of the particular data. Data are accessed dynamically during the normal Prolog execution of the generated program. With this technique only those pieces of data are accessed which are indeed important for answering the query, i.e. we solve the original problem in a database friendly way. We evaluate the performance of our approach and compare it to several description logic reasoners." ] }
0904.0811
1482989754
We study the density of the weights of Generalized Reed--Muller codes. Let @math denote the code of multivariate polynomials over @math in @math variables of total degree at most @math . We consider the case of fixed degree @math , when we let the number of variables @math tend to infinity. We prove that the set of relative weights of codewords is quite sparse: for every @math which is not rational of the form @math , there exists an interval around @math in which no relative weight exists, for any value of @math . This line of research is to the best of our knowledge new, and complements the traditional lines of research, which focus on the weight distribution and the divisibility properties of the weights. Equivalently, we study distributions taking values in a finite field, which can be approximated by distributions coming from constant degree polynomials, where we do not bound the number of variables. We give a complete characterization of all such distributions.
The weight distribution of @math gives, for each weight, the number of codewords of that weight. The case of @math , i.e. of linear functions, is trivial, since all non-constant codewords have the same weight. The case of @math , i.e. of quadratic functions, is also fully understood. A theorem of Dickson @cite_2 gives a canonical characterization of quadratic functions, and in particular gives the possible weights and the weight distribution of quadratic functions. By the MacWilliams identity, this characterizes the weight distribution of their dual codes, which are @math and @math . These are, to the best of our knowledge, the only (non-trivial) orders for which a complete characterization of the weights of Generalized Reed--Muller codes is known. For other orders, a complete characterization is known only for specific values of @math . For example, for cubics the record is the work of Sugita, Kasami and Fujiwara @cite_13 , characterizing the weight distribution for @math .
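For very small parameters the weight distribution can simply be computed by brute force. The sketch below does this for binary Reed--Muller codes RM(r, m), purely to illustrate the object under study; it is not practical beyond tiny r and m.

```python
from itertools import combinations, product
from collections import Counter

def rm_weight_distribution(r, m):
    """Brute-force weight distribution of the binary Reed-Muller code RM(r, m):
    codewords are the evaluation vectors, over all 2^m points of {0,1}^m, of
    multilinear polynomials of degree at most r.  Feasible only for tiny r, m."""
    points = list(product([0, 1], repeat=m))
    monomials = []                      # evaluation vector of each monomial, as a bitmask
    for k in range(r + 1):
        for S in combinations(range(m), k):
            mask = 0
            for idx, pt in enumerate(points):
                if all(pt[i] == 1 for i in S):
                    mask |= 1 << idx
            monomials.append(mask)
    dist = Counter()
    for coeffs in range(1 << len(monomials)):          # all GF(2) coefficient vectors
        cw = 0
        for j, mono in enumerate(monomials):
            if coeffs >> j & 1:
                cw ^= mono              # adding polynomials = XOR of evaluation vectors
        dist[bin(cw).count("1")] += 1
    return dict(sorted(dist.items()))

print(rm_weight_distribution(2, 4))     # 2^11 codewords of length 16, grouped by weight
```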
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "1797136128", "1606480398" ], "abstract": [ "Extractors are functions which are able to “extract” random bits from arbitrary distributions which “contain” sufficient randomness. Explicit constructions of extractors have many applications in complexity theory and combinatorics. This manuscript is a survey of recent developments in extractors and focuses on explicit constructions of extractors following Trevisan’s breakthrough result [Tre99].", "Linear Codes. Nonlinear Codes, Hadamard Matrices, Designs and the Golay Code. An Introduction to BCH Codes and Finite Fields. Finite Fields. Dual Codes and Their Weight Distribution. Codes, Designs and Perfect Codes. Cyclic Codes. Cyclic Codes: Idempotents and Mattson-Solomon Polynomials. BCH Codes. Reed-Solomon and Justesen Codes. MDS Codes. Alternant, Goppa and Other Generalized BCH Codes. Reed-Muller Codes. First-Order Reed-Muller Codes. Second-Order Reed-Muller, Kerdock and Preparata Codes. Quadratic-Residue Codes. Bounds on the Size of a Code. Methods for Combining Codes. Self-dual Codes and Invariant Theory. The Golay Codes. Association Schemes. Appendix A. Tables of the Best Codes Known. Appendix B. Finite Geometries. Bibliography. Index." ] }
0904.0811
1482989754
We study the density of the weights of Generalized Reed--Muller codes. Let @math denote the code of multivariate polynomials over @math in @math variables of total degree at most @math . We consider the case of fixed degree @math , when we let the number of variables @math tend to infinity. We prove that the set of relative weights of codewords is quite sparse: for every @math which is not rational of the form @math , there exists an interval around @math in which no relative weight exists, for any value of @math . This line of research is to the best of our knowledge new, and complements the traditional lines of research, which focus on the weight distribution and the divisibility properties of the weights. Equivalently, we study distributions taking values in a finite field, which can be approximated by distributions coming from constant degree polynomials, where we do not bound the number of variables. We give a complete characterization of all such distributions.
Considering general orders, several characteristics of the weights are known. The minimal weight of non-zero codewords in @math is known, as are the codewords achieving this minimal distance @cite_11 . In the case of Reed--Muller codes, corresponding to @math , Kasami and Tokura @cite_0 give a complete characterization of codewords of weight at most twice the minimal weight of the code, and Azumi, Kasami and Tokura @cite_3 gave a characterization of codewords of weight at most @math times the minimal weight of the code. Recently, Kaufman and the author @cite_10 gave a relatively tight estimate on the number of codewords in Reed--Muller codes that holds for all weights.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_3", "@cite_11" ], "mid": [ "2042993834", "", "2038242066", "2073236804" ], "abstract": [ "The following theorem is proved. Let f(x_1, , x_m) be a binary nonzero polynomial of m variables of degree . H the number of binary m -tuples (a_1, , a_m) with f(a_1, , a_m) = 1 is less than 2^ m- +1 , then f can be reduced by an invertible affme transformation of its variables to one of the following forms. where m + and 3 . This theorem completely characterizes the codewords of the th-order Reed-Muller code whose weights are less than twice the minimum weight and leads to the weight enumerators for those codewords. These weight formulas are extensions of Berlekamp and Sloane's results.", "", "Let P r be the set of all polynomials of degree r in m variables over GF (2). Polynomial ƒ in P r is said to be affine equivalent to polynomial g in P r , if ƒ is transformable to g by an invertible affine transformation of the variables. Any polynomial of weight less than 2 m − r +1 + 2 m − r −1 in P r is shown to have a simple structure. By using this fact, we find out a set of representative polynomials such that any polynomial of weight less than 2 m − r +1 + 2 m − r −1 in P r is affine equivalent to one and only one polynomial of the set. By counting the number of polynomials which are affine equivalent to each representative polynomial in the set, we derive explicit formulas for the enumerators of all weights less than 2.5 d of Reed—Muller codes, where d is the minimum weight.", "The polynomial formulation of generalized ReedMuller codes, first introduced by Kasami, Lin, and Peterson is somewhat formalized and an extensive study is made of the interrelations between the m -variable approach of Kasami, Lin, and Peterson and the one-variable approach of Mattson and Solomon. The automorphism group is studied in great detail, both in the m -variable and in the one-variable language. The number of minimum weight vectors is obtained in the general case. Two ways of restricting generalized ReedMuller codes to subcodes are studied: the nonprimitive and the subfield subcodes. Connections with geometric codes are pointed out and a new series of majority decodable codes is introduced." ] }
0904.1113
2951602439
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
The problem of finding good @math -means clusterings allows for polynomial-time approximation schemes @cite_4 @cite_5 @cite_12 with various dependencies of the running time on @math , @math , @math , and the approximation ratio @math . The running times of these approximation schemes depend exponentially on @math . Recent research on this subject also includes the work by @cite_17 and @cite_1 . However, the most widely used algorithm for @math -means clustering is still the @math -means method due to its simplicity and speed.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "2059651397", "2134089414", "1998905999", "2110105238", "2104837959" ], "abstract": [ "In this paper, we show that for several clustering problems one can extract a small set of points, so that using those core-sets enable us to perform approximate clustering efficiently. The surprising property of those core-sets is that their size is independent of the dimension.Using those, we present a (1+ e)-approximation algorithms for the k-center clustering and k-median clustering problems in Euclidean space. The running time of the new algorithms has linear or near linear dependency on the number of points and the dimension, and exponential dependency on 1 e and k. As such, our results are a substantial improvement over what was previously known.We also present some other clustering results including (1+ e)-approximate 1-cylinder clustering, and k-center clustering with outliers.", "Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be protably modied to make use of this information. In experiments with articial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance.", "For a partition of an n -point set (X ) into k subsets (clusters) S 1 ,S 2 ,. . .,S k , we consider the cost function ( i=1 ^k x S_i |x-c(S_i) |^2 ) , where c(S i ) denotes the center of gravity of S i . For k=2 and for any fixed d and e >0 , we present a deterministic algorithm that finds a 2-clustering with cost no worse than (1+e) -times the minimum cost in time O(n log n); the constant of proportionality depends polynomially on e . For an arbitrary fixed k , we get an O(n log k n) algorithm for a fixed e , again with a polynomial dependence on e .", "We present the first linear time (1 + spl epsiv )-approximation algorithm for the k-means problem for fixed k and spl epsiv . Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.", "In this paper, we present \"k-means+ID3\", a method to cascade k-means clustering and the ID3 decision tree learning methods for classifying anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomaly instances, we build an ID3 decision tree. The decision tree on each cluster refines the decision boundaries by learning the subgroups within the cluster. To obtain a final decision on classification, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains of computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. 
Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive-rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED" ] }
0904.1113
2951602439
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
In an attempt to reconcile theory and practice, Arthur and Vassilvitskii @cite_9 performed the first smoothed analysis of the @math -means method: if the data points are perturbed by Gaussian perturbations of standard deviation @math , then the smoothed number of iterations is polynomial in @math , @math , the diameter of the point set, and @math . However, this bound is still super-polynomial in the number @math of data points. They conjectured that @math -means indeed has polynomial smoothed running time, i.e., that the smoothed number of iterations is bounded by some polynomial in @math and @math .
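For concreteness, here is a minimal sketch of the algorithm whose iterations are being counted: the k-means method itself, with an optional Gaussian perturbation of the input in the spirit of the smoothed-analysis model. The initialisation, data and parameters are arbitrary choices; this is only the object of the analysis, not the analysis.

```python
import numpy as np

def kmeans_iterations(points, k, sigma=0.0, seed=0):
    """Run the k-means method (Lloyd's iterations) until the clustering stops
    changing and return the number of iterations.  If sigma > 0, the input is
    first perturbed by Gaussian noise of standard deviation sigma, mimicking
    the smoothed-analysis input model."""
    rng = np.random.default_rng(seed)
    X = np.asarray(points, dtype=float)
    if sigma > 0:
        X = X + rng.normal(scale=sigma, size=X.shape)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # arbitrary initial centers
    labels, iterations = None, 0
    while True:
        iterations += 1
        # Assignment step: each point is assigned to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return iterations
        labels = new_labels
        # Update step: each center moves to the centroid of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)

data = np.random.default_rng(1).random((200, 2))
print(kmeans_iterations(data, k=5, sigma=0.01))
```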
{ "cite_N": [ "@cite_9" ], "mid": [ "2034380011" ], "abstract": [ "We show a worst-case lower bound and a smoothed upper bound on the number of iterations performed by the Iterative Closest Point (ICP) algorithm. First proposed by Besl and McKay, the algorithm is widely used in computational geometry, where it is known for its simplicity and its observed speed. The theoretical study of ICP was initiated by Ezra, Sharir, and Efrat, who showed that the worst-case running time to align two sets of @math points in @math is between @math and @math . We substantially tighten this gap by improving the lower bound to @math . To help reconcile this bound with the algorithm's observed speed, we also show that the smoothed complexity of ICP is polynomial, independent of the dimensionality of the data. Using similar methods, we improve the best known smoothed upper bound for the popular k-means method to @math , once again independent of the dimension." ] }
0904.1113
2951602439
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
Since then, there has been only partial success in proving the conjecture. Manthey and Röglin improved the smoothed running time bound by devising two bounds @cite_24 : the first is polynomial in @math and @math . The second is @math , where the degree of the polynomial is independent of @math and @math . Additionally, they proved a polynomial bound for the smoothed running time of @math -means on one-dimensional instances.
{ "cite_N": [ "@cite_24" ], "mid": [ "2949173273" ], "abstract": [ "The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii (FOCS 2006) aimed at closing this gap, and they proved a bound of @math on the smoothed running-time of the k-means method, where n is the number of data points and @math is the standard deviation of the Gaussian perturbation. This bound, though better than the worst-case bound, is still much larger than the running-time observed in practice. We improve the smoothed analysis of the k-means method by showing two upper bounds on the expected running-time of k-means. First, we prove that the expected running-time is bounded by a polynomial in @math and @math . Second, we prove an upper bound of @math , where d is the dimension of the data space. The polynomial is independent of k and d, and we obtain a polynomial bound for the expected running-time for @math . Finally, we show that k-means runs in smoothed polynomial time for one-dimensional instances." ] }
0903.4594
2102325130
It is well known that the generalized max-weight matching (GMWM) scheduling policy, and in general throughput-optimal scheduling policies, often require the solution of a complex optimization problem, making their implementation prohibitively difficult in practice. This has motivated many researchers to develop distributed sub-optimal algorithms that approximate the GMWM policy. One major assumption commonly shared in this context is that the time required to find an appropriate schedule vector is negligible compared to the length of a timeslot. This assumption may not be accurate as the time to find schedule vectors usually increases polynomially with the network size. On the other hand, we intuitively expect that for many sub-optimal algorithms, the schedule vector found becomes a better estimate of the one returned by the GMWM policy as more time is given to the algorithm. We thus, in this paper, consider the problem of scheduling from a new perspective through which we carefully incorporate channel variations and time-efficiency of sub-optimal algorithms into the scheduler design. Specifically, we propose a dynamic control policy (DCP) that works on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require the knowledge of the structure of the given sub-optimal algorithm, and with low-overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the stability region induced by DCP, and show that our characterization can be tight. We also show that the stability region of DCP is at least as large as the one for any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.
Previous work on throughput-optimal scheduling includes the studies in @cite_27 @cite_21 @cite_26 @cite_15 . In particular, in @cite_27 , Tassiulas and Ephremides characterized the throughput capacity region for multi-hop wireless networks and developed GMWM scheduling as a throughput-optimal scheduling policy. This result has been further extended to general network models with ergodic channel and arrival processes @cite_26 . Owing to its applicability to general multi-hop networks, GMWM scheduling has been employed, either directly or in a modified form, as a key component in many different setups and cross-layer designs. Examples include control of cooperative relay networks @cite_15 , rate control @cite_2 , energy efficiency @cite_12 @cite_4 , and congestion control @cite_0 @cite_24 . This scheduling policy has also inspired pricing strategies that maximize social welfare @cite_13 and fair resource allocation @cite_0 .
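As a rough illustration of the GMWM idea referred to above (our own simplified sketch, not the exact formulation of the cited works), the scheduler below picks, in each timeslot, the feasible schedule that maximizes the sum of queue backlog times offered rate; the interference model and the numerical values are hypothetical placeholders.

```python
def max_weight_schedule(queues, rates, feasible_schedules):
    """Pick the feasible set of links maximizing the sum of backlog * rate.

    queues[l]           : current backlog of link l
    rates[l]            : current transmission rate of link l (channel state)
    feasible_schedules  : iterable of sets of links that may be active together
    """
    best, best_weight = frozenset(), 0.0
    for schedule in feasible_schedules:
        weight = sum(queues[l] * rates[l] for l in schedule)
        if weight > best_weight:
            best, best_weight = schedule, weight
    return best

# Toy instance: 4 links on a path under node-exclusive (1-hop) interference,
# so the feasible schedules are exactly the matchings of the path.
queues = [5, 2, 7, 1]
rates = [1.0, 1.0, 0.5, 2.0]
feasible = [frozenset(s) for s in ([0, 2], [0, 3], [1, 3], [0], [1], [2], [3])]
print(max_weight_schedule(queues, rates, feasible))      # -> frozenset({0, 2})
```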
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_21", "@cite_0", "@cite_24", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2070317877", "", "1520608217", "2173655337", "2170178059", "2105177639", "2120344179", "2040496561", "2015755143", "2112577169" ], "abstract": [ "We consider dynamic routing and power allocation for a wireless network with time-varying channels. The network consists of power constrained nodes that transmit over wireless links with adaptive transmission rates. Packets randomly enter the system at each node and wait in output queues to be transmitted through the network to their destinations. We establish the capacity region of all rate matrices ( spl lambda sub ij ) that the system can stably support-where spl lambda sub ij represents the rate of traffic originating at node i and destined for node j. A joint routing and power allocation policy is developed that stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region. Such performance holds for general arrival and channel state processes, even if these processes are unknown to the network controller. We then apply this control algorithm to an ad hoc wireless network, where channel variations are due to user mobility. Centralized and decentralized implementations are compared, and the stability region of the decentralized algorithm is shown to contain that of the mobile relay strategy developed by Grossglauser and Tse (2002).", "", "", "We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel conditions may be time-varying and different for different receivers. It is well-known that appropriately chosen queue-length based policies are throughput-optimal while other policies based on the estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.", "This paper considers jointly optimal design of crosslayer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling that interact through congestion price. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with timevarying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at link layer. We then generalize the aforementioned results to a general model of queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. 
We show that for a general convex optimization problem where a subset of variables lie in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of “layering as optimization decomposition” for time-varying channel models.", "The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >", "We consider optimal control for general networks with both wireless and wireline components and time varying channels. A dynamic strategy is developed to support all traffic whenever possible, and to make optimally fair decisions about which data to serve when inputs exceed network capacity. The strategy is decoupled into separate algorithms for flow control, routing, and resource allocation, and allows each user to make decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events. The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network.", "In cooperative relaying, multiple nodes cooperate to forward a packet within a network. To date, such schemes have been primarily investigated at the physical layer with the focus on communication of a single end-to-end flow. This paper considers cooperative relay networks with multiple stochastically varying flows, which may be queued within the network. Throughput optimal network control policies are studied that take into account queue dynamics to jointly optimize routing, scheduling and resource allocation. To this end, a generalization of the maximum differential backlog algorithm is given, which takes into account the cooperative gains in the network. Several structural characteristics of this policy are discussed for the special case of parallel relay networks.", "We consider an ad-hoc wireless network operating within a free market economic model. Users send data over a choice of paths, and scheduling and routing decisions are updated dynamically based on time varying channel conditions, user mobility, and current network prices charged by intermediate nodes. Each node sets its own price for relaying services, with the goal of earning revenue that exceeds its time average reception and transmission expenses. 
We first develop a greedy pricing strategy that maximizes social welfare while ensuring all participants make non-negative profit. We then construct a (non-greedy) policy that balances profits more evenly by optimizing a profit fairness metric. Both algorithms operate in a distributed manner and do not require knowledge of traffic rates or channel statistics. This work demonstrates that individuals can benefit from carrying wireless devices even if they are not interested in their own personal communication.", "We consider the fundamental delay tradeoffs for minimizing energy expenditure in a multiuser wireless downlink with randomly varying channels. First, we extend the Berry-Gallager bound to a multiuser context, demonstrating that any algorithm that yields average power within O(1 V) of the minimum power required for network stability must also have an average queueing delay greater than or equal to Omega(radicV). We then develop a class of algorithms, parameterized by V, that come within a logarithmic factor of achieving this fundamental tradeoff. The algorithms overcome an exponential state-space explosion, and can be implemented in real time without a priori knowledge of traffic rates or channel statistics. Further, we discover a ldquosuperfastrdquo scheduling mode that beats the Berry-Gallager bound in the exceptional case when power functions are piecewise linear." ] }
0903.4594
2102325130
It is well known that the generalized max-weight matching (GMWM) scheduling policy, and in general throughput-optimal scheduling policies, often require the solution of a complex optimization problem, making their implementation prohibitively difficult in practice. This has motivated many researchers to develop distributed sub-optimal algorithms that approximate the GMWM policy. One major assumption commonly shared in this context is that the time required to find an appropriate schedule vector is negligible compared to the length of a timeslot. This assumption may not be accurate as the time to find schedule vectors usually increases polynomially with the network size. On the other hand, we intuitively expect that for many sub-optimal algorithms, the schedule vector found becomes a better estimate of the one returned by the GMWM policy as more time is given to the algorithm. We thus, in this paper, consider the problem of scheduling from a new perspective through which we carefully incorporate channel variations and time-efficiency of sub-optimal algorithms into the scheduler design. Specifically, we propose a dynamic control policy (DCP) that works on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require the knowledge of the structure of the given sub-optimal algorithm, and with low-overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the stability region induced by DCP, and show that our characterization can be tight. We also show that the stability region of DCP is at least as large as the one for any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.
The GMWM scheduling policy, despite its optimality, requires in every timeslot the solution of the GMWM problem, which can be, in general, NP-hard and non-approximable @cite_6 . Thus, many studies have focused on developing sub-optimal constant-factor approximations to the GMWM scheduling. One interesting study addressing the complexity issue is the work in @cite_31 , where sub-optimal algorithms are modeled as randomized algorithms, and it is shown that throughput-optimality can be achieved with linear complexity. In a more recent work @cite_1 , the authors propose distributed schemes to implement a randomized policy similar to the one in @cite_31 that can stabilize the entire capacity region. These results, however, assume non-time-varying channels. Other recent studies @cite_19 @cite_23 generalize the approach in @cite_31 to time-varying networks and prove its throughput-optimality. This optimality, as expected, comes at the price of requiring an excessively large amount of another valuable network resource, in this case memory storage. Specifically, the memory requirement in @cite_19 @cite_23 increases exponentially with the number of users, making the generalized approach hardly amenable to practical implementation in large networks.
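The pick-and-compare principle behind the randomized linear-complexity policies mentioned above can be sketched as follows; this is a simplification under our own assumptions, omitting the probabilistic requirements on the candidate generator and the distributed implementation of the cited works. In each slot an arbitrary, cheap randomized routine proposes a schedule, which replaces the previously used one only if it has a larger queue-weighted rate.

```python
import random

def weight(schedule, queues, rates):
    return sum(queues[l] * rates[l] for l in schedule)

def pick_and_compare_step(current, queues, rates, random_candidate):
    """One slot: keep the previous schedule unless a random candidate is better."""
    candidate = random_candidate()
    if weight(candidate, queues, rates) > weight(current, queues, rates):
        return candidate
    return current

# Hypothetical toy instance: feasible schedules, backlogs and rates are made up.
feasible = [frozenset(s) for s in ([0, 2], [0, 3], [1, 3], [0], [1], [2], [3])]
queues, rates = [5, 2, 7, 1], [1.0, 1.0, 0.5, 2.0]

schedule = frozenset()
for _ in range(20):                      # a few slots with static queues/rates
    schedule = pick_and_compare_step(
        schedule, queues, rates, lambda: random.choice(feasible))
print(schedule)
```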
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_19", "@cite_23", "@cite_31" ], "mid": [ "2152652256", "2059739072", "2096752169", "2055520496", "1607044975" ], "abstract": [ "A major challenge in the design of wireless networks is the need for distributed scheduling algorithms that will efficiently share the common spectrum. Recently, a few distributed algorithms for networks in which a node can converse with at most a single neighbor at a time have been presented. These algorithms guarantee 50 of the maximum possible throughput. We present the first distributed scheduling framework that guarantees maximum throughput. It is based on a combination of a distributed matching algorithm and an algorithm that compares and merges successive matching solutions. The comparison can be done by a deterministic algorithm or by randomized gossip algorithms. In the latter case, the comparison may be inaccurate. Yet, we show that if the matching and gossip algorithms satisfy simple conditions related to their performance and to the inaccuracy of the comparison (respectively), the framework attains the desired throughput.It is shown that the complexities of our algorithms, that achieve nearly 100 throughput, are comparable to those of the algorithms that achieve 50 throughput. Finally, we discuss extensions to general interference models. Even for such models, the framework provides a simple distributed throughput optimal algorithm.", "We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K -hop interference models. We define a K-hop interference model as one for which no two links within K hops can successfully transmit at the same time (Note that IEEE 802.11 DCF corresponds to a 2-hop interference model.) .For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K=1, the resulting problem is the classical Maximum Weighted Matching problem, that can be solved in polynomial time. However, we show that for K>1,the resulting problems are NP-Hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, that can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial time approximation schemes. We also show that a simple greedy matching algorithm provides a constant factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near optimal throughput in case of a wide range of wireless networks.", "We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are made on the arrival process statistics other than the assumption that their mean values lie within the capacity region and that they satisfy a version of the law of large numbers. 
We prove that, for any mean arrival rate that lies in the capacity region, the queues will be stable under our policy. Moreover, we show that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of our policy.", "We consider a class of queueing networks referred to as \"generalized constrained queueing networks\" which form the basis of several different communication networks and information systems. These networks consist of a collection of queues such that only certain sets of queues can be concurrently served. Whenever a queue is served, the system receives a certain reward. Different rewards are obtained for serving different queues, and furthermore, the reward obtained for serving a queue depends on the set of concurrently served queues. We demonstrate that the dependence of the rewards on the schedules alter fundamental relations between performance metrics like throughput and stability. Specifically, maximizing the throughput is no longer equivalent to maximizing the stability region; we therefore need to maximize one subject to certain constraints on the other. Since stability is critical for bounding packet delays and buffer overflow, we focus on maximizing the throughput subject to stabilizing the system. We design provably optimal scheduling strategies that attain this goal by scheduling the queues for service based on the queue lengths and the rewards provided by different selections. The proposed scheduling strategies are however computationally complex. We subsequently develop techniques to reduce the complexity and yet attain the same throughput and stability region. We demonstrate that our framework is general enough to accommodate random rewards and random scheduling constraints.", "A resource allocation model that has within its scope a number of computer and communication network architectures was introduced by Tassiulas and Ephremides (1992) and scheduling methods that achieve maximum throughput were proposed. Those methods require the solution of a complex optimization problem at each packet transmission time and as a result they are not amenable to direct implementations. We propose a class of maximum throughput scheduling policies for the model introduced by Tassiulas and Ephremides that have linear complexity and can lead to practical implementations. They rely on a randomized, iterative algorithm for the solution of the optimization problem arising in the scheduling, in combination with an incremental updating rule. The proposed policy is of maximum throughput under some fairly general conditions on the randomized algorithm." ] }
0903.4594
2102325130
It is well known that the generalized max-weight matching (GMWM) scheduling policy, and in general throughput-optimal scheduling policies, often require the solution of a complex optimization problem, making their implementation prohibitively difficult in practice. This has motivated many researchers to develop distributed sub-optimal algorithms that approximate the GMWM policy. One major assumption commonly shared in this context is that the time required to find an appropriate schedule vector is negligible compared to the length of a timeslot. This assumption may not be accurate as the time to find schedule vectors usually increases polynomially with the network size. On the other hand, we intuitively expect that for many sub-optimal algorithms, the schedule vector found becomes a better estimate of the one returned by the GMWM policy as more time is given to the algorithm. We thus, in this paper, consider the problem of scheduling from a new perspective through which we carefully incorporate channel variations and time-efficiency of sub-optimal algorithms into the scheduler design. Specifically, we propose a dynamic control policy (DCP) that works on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require the knowledge of the structure of the given sub-optimal algorithm, and with low-overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the stability region induced by DCP, and show that our characterization can be tight. We also show that the stability region of DCP is at least as large as the one for any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.
The closest work to ours is @cite_29 , where, based on the linear-complexity algorithm in @cite_31 , the impact of channel memory on the stability region of a general class of sub-optimal algorithms is studied. Although it accounts for channel variations, this work still does not model the search time and implicitly assumes it is negligible.
{ "cite_N": [ "@cite_29", "@cite_31" ], "mid": [ "2170717320", "1607044975" ], "abstract": [ "Throughput optimal scheduling policies in general require the solution of a complex and often NP-hard optimization problem. Related literature has shown that in the context of time-varying channels, randomized scheduling policies can be employed to reduce the complexity of the optimization problem but at the expense of a memory requirement that is exponential in the number of data flows. In this paper, we consider a linear-memory randomized scheduling policy (LM-RSP) that is based on a pick-and-compare principle in a time-varying network with N one-hop data flows. For general ergodic channel processes, we study the performance of LM-RSP in terms of its stability region and average delay. Specifically, we show that LM-RSP can stabilize a fraction of the capacity region. Our analysis characterizes this fraction as well as the average delay as a function of channel variations and the efficiency of LM-RSP in choosing an appropriate schedule vector. Applying these results to a class of Markovian channels, we provide explicit results on the stability region and delay performance of LM-RSP.", "A resource allocation model that has within its scope a number of computer and communication network architectures was introduced by Tassiulas and Ephremides (1992) and scheduling methods that achieve maximum throughput were proposed. Those methods require the solution of a complex optimization problem at each packet transmission time and as a result they are not amenable to direct implementations. We propose a class of maximum throughput scheduling policies for the model introduced by Tassiulas and Ephremides that have linear complexity and can lead to practical implementations. They rely on a randomized, iterative algorithm for the solution of the optimization problem arising in the scheduling, in combination with an incremental updating rule. The proposed policy is of maximum throughput under some fairly general conditions on the randomized algorithm." ] }
0903.4856
1556684468
Reference EPFL-ARTICLE-229257. URL: http://arxiv.org/abs/0903.4856
An algorithm to compute the entire regularization path of the @math -SVM was originally reported by @cite_8 . @cite_20 gave such an algorithm for the LASSO, and later @cite_7 and @cite_19 proposed solution path algorithms for the @math -SVM and the one-class SVM, respectively. Receiver Operating Characteristic (ROC) curves of SVMs have also been computed with such methods @cite_10 . Support vector regression (SVR) is interesting as its underlying quadratic program depends on two parameters: a regularization parameter (for which the solution path was tracked by @cite_15 @cite_13 @cite_7 ) and a tube-width parameter (for which @cite_35 recently gave a solution path algorithm). See also @cite_11 for a recent overview.
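As a readily available illustration of such a solution-path computation, the snippet below traces the full piecewise-linear LASSO coefficient path with scikit-learn's LARS implementation (in the spirit of @cite_20 ); it is meant only to make the path concept concrete and is not an implementation of the SVM or SVR path algorithms above. The synthetic data and parameter choices are arbitrary.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

# Synthetic regression data; any design matrix X and response y would do.
X, y = make_regression(n_samples=100, n_features=20, noise=5.0, random_state=0)

# Full LASSO solution path via the LARS modification: the coefficient path is
# piecewise linear in the regularization parameter, so storing the values at
# the breakpoints ("bends") returned in `alphas` describes the entire path.
alphas, active, coefs = lars_path(X, y, method="lasso")

print("number of bends on the path:", len(alphas))
print("coefficients at the least-regularized bend:", coefs[:, -1])
```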
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_7", "@cite_8", "@cite_19", "@cite_15", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2055586095", "2148418853", "2227752230", "2133958955", "2117525326", "2159067232", "2096536283", "2063978378", "2057682568" ], "abstract": [ "Recently, a very appealing approach was proposed to compute the entire solution path for support vector classification (SVC) with very low extra computational cost. This approach was later extended to a support vector regression (SVR) model called e-SVR. However, the method requires that the error parameter e be set a priori, which is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we show that the solution path for e-SVR is also piecewise linear with respect to e. We further propose an efficient algorithm for exploring the two-dimensional solution space defined by the regularization and error parameters. As opposed to the algorithm for SVC, our proposed algorithm for e-SVR initializes the number of support vectors to zero and then increases it gradually as the algorithm proceeds. As such, a good regression function possessing the sparseness property can be obtained after only a few iterations.", "In this paper, regularization path algorithms were proposed as a novel approach to the model selection problem by exploring the path of possibly all solutions with respect to some regularization hyperparameter in an efficient way. This approach was later extended to a support vector regression (SVR) model called epsiv -SVR. However, the method requires that the error parameter epsiv be set a priori. This is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we analyze the solution space for epsiv-SVR and propose a new solution path algorithm, called epsiv-path algorithm, which traces the solution path with respect to the hyperparameter epsiv rather than lambda. Although both two solution path algorithms possess the desirable piecewise linearity property, our epsiv-path algorithm overcomes some limitations of the original lambda-path algorithm and has more advantages. It is thus more appealing for practical use.", "This paper presents the ν-SVM and theν-SVR full regularization paths along with aleave-one-out inspired stopping criterion and an efficientimplementation. In the ν-SVR method, two parameters areprovided by the user: the regularization parameter Candνwhich settles the width of the ν-tube. Inthe classical ν-SVM method, parameter νisan lower bound on the number of support vectors in the solution.Based on the previous works of [1,2], extensions of regularizationpaths for SVM and SVR are proposed and permit to automaticallycompute the solution path by varying νor theregularization parameter.", "The support vector machine (SVM) is a widely used tool for classification. Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model. 
We illustrate our algorithm on some examples, and use our representation to give further insight into the range of SVM solutions.", "This paper applies the algorithm of , (2004) to the problem of learning the entire solution path of the one class support vector machine (OC-SVM) as its free parameter ν varies from 0 to 1. The OC-SVM with Gaussian kernel is a nonparametric estimator of a level set of the density governing the observed sample, with the parameter ν implicitly defining the corresponding level. Thus, the path algorithm produces estimates of all level sets and can therefore be applied to a variety of problems requiring estimation of multiple level sets including clustering, outlier ranking, minimum volume set estimation, and density estimation. The algorithm's cost is comparable to the cost of computing the OC-SVM for a single point on the path. We introduce a heuristic for enforced nestedness of the sets in the path, and present a method for kernel bandwidth selection based in minimum integrated volume, a kind of AUC criterion. These methods are illustrated on three datasets.", "In this paper we derive an algorithm that computes the entire solution path of the support vector regression, with essentially the same computational cost as fitting one SVR model. We also propose an unbiased estimate for the degrees of freedom of the SVR model, which allows convenient selection of the regularization parameter.", "Receiver Operating Characteristic (ROC) curves are a standard way to display the performance of a set of binary classifiers for all feasible ratios of the costs associated with false positives and false negatives. For linear classifiers, the set of classifiers is typically obtained by training once, holding constant the estimated slope and then varying the intercept to obtain a parameterized set of classifiers whose performances can be plotted in the ROC plane. We consider the alternative of varying the asymmetry of the cost function used for training. We show that the ROC curve obtained by varying both the intercept and the asymmetry, and hence the slope, always outperforms the ROC curve obtained by varying only the intercept. In addition, we present a path-following algorithm for the support vector machine (SVM) that can compute efficiently the entire ROC curve, and that has the same computational complexity as training a single classifier. Finally, we provide a theoretical analysis of the relationship between the asymmetric cost model assumed when training a classifier and the cost model assumed in applying the classifier. In particular, we show that the mismatch between the step function used for testing and its convex upper bounds, usually used for training, leads to a provable and quantifiable difference around extreme asymmetries.", "The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. 
Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.", "We consider the generic regularized optimization problem β(λ) = argminβ L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO-that is, if L is squared error loss and J(β) = ∥β∥ 1 is the l 1 norm of β-the optimal coefficient path is piecewise linear, that is, ∂β(λ) ∂λ. is piecewise constant. We derive a general characterization of the properties of (loss L, penalty J) pairs which give piecewise linear coefficient paths. Such pairs allow for efficient generation of the full regularized coefficient paths. We investigate the nature of efficient path following algorithms which arise. We use our results to suggest robust versions of the LASSO for regression and classification, and to develop new, efficient algorithms for existing problems in the literature, including Mammen and van de Geer's locally adaptive regression splines." ] }
0903.4856
1556684468
Reference EPFL-ARTICLE-229257. URL: http://arxiv.org/abs/0903.4856
As @cite_8 point out, one drawback of their algorithm for the two-class SVM is that it does not work for singular kernel matrices: it requires that all principal minors of the kernel matrix occurring during the course of the algorithm be invertible. The same is required by the other existing path algorithms mentioned above. However, large kernel matrices often have very low numerical rank, even when radial basis function kernels are used [Section 5.1 of @cite_8 ], and of course also in the case of linear SVMs with sparse features, such as in the application to conjoint analysis discussed in this paper. The inability to deal with singular sub-matrices is probably one of the main reasons that none of the above-mentioned algorithms has so far been applied effectively to medium- or large-scale problems @cite_8 @cite_11 . @cite_11 [Section 4.2] report that their algorithm prematurely terminates on @math matrices due to this problem.
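The low-numerical-rank phenomenon mentioned above is easy to observe empirically. The sketch below (our own illustration, with arbitrary data and bandwidth) builds a Gaussian RBF kernel matrix on random points and reports its numerical rank, i.e. the number of singular values above a small relative tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # 500 points in 3 dimensions (arbitrary data)
gamma = 1.0                        # RBF bandwidth parameter (assumed value)

# Gaussian (RBF) kernel matrix: K_ij = exp(-gamma * ||x_i - x_j||^2).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# Numerical rank: number of singular values above a small relative tolerance.
s = np.linalg.svd(K, compute_uv=False)
numerical_rank = int((s > s[0] * 1e-8).sum())
print("matrix size:", K.shape[0], " numerical rank:", numerical_rank)
```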
{ "cite_N": [ "@cite_11", "@cite_8" ], "mid": [ "2057682568", "2133958955" ], "abstract": [ "We consider the generic regularized optimization problem β(λ) = argminβ L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO-that is, if L is squared error loss and J(β) = ∥β∥ 1 is the l 1 norm of β-the optimal coefficient path is piecewise linear, that is, ∂β(λ) ∂λ. is piecewise constant. We derive a general characterization of the properties of (loss L, penalty J) pairs which give piecewise linear coefficient paths. Such pairs allow for efficient generation of the full regularized coefficient paths. We investigate the nature of efficient path following algorithms which arise. We use our results to suggest robust versions of the LASSO for regression and classification, and to develop new, efficient algorithms for existing problems in the literature, including Mammen and van de Geer's locally adaptive regression splines.", "The support vector machine (SVM) is a widely used tool for classification. Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model. We illustrate our algorithm on some examples, and use our representation to give further insight into the range of SVM solutions." ] }
0903.4856
1556684468
Reference EPFL-ARTICLE-229257. URL: http://arxiv.org/abs/0903.4856
Observing that all of the above-mentioned algorithms report solution paths of parametric quadratic programs of the form ), we point out that it is in fact not necessary to use a different algorithm for each problem variant. Generic algorithms have been known for quite some time @cite_27 @cite_14 , @cite_21 @cite_30 @cite_2 , @cite_24 , but, interestingly, they have not yet received broader attention in the area of machine learning.
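For concreteness, a generic parametric quadratic program of the kind these algorithms track can be written as in the display below. This is only our sketch of a standard textbook form, since the equation referenced in the text is not reproduced here; the matrices Q, A, the vectors c, d, b, e, and the path parameter lambda are our own notation.

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^\top Q x + (c + \lambda d)^\top x
\quad \text{subject to} \quad A x \le b + \lambda e, \qquad \lambda \ge 0,
```

with Q positive semidefinite; as stated in the cited works, the optimal solution and the associated multipliers can then be traced as piecewise-linear functions of the parameter.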
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_21", "@cite_24", "@cite_27", "@cite_2" ], "mid": [ "2104267382", "246887631", "2053744471", "624503216", "584634945", "1983800681" ], "abstract": [ "We present an ”active set” algorithm for the solution of the convex (but not necessarily strictly convex) parametric quadratic programming problem. The optimal solution and associated multipliers are obtained as piece-wise linear functions of the parameter. At the end of each interval, the active set is changed by either adding, deleting, or exchanging a constraint. The method terminates when either the optimal solution has been obtained for all values of the parameter, or, a further increase in the parameter results in either the feasible region being null or the objective function being unbounded from below. The method used to solve the linear equations associated with a particular active set is left unspecified. The parametric algorithm can thus be implemented using the linear equation solving method of any active set quadratic programming algorithm.", "Abstract : An algorithm is described for determining the optimal solution of parametric linear and quadratic programming problems as an explicit piecewise linear function of the parameter. Each linear function is uniquely determined by an appropriate subset of active constraints. For every critical value of the parameter a new subset has to be determined. A simple rule is given for adding and deleting constraints from this subset. (Author)", "A method is presented for the solution of the parametric quadratic programming problem by the use of conjugate directions. It is based on the method for quadratic programming proposed by the author in [1].", "The main contributions in this thesis are advances in parametric programming. The thesis is divided into three parts; theoretical advances, application areas and constrained control allocation. The first part deals with continuity properties and the structure of solutions to convex parametric quadratic and linear programs. The second part focuses on applications of parametric quadratic and linear programming in control theory. The third part deals with constrained control allocation and how parametric programming can be used to obtain explicit solutions to this problem.", "", "In this paper we consider a semidefinite programming (SDP) problem in which the objective function depends linearly on a scalar parameter. We study the properties of the optimal objective function value as a function of that parameter and extend the concept of the optimal partition and its range in linear programming to SDP. We also consider an approach to sensitivity analysis in SDP and the extension of our results to an SDP problem with a parametric right-hand side." ] }
0903.4856
1556684468
Reference EPFL-ARTICLE-229257. URL: http://arxiv.org/abs/0903.4856
One goal of this paper is to popularize the generic solution algorithms for parametric quadratic programming, because we think that they have some major advantages: the same algorithm can be applied to any solution path problem that can be written in the form ), which includes all of @cite_8 @cite_20 @cite_15 @cite_13 @cite_10 @cite_7 @cite_19 @cite_22 @cite_35 . Many of the known generic algorithms can deal with degenerate inputs; in particular, the algorithms can cope with singular sub-matrices in the objective function. There is significant existing literature on the performance, numerical stability, and complexity of the generic algorithms. Our criss-cross algorithm is numerically more stable, and also more robust in the sense that small errors do not add up while tracking the solution path. Also, such algorithms are faster for sparse problems, as in linear SVMs and conjoint analysis, because they do not need any matrix inversions.
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_22", "@cite_7", "@cite_8", "@cite_19", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2055586095", "2148418853", "2104794207", "2227752230", "2133958955", "2117525326", "2159067232", "2096536283", "2063978378" ], "abstract": [ "Recently, a very appealing approach was proposed to compute the entire solution path for support vector classification (SVC) with very low extra computational cost. This approach was later extended to a support vector regression (SVR) model called e-SVR. However, the method requires that the error parameter e be set a priori, which is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we show that the solution path for e-SVR is also piecewise linear with respect to e. We further propose an efficient algorithm for exploring the two-dimensional solution space defined by the regularization and error parameters. As opposed to the algorithm for SVC, our proposed algorithm for e-SVR initializes the number of support vectors to zero and then increases it gradually as the algorithm proceeds. As such, a good regression function possessing the sparseness property can be obtained after only a few iterations.", "In this paper, regularization path algorithms were proposed as a novel approach to the model selection problem by exploring the path of possibly all solutions with respect to some regularization hyperparameter in an efficient way. This approach was later extended to a support vector regression (SVR) model called epsiv -SVR. However, the method requires that the error parameter epsiv be set a priori. This is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we analyze the solution space for epsiv-SVR and propose a new solution path algorithm, called epsiv-path algorithm, which traces the solution path with respect to the hyperparameter epsiv rather than lambda. Although both two solution path algorithms possess the desirable piecewise linearity property, our epsiv-path algorithm overcomes some limitations of the original lambda-path algorithm and has more advantages. It is thus more appealing for practical use.", "Given a set of points in a Hilbert space that can be separated from the origin. The slab support vector machine (slab SVM) is an optimization problem that aims at finding a slab (two parallel hyperplanes whose distance—the slab width—is essentially fixed) that encloses the points and is maximally separated from the origin. Extreme cases of the slab SVM include the smallest enclosing ball problem and an interpolation problem that was used (as the slab SVM itself) in surface reconstruction with radial basis functions. Here we show that the path of solutions of the slab SVM, i.e., the solution parametrized by the slab width is piecewise linear.", "This paper presents the ν-SVM and theν-SVR full regularization paths along with aleave-one-out inspired stopping criterion and an efficientimplementation. In the ν-SVR method, two parameters areprovided by the user: the regularization parameter Candνwhich settles the width of the ν-tube. Inthe classical ν-SVM method, parameter νisan lower bound on the number of support vectors in the solution.Based on the previous works of [1,2], extensions of regularizationpaths for SVM and SVR are proposed and permit to automaticallycompute the solution path by varying νor theregularization parameter.", "The support vector machine (SVM) is a widely used tool for classification. 
Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model. We illustrate our algorithm on some examples, and use our representation to give further insight into the range of SVM solutions.", "This paper applies the algorithm of , (2004) to the problem of learning the entire solution path of the one class support vector machine (OC-SVM) as its free parameter ν varies from 0 to 1. The OC-SVM with Gaussian kernel is a nonparametric estimator of a level set of the density governing the observed sample, with the parameter ν implicitly defining the corresponding level. Thus, the path algorithm produces estimates of all level sets and can therefore be applied to a variety of problems requiring estimation of multiple level sets including clustering, outlier ranking, minimum volume set estimation, and density estimation. The algorithm's cost is comparable to the cost of computing the OC-SVM for a single point on the path. We introduce a heuristic for enforced nestedness of the sets in the path, and present a method for kernel bandwidth selection based in minimum integrated volume, a kind of AUC criterion. These methods are illustrated on three datasets.", "In this paper we derive an algorithm that computes the entire solution path of the support vector regression, with essentially the same computational cost as fitting one SVR model. We also propose an unbiased estimate for the degrees of freedom of the SVR model, which allows convenient selection of the regularization parameter.", "Receiver Operating Characteristic (ROC) curves are a standard way to display the performance of a set of binary classifiers for all feasible ratios of the costs associated with false positives and false negatives. For linear classifiers, the set of classifiers is typically obtained by training once, holding constant the estimated slope and then varying the intercept to obtain a parameterized set of classifiers whose performances can be plotted in the ROC plane. We consider the alternative of varying the asymmetry of the cost function used for training. We show that the ROC curve obtained by varying both the intercept and the asymmetry, and hence the slope, always outperforms the ROC curve obtained by varying only the intercept. In addition, we present a path-following algorithm for the support vector machine (SVM) that can compute efficiently the entire ROC curve, and that has the same computational complexity as training a single classifier. Finally, we provide a theoretical analysis of the relationship between the asymmetric cost model assumed when training a classifier and the cost model assumed in applying the classifier. 
In particular, we show that the mismatch between the step function used for testing and its convex upper bounds, usually used for training, leads to a provable and quantifiable difference around extreme asymmetries.", "The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates." ] }
0903.4856
1556684468
Reference EPFL-ARTICLE-229257. URL: http://arxiv.org/abs/0903.4856
Instead of using our described generic criss-cross method, another obvious way to avoid degeneracies caused by singular sub-matrices in the objective function is to add a small value @math to each diagonal entry of the original matrix @math ; subsequently, all simple methods for the regular case, such as @cite_8 @cite_11 , can be used. There are several problems with this approach. First of all, the rank of the objective function matrix is blown up artificially, so the potential of efficient low-rank QP methods is wasted. Secondly, the solution path of the perturbed problem may differ substantially from that of the original problem; in particular, the perturbation may lead to a much higher number of bends and therefore higher tracking cost, and the computed solutions could be far from the real solutions. In contrast, our criss-cross method avoids all these issues, since it always solves the original unperturbed problem.
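The effect of the diagonal perturbation can be seen in a few lines of linear algebra; the toy example below is our own and is not taken from the cited works. Adding a small epsilon to the diagonal of a singular positive semidefinite matrix K makes the linear system solvable by standard means, but the resulting solution can differ substantially from the minimum-norm (pseudo-inverse) solution of the original singular system.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(50, 5))
K = B @ B.T                              # rank-5, hence singular, 50x50 PSD matrix
y = rng.normal(size=50)

eps = 1e-3
x_perturbed = np.linalg.solve(K + eps * np.eye(50), y)   # perturbed, regular system
x_min_norm = np.linalg.pinv(K) @ y                       # minimum-norm least-squares

print("rank of K:", np.linalg.matrix_rank(K))
print("distance between the two solutions:",
      np.linalg.norm(x_perturbed - x_min_norm))
```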
{ "cite_N": [ "@cite_11", "@cite_8" ], "mid": [ "2057682568", "2133958955" ], "abstract": [ "We consider the generic regularized optimization problem β(λ) = argminβ L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO-that is, if L is squared error loss and J(β) = ∥β∥ 1 is the l 1 norm of β-the optimal coefficient path is piecewise linear, that is, ∂β(λ) ∂λ. is piecewise constant. We derive a general characterization of the properties of (loss L, penalty J) pairs which give piecewise linear coefficient paths. Such pairs allow for efficient generation of the full regularized coefficient paths. We investigate the nature of efficient path following algorithms which arise. We use our results to suggest robust versions of the LASSO for regression and classification, and to develop new, efficient algorithms for existing problems in the literature, including Mammen and van de Geer's locally adaptive regression splines.", "The support vector machine (SVM) is a widely used tool for classification. Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model. We illustrate our algorithm on some examples, and use our representation to give further insight into the range of SVM solutions." ] }
0903.4961
1650052555
In multiprocessor systems, various problems are treated with Lamport's logical clock and the resultant logical time orders between operations. However, in practice one often faces high complexities caused by the lack of logical time order information. In this paper, we utilize the global clock to infuse the so-called pending period into each operation in a multiprocessor system, where the pending period is a time interval that contains the performed time of the operation. Further, we define the physical time order for any two operations with disjoint pending periods. The physical time order is obeyed by any real execution in multiprocessor systems, because it is part of the operation orders that actually happen under the restriction of the global clock, and it is then proven to be independent of and consistent with traditional logical time orders. These novel yet fundamental concepts enable new, effective approaches for analyzing multiprocessor systems, named pending period analysis as a whole. As a consequence of pending period analysis, many important problems of multiprocessor systems can be tackled effectively. As a significant application example, complete memory consistency verification, which was known to be NP-hard, can be solved with complexity @math (where @math is the number of operations). Moreover, the two event ordering problems, which were proven to be Co-NP-hard and NP-hard respectively, can both be solved with time complexity O(n) if restricted by pending period information.
During the past decade, driven by the development of integrated circuit processes and of SMP (Symmetric Multi-Processor) and CMP (Chip Multi-Processor) techniques, the density of computing capacity in multiprocessor systems has been increasing rapidly. The resultant scaling down of multiprocessor systems revives the intuitive idea of utilizing a global clock, and a number of investigations taking the global clock into consideration have been proposed. Herlihy and Wing @cite_27 proposed the concept of linearizability, which requires accesses to the same memory location to happen in disjoint time intervals with respect to a global clock, as a correctness condition for memory systems. In @cite_30 , Singla proposed a temporal memory model, "delta consistency", which offers a time window to coalesce write operations to the same memory location. In @cite_11 @cite_6 , global counters, which implicitly represent global time, were employed to reason about the ordering of transactions in transactional memory @cite_2 . The common idea behind the above investigations is to obtain logical order information (especially the execution order of accesses to the same memory location) by explicitly or implicitly employing a global clock.
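As a minimal illustration of how a global clock yields ordering information in the spirit of the pending-period idea above, the sketch below (our own toy code with hypothetical fields and timestamps) records a pending period [start, end] for each operation and derives an order only for pairs whose pending periods are disjoint, leaving overlapping pairs unordered.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Operation:
    op_id: str
    start: float   # global time at which the operation was issued
    end: float     # global time by which it is guaranteed to have performed

def physical_time_order(a: Operation, b: Operation) -> Optional[Tuple[Operation, Operation]]:
    """Return (earlier, later) if the pending periods are disjoint, else None."""
    if a.end < b.start:
        return (a, b)
    if b.end < a.start:
        return (b, a)
    return None            # overlapping pending periods: no physical time order

w = Operation("write x=1", start=0.0, end=2.0)
r = Operation("read x", start=3.0, end=4.5)
order = physical_time_order(w, r)
print(None if order is None else (order[0].op_id, "before", order[1].op_id))
```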
{ "cite_N": [ "@cite_30", "@cite_6", "@cite_27", "@cite_2", "@cite_11" ], "mid": [ "1973241464", "1566614300", "2101939036", "2113751407", "2307238513" ], "abstract": [ "An important attribute in the specification of many compute-intensive applications is “time”. Simulation of interactive virtual environments is one such domain. There is a mismatch between the synchronization and consistency guarantees needed by such applications (which are temporal in nature) and the guarantees offered by current shared memory systems. Consequently, programming such applications using standard shared memory style synchronization and communication is cumbersome. Furthermore, such applications offer opportunities for relaxing both the synchronization and consistency requirements along the temporal dimension. In this work, we develop a temporal programming model that is more intuitive for the development of applications that need temporal correctness guarantees. This model embodies two mechanisms: “delta consistency” – a novel time-based correctness criterion to govern the shared memory access guarantees, and a companion “temporal synchronization” – a mechanism for thread synchronization along the time axis. These mechanisms are particularly appropriate for expressing the requirements in interactive application domains. In addition to the temporal programming model, we develop efficient explicit communication mechanisms that aggressively push the data out to “future” consumers to hide the read miss latency at the receiving end. We implement these mechanisms on a cluster of workstations in a software distributed shared memory architecture called “Beehive.” Using a virtual environment application as the driver, we show the efficacy of the proposed mechanisms in meeting the real time requirements of such applications.", "In a software transactional memory (STM) system, conflict detection is the problem of determining when two transactions cannot both safely commit. Validation is the related problem of ensuring that a transaction never views inconsistent data, which might potentially cause a doomed transaction to exhibit irreversible, externally visible side effects. Existing mechanisms for conflict detection vary greatly in their degree of speculation and their relative treatment of read-write and write-write conflicts. Validation, for its part, appears to be a dominant factor—perhaps the dominant factor—in the cost of complex transactions. We present the most comprehensive study to date of conflict detection strategies, characterizing the tradeoffs among them and identifying the ones that perform the best for various types of workload. In the process we introduce a lightweight heuristic mechanism—the global commit counter—that can greatly reduce the cost of validation and of single-threaded execution. The heuristic also allows us to experiment with mixed invalidation, a more opportunistic interleaving of reading and writing transactions. Experimental results on a 16-processor SunFire machine running our RSTM system indicate that the choice of conflict detection strategy can have a dramatic impact on performance, and that the best choice is workload dependent. In workloads whose transactions rarely conflict, the commit counter does little to help (and can even hurt) performance. For less scalable applications, however—those in which STM performance has traditionally been most problematic—it can improve transaction throughput many fold.", "A concurrent object is a data object shared by concurrent processes. 
Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.", "A shared data structure is lock-free if its operations do not require mutual exclusion. If one process is interrupted in the middle of an operation, other processes will not be prevented from operating on that object. In highly concurrent systems, lock-free data structures avoid common problems associated with conventional locking techniques, including priority inversion, convoying, and difficulty of avoiding deadlock. This paper introduces transactional memory , a new multiprocessor architecture intended to make lock-free synchronization as efficient (and easy to use) as conventional techniques based on mutual exclusion. Transactional memory allows programmers to define customized read-modify-write operations that apply to multiple, independently-chosen words of memory. It is implemented by straightforward extensions to any multiprocessor cache-coherence protocol. Simulation results show that transactional memory matches or outperforms the best known locking techniques for simple benchmarks, even in the absence of priority inversion, convoying, and deadlock.", "Most high-performance software transactional memories (STM) use optimistic invisible reads. Consequently, a transaction might have an inconsistent view of the objects it accesses unless the consistency of the view is validated whenever the view changes. Although all STMs usually detect inconsistencies at commit time, a transaction might never reach this point because an inconsistent view can provoke arbitrary behavior in the application (e.g., enter an infinite loop). In this paper, we formally introduce a lazy snapshot algorithm that verifies at each object access that the view observed by a transaction is consistent. Validating previously accessed objects is not necessary for that, however, it can be used on-demand to prolong the view's validity. We demonstrate both formally and by measurements that the performance of our approach is quite competitive by comparing other STMs with an STM that uses our algorithm." ] }
0903.5346
2952797823
Youtopia is a platform for collaborative management and integration of relational data. At the heart of Youtopia is an update exchange abstraction: changes to the data propagate through the system to satisfy user-specified mappings. We present a novel change propagation model that combines a deterministic chase with human intervention. The process is fundamentally cooperative and gives users significant control over how mappings are repaired. An additional advantage of our model is that mapping cycles can be permitted without compromising correctness. We investigate potential harmful interference between updates in our model; we introduce two appropriate notions of serializability that avoid such interference if enforced. The first is very general and related to classical final-state serializability; the second is more restrictive but highly practical and related to conflict-serializability. We present an algorithm to enforce the latter notion. Our algorithm is an optimistic one, and as such may sometimes require updates to be aborted. We develop techniques for reducing the number of aborts and we test these experimentally.
There is a growing body of work which adapts classical data integration ideas to the community setting, including substantial theoretical work @cite_15 @cite_11 @cite_19 . Systems like Orchestra @cite_28 , Piazza @cite_26 , Hyperion @cite_0 and the system introduced in @cite_8 focus on maintaining data utility despite significant disagreement. However, none of these enable best-effort cooperation to its fullest extent. They all come with some centralized logical component that is an extensibility bottleneck; usually this is either a global schema or an acyclicity restriction on the mappings, or both. In addition, they do not provide facilities for users to manage the metadata collaboratively.
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_28", "@cite_0", "@cite_19", "@cite_15", "@cite_11" ], "mid": [ "2032623878", "2102965013", "2104248230", "1767402579", "", "2107115563", "" ], "abstract": [ "Intuitively, data management and data integration tools should be well suited for exchanging information in a semantically meaningful way. Unfortunately, they suffer from two significant problems: they typically require a common and comprehensive schema design before they can be used to store or share information, and they are difficult to extend because schema evolution is heavyweight and may break backward compatibility. As a result, many large-scale data sharing tasks are more easily facilitated by non-database-oriented tools that have little support for semantics.The goal of the peer data management system (PDMS) is to address this need: we propose the use of a decentralized, easily extensible data management architecture in which any user can contribute new data, schema information, or even mappings between other peers’ schemas. PDMSs represent a natural step beyond data integration systems, replacing their single logical schema with an interlinked collection of semantic mappings between peers’ individual schemas.This paper considers the problem of schema mediation in a PDMS. Our first contribution is a flexible language for mediating between peer schemas that extends known data integration formalisms to our more complex architecture. We precisely characterize the complexity of query answering for our language. Next, we describe a reformulation algorithm for our language that generalizes both global-as-view and local-as-view query answering algorithms. Then we describe several methods for optimizing the reformulation algorithm and an initial set of experiments studying its performance. Finally, we define and consider several global problems in managing semantic mappings in a PDMS.", "Modern Internet communities need to integrate and query structured information. Employing current information integration infrastructure, data integration is still a very costly effort, since source registration is performed by a central authority which becomes a bottleneck. We propose the community-based integration paradigm which pushes the source registration task to the independent community members. This creates new challenges caused by each community member's lack of a global overview on how her data interacts with the application queries of the community and the data from other sources. How can the source owner maximize the visibility of her data to existing applications, while minimizing the clean-up and reformatting cost associated with publishing? Does her data contradict (or could it contradict in the future) the data of other sources? We introduce RIDE, a visual registration tool that extends schema mapping interfaces like that of MS Biz Talk Server and IBM's Clio with a suggestion component that guides the source owner in the autonomous registration, assisting her in answering these questions. RIDE's implementation features efficient procedures for deciding various levels of self-reliance of a GLAV-style source registration for contributing answers to an application query and checking potential and definite inconsistency across sources.", "In many data sharing settings, such as within the biological and biomedical communities, global data consistency is not always attainable: different sites' data may be dirty, uncertain, or even controversial. 
Collaborators are willing to share their data, and in many cases they also want to selectively import data from others --- but must occasionally diverge when they disagree about uncertain or controversial facts or values. For this reason, traditional data sharing and data integration approaches are not applicable, since they require a globally consistent data instance. Additionally, many of these approaches do not allow participants to make updates; if they do, concurrency control algorithms or inconsistency repair techniques must be used to ensure a consistent view of the data for all users.In this paper, we develop and present a fully decentralized model of collaborative data sharing, in which participants publish their data on an ad hoc basis and simultaneously reconcile updates with those published by others. Individual updates are associated with provenance information, and each participant accepts only updates with a sufficient authority ranking, meaning that each participant may have a different (though conceptually overlapping) data instance. We define a consistency semantics for database instances under this model of disagreement, present algorithms that perform reconciliation for distributed clusters of participants, and demonstrate their ability to handle typical update and conflict loads in settings involving the sharing of curated data.", "This demo presents Hyperion, a prototype system that supports data sharing for a network of independent Peer Relational Database Management Systems (PDBMSs). The nodes of such a network are assumed to be autonomous PDBMSs that form acquaintances at run-time, and manage mapping tables to define value correspondences among different databases. They also use distributed Event-Condition-Action (ECA) rules to enable and coordinate data sharing. Peers perform local querying and update processing, and also propagate queries and updates to their acquainted peers. The demo illustrates the following key functionalities of Hyperion: (1) the use of (data level) mapping tables to infer new metadata as peers dynamically join the network, (2) the ability to answer queries using data in acquaintances, and (3) the ability to coordinate peers through update propagation.", "", "In this article, we introduce and study a framework, called peer data exchange, for sharing and exchanging data between peers. This framework is a special case of a full-fledged peer data management system and a generalization of data exchange between a source schema and a target schema. The motivation behind peer data exchange is to model authority relationships between peers, where a source peer may contribute data to a target peer, specified using source-to-target constraints, and a target peer may use target-to-source constraints to restrict the data it is willing to receive, but cannot modify the data of the source peer.A fundamental algorithmic problem in this framework is that of deciding the existence of a solution: given a source instance and a target instance for a fixed peer data exchange setting, can the target instance be augmented in such a way that the source instance and the augmented target instance satisfy all constraints of the settingq We investigate the computational complexity of the problem for peer data exchange settings in which the constraints are given by tuple generating dependencies. We show that this problem is always in NP, and that it can be NP-complete even for “acyclic” peer data exchange settings. 
We also show that the data complexity of the certain answers of target conjunctive queries is in coNP, and that it can be coNP-complete even for “acyclic” peer data exchange settings.After this, we explore the boundary between tractability and intractability for deciding the existence of a solution and for computing the certain answers of target conjunctive queries. To this effect, we identify broad syntactic conditions on the constraints between the peers under which the existence-of-solutions problem is solvable in polynomial time. We also identify syntactic conditions between peer data exchange settings and target conjunctive queries that yield polynomial-time algorithms for computing the certain answers. For both problems, these syntactic conditions turn out to be tight, in the sense that minimal relaxations of them lead to intractability. Finally, we introduce the concept of a universal basis of solutions in peer data exchange and explore its properties.", "" ] }
0903.5346
2952797823
Youtopia is a platform for collaborative management and integration of relational data. At the heart of Youtopia is an update exchange abstraction: changes to the data propagate through the system to satisfy user-specified mappings. We present a novel change propagation model that combines a deterministic chase with human intervention. The process is fundamentally cooperative and gives users significant control over how mappings are repaired. An additional advantage of our model is that mapping cycles can be permitted without compromising correctness. We investigate potential harmful interference between updates in our model; we introduce two appropriate notions of serializability that avoid such interference if enforced. The first is very general and related to classical final-state serializability; the second is more restrictive but highly practical and related to conflict-serializability. We present an algorithm to enforce the latter notion. Our algorithm is an optimistic one, and as such may sometimes require updates to be aborted. We develop techniques for reducing the number of aborts and we test these experimentally.
CDI and the Youtopia system are highly compatible with the Dataspaces vision @cite_17 . Indeed, a Youtopia repository can be seen as a dataspace. However, our initial focus is more restricted: we set out to enable relational data sharing among members of a relatively knowledgeable and motivated community. We believe this setting is associated with unique challenges and opportunities, and deserves a dedicated solution. Such a solution could profitably be integrated into any other dataspace designed for a setting where highly structured data is shared.
{ "cite_N": [ "@cite_17" ], "mid": [ "2029554959" ], "abstract": [ "The most acute information management challenges today stem from organizations relying on a large number of diverse, interrelated data sources, but having no means of managing them in a convenient, integrated, or principled fashion. These challenges arise in enterprise and government data management, digital libraries, \"smart\" homes and personal information management. We have proposed dataspaces as a data management abstraction for these diverse applications and DataSpace Support Platforms (DSSPs) as systems that should be built to provide the required services over dataspaces. Unlike data integration systems, DSSPs do not require full semantic integration of the sources in order to provide useful services. This paper lays out specific technical challenges to realizing DSSPs and ties them to existing work in our field. We focus on query answering in DSSPs, the DSSP's ability to introspect on its content, and the use of human attention to enhance the semantic relationships in a dataspace." ] }
0903.2851
2949554195
We study the problem of decision-theoretic online learning (DTOL). Motivated by practical applications, we focus on DTOL when the number of actions is very large. Previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online-learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. In this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for DTOL. We introduce a new notion of regret, which is more natural for applications with a large number of actions. We show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret.
There is a large body of literature on various aspects of DTOL. The Hedge algorithm of @cite_10 belongs to a more general family of algorithms, called exponential weights algorithms; these are originally based on Littlestone and Warmuth's Weighted Majority algorithm @cite_4 and have been studied extensively.
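For concreteness, the following is a minimal Python sketch of the Hedge update from the exponential-weights family mentioned above; it is not the parameter-free algorithm proposed in this paper (which removes the learning rate), and the loss matrix and the tuned choice of eta are illustrative assumptions.

```python
import numpy as np

def hedge(losses, eta):
    """Run Hedge on a T x N matrix of losses in [0, 1].

    Maintains one weight per action; at each round the learner plays the
    normalized weights as a probability distribution, then multiplies each
    weight by exp(-eta * loss).  Returns the played distributions and the
    cumulative expected loss.
    """
    T, N = losses.shape
    w = np.ones(N)
    total_loss = 0.0
    played = []
    for t in range(T):
        p = w / w.sum()
        played.append(p)
        total_loss += p @ losses[t]
        w *= np.exp(-eta * losses[t])
    return np.array(played), total_loss

# Illustrative run: 3 actions, 100 rounds, action 0 is best on average.
rng = np.random.default_rng(0)
losses = rng.uniform(0, 1, size=(100, 3))
losses[:, 0] *= 0.5
T, N = losses.shape
eta = np.sqrt(2 * np.log(N) / T)      # a standard (tuned) choice of learning rate
dists, alg_loss = hedge(losses, eta)
best = losses.sum(axis=0).min()
print(f"Hedge loss {alg_loss:.2f} vs best single action {best:.2f}")
```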
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "1988790447", "2093825590" ], "abstract": [ "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case where the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes on that sequence, where c is fixed constant." ] }
0903.2862
1802404440
We study the tracking problem, namely, estimating the hidden state of an object over time, from unreliable and noisy measurements. The standard framework for the tracking problem is the generative framework, which is the basis of solutions such as the Bayesian algorithm and its approximation, the particle filters. However, the problem with these solutions is that they are very sensitive to model mismatches. In this paper, motivated by online learning, we introduce a new framework -- an explanatory framework -- for tracking. We provide an efficient tracking algorithm for this framework. We provide experimental results comparing our algorithm to the Bayesian algorithm on simulated data. Our experiments show that when there are slight model mismatches, our algorithm vastly outperforms the Bayesian algorithm.
The suboptimality of the Bayesian algorithm under model mismatch has been investigated in other contexts such as classification @cite_15 @cite_1 . The view of the Bayesian algorithm as an online learning algorithm for log-loss is well known in various communities, including information theory / MDL @cite_5 @cite_18 and computational learning theory @cite_0 @cite_6 (see also @cite_23 @cite_4 ). In our work, we look beyond the Bayesian algorithm and log-loss to consider other loss functions and algorithms that are more appropriate for our task.
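The log-loss view mentioned above can be made concrete with a small, self-contained Python sketch: over a finite set of predictors, the Bayesian posterior update coincides with the exponential-weights update applied to log-loss with learning rate 1. The two-expert Bernoulli setup and the data sequence below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Two "experts", each a fixed Bernoulli predictor for a binary sequence.
expert_probs = np.array([0.3, 0.8])            # P(x_t = 1) under each expert
data = np.array([1, 1, 0, 1, 1, 1, 0, 1])      # observed binary sequence (illustrative)

def likelihood(x):
    """Per-expert probability assigned to outcome x."""
    return np.where(x == 1, expert_probs, 1.0 - expert_probs)

# Bayesian posterior update: w_i <- w_i * P_i(x_t), then normalize.
w_bayes = np.ones(2) / 2
# Exponential weights on log-loss with eta = 1: w_i <- w_i * exp(-1 * (-log P_i(x_t))).
w_expw = np.ones(2) / 2

for x in data:
    p = likelihood(x)
    w_bayes = w_bayes * p
    w_bayes /= w_bayes.sum()
    w_expw = w_expw * np.exp(-1.0 * (-np.log(p)))
    w_expw /= w_expw.sum()

print("posterior:  ", w_bayes)    # numerically identical ...
print("exp-weights:", w_expw)     # ... to the exponential-weights distribution
```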
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_1", "@cite_6", "@cite_0", "@cite_23", "@cite_5", "@cite_15" ], "mid": [ "2101460669", "2963663025", "2078458772", "2096772472", "1974892965", "1970041563", "", "1585009072" ], "abstract": [ "The minimum description length (MDL) principle is a powerful method of inductive inference, the basis of statistical modeling, pattern recognition, and machine learning. It holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data. MDL methods are particularly well-suited for dealing with model selection, prediction, and estimation problems in situations where the models under consideration can be arbitrarily complex, and overfitting the data is a serious concern. This extensive, step-by-step introduction to the MDL Principle provides a comprehensive reference (with an emphasis on conceptual issues) that is accessible to graduate students and researchers in statistics, pattern classification, machine learning, and data mining, to philosophers interested in the foundations of statistics, and to researchers in other applied sciences that involve model selection, including biology, econometrics, and experimental psychology. Part I provides a basic introduction to MDL and an overview of the concepts in statistics and information theory needed to understand MDL. Part II treats universal coding, the information-theoretic notion on which MDL is built, and part III gives a formal treatment of MDL theory as a theory of inductive inference based on universal coding. Part IV provides a comprehensive overview of the statistical theory of exponential families with an emphasis on their information-theoretic properties. The text includes a number of summaries, paragraphs offering the reader a \"fast track\" through the material, and boxes highlighting the most important concepts.", "We show how models for prediction with expert advice can be defined concisely and clearly using hidden Markov models (HMMs); standard HMM algorithms can then be used to efficiently calculate how the expert predictions should be weighted according to the model. We cast many existing models as HMMs and recover the best known running times in each case. We also describe two new models: the switch distribution, which was recently developed to improve Bayesian Minimum Description Length model selection, and a new generalisation of the fixed share algorithm based on runlength coding. We give loss bounds for all models and shed new light on the relationships between them.", "We show that forms of Bayesian and MDL inference that are often applied to classification problems can be inconsistent. This means that there exists a learning problem such that for all amounts of data the generalization errors of the MDL classifier and the Bayes classifier relative to the Bayesian posterior both remain bounded away from the smallest achievable generalization error. From a Bayesian point of view, the result can be reinterpreted as saying that Bayesian inference can be inconsistent under misspecification, even for countably infinite models. We extensively discuss the result from both a Bayesian and an MDL perspective.", "We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. 
The analysis does not assume that the Bayesian algorithms' modeling assumptions are \"correct,\" and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using logloss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression.", "We apply the exponential weight algorithm, introduced and Littlestone and Warmuth [26]and by Vovk [35]to the problem of predicting a binary sequence almost as well as the best biased coin. We first show that for the case of the logarithmic loss, the derived algorithm is equivalent to the Bayes algorithm with Jeffreys prior, that was studied by Xie and Barron [38]under probabilistic assumptions. We derive a uniform bound on the regret which holds for any sequence. We also show that if the empirical distribution of the sequence is bounded away from 0 and from 1, then, as the length of the sequence increases to infinity, the difference between this bound and a corresponding bound on the average case regret of the same algorithm (which is asymptotically optimal in that case) is only 1 2. We show that this gap of 1 2 is necessary by calculating the regret of the min–max optimal algorithm for this problem and showing that the asymptotic upper bound is tight. We also study the application of this algorithm to the square loss and show that the algorithm that is derived in this case is different from the Bayes algorithm and is better than it for prediction in the worstcase.", "We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length e, we can bound the additional loss of our algorithm over the best partition by O(k n+k (e k)). For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence. The additional loss becomes O(k n+ k (L k)), where L is the loss of the best partitionwith k+1 segments. Our algorithms for tracking the predictions of the best expert aresimple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial.", "", "" ] }
0903.3002
2951182273
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.
The idea of using structure in addition to sparsity has been explored before. An example is group structure, which has received much attention recently. For example, group sparsity has been considered for simultaneous sparse approximation @cite_13 and multi-task compressive sensing @cite_18 from the Bayesian hierarchical modeling point of view. Under the Bayesian hierarchical model framework, data from all sources contribute to the estimation of hyper-parameters in the sparse prior model. The shared prior can then be inferred from multiple sources. The idea has recently been extended to tree sparsity in the Bayesian framework @cite_9 @cite_2 . Although the idea can be justified using standard Bayesian intuition, there are no theoretical results showing how much better (and under what kinds of conditions) the resulting algorithms perform. In the statistical literature, the Lasso has been extended to the group Lasso when there exist group (block) structured dependencies among the sparse coefficients @cite_15 .
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_2", "@cite_15", "@cite_13" ], "mid": [ "2511885285", "2049502219", "", "2138019504", "2152279006" ], "abstract": [ "Compressive sensing (CS) is a framework whereby one performs n non-adaptive measurements to constitute an n-dimensional vector v, with v used to recover an m-dimensional approximation ^ u to a desired m-dimensional signal u, with n ? m; this is performed under the assumption that u is sparse in the basis represented by the matrix “, the columns of which define discrete basis vectors. It has been demonstrated that with appropriate design of the compressive measurements used to define v, the decompressive mapping v ! ^ u may be performed with error kui^ uk 2 having asymptotic properties (large n and m > n) analogous to those of the best adaptive transform-coding algorithm applied in the basis “. The mapping v ! ^ constitutes an inverse problem, often solved using ‘1 regularization or related techniques. In most previous research, if multiple compressive measurements fvigi=1;M are performed, each of the associated f^ uigi=1;M are recovered one at a time, independently. In many applications the M “tasks” defined by the mappings vi ! ^ ui are not statistically independent, and it may be possible to improve the performance of the inversion if statistical inter-relationships are exploited. In this paper we address this problem within a multi-task learning setting, wherein the mapping vi ! ^ ui for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal ui, and a shared prior is placed across all of the M tasks. In this multi-task learning framework data from all M tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the M individual tasks is then employed to estimate the task-dependent wavelet coefficients. An empirical Bayes procedure and fast inference algorithm is developed. Example results are presented on several data sets.", "Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and, therefore, this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.", "", "Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. 
We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.", "Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches." ] }
0903.3002
2951182273
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.
Other structures have also been explored in the literature. For example, so-called tonal and transient structures were considered for the sparse decomposition of audio signals in @cite_1 , but again without any theory. @cite_17 investigated positive polynomials with structured sparsity from an optimization perspective. The theoretical result there did not address the effectiveness of such methods in comparison to standard sparsity. The work closest to ours is a recent paper @cite_24 , which we learned of only after finishing this paper. In that paper, a specific case of structured sparsity, referred to as model-based sparsity, was considered. It is important to note that some theoretical results were obtained there to show the effectiveness of their method in compressive sensing, although in a more limited scope than the results presented here. Moreover, they do not provide a generic framework for structured sparsity. In their algorithm, different schemes have to be specifically designed for different data models, under specialized assumptions. It remains an open issue how to develop a general theory for structured sparsity, together with a general algorithm that can be applied to a wide class of such problems.
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_17" ], "mid": [ "2125680629", "9496346", "" ], "abstract": [ "Compressive sensing (CS) is an alternative to Shannon Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ? N elements from an N -dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models-wavelet trees and block sparsity-into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.", "A method and system for recording and reproducing, such as on a record carrier, content information and supplemental information relating thereto. The content information may be audio and or video, and the supplemental information may provide author identification and or copy control status. An encoded signal is generated representing the content information and which includes a watermark pattern representing the supplemental information. The watermark pattern cannot be changed without impairing the quality of the content information during reproduction. The supplemental information also includes a control pattern, the watermark being generated by applying a one-way function to such control pattern. This has the advantage that any alteration of the watermark or the control pattern can be detected easily, because it is not computationally feasible to calculate a new control pattern for an altered watermark. Therefore, the supplemental information is well protected against unauthorized manipulation. An attempt to fully replace the watermark pattern will affect the quality of reproduction of the content information. In a copy control method allowing a first generation copy (\"copy-once\"), the original control pattern is processed several times by the one-way function for generating the watermark. Each player or recorder processes the control pattern once before outputting recording it, thus forming a cryptographically protected down-counter.", "" ] }
0903.3218
1678505268
The treatment of Internet traffic is increasingly affected by national policies that require the ISPs in a country to adopt common protocols or practices. Examples include government enforced censorship, wiretapping, and protocol deployment mandates for IPv6 and DNSSEC. If an entire nation's worth of ISPs apply common policies to Internet traffic, the global implications could be significant. For instance, how many countries rely on China or Great Britain (known traffic censors) to transit their traffic? These kinds of questions are surprisingly difficult to answer, as they require combining information collected at the prefix, Autonomous System, and country level, and grappling with incomplete knowledge about the AS-level topology and routing policies. In this paper we develop the first framework for country-level routing analysis, which allows us to answer questions about the influence of each country on the flow of international traffic. Our results show that some countries known for their national policies, such as Iran and China, have relatively little effect on interdomain routing, while three countries (the United States, Great Britain, and Germany) are central to international reachability, and their policies thus have huge potential impact.
In addition to the work of @cite_27 discussed earlier, there are at least two other methods for inferring AS-paths that are prefix specific. Mühlbauer et al. @cite_33 showed that when an AS has multiple routers distributed across many locations, more than one router needs to be simulated to capture all of the routing diversity within the AS. By simulating multiple quasi-routers per AS, they were able to predict AS-paths with relatively high accuracy (65% was reported), although differences in the training data sets make it difficult to compare the accuracy of their technique with ours. Our technique is also more computationally efficient, which allowed us to study all 290,000 prefixes rather than the 1,000 prefixes reported in @cite_33 .
{ "cite_N": [ "@cite_27", "@cite_33" ], "mid": [ "2159281894", "2120652359" ], "abstract": [ "Inferring AS-level end-to-end paths can be a valu- able tool for both network operators and researchers. A widely known technique for inferring end-to-end paths is to perform traceroute from sources to destinations. Unfortunately, traceroute requires the access to source machines and is resource consuming. In this paper, we propose two algorithms for AS-level end-to-end path inference. The key idea of our algorithm is to exploit the AS paths appeared in BGP routing tables and infer AS paths based on the ones. In addition, our algorithms infer AS paths on the granularity of destination prefix instead of destination AS. That is, we infer AS paths from any source AS to any destination prefix. This is essential since routing in the Internet is determined based on destination prefixes instead of destination ASs. The validation results show that our algorithm yields accuracy up to 95 for exact match and accuracy up to 97 for path length match. We further extend our algorithm to infer a set of potential AS paths between a source AS and a destination prefix. We find that on average, 86 of inferred AS path sets are accurate in the sense that one of the paths in the set matches the actual AS path. Note that our algorithms require BGP routing tables only and do not require additional data trace or access to either sources or destinations. In addition, we demonstrate that the accuracy of this BGP-based inference approach cannot go beyond 90 .", "An understanding of the topological structure of the Internet is needed for quite a number of networking tasks, e. g., making decisions about peering relationships, choice of upstream providers, inter-domain traffic engineering. One essential component of these tasks is the ability to predict routes in the Internet. However, the Internet is composed of a large number of independent autonomous systems (ASes) resulting in complex interactions, and until now no model of the Internet has succeeded in producing predictions of acceptable accuracy.We demonstrate that there are two limitations of prior models: (i) they have all assumed that an Autonomous System (AS) is an atomic structure - it is not, and (ii) models have tended to oversimplify the relationships between ASes. Our approach uses multiple quasi-routers to capture route diversity within the ASes, and is deliberately agnostic regarding the types of relationships between ASes. The resulting model ensures that its routing is consistent with the observed routes. Exploiting a large number of observation points, we show that our model provides accurate predictions for unobserved routes, a first step towards developing structural mod-els of the Internet that enable real applications." ] }
0903.3218
1678505268
The treatment of Internet traffic is increasingly affected by national policies that require the ISPs in a country to adopt common protocols or practices. Examples include government enforced censorship, wiretapping, and protocol deployment mandates for IPv6 and DNSSEC. If an entire nation's worth of ISPs apply common policies to Internet traffic, the global implications could be significant. For instance, how many countries rely on China or Great Britain (known traffic censors) to transit their traffic? These kinds of questions are surprisingly difficult to answer, as they require combining information collected at the prefix, Autonomous System, and country level, and grappling with incomplete knowledge about the AS-level topology and routing policies. In this paper we develop the first framework for country-level routing analysis, which allows us to answer questions about the influence of each country on the flow of international traffic. Our results show that some countries known for their national policies, such as Iran and China, have relatively little effect on interdomain routing, while three countries (the United States, Great Britain, and Germany) are central to international reachability, and their policies thus have huge potential impact.
Another AS-path inference algorithm was developed by the authors of @cite_20 , who used a structural approach to AS-path prediction. They began with known traceroutes from the iPlane project and used them to infer IP-level paths for chosen source/destination pairs. The algorithm works by searching for the closest observation point to the source prefix (by examining a few sample traceroutes from the source) and then uses the known iPlane paths to infer the remaining paths from the source. They do not report the accuracy of the IP-level paths, but we are interested in investigating this technique in future work as an alternate way to infer country paths.
{ "cite_N": [ "@cite_20" ], "mid": [ "2163206651" ], "abstract": [ "Several models have been recently proposed for predicting the latency of end to end Internet paths. These models treat the Internet as a black-box, ignoring its internal structure. While these models are simple, they can often fail systematically; for example, the most widely used models use metric embeddings that predict no benefit to detour routes even though half of all Internet routes can benefit from detours.In this paper, we adopt a structural approach that predicts path latency based on measurements of the Internet's routing topology, PoP connectivity, and routing policy. We find that our approach outperforms Vivaldi, the most widely used black-box model. Furthermore, unlike metric embeddings, our approach successfully predicts 65 of detour routes in the Internet. The number of measurements used in our approach is comparable with that required by black box techniques, but using traceroutes instead of pings." ] }
0903.3218
1678505268
The treatment of Internet traffic is increasingly affected by national policies that require the ISPs in a country to adopt common protocols or practices. Examples include government enforced censorship, wiretapping, and protocol deployment mandates for IPv6 and DNSSEC. If an entire nation's worth of ISPs apply common policies to Internet traffic, the global implications could be significant. For instance, how many countries rely on China or Great Britain (known traffic censors) to transit their traffic? These kinds of questions are surprisingly difficult to answer, as they require combining information collected at the prefix, Autonomous System, and country level, and grappling with incomplete knowledge about the AS-level topology and routing policies. In this paper we develop the first framework for country-level routing analysis, which allows us to answer questions about the influence of each country on the flow of international traffic. Our results show that some countries known for their national policies, such as Iran and China, have relatively little effect on interdomain routing, while three countries (the United States, Great Britain, and Germany) are central to international reachability, and their policies thus have huge potential impact.
Finally, there has been an enormous amount of work on developing statistical measures of network properties @cite_24 @cite_1 @cite_13 , including preferential attachment models @cite_17 and many models of the AS network @cite_9 @cite_6 @cite_7 @cite_35 @cite_22 . Some of this work measures the centrality of a node by the impact its deletion would have on network connectivity, known as deletion impact @cite_11 @cite_6 . A parallel can potentially be drawn between node deletion and censorship. For example, deleting a country from the network is conceptually similar to all other ASes collectively routing around that country.
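As a small illustration of the deletion-impact notion referenced above, the Python sketch below removes a node from an undirected graph and reports the fraction of connected (ordered) pairs among the remaining nodes that are lost; the toy graph is an assumption for illustration, not AS-level data, and an actual study would of course operate on the measured AS topology.

```python
from collections import deque

def count_connected_pairs(adj, removed=frozenset()):
    """Ordered pairs (u, v), u != v, both outside `removed`, with v reachable from u
    along paths that avoid the `removed` nodes."""
    nodes = [n for n in adj if n not in removed]
    total = 0
    for src in nodes:
        seen = {src}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    queue.append(v)
        total += len(seen) - 1
    return total

def deletion_impact(adj, node):
    """Fraction of connected pairs among the *other* nodes that disappear when `node`
    (and hence all paths through it) is deleted."""
    # Baseline: pairs among the other nodes, where paths may still transit `node`.
    baseline = 0
    for src in (n for n in adj if n != node):
        seen = {src}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        baseline += len(seen - {node}) - 1
    after = count_connected_pairs(adj, removed=frozenset({node}))
    return 1.0 - after / baseline if baseline else 0.0

# Toy undirected graph: node "C" is a cut vertex between {A, B} and {D, E}.
adj = {
    "A": {"B", "C"}, "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C", "E"}, "E": {"D"},
}
print(deletion_impact(adj, "C"))   # high impact: C sits on every A,B <-> D,E path
```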
{ "cite_N": [ "@cite_35", "@cite_11", "@cite_22", "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_24", "@cite_13", "@cite_17" ], "mid": [ "2156950966", "2050401089", "2139708889", "2168896238", "2101207682", "2056944867", "2140083824", "1971937094", "1967570846", "2008620264" ], "abstract": [ "We propose a plausible explanation of the power law distributions of degrees observed in the graphs arising in the Internet topology [Faloutsos, Faloutsos, and Faloutsos, SIGCOMM 1999] based on a toy model of Internet growth in which two objectives are optimized simultaneously: \"last mile\" connection costs, and transmission delays measured in hops. We also point out a similar phenomenon, anticipated in [Carlson and Doyle, Physics Review E 1999], in the distribution of file sizes. Our results seem to suggest that power laws tend to arise as a result of complex, multi-objective optimization.", "A common property of many large networks, including the Internet, is that the connectivity of the various nodes follows a scale-free power-law distribution, P(k)=ck^-a. We study the stability of such networks with respect to crashes, such as random removal of sites. Our approach, based on percolation theory, leads to a general condition for the critical fraction of nodes, p_c, that need to be removed before the network disintegrates. We show that for a<=3 the transition never takes place, unless the network is finite. In the special case of the Internet (a=2.5), we find that it is impressively robust, where p_c is approximately 0.99.", "Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes form a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet.", "Internet connectivity at the AS level, defined in terms of pairwise logical peering relationships, is constantly evolving. This evolution is largely a response to economic, political, and technological changes that impact the way ASs conduct their business. We present a new framework for modeling this evolutionary process by identifying a set of criteria that ASs consider either in establishing a new peering relationship or in reassessing an existing relationship. The proposed framework is intended to capture key elements in the decision processes underlying the formation of these relationships. We present two decision processes that are executed by an AS, depending on its role in a given peering decision, as a customer or a peer of another AS. When acting as a peer, a key feature of the AS’s corresponding decision model is its reliance on realistic inter-AS traffic demands. 
To reflect the enormous heterogeneity among customer or peer ASs, our decision models are flexible enough to accommodate a wide range of AS-specific objectives. We demonstrate the potential of this new framework by considering different decision models in various realistic “what if” experiment scenarios. We implement these decision models to generate and study the evolution of the resulting AS graphs over time, and compare them against observed historical evolutionary features of the Internet at the AS level.", "This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [5] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.", "Abstract The intuitive background for measures of structural centrality in social networks is reviewed and existing measures are evaluated in terms of their consistency with intuitions and their interpretability. Three distinct intuitive conceptions of centrality are uncovered and existing measures are refined to embody these conceptions. Three measures are developed for each concept, one absolute and one relative measure of the centrality of positions in a network, and one reflecting the degree of centralization of the entire network. The implications of these measures for the experimental study of small groups is examined.", "Modeling Internet growth is important both for understanding the current network and to predict and improve its future. To date, Internet models have typically attempted to explain a subset of the following characteristics: network structure, traffic flow, geography, and economy. In this paper we present a discrete, agent-based model, that integrates all of them. We show that the model generates networks with topologies, dynamics, and more speculatively spatial distributions that are similar to the Internet.", "A family of new measures of point and graph centrality based on early intuitions of Bavelas (1948) is introduced. These measures define centrality in terms of the degree to which a point falls on the shortest path between others and there fore has a potential for control of communication. They may be used to index centrality in any large or small network of symmetrical relations, whether connected or unconnected.", "", "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems." ] }
0903.1468
1800306869
We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite.
We have now accumulated sufficient information to introduce the estimation method. We define the empirical residual error @math and, for every @math , we let our estimator @math be a solution of the optimization problem @cite_16 $\min_{B}\big\{\hat{S}(B) + 2\lambda\,\|B\|_{2,1}\big\}$ .
{ "cite_N": [ "@cite_16" ], "mid": [ "2065180801" ], "abstract": [ "We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks." ] }
0903.1468
1800306869
We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite.
In order to study the statistical properties of this estimator, it is useful to derive the optimality condition for a solution of the problem. Since the objective function is convex, @math is a solution if and only if @math (the @math -dimensional zero vector) belongs to the subdifferential of the objective function. In turn, this condition is equivalent to the requirement that @math , where @math denotes the subdifferential (see, for example, @cite_4 for more information on convex analysis). Thus, @math is a solution if and only if this subdifferential condition holds.
{ "cite_N": [ "@cite_4" ], "mid": [ "1509803206" ], "abstract": [ "Background * Inequality constraints * Fenchel duality * Convex analysis * Special cases * Nonsmooth optimization * The Karush-Kuhn-Tucker Theorem * Fixed points * Postscript: infinite versus finite dimensions * List of results and notation." ] }
0903.2168
1909542576
Termination properties of actual Prolog systems with constraints are fragile and difficult to analyse. The lack of the occurs-check, moded and overloaded arithmetical evaluation via is/2 and the occasional nontermination of finite domain constraints are all sources for invalidating termination results obtained by current termination analysers that rely on idealized assumptions. In this paper, we present solutions to address these problems on the level of the underlying Prolog system. Improved unification modes meet the requirements of norm based analysers by offering dynamic occurs-check detection. A generalized finite domain solver overcomes the shortcomings of conventional arithmetic without significant runtime overhead. The solver offers unbounded domains, yet propagation always terminates. Our work improves Prolog's termination and makes Prolog a more reliable target for termination and type analysis. It is part of SWI-Prolog since version 5.6.50.
SICStus Prolog @cite_13 was the first system to generalize finite domain constraints without sacrificing correctness. It uses small integers for domains but signals domain overflows as representation errors and not as silent failures.
{ "cite_N": [ "@cite_13" ], "mid": [ "2012741612" ], "abstract": [ "We describe the design and implementation of a finite domain constraint solver embedded in a Prolog system using an extended unification mechanism via attributed variables as a generic constraint interface. The solver is essentially a scheduler for indexicals, i.e. reactive functional rules encoding local consistency methods performing incremental constraint solving or entailment checking, and global constraints, i.e. general propagators which may use specialized algorithms to achieve a higher degree of consistency or better time and space complexity." ] }
0903.0034
2952812630
A data stream model represents a setting where approximating pairwise, or @math -wise, independence with sublinear memory is of considerable importance. In the streaming model the joint distribution is given by a stream of @math -tuples, with the goal of testing correlations among the components measured over the entire stream. In the streaming model, Indyk and McGregor (SODA 08) recently gave exciting new results for measuring pairwise independence. The Indyk and McGregor methods provide an @math -approximation under statistical distance between the joint and product distributions in the streaming model. Indyk and McGregor leave, as their main open question, the problem of improving their @math -approximation for the statistical distance metric. In this paper we solve the main open problem posed by Indyk and McGregor for the statistical distance for pairwise independence and extend this result to any constant @math . In particular, we present an algorithm that computes an @math -approximation of the statistical distance between the joint and product distributions defined by a stream of @math -tuples. Our algorithm requires @math memory and a single pass over the data stream.
In our recent work, @cite_17 , we also address the problem of @math -wise independence for data streams. In contrast to the current paper, in @cite_17 we study the @math norm and use entirely different techniques.
{ "cite_N": [ "@cite_17" ], "mid": [ "12339747" ], "abstract": [ "An improved water-level control method and apparatus automatically maintains the water level in a pool by selectively filling or draining water as required during the period that the circulating system is not operating." ] }
0903.0064
1641660344
A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions, and hence have become targets of manipulation by unscrupulous vendors. We provide theoretical and empirical results demonstrating that while common nearest neighbor algorithms, which are widely used in commercial systems, can be highly susceptible to manipulation, two classes of collaborative filtering algorithms which we refer to as linear and asymptotically linear are relatively robust. These results provide guidance for the design of future collaborative filtering systems.
To the best of our knowledge, the only prior theoretical work on manipulation robustness of CF algorithms is reported in @cite_39 . This work analyzed an NN algorithm that uses the majority rating among a set of neighbors as the prediction of a user's rating in an asymptotic regime of many users, each of whom rates all products. Manipulators rate as honest users would except on one fixed product. A bound is established on the algorithm's prediction error on this product's rating as a function of the percentage of ratings provided by manipulators. In our work, we do not require users to rate all products and do not constrain manipulators to any particular strategies. Further, we study the performance distortion on average, rather than for a single product. Finally, a primary contribution of our work is in establishing manipulation robustness of linear and asymptotically linear CF algorithms, which turn out to be superior to NN algorithms in this dimension.
{ "cite_N": [ "@cite_39" ], "mid": [ "2000855935" ], "abstract": [ "Collaborative recommendation has emerged as an effective technique for personalized information access. However, there has been relatively little theoretical analysis of the conditions under which the technique is effective. To explore this issue, we analyse the robustness of collaborative recommendation: the ability to make recommendations despite (possibly intentional) noisy product ratings. There are two aspects to robustness: recommendation accuracy and stability. We formalize recommendation accuracy in machine learning terms and develop theoretically justified models of accuracy. In addition, we present a framework to examine recommendation stability in the context of a widely-used collaborative filtering algorithm. For each case, we evaluate our analysis using several real-world data-sets. Our investigation is both practically relevant for enterprises wondering whether collaborative recommendation leaves their marketing operations open to attack, and theoretically interesting for the light it sheds on a comprehensive theory of collaborative recommendation." ] }
0903.0064
1641660344
A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions, and hence have become targets of manipulation by unscrupulous vendors. We provide theoretical and empirical results demonstrating that while common nearest neighbor algorithms, which are widely used in commercial systems, can be highly susceptible to manipulation, two classes of collaborative filtering algorithms which we refer to as linear and asymptotically linear are relatively robust. These results provide guidance for the design of future collaborative filtering systems.
Distortion due to manipulation may also be viewed as a loss of utility in a sequential decision problem induced by errors in initial beliefs. Our analysis is based on ideas similar to those that have been used to study the latter topic, which is discussed in @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2600183932" ], "abstract": [ "An observer of a process View the MathML source believes the process is governed by Q whereas the true law is P. We bound the expected average distance between P(xt|x1,...,xt−1) and Q(xt|x1,...,xt−1) for t=1,...,n by a function of the relative entropy between the marginals of P and Q on the n first realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of Q to P." ] }
0903.0742
1530264610
We introduce hierarchical neighbor graphs, a new architecture for connecting ad hoc wireless nodes distributed in a plane. The structure has the flavor of hierarchical clustering and requires only local knowledge and minimal computation at each node to be formed and repaired. Hence, it is a suitable interconnection model for an ad hoc wireless sensor network. The structure is able to use energy efficiently by reorganizing dynamically when the battery power of heavily utilized nodes degrades and is able to achieve throughput, energy efficiency and network lifetimes that compare favorably with the leading proposals for data collation in sensor networks such as LEACH (Heinzelman et. al., 2002). Additionally, hierarchical neighbor graphs have low power stretch i.e. the power required to connect nodes through the network is a small factor higher than the power required to connect them directly. Our structure also compares favorably to mathematical structures proposed for connecting points in a plane e.g. nearest-neighbor graphs (Ballister et. al., 2005), @math -graphs (Ruppert and Seidel, 1991), in that it has expected constant degree and does not require any significant computation or global information to be formed.
Another method related to our approach, in the sense that clustering is implicit, is CLUSTERPOW, introduced by Kawadia and Kumar @cite_21 . No leader or gateway is explicitly selected; rather, the clustered structure of the network is manifested in the way routing is done. Each node can transmit at a finite number of different power levels, and a route is a non-increasing sequence of power levels.
{ "cite_N": [ "@cite_21" ], "mid": [ "2122859412" ], "abstract": [ "In this paper, we consider the problem of power control when nodes are nonhomogeneously dispersed in space. In such situations, one seeks to employ per packet power control depending on the source and destination of the packet. This gives rise to a joint problem which involves not only power control but also clustering. We provide three solutions for joint clustering and power control. The first protocol, CLUSTERPOW, aims to increase the network capacity by increasing spatial reuse. We provide a simple and modular architecture to implement CLUSTERPOW at the network layer. The second, Tunnelled CLUSTERPOW, allows a finer optimization by using encapsulation, but we do not know of an efficient way to implement it. The last, MINPOW, whose basic idea is not new, provides an optimal routing solution with respect to the total power consumed in communication. Our contribution includes a clean implementation of MINPOW at the network layer without any physical layer support. We establish that all three protocols ensure that packets ultimately reach their intended destinations. We provide a software architectural framework for our implementation as a network layer protocol. The architecture works with any routing protocol, and can also be used to implement other power control schemes. Details of the implementation in Linux are provided." ] }
0903.0742
1530264610
We introduce hierarchical neighbor graphs, a new architecture for connecting ad hoc wireless nodes distributed in a plane. The structure has the flavor of hierarchical clustering and requires only local knowledge and minimal computation at each node to be formed and repaired. Hence, it is a suitable interconnection model for an ad hoc wireless sensor network. The structure is able to use energy efficiently by reorganizing dynamically when the battery power of heavily utilized nodes degrades and is able to achieve throughput, energy efficiency and network lifetimes that compare favorably with the leading proposals for data collation in sensor networks such as LEACH (Heinzelman et. al., 2002). Additionally, hierarchical neighbor graphs have low power stretch i.e. the power required to connect nodes through the network is a small factor higher than the power required to connect them directly. Our structure also compares favorably to mathematical structures proposed for connecting points in a plane e.g. nearest-neighbor graphs (Ballister et. al., 2005), @math -graphs (Ruppert and Seidel, 1991), in that it has expected constant degree and does not require any significant computation or global information to be formed.
On the mathematical front, nearest neighbor models have been studied by Häggström and Meester @cite_18 who proved that there is a critical value @math dependent on the dimension of the space such that if each point is connected to its @math nearest neighbors for any @math , an infinite component exists almost surely. Restricting this model to a square box of area @math , Xue and Kumar @cite_7 showed that @math must be at least 0.074 @math for the graph to be connected. This lower bound was improved to 0.3043 @math by Ballister, Bollobás, Sarkar and Walters @cite_1 who also showed that the threshold for connectivity is at most @math and gave corresponding results for the directed version of the problem. The advantage our model enjoys over this model is that by selecting neighbors carefully from the set of proximate nodes, hierarchical neighbor graphs achieve constant degree in expectation. Additionally our hierarchical structure also ensures paths with fewer hops.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_7" ], "mid": [ "2015859644", "2056986231", "" ], "abstract": [ "Consider a Poisson process X in Rd with density 1. We connect each point of X to its k nearest neighbors by undirected edges. The number k is the parameter in this model. We show that, for k = 1, no percolation occurs in any dimension, while, for k = 2, percolation occurs when the dimension is sufficiently large. We also show that if percolation occurs, then there is exactly one infinite cluster. Another percolation model is obtained by putting balls of radius zero around each point of X and let the radii grow linearly in time until they hit another ball. We show that this model exists and that there is no percolation in the limiting configuration. Finally we discuss some general properties of percolation models where balls placed at Poisson points are not allowed to overlap (but are allowed to be tangent). 0 1996 John Wiley & Sons, Inc.", "Let P be a Poisson process of intensity one in a square S n of area n. We construct a random geometric graph G n , k by joining each point of P to its k ≡ k(n) nearest neighbours. Recently, Xue and Kumar proved that if k ≤ 0.074 log n then the probability that G n , k is connected tends to 0 as n → ∞ while, if k ≥ 5.1774 log n, then the probability that G n , k is connected tends to 1 as n → ∞. They conjectured that the threshold for connectivity is k = (1 + o(1)) log n. In this paper we improve these lower and upper bounds to 0.3043 log n and 0.5139 log n, respectively, disproving this conjecture. We also establish lower and upper bounds of 0.7209 log n and 0.9967 log n for the directed version of this problem. A related question concerns coverage. With G n , k as above, we surround each vertex by the smallest (closed) disc containing its k nearest neighbours. We prove that if k ≤ 0.7209 log n then the probability that these discs cover S n tends to 0 as n → ∞ while, if k > 0.9967 log n, then the probability that the discs cover S n tends to 1 as n → ∞.", "" ] }
0903.0742
1530264610
We introduce hierarchical neighbor graphs, a new architecture for connecting ad hoc wireless nodes distributed in a plane. The structure has the flavor of hierarchical clustering and requires only local knowledge and minimal computation at each node to be formed and repaired. Hence, it is a suitable interconnection model for an ad hoc wireless sensor network. The structure is able to use energy efficiently by reorganizing dynamically when the battery power of heavily utilized nodes degrades and is able to achieve throughput, energy efficiency and network lifetimes that compare favorably with the leading proposals for data collation in sensor networks such as LEACH (Heinzelman et. al., 2002). Additionally, hierarchical neighbor graphs have low power stretch i.e. the power required to connect nodes through the network is a small factor higher than the power required to connect them directly. Our structure also compares favorably to mathematical structures proposed for connecting points in a plane e.g. nearest-neighbor graphs (Ballister et. al., 2005), @math -graphs (Ruppert and Seidel, 1991), in that it has expected constant degree and does not require any significant computation or global information to be formed.
Finally, we end by mentioning that hierarchical neighbor graphs bear more than a passing resemblance to the skip list data structure @cite_19 and its biased version @cite_2 .
{ "cite_N": [ "@cite_19", "@cite_2" ], "mid": [ "2070991879", "2121029207" ], "abstract": [ "Skip lists are data structures that use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.", "We design a variation of skip lists that performs well for generally biased access sequences. Given n items, each with a positive weight wi, 1 ≤ i ≤ n, the time to access item i is O(1 + log (W wi)), where W=∑i=1nwi; the data structure is dynamic. We present two instantiations of biased skip lists, one of which achieves this bound in the worst case, the other in the expected case. The structures are nearly identical; the deterministic one simply ensures the balance condition that the randomized one achieves probabilistically. We use the same method to analyze both." ] }
0903.1137
2950232715
Complexity theory is a useful tool to study computational issues surrounding the elicitation of preferences, as well as the strategic manipulation of elections aggregating together preferences of multiple agents. We study here the complexity of determining when we can terminate eliciting preferences, and prove that the complexity depends on the elicitation strategy. We show, for instance, that it may be better from a computational perspective to elicit all preferences from one agent at a time than to elicit individual preferences from multiple agents. We also study the connection between the strategic manipulation of an election and preference elicitation. We show that what we can manipulate affects the computational complexity of manipulation. In particular, we prove that there are voting rules which are easy to manipulate if we can change all of an agent's vote, but computationally intractable if we can change only some of their preferences. This suggests that, as with preference elicitation, a fine-grained view of manipulation may be informative. Finally, we study the connection between predicting the winner of an election and preference elicitation. Based on this connection, we identify a voting rule where it is computationally difficult to decide the probability of a candidate winning given a probability distribution over the votes.
Procaccia and Rosenschein studied the average-case complexity of manipulation @cite_14 . Worst-case results like those here may not apply to elections in practice. They consider elections obeying junta distributions, which concentrate on hard instances. They prove that scoring rules, which are NP-hard to manipulate in the worst case, are computationally easy on average. In a related direction, Conitzer and Sandholm have shown that it is impossible to create a voting rule that is usually hard to manipulate if a large fraction of instances are weakly monotone and manipulation can make either of exactly two candidates win @cite_3 .
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "1493942848", "1566914083" ], "abstract": [ "Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.", "Aggregating the preferences of self-interested agents is a key problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success, exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, in this paper, we show that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone, and allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly satisfy it. We also discuss approaches for potentially circumventing this impossibility result." ] }
0903.1137
2950232715
Complexity theory is a useful tool to study computational issues surrounding the elicitation of preferences, as well as the strategic manipulation of elections aggregating together preferences of multiple agents. We study here the complexity of determining when we can terminate eliciting preferences, and prove that the complexity depends on the elicitation strategy. We show, for instance, that it may be better from a computational perspective to elicit all preferences from one agent at a time than to elicit individual preferences from multiple agents. We also study the connection between the strategic manipulation of an election and preference elicitation. We show that what we can manipulate affects the computational complexity of manipulation. In particular, we prove that there are voting rules which are easy to manipulate if we can change all of an agent's vote, but computationally intractable if we can change only some of their preferences. This suggests that, as with preference elicitation, a fine-grained view of manipulation may be informative. Finally, we study the connection between predicting the winner of an election and preference elicitation. Based on this connection, we identify a voting rule where it is computationally difficult to decide the probability of a candidate winning given a probability distribution over the votes.
Faliszewski et al. studied a form of preference manipulation, called "micro-bribery", in which individual preferences of agents can be manipulated @cite_4 . Note that the resulting orders may not be transitive. Interestingly, they proved that for the Llull and Copeland rules, it is polynomial-time for the chair to perform such manipulation of individual preferences, but computationally intractable when the chair can only manipulate whole votes. This contrasts with the results here, where we prove that there are rules, like the cup and Copeland rules, which are easy to manipulate by a coalition if we can change whole votes, but computationally intractable when we can change only individual preferences.
{ "cite_N": [ "@cite_4" ], "mid": [ "103664523" ], "abstract": [ "Control of elections refers to attempts by an agent to, via such actions as addition deletion partition of candidates or voters, ensure that a given candidate wins (Bartholdi, Tovey, & Trick 1992). An election system in which such an agent's computational task is NP-hard is said to be resistant to the given type of control. Aside from election systems with an NP-hard winner problem, the only systems known to be resistant to all the standard control types are highly artificial election systems created by hybridization (Hemaspaandra, Hemaspaandra, & Rothe 2007b). In this paper, we prove that an election system developed by the 13th century mystic Ramon Llull and the well-studied Copeland election system are both resistant to all the standard types of (constructive) electoral control other than one variant of addition of candidates. This is the most comprehensive resistance to control yet achieved by any natural election system whose winner problem is in P. In addition, we show that Llull and Copeland voting are very broadly resistant to bribery attacks, and we integrate the potential irrationality of voter preferences into many of our results." ] }
0903.1139
2950314343
Constraint propagation is one of the techniques central to the success of constraint programming. To reduce search, fast algorithms associated with each constraint prune the domains of variables. With global (or non-binary) constraints, the cost of such propagation may be much greater than the quadratic cost for binary constraints. We therefore study the computational complexity of reasoning with global constraints. We first characterise a number of important questions related to constraint propagation. We show that such questions are intractable in general, and identify dependencies between the tractability and intractability of the different questions. We then demonstrate how the tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when constraints can be safely generalized, when decomposing constraints will reduce the amount of pruning, and when combining constraints is tractable.
Analysis of tractability and intractability is not new in constraint programming. Identifying properties under which a constraint satisfaction problem is tractable has been studied for a long time. For example, Freuder @cite_12 , Dechter and Pearl @cite_13 @cite_33 or @cite_17 gave increasingly general conditions on the structure of the underlying (hyper)graph to obtain a backtrack-free resolution of a problem. van Beek and Dechter @cite_30 and @cite_39 presented conditions on the semantics of the individual constraints that make the problem tractable. Finally, @cite_36 showed that when the constraints composing a problem are defined as disjunctions of other constraints of specified types, then the whole problem is tractable. However, these lines of research are concerned with a constraint satisfaction problem as a whole, and do not say much about individual particular constraints.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_36", "@cite_39", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2041737564", "1999038767", "1861000399", "1985525233", "", "", "2034674470" ], "abstract": [ "Constraint networks have been shown to be useful in formulating such diverse problems as scene labeling, natural language parsing, and temporal reasoning. Given a constraint network, we often wish to (i) find a solution that satisfies the constraints and (ii) find the corresponding minimal network where the constraints are as explicit as possible. Both tasks are known to be NP-complete in the general case. Task (1) is usually solved using a backtracking algorithm, and task (ii) is often solved only approximately by enforcing various levels of local consistency. In this paper, we identify a property of binary constraint called row convexity and show its usefulness in deciding when a form of local consistency called path consistency is sufficient to guarantee that a network is both minimal and globally consistent. Globally consistent networks have the property that a solution can be found without backtracking. We show that one can test for the row convexity property efficiently and we show, by examining applications of constraint networks discussed in the literature, that our results are useful in practice. Thus, we identify a class of binary constraint networks for which we can solve both tasks (i) and (ii) efficiently. Finally, we generalize the results for binary constraint networks to networks with nonbinary constraints.", "Abstract The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.", "Many combinatorial search problems can be expressed as 'constraint satisfaction problems', and this class of problems is known to be NP-complete in general. In this paper we investigate 'disjunctive constraints', that is, constraints which have the form of the disjunction of two constraints of specified types. We show that when the constraint types involved in the disjunction have a certain property, which we call 'independence', and when a certain restricted class of problems is tractable, then the class of all problems involving these disjunctive constraints is tractable. We give examples to show that many known examples of tractable constraint classes arise in this way, and derive new tractable classes which have not previously been identified.", "This paper studies constraint satisfaction over connected row-convex (CRC) constraints. It shows that CRC constraints are closed under composition, intersection, and transposition, the basic operations of path-consistency algorithms. This establishes that path consistency over CRC constraints produces a minimal and decomposable network and is thus a polynomial-time decision procedure for CRC networks. This paper also presents a new path-consistency algorithm for CRC constraints running in time O(n(3)d(2)) and space O(n(2)d), where n is the number of variables and d is the size of the largest domain, improving the traditional time and space complexity by orders of magnitude. 
The paper also shows how to construct CRC constraints by conjunction and disjunction of a set of basic CRC constraints, highlighting how CRC constraints generalize monotone constraints and presenting interesting subclasses of CRC constraints. Experimental results show that the algorithm behaves well in practice. (C) 1999 Elsevier Science B.V. All rights reserved.", "", "", "Abstract We compare tractable classes of constraint satisfaction problems (CSPs). We first give a uniform presentation of the major structural CSP decomposition methods. We then introduce a new class of tractable CSPs based on the concept of hypertree decomposition recently developed in Database Theory, and analyze the cost of solving CSPs having bounded hypertree-width. We provide a framework for comparing parametric decomposition-based methods according to tractability criteria and compare the most relevant methods. We show that the method of hypertree decomposition dominates the others in the case of general CSPs (i.e., CSPs of unbounded arity). We also make comparisons for the restricted case of binary CSPs. Finally, we consider the application of decomposition methods to the dual graph of a hypergraph. In fact, this technique is often used to exploit binary decomposition methods for nonbinary CSPs. However, even in this case, the hypertree-decomposition method turns out to be the most general method." ] }
0902.4658
1526888200
All major on-line social networks, such as MySpace, Facebook, LiveJournal, and Orkut, are built around the concept of friendship. It is not uncommon for a social network participant to have over 100 friends. A natural question arises: are they all real friends of hers, or does she mean something different when she calls them "friends?" Speaking in other words, what is the relationship between off-line (real, traditional) friendship and its on-line (virtual) namesake? In this paper, we use sociological data to suggest that there is a significant difference between the concepts of virtual and real friendships. We further investigate the structure of on-line friendship and observe that it follows the Pareto (or double Pareto) distribution and is subject to age stratification but not to gender segregation. We introduce the concept of digital personality that quantifies the willingness of a social network participant to engage in virtual friendships.
The mechanisms of "friendship" allow the users to establish new connections and enable social searching (locating and maintaining offline connections online, in a virtual setting, to learn more about them, date with them, or even engage in casual sex) @cite_0 . It has been shown that through "friendships" users impact their friends' decisions @cite_14 , affect the predictability of their friends' actions and adopt behaviours exhibited by their friends @cite_1 , and influence the behavior patterns of their friends @cite_15 @cite_9 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_1", "@cite_0", "@cite_15" ], "mid": [ "2165190056", "2141667528", "", "1997879853", "1987871578" ], "abstract": [ "Traditional online social network sites use a single monolithic \"friends\" relationship to link users. However, users may have more in common with strangers, suggesting the use of a \"similarity network\" to recommend content. This paper examines the usefulness of this distinction in propagating new content. Using both macroscopic and microscopic social dynamics, we present an analysis of Essembly, an ideological social network that semantically distinguishes between friends and ideological allies and nemeses. Although users have greater similarity with their allies than their friends and nemeses, surprisingly, the allies network does not affect voting behavior, despite being as large as the friends network. In contrast, users are influenced differently by their friends and nemeses, indicating that people use these networks for distinct purposes. We suggest resulting design implications for social content aggregation services and recommender systems.", "Characterizing the relationship that exists between a person's social group and his her personal behavior has been a long standing goal of social network analysts. In this paper, we apply data mining techniques to study this relationship for a population of over 10 million people, by turning to online sources of data. The analysis reveals that people who chat with each other (using instant messaging) are more likely to share interests (their Web searches are the same or topically similar). The more time they spend talking, the stronger this relationship is. People who chat with each other are also more likely to share other personal characteristics, such as their age and location (and, they are likely to be of opposite gender). Similar findings hold for people who do not necessarily talk to each other but do have a friend in common. Our analysis is based on a well-defined mathematical formulation of the problem, and is the largest such study we are aware of.", "", "Large numbers of college students have become avid Facebook users in a short period of time. In this paper, we explore whether these students are using Facebook to find new people in their offline communities or to learn more about people they initially meet offline. Our data suggest that users are largely employing Facebook to learn more about people they meet offline, and are less likely to use the site to initiate new connections.", "Online social networks pose an interesting problem: how to best characterize the different classes of user behavior. Traditionally, user behavior characterization methods, based on user individual features, are not appropriate for online networking sites. In these environments, users interact with the site and with other users through a series of multiple interfaces that let them to upload and view content, choose friends, rank favorite content, subscribe to users and do many other interactions. Different interaction patterns can be observed for different groups of users. In this paper, we propose a methodology for characterizing and identifying user behaviors in online social networks. First, we crawled data from YouTube and used a clustering algorithm to group users that share similar behavioral pattern. 
Next, we have shown that attributes that stem from the user social interactions, in contrast to attributes relative to each individual user, are good discriminators and allow the identification of relevant user behaviors. Finally, we present and discuss experimental results of the use of proposed methodology. A set of useful profiles, derived from the analysis of the YouTube sample is presented. The identification of different classes of user behavior has the potential to improve, for instance, recommendation systems for advertisements in online social networks." ] }
0902.4569
2951940883
In this paper, a many-sources large deviations principle (LDP) for the transient workload of a multi-queue single-server system is established where the service rates are chosen from a compact, convex and coordinate-convex rate region and where the service discipline is the max-weight policy. Under the assumption that the arrival processes satisfy a many-sources LDP, this is accomplished by employing Garcia's extended contraction principle that is applicable to quasi-continuous mappings. For the simplex rate-region, an LDP for the stationary workload is also established under the additional requirements that the scheduling policy be work-conserving and that the arrival processes satisfy certain mixing conditions. The LDP results can be used to calculate asymptotic buffer overflow probabilities accounting for the multiplexing gain, when the arrival process is an average of processes. The rate function for the stationary workload is expressed in term of the rate functions of the finite-horizon workloads when the arrival processes have increments.
In our LDP analysis, we follow the lead of many recent papers on the analysis of scheduling algorithms @cite_15 @cite_49 @cite_16 @cite_8 @cite_47 @cite_3 @cite_50 @cite_6 by considering logarithmic asymptotics of the probabilities of certain rare events. The maximum weight scheduling policy falls under the class of generalized @math -rule policies and is known to be stabilizing under very mild conditions @cite_45 @cite_44 @cite_51 @cite_25 @cite_54 . A refined analysis of this policy shows that it minimizes the workload in the heavy traffic regime @cite_40 @cite_31 @cite_27 over a large class of stationary online policies. This optimality of the max-weight policies also carries over to large-deviations-based tail asymptotes: the work-conserving version of these policies is known to minimize the exponent of the tail asymptote of the stationary workload over a large class of stationary, online and work-conserving policies @cite_49 .
{ "cite_N": [ "@cite_31", "@cite_15", "@cite_8", "@cite_54", "@cite_3", "@cite_6", "@cite_44", "@cite_40", "@cite_45", "@cite_27", "@cite_49", "@cite_50", "@cite_47", "@cite_16", "@cite_51", "@cite_25" ], "mid": [ "", "2168965212", "1976570693", "2108327442", "2079031591", "1639994867", "", "2081597576", "2105177639", "1999102639", "", "", "", "2125630120", "2003346154", "1510598052" ], "abstract": [ "", "We consider a multiclass multiplexer with support for multiple service classes and dedicated buffers for each service class. Under specific scheduling policies for sharing bandwidth among these classes, we seek the asymptotic (as the buffer size goes to infinity) tail of the buffer overflow probability for each dedicated buffer. We assume dependent arrival and service processes as is usually the case in models of bursty traffic. In the standard large deviations methodology, we provide a lower and a matching (up to first degree in the exponent) upper bound on the buffer overflow probabilities. We introduce a novel optimal control approach to address these problems. In particular, we relate the lower bound derivation to a deterministic optimal control problem, which we explicitly solve. Optimal state trajectories of the control problem correspond to typical congestion scenarios. We explicitly and in detail characterize the most likely modes of overflow. We specialize our results to the generalized processor sharing policy (GPS) and the generalized longest queue first policy (GLQF). The performance of strict priority policies is obtained as a corollary. We compare the GPS and GLQF policies and conclude that GLQF achieves smaller overflow probabilities than GPS for all arrival and service processes for which our analysis holds. Our results have important implications for traffic management of high-speed networks and can be used as a basis for an admission control mechanism which guarantees a different loss probability for each class.", "In this correspondence, we consider a cellular network consisting of a base station and N receivers. The channel states of the receivers are assumed to be identical and independent of each other. The goal is to compare the throughput of two different scheduling policies (a queue-length-based (QLB) policy and a greedy policy) given an upper bound on the queue overflow probability or the delay violation probability. We consider a multistate channel model, where each channel is assumed to be in one of L states. Given an upper bound on the queue overflow probability or an upper bound on the delay violation probability, we show that the total network throughput of the (QLB) policy is no less than the throughput of the greedy policy for all N. We also obtain a lower bound on the throughput of the (QLB) policy. For sufficiently large N, the lower bound is shown to be tight, strictly increasing with N, and strictly larger than the throughput of the greedy policy. Further, for a simple multistate channel model-ON-OFF channel, we prove that the lower bound is tight for all N", "We consider the following queuing system which arises as a model of a wireless link shared by multiple users. There is a finite number N of input flows served by a server. The system operates in discrete time t = 0,1,2,…. Each input flow can be described as an irreducible countable Markov chain; waiting customers of each flow are placed in a queue. The sequence of server states m(t), t = 0,1,2,…, is a Markov chain with finite number of states M. 
When the server is in state m, it can serve mim customers of flow i (in one time slot).The scheduling discipline is a rule that in each time slot chooses the flow to serve based on the server state and the state of the queues. Our main result is that a simple online scheduling discipline, Modified Largest Weighted Delay First, along with its generalizations, is throughput optimal; namely, it ensures that the queues are stable as long as the vector of average arrival rates is within the system maximum stability region.", "We consider a single server discrete-time system with K users where the server picks operating points from a compact, convex and co-ordinate convex set in R+ K. For this system we analyse the performance of a stablising policy that at any given time picks operating points from the allowed rate region that maximise a weighted sum of rate, where the weights depend upon the workloads of the users. Assuming a large deviations principle (LDP) for the arrival processes in the Skorohod space of functions that are right-continuous with left-hand limits we establish an LDP for the workload process using a generalised version of the contraction principle to derive the corresponding rate function. With the LDP result available we then analyse the tail probabilities of the workloads under different buffering scenarios.", "In this paper, we study discrete-time priority queueing systems fed by a large number of arrival streams. We first provide bounds on the actual delay asymptote in terms of the virtual delay asymptote. Then, under suitable assumptions on the arrival process to the queue, we show that these asymptotes are the same. As an application of this result, we then consider a priority queueing system with two queues. Using the earlier result, we derive an upper bound on the tail probability of the delay. Under certain assumptions on the rate function of the arrival process, we show that the upper bound is tight. We then consider a system with Markovian arrivals and numerically evaluate the delay tail probability and validate these results with simulations.", "", "We consider a general single-server multiclass queueing system that incurs a delay cost Ck(Tk) for each class k job that resides Tk units of time in the system. This paper derives a scheduling policy that minimizes the total cumulative delay cost when the system operates during a finite time horizon. Denote the marginal delay cost function and the (possibly non-stationary) average processing time of class k by ck = C'k and 1 uk, respectively, and let ak(t) be the \"age\" or time that the oldest class k job has been waiting at time t. We call the scheduling policy that at time t serves the oldest waiting job of that class k with the highest index uk(t)ck(ak(t)), the generalized cu rule. As a dynamic priority rule that depends on very little data, the generalized cu rule is attractive to implement. We show that, with nondecreasing convex delay costs, the generalized cu rule is asymptotically optimal if the system operates in heavy traffic and give explicit expressions for the associated performance characteristics: the delay (throughput time) process and the minimum cumulative delay cost. The optimality result is robust in that it holds for a countable number of classes and several homogeneous servers in a nonstationary, deterministic or stochastic environment where arrival and service processes can be general and interdependent.", "The stability of a queueing network with interdependent servers is considered. 
The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >", "In this work we consider a problem related to the equilibrium Statistical Mechanics of Spin Glasses, namely the study of the Gibbs measure of the Random Energy Model. For solving this problem new results of independent interest on sums of spacings for i.i.d. Gaussian random variables are presented. Then we give a precise description of the support of the Gibbs measure below the critical temperature.", "", "", "", "Multiuser scheduling in a wireless context, where channel state information is exploited at the base station, can result in significant throughput gains to users. However, when QoS constraints are imposed (in the form of overflow probabilities), the benefits of multiuser scheduling are not clear. In this paper, we address this question for independent and identically distributed ON-OFF channel models, and study a ldquomultiuserrdquo formulation of effective capacity with QoS constraints. We consider a channel-aware greedy rule as well as the channel-aware max-queue rule, and showed that these algorithms that yield the same long-term throughput without QoS constraints have very different performance when QoS constraints are imposed. Next, we study the effective capacity for varying channel burstiness. From results on multiuser scheduling, we expect the long-term throughput to grow with increasing channel burstiness. However, we show that the throughput with QoS constraints decreases with increasing channel burstiness. The intuitive justification for this is that with increasing burstiness, even though the the long-term throughput increases, the channel access delay increases as well resulting in poor QoS performance.", "It is well known that head-of-line blocking limits the throughput of an input-queued switch with first-in-first-out (FIFO) queues. Under certain conditions, the throughput can be shown to be limited to approximately 58.6 . It is also known that if non-FIFO queueing policies are used, the throughput can be increased. However, it has not been previously shown that if a suitable queueing policy and scheduling algorithm are used, then it is possible to achieve 100 throughput for all independent arrival processes. In this paper we prove this to be the case using a simple linear programming argument and quadratic Lyapunov function. In particular, we assume that each input maintains a separate FIFO queue for each output and that the switch is scheduled using a maximum weight bipartite matching algorithm. We introduce two maximum weight matching algorithms: longest queue first (LQF) and oldest cell first (OCF). 
Both algorithms achieve 100 throughput for all independent arrival processes. LQF favors queues with larger occupancy, ensuring that larger queues will eventually be served. However, we find that LQF can lead to the permanent starvation of short queues. OCF overcomes this limitation by favoring cells with large waiting times.", "We study a processing system comprised of parallel queues, whose individual service rates are specified by a global service mode (configuration). The issue is how to switch the system between various possible service modes, so as to maximize its throughput and maintain stability under the most workload-intensive input traffic traces (arrival processes). Stability preserves the job inflow–outflow balance at each queue on the traffic traces. Two key families of service policies are shown to maximize throughput, under the mild condition that traffic traces have long-term average workload rates. In the first family of cone policies, the service mode is chosen based on the system backlog state belonging to a corresponding cone. Two distinct policy classes of that nature are investigated, MaxProduct and FastEmpty. In the second family of batch policies (BatchAdapt), jobs are collectively scheduled over adaptively chosen horizons, according to an asymptotically optimal, robust schedule. The issues of nonpreemptive job processing and non-negligible switching times between service modes are addressed. The analysis is extended to cover feed-forward networks of such processing systems nodes. The approach taken unifies and generalizes prior studies, by developing a general trace-based modeling framework (sample-path approach) for addressing the queueing stability problem. It treats the queueing structure as a deterministic dynamical system and analyzes directly its evolution trajectories. It does not require any probabilistic superstructure, which is typically used in previous approaches. Probability can be superposed later to address finer performance questions (e.g., delay). The throughput maximization problem is seen to be primarily of structural nature. The developed methodology appears to have broader applicability to other queueing systems." ] }
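A minimal sketch of the max-weight rule discussed in the related-work passage above, for a finite rate region: in each slot the server picks the operating point maximizing the workload-weighted sum of rates, serves accordingly, and workloads are updated. The arrival model, rate region, and parameters below are arbitrary illustrative choices, not those of the paper.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical rate region for two users: a few extreme operating points.
rate_points = np.array([[2.0, 0.0], [0.0, 2.0], [1.2, 1.2]])
K, T = rate_points.shape[1], 10_000
q = np.zeros(K)                                  # workloads (queue lengths)
max_q = np.zeros(K)

for _ in range(T):
    arrivals = rng.poisson(0.9, size=K)          # mean rates chosen inside the rate region
    q += arrivals
    weights = rate_points @ q                    # sum_i q_i * r_i for each operating point
    r = rate_points[np.argmax(weights)]          # max-weight choice of operating point
    q = np.maximum(q - r, 0.0)                   # serve; workload cannot go negative
    max_q = np.maximum(max_q, q)

print("final workloads:", q, " peak workloads:", max_q)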
0902.4822
1514171909
We present a novel characterization of how a program stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clusters. It is based on the estimation of stack distance probability distributions. The analysis requires the observation of a very small subset of memory accesses, and yields a reasonable to very accurate prediction in constant time.
Cycle-accurate simulators return a cache event in response to each instruction. They require a handle on the application being executed @cite_19 or an exhaustive trace of the execution @cite_23 @cite_8 . Although trace compression methods exist, these simulators are slow compared to other predictors @cite_1 .
{ "cite_N": [ "@cite_19", "@cite_1", "@cite_23", "@cite_8" ], "mid": [ "1574450869", "2146245440", "1742916775", "" ], "abstract": [ "Due to the increasing gap between processor speed and memory access time, a large fraction of a program's execution time is spent in accesses to the various levels in the memory hierarchy. Hence, cache-aware programming is of prime importance. For efficiently utilizing the memory subsystem, many architecture-specific characteristics must be taken into account: cache size, replacement strategy, access latency, number of memory levels, etc.In this paper, we present a simulator for the accurate performance prediction of sequential and parallel programs on shared memory systems. It assists the programmer in locating the critical parts of the code that have the greatest impact on the overall performance. Our simulator is based on the Latency-of-Data-Access Model, that focuses on the modeling of the access times to different memory levels.We describe the design of our simulator, its configuration and its usage in an example application.", "Modern Application Specific Instruction Set Processors (ASIPs) have customizable caches, where the size, associativity and line size can all be customized to suit a particular application. To find the best cache size suited for a particular embedded system, the application(s) is are executed, traces obtained, and caches simulated. Typically, program trace files can range from a few megabytes to several gigabytes. Simulation of cache performance using large program trace files is a time consuming process. In this paper, a novel instruction cache simulation methodology that can operate directly on a compressed program trace file without the need for decompression is presented. This feature allowed our simulation methodology to have an average speed up of 9.67 times compared to the existing state of the art tool (Dinero IV cache simulator), for a range of applications from the Mediabench suite.", "This paper presents a new method of quantifying and visualizing the locality characteristics of any reference stream. After deriving a locality function, we show the correspondence between features of the locality function and common low-level program structures. We then apply the method to determine the locality characteristics of reference streams generated by a variety of synthetic models. These characteristics are shown to be substantially different from those of the reference trace used to determine the parameters of the models. We conclude that these synthetic models have serious inadequacies for evaluating the performance of memory hierarchies.", "" ] }
0902.4822
1514171909
We present a novel characterization of how a program stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clusters. It is based on the estimation of stack distance probability distributions. The analysis requires the observation of a very small subset of memory accesses, and yields a reasonable to very accurate prediction in constant time.
How well a program behaves with respect to the cache has been explained in the literature through the notion of program locality @cite_16 @cite_3 @cite_2 . Program locality admits a variety of descriptions, and reducing the size of these descriptions has always been a challenge for performance prediction. Programs can be decomposed into building blocks @cite_11 @cite_22 @cite_13 , but the resulting descriptions are still substantial and do not apply to all kinds of caches.
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_2", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "", "2096230186", "2155239436", "2079632486", "2156697773", "2044638745" ], "abstract": [ "", "Modern computer systems, as well as the Internet, use caching to maximize their efficiency. Nowadays, caching occurs in many different system layers. Analysis of these layers will lead to a deeper understanding of cache performance. ... comes from the uniprocessor environment. Spatial locality implies that the next data item in the address space is most likely to be used next, while temporal locality implies that the last data item used is most likely to be used next. Implementation is typically based on a fast but expensive memory (the price is affordable because, by definition, cache memory is small). Even if we use the same technology for the main memory and cache memory, the cache memory will be faster because smaller memories have a shorter access time. Recent research tries to split the CPU cache into two subcaches: one for spatial locality and one for temporal locality. # SMP On the SMP level, spatial and temporal", "There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this paper we take a first step in this direction. We propose a framework for describing how reference streams are transformed as they pass through the Internet, based on three operations: aggregation, disaggregation, and filtering. We also propose metrics to capture the temporal locality of reference streams in this framework. We argue that these metrics (marginal entropy and interreference coefficient of variation) are more natural and more useful than previously proposed metrics for temporal locality; and we show that these metrics provide insight into the nature of reference stream transformations in the Web.", "The property of locality in program behavior has been studied and modelled extensively because of its application to memory design, code optimization, multiprogramming etc. We propose a k order Markov chain based scheme to model the sequence of time intervals between successive references to the same address in memory during program execution. Each unique address in a program is modelled separately. To validate our model, which we call the Inter-Reference Gap (IRG) model, we show substantial improvements in three different areas where it is applied. (1) We improve upon the miss ratio for the Least Recently Used (LRU) memory replacement algorithm by up to 37 . (2) We achieve up to 22 space-time product improvement over the Working Set (WS) algorithm for dynamic memory management. (3) A new trace compression technique is proposed which compresses up to 2.5 with zero error in WS simulations and up to 3.7 error in the LRU simulations. All these results are obtained experimentally, via trace driven simulations over a wide range of cache traces, page reference traces, object traces and database traces.", "Performance prediction across platforms is increasingly important as developers can choose from a wide range of execution platforms. The main challenge remains to perform accurate predictions at a low-cost across different architectures. 
In this paper, we derive an affordable method approaching cross-platform performance translation based on relative performance between two platforms. We argue that relative performance can be observed without running a parallel application in full. We show that it suffices to observe very short partial executions of an application since most parallel codes are iterative and behave predictably manner after a minimal startup period. This novel prediction approach is observation-based. It does not require program modeling, code analysis, or architectural simulation. Our performance results using real platforms and production codes demonstrate that prediction derived from partial executions can yield high accuracy at a low cost. We also assess the limitations of our model and identify future research directions on observationbased performance prediction.", "Embedded systems generally interact in some way with the outside world. This may involve measuring sensors and controlling actuators, communicating with other systems, or interacting with users. These functions impose real-time constraints on system design. Verification of these specifications requires computing an upper bound on the worst-case execution time (WCET) of a hardware software system. Furthermore, it is critical to derive a tight upper bound on WCET in order to make efficient use of system resources. The problem of bounding WCET is particularly difficult on modern processors. These processors use cache-based memory systems that vary memory access time based on the dynamic memory access pattern of the program. This must be accurately modeled in order to tightly bound WCET. Several analysis methods have been proposed to bound WCET on processors with instruction caches. Existing approaches either search all possible program paths, an intractable problem, or they use highly pessimistic assumptions to limit the search space. In this paper we present a more effective method for modeling instruction cache activity and computing a tight bound on WCET. The method uses an integer linear programming formulation and does not require explicit enumeration of program paths. The method is implemented in the program cinderella and we present some experimental results of this implementation." ] }
0902.4822
1514171909
We present a novel characterization of how a program stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clusters. It is based on the estimation of stack distance probability distributions. The analysis requires the observation of a very small subset of memory accesses, and yields a reasonable to very accurate prediction in constant time.
Monte Carlo performance models represent a program as inter-dependent statistical generators of stall conditions @cite_5 @cite_12 @cite_17 . These models are fast, and the average number of cache misses in a run is predicted correctly even for complex processors. However, the cache-miss generators used in these works are still specific to a single cache configuration.
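As an illustration of the flavor of such models (a toy sketch, not the cited generators): per-instruction stall events are drawn from fixed probabilities and the resulting cycles are accumulated into a CPI estimate. All probabilities and penalties below are made-up illustrative values.

```python
import random

def monte_carlo_cpi(n_instructions=100_000, base_cpi=1.0,
                    p_cache_miss=0.02, miss_penalty=100,
                    p_mispredict=0.01, mispredict_penalty=15,
                    seed=0):
    """Estimate CPI by sampling independent stall events per instruction.

    All probabilities and penalties here are illustrative placeholders,
    not parameters taken from the cited models.
    """
    rng = random.Random(seed)
    cycles = 0.0
    for _ in range(n_instructions):
        cycles += base_cpi
        if rng.random() < p_cache_miss:      # memory stall
            cycles += miss_penalty
        if rng.random() < p_mispredict:      # control stall
            cycles += mispredict_penalty
    return cycles / n_instructions

print(monte_carlo_cpi())   # expectation is 1 + 0.02*100 + 0.01*15 = 3.15
```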
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "", "2164718075", "2056971515" ], "abstract": [ "", "Cycle accurate simulation has long been the primary tool for micro-architecture design and evaluation. Though accurate, the slow speed often imposes constraints on the extent of design exploration. In this work, we propose a fast, accurate Monte-Carlo based model for predicting processor performance. We apply this technique to predict the CPI of in-order architectures and validate it against the Itanium-2. The Monte Carlo model uses micro-architecture independent application characteristics, and cache, branch predictor statistics to predict CPI with an average error of less than 7 . Since prediction is achieved in a few seconds, the model can be used for fast design space exploration that can efficiently cull the space for cycle-accurate simulations. Besides accurately predicting CPI, the model also breaks down CPI into various components, where each component quantifies the effect of a particular stall condition (branch misprediction, cache miss, etc.) on overall CPI. Such a CPI decomposition can help processor designers quickly identify and resolve critical performance bottlenecks", "The authors present new and efficient algorithms for simulating alternative direct-mapped and set-associative caches and use them to quantify the effect of limited associativity on the cache miss ratio. They introduce an algorithm, forest simulation, for simulating alternative direct-mapped caches and generalize one, which they call all-associativity simulation, for simulating alternative direct-mapped, set-associative, and fully-associative caches. The authors find that although all-associativity simulation is theoretically less efficient than forest simulation or stack simulation (a commonly used simulation algorithm), in practice it is not much slower and allows the simulation of many more caches with a single pass through an address trace. The authors also provide data and insight into how varying associatively affects the miss ratio. >" ] }
0902.4822
1514171909
We present a novel characterization of how a program stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clusters. It is based on the estimation of stack distance probability distributions. The analysis requires the observation of a very small subset of memory accesses, and yields a reasonable to very accurate prediction in constant time.
For prediction, stack distances are usually recorded in a histogram. The precision of a histogram (i.e., the range of its bins) is usually the size of a cache line. Stack distance histograms contain the number of cache misses for every cache size, and they are widely used for cross-platform performance prediction @cite_0 @cite_4 @cite_14 . They are lighter than application traces when the cache line size is known. However, their size is still substantial and the whole trace still needs to be collected.
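A minimal sketch of the mechanics described above, on a hypothetical trace of cache-line addresses: LRU stack distances are accumulated into a histogram, and the miss count of any fully associative LRU cache is then read off as the histogram mass at distances greater than or equal to the cache size (in lines).

```python
from collections import Counter

def stack_distance_histogram(trace):
    """LRU stack distances for a trace of cache-line identifiers.

    Distance = number of distinct lines touched since the previous access
    to the same line; first accesses get distance inf (cold misses).
    """
    stack, hist = [], Counter()      # stack: least recent ... most recent
    for line in trace:
        if line in stack:
            dist = len(stack) - 1 - stack.index(line)
            stack.remove(line)
        else:
            dist = float("inf")      # cold miss
        hist[dist] += 1
        stack.append(line)
    return hist

def lru_misses(hist, cache_lines):
    """Misses of a fully associative LRU cache with `cache_lines` lines."""
    return sum(n for d, n in hist.items() if d >= cache_lines)

trace = ["a", "b", "a", "c", "b", "a"]   # hypothetical cache-line trace
hist = stack_distance_histogram(trace)
for size in (1, 2, 3):
    print(size, lru_misses(hist, size))  # 6, 5, 3 misses
```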
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_4" ], "mid": [ "2150881394", "2062840343", "2146812414" ], "abstract": [ "This paper describes a toolkit for semi-automatically measuring and modeling static and dynamic characteristics of applications in an architecture-neutral fashion. For predictable applications, models of dynamic characteristics have a convex and differentiable profile. Our toolkit operates on application binaries and succeeds in modeling key application characteristics that determine program performance. We use these characterizations to explore the interactions between an application and a target architecture. We apply our toolkit to SPARC binaries to develop architecture-neutral models of computation and memory access patterns of the ASCI Sweep3D and the NAS SP, BT and LU benchmarks. From our models, we predict the L1, L2 and TLB cache miss counts as well as the overall execution time of these applications on an Origin 2000 system. We evaluate our predictions by comparing them against measurements collected using hardware performance counters.", "The widening gap between CPU and memory speed has made caches an integral feature of modern high- performance processors. The high degree of configurability of cache memory can require extensive design space exploration and is generally performed using execution-driven or trace-driven simulation. Execution-driven simulators can be highly accurate but require a detailed development flow and may impose performance costs. Trace-driven simulators are an efficient alternative but maintaining large traces can present storage and portability problems. We propose a distribution-driven trace generation methodology as an alternative to traditional execution- and trace- driven simulation. An adaptation of the Least Recently Used Stack Model is used to concisely capture the key locality features in a trace and a two-state Markov chain model is used for trace generation. Simulation and analysis of a variety of embedded application traces demonstrate the cacheability characteristics of the synthetic traces are generally very well preserved and similar to their real trace, and we also highlight the potential performance improvement over ISA emulation.", "As multiprocessor systems-on-chip become a reality, performance modeling becomes a challenge. To quickly evaluate many architectures, some type of high-level simulation is required, including high-level cache simulation. We propose to perform this cache simulation by defining a metric to represent memory behavior independently of cache structure and back-annotate this into the original application. While the annotation phase is complex, requiring time comparable to normal address trace based simulation, it need only be performed once per application set and thus enables simulation to be sped up by a factor of 20 to 50 over trace based simulation. This is important for embedded systems, as software is often evaluated against many input sets and many architectures. Our results show the technique is accurate to within 20 of miss rate for uniprocessors and was able to reduce the die area of a multiprocessor chip by a projected 14 over a naive design by accurately sizing caches for each processor." ] }
0902.3485
2952466854
We study the use of viral marketing strategies on social networks to maximize revenue from the sale of a single product. We propose a model in which the decision of a buyer to buy the product is influenced by friends that own the product and the price at which the product is offered. The influence model we analyze is quite general, naturally extending both the Linear Threshold model and the Independent Cascade model, while also incorporating price information. We consider sales proceeding in a cascading manner through the network, i.e. a buyer is offered the product via recommendations from its neighbors who own the product. In this setting, the seller influences events by offering a cashback to recommenders and by setting prices (via coupons or discounts) for each buyer in the social network. Finding a seller strategy which maximizes the expected revenue in this setting turns out to be NP-hard. However, we propose a seller strategy that generates revenue guaranteed to be within a constant factor of the optimal strategy in a wide variety of models. The strategy is based on an influence-and-exploit idea, and it consists of finding the right trade-off at each time step between: generating revenue from the current user versus offering the product for free and using the influence generated from this sale later in the process. We also show how local search can be used to improve the performance of this technique in practice.
The problem of social contagion or spread of influence was first formulated by the sociological community, and introduced to the computer science community by Domingos and Richardson @cite_0 . An influential paper by Kempe, Kleinberg and Tardos @cite_6 solved the target set selection problem posed by @cite_0 and sparked interest in this area from a theoretical perspective (see @cite_1 ). This work has mostly been limited to the influence maximization paradigm, where influence has been taken to be a proxy for the revenue generated through a sale. Although this line of work is similar to ours in spirit, its model has no notion of price; our central problem of setting prices to encourage influence spread therefore requires a more elaborate model.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_6" ], "mid": [ "2042123098", "1897619428", "" ], "abstract": [ "One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e, the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers those may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers---also known as viral marketing---can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases.", "The flow of information or influence through a large social network can be thought of as unfolding with the dynamics of an epidemic: as individuals become aware of new ideas, technologies, fads, rumors, or gossip, they have the potential to pass them on to their friends and colleagues, causing the resulting behavior to cascade through the network. We consider a collection of probabilistic and game-theoretic models for such phenomena proposed in the mathematical social sciences, as well as recent algorithmic work on the problem by computer scientists. Building on this, we discuss the implications of cascading behavior in a number of on-line settings, including word-of-mouth effects (also known as “viral marketing”) in the success of new products, and the influence of social networks in the growth of on-line", "" ] }
0902.3485
2952466854
We study the use of viral marketing strategies on social networks to maximize revenue from the sale of a single product. We propose a model in which the decision of a buyer to buy the product is influenced by friends that own the product and the price at which the product is offered. The influence model we analyze is quite general, naturally extending both the Linear Threshold model and the Independent Cascade model, while also incorporating price information. We consider sales proceeding in a cascading manner through the network, i.e. a buyer is offered the product via recommendations from its neighbors who own the product. In this setting, the seller influences events by offering a cashback to recommenders and by setting prices (via coupons or discounts) for each buyer in the social network. Finding a seller strategy which maximizes the expected revenue in this setting turns out to be NP-hard. However, we propose a seller strategy that generates revenue guaranteed to be within a constant factor of the optimal strategy in a wide variety of models. The strategy is based on an influence-and-exploit idea, and it consists of finding the right trade-off at each time step between: generating revenue from the current user versus offering the product for free and using the influence generated from this sale later in the process. We also show how local search can be used to improve the performance of this technique in practice.
A recent work by Hartline, Mirrokni and Sundararajan @cite_4 is similar in flavor to our work, and also considers extending social contagion ideas with pricing information, but the model they examine differs from our model in several aspects. The main difference is that they assume that the seller is allowed to approach arbitrary nodes in the network at any time and offer their product at a price chosen by the seller, while in our model the cascade of recommendations determines the timing of an offer and this cannot be directly manipulated. In essence, the model proposed in @cite_4 is akin to advertising the product to arbitrary nodes, bypassing the network structure to encourage a desired set of early adopters. Our model restricts such direct advertising, as it is likely to be much less effective than a direct recommendation from a friend, especially when the recommender has an incentive to convince the potential buyer to purchase the product (for instance, the recommender might personalize the recommendation, increasing its effectiveness). Despite the different models, the algorithms proposed by us and @cite_4 are similar in spirit and are based on an influence-and-exploit strategy.
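A toy sketch of the influence-and-exploit idea shared by both approaches (not the algorithm of either paper): a randomly chosen set of buyers receives the product for free, and the remaining buyers are then offered it at a price and purchase with a probability that grows with the number of neighbors who already own it. The graph and the acceptance model below are illustrative assumptions.

```python
import random

def influence_and_exploit(graph, price=1.0, free_fraction=0.3, seed=0):
    """Toy influence-and-exploit simulation on an undirected graph.

    `graph` maps each node to a list of neighbours. The acceptance model
    (buy at `price` with probability 1 - 0.5**owners_among_neighbours) is an
    illustrative placeholder, not the model of the cited papers.
    """
    rng = random.Random(seed)
    nodes = list(graph)
    rng.shuffle(nodes)
    k = int(free_fraction * len(nodes))
    owners = set(nodes[:k])          # influence step: give the product away
    revenue = 0.0
    for v in nodes[k:]:              # exploit step: charge the rest
        influencing = sum(1 for u in graph[v] if u in owners)
        if rng.random() < 1 - 0.5 ** influencing:
            owners.add(v)
            revenue += price
    return revenue

toy_graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # hypothetical network
print(influence_and_exploit(toy_graph))
```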
{ "cite_N": [ "@cite_4" ], "mid": [ "2110373679" ], "abstract": [ "We discuss the use of social networks in implementing viral marketing strategies. While influence maximization has been studied in this context (see Chapter 24 of [10]), we study revenue maximization, arguably, a more natural objective. In our model, a buyer's decision to buy an item is influenced by the set of other buyers that own the item and the price at which the item is offered. We focus on algorithmic question of finding revenue maximizing marketing strategies. When the buyers are completely symmetric, we can find the optimal marketing strategy in polynomial time. In the general case, motivated by hardness results, we investigate approximation algorithms for this problem. We identify a family of strategies called influence-and-exploit strategies that are based on the following idea: Initially influence the population by giving the item for free to carefully a chosen set of buyers. Then extract revenue from the remaining buyers using a 'greedy' pricing strategy. We first argue why such strategies are reasonable and then show how to use recently developed set-function maximization techniques to find the right set of buyers to influence." ] }
0902.3210
2150415860
In two-tier networks comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays) with universal frequency reuse, the near-far effect from cross-tier interference creates dead spots where reliable coverage cannot be guaranteed to users in either tier. Equipping the macrocell and femtocells with multiple antennas enhances robustness against the near-far problem. This work derives the maximum number of simultaneously transmitting multiple antenna femtocells meeting a per-tier outage probability constraint. Coverage dead zones are presented wherein cross-tier interference bottlenecks cellular and femtocell coverage. Two operating regimes are shown namely 1) a cellular-limited regime in which femtocell users experience unacceptable cross-tier interference and 2) a hotspot-limited regime wherein both femtocell users and cellular users are limited by hotspot interference. Our analysis accounts for the per-tier transmit powers, the number of transmit antennas (single antenna transmission being a special case) and terrestrial propagation such as the Rayleigh fading and the path loss exponents. Single-user (SU) multiple antenna transmission at each tier is shown to provide significantly superior coverage and spatial reuse relative to multiuser (MU) transmission. We propose a decentralized carrier-sensing approach to regulate femtocell transmission powers based on their location. Considering a worst-case cell-edge location, simulations using typical path loss scenarios show that our interference management strategy provides reliable cellular coverage with about 60 femtocells per cell-site.
Prior research in tiered networks has mainly considered an operator-planned underlay of a macrocell with a single microcell or multiple microcells @cite_14 @cite_16 . A microcell has a much larger radio range (100-500 m) than a femtocell, and generally implies centralized deployment, i.e., by the service provider. This allows the operator to either load-balance users or preferentially assign high-data-rate cellular users to the microcell @cite_2 @cite_13 because of its inherently larger capacity. In contrast, femtocells are consumer-installed, and the traffic requirements at femtocells are user-determined, without any operator influence. Consequently, decentralized strategies for interference management may be preferred @cite_25 @cite_24 @cite_19 @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_24", "@cite_19", "@cite_2", "@cite_16", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2046535175", "2122496159", "2128177517", "2153171210", "1966998226", "2101477095", "2162787433", "2102465185" ], "abstract": [ "We present a general cell-design methodology for the optimal design of a multitier wireless cellular network. Multitier networks are useful when there are a multitude of traffic types with drastically different parameters and or different requirements, such as different mobility parameters or quality-of-service requirements. In such situations, it may be cost-effective to build a multitude of cellular infrastructures, each serving a particular traffic type. The network resources (e.g., the radio channels) are then partitioned among the multitude of tiers. In general terms, we are interested in quantifying the cost reduction due to the multitier network design, as opposed to a single-tier network. Our study is motivated by the expected proliferation of personal communication services, which will serve different mobility platforms and support multimedia applications through a newly deployed infrastructure based on the multitier approach.", "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.", "In a two tier cellular network - comprised of a central macrocell underlaid with shorter range femtocell hotspots - cross-tier interference limits overall capacity with universal frequency reuse. To quantify near-far effects with universal frequency reuse, this paper derives a fundamental relation providing the largest feasible cellular Signal-to-Interference-Plus-Noise Ratio (SINR), given any set of feasible femtocell SINRs. We provide a link budget analysis which enables simple and accurate performance insights in a two-tier network. A distributed utility- based SINR adaptation at femtocells is proposed in order to alleviate cross-tier interference at the macrocell from cochannel femtocells. The Foschini-Miljanic (FM) algorithm is a special case of the adaptation. Each femtocell maximizes their individual utility consisting of a SINR based reward less an incurred cost (interference to the macrocell). Numerical results show greater than 30 improvement in mean femtocell SINRs relative to FM. In the event that cross-tier interference prevents a cellular user from obtaining its SINR target, an algorithm is proposed that reduces transmission powers of the strongest femtocell interferers. The algorithm ensures that a cellular user achieves its SINR target even with 100 femtocells cell-site (with typical cellular parameters) and requires a worst case SINR reduction of only 16 at femtocells. 
These results motivate design of power control schemes requiring minimal network overhead in two-tier networks with shared spectrum.", "Hierarchical wireless overlay networks have been proposed as an attractive alternative and extension of cellular network architectures to provide the necessary cell capacities to effectively support next-generation wireless data applications. In addition, they allow for flexible mobility management strategies and quality-of-service differentiation. One of the crucial problems in hierarchical overlay networks is the assignment of wireless data users to the different layers of the overlay architecture. In this paper, we present a framework and several analytical results pertaining to the performance of two assignment strategies based on the user's velocity and the amount of data to be transmitted. The main contribution is to prove that the minimum average number of users in the system, as well as the minimum expected system load for an incoming user, are the same under both assignment strategies. We provide explicit analytical expressions as well as unique characterizations of the optimal thresholds on the velocity and amount of data to be transmitted. These results are very general and hold for any distribution of user profiles and any call arrival rates. We also show that intelligent assignment strategies yield significant gains over strategies that are oblivious to the user profiles. Adaptive and on-line strategies are derived that do not require any a priori knowledge of the user population and the network parameters. Extensive simulations are conducted to support the theoretical results presented and conclude that the on-line strategies achieve near-optimal performance when compared with off-line strategies.", "This paper examines the effect of soft handoff on the uplink user capacity of a code division multiple access system consisting of a single macrocell in which a single hotspot microcell is embedded. The users of these two base stations operate over the same frequency band. In the soft-handoff scenario studied here, both macrocell and microcell base stations serve each system user, and the two received copies of a desired user's signal are summed using maximal ratio combining. Exact and approximate analytical methods are developed to compute uplink user capacity. Simulation results demonstrate a 20 increase in user capacity compared to hard handoff. In addition, simple approximate methods are presented for estimating soft-handoff capacity and are shown to be quite accurate.", "This work studies a specific two-tier CDMA system in which a microcell attracts only a small number of users and gives them high-speed access while the umbrella macrocell serves multiple simultaneous low-rate users. The microcell, referred to as a data access point (DAP), operates on the same frequency as the macrocell and uses the same chip rate. The DAP users adapt their spreading factor in accordance with interference conditions, whereas the macrocell users have a fixed data rate. The analysis here presents a scheduling method that maximizes the total DAP throughput. The schedule indicates which DAP users should be given access and at what data rate. We also devise a second access scheme which maximizes throughput while ensuring each user is assigned one slot per frame. Our results show that the DAP can support at most two simultaneous users. 
Further, throughput gains of optimal access are more evident when the DAP contains multiple potential users.", "In this paper, the feasibility of user deployed femtocells in the same frequency band as an existing macrocell network is investigated. Key requirements for co-channel operation of femtocells such as auto-configuration and public access are discussed. A method for power control for pilot and data that ensures a constant femtocell radius in the downlink and a low pre-definable uplink performance impact to the macrocells is proposed, and the theoretical performance of randomly deployed femtocells in such a hierarchical cell structure is analysed for one example of a cellular UMTS network using system level simulations. The resulting impact on the existing macrocellular network is also investigated.", "Two-tier femtocell networks- comprising a conventional cellular network plus embedded femtocell hotspots- offer an economically viable solution to achieving high cellular user capacity and improved coverage. With universal frequency reuse and DS-CDMA transmission however, the ensuing cross-tier interference causes unacceptable outage probability. This paper develops an uplink capacity analysis and interference avoidance strategy in such a two-tier CDMA network. We evaluate a network-wide area spectral efficiency metric called the operating contour (OC) defined as the feasible combinations of the average number of active macrocell users and femtocell base stations (BS) per cell-site that satisfy a target outage constraint. The capacity analysis provides an accurate characterization of the uplink outage probability, accounting for power control, path loss and shadowing effects. Considering worst case interference at a corner femtocell, results reveal that interference avoidance through a time-hopped CDMA physical layer and sectorized antennas allows about a 7x higher femtocell density, relative to a split spectrum two-tier network with omnidirectional femtocell antennas. A femtocell exclusion region and a tier selection based handoff policy offers modest improvements in the OCs. These results provide guidelines for the design of robust shared spectrum two-tier networks." ] }
0902.3210
2150415860
In two-tier networks comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays) with universal frequency reuse, the near-far effect from cross-tier interference creates dead spots where reliable coverage cannot be guaranteed to users in either tier. Equipping the macrocell and femtocells with multiple antennas enhances robustness against the near-far problem. This work derives the maximum number of simultaneously transmitting multiple antenna femtocells meeting a per-tier outage probability constraint. Coverage dead zones are presented wherein cross-tier interference bottlenecks cellular and femtocell coverage. Two operating regimes are shown namely 1) a cellular-limited regime in which femtocell users experience unacceptable cross-tier interference and 2) a hotspot-limited regime wherein both femtocell users and cellular users are limited by hotspot interference. Our analysis accounts for the per-tier transmit powers, the number of transmit antennas (single antenna transmission being a special case) and terrestrial propagation such as the Rayleigh fading and the path loss exponents. Single-user (SU) multiple antenna transmission at each tier is shown to provide significantly superior coverage and spatial reuse relative to multiuser (MU) transmission. We propose a decentralized carrier-sensing approach to regulate femtocell transmission powers based on their location. Considering a worst-case cell-edge location, simulations using typical path loss scenarios show that our interference management strategy provides reliable cellular coverage with about 60 femtocells per cell-site.
The subject of this work is related to Huang et al. @cite_4 , who derive per-tier transmission capacities with spectrum underlay and spectrum overlay. In contrast to their work, which assumes relay-assisted cell-edge users, our work proposes to improve coverage by regulating femtocell transmit powers. Hunter et al. @cite_22 have derived transmission capacities in an ad hoc network with spatial diversity. Our work extends this analysis to a cellular-underlaid network.
{ "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2149165606", "2137079066" ], "abstract": [ "Spectrum sharing between wireless networks improves the efficiency of spectrum usage, and thereby alleviates spectrum scarcity due to growing demands for wireless broadband access. To improve the usual underutilization of the cellular uplink spectrum, this paper addresses spectrum sharing between a cellular uplink and a mobile ad hoc networks. These networks access either all frequency subchannels or their disjoint subsets, called spectrum underlay and spectrum overlay, respectively. Given these spectrum sharing methods, the capacity trade-off between the coexisting networks is analyzed based on the transmission capacity of a network with Poisson distributed transmitters. This metric is defined as the maximum density of transmitters subject to an outage constraint for a given signal-to-interference ratio (SIR). Using tools from stochastic geometry, the transmission-capacity trade-off between the coexisting networks is analyzed, where both spectrum overlay and underlay as well as successive interference cancellation (SIC) are considered. In particular, for small target outage probability, the transmission capacities of the coexisting networks are proved to satisfy a linear equation, whose coefficients depend on the spectrum sharing method and whether SIC is applied. This linear equation shows that spectrum overlay is more efficient than spectrum underlay. Furthermore, this result also provides insight into the effects of network parameters on transmission capacities, including link diversity gains, transmission distances, and the base station density. In particular, SIC is shown to increase the transmission capacities of both coexisting networks by a linear factor, which depends on the interference-power threshold for qualifying canceled interferers.", "This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent alpha > 2 in which nodes use: (1) static beamforming through M sectorized antennas, for which the increase in transmission capacity is shown to be thetas(M2) if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigenbeamforming (maximal ratio transmission combining), in which the increase is shown to be thetas(M 2 alpha ); (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-m fading for increasing m. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains." ] }
0902.3583
2570121038
Let @math be a uniformly distributed random @math -SAT formula with @math variables and @math clauses. We present a polynomial time algorithm that finds a satisfying assignment of @math with high probability for constraint densities @math , where @math . Previously no efficient algorithm was known to find satisfying assignments with a nonvanishing probability beyond @math [A. Frieze and S. Suen, J. Algorithms, 20 (1996), pp. 312-355].
Quite a few papers deal with efficient algorithms for random @math -SAT, contributing either rigorous results, non-rigorous evidence based on physics arguments, or experimental evidence. Table summarizes the part of this work that is most relevant to us. The best rigorous result (prior to this work) is due to Frieze and Suen @cite_14 , who proved that "SCB" succeeds for densities @math , where @math is increasing to @math as @math . SCB can be considered a (restricted) DPLL-algorithm. More precisely, SCB combines the shortest clause rule, which is a generalization of Unit Clause, with (very limited) backtracking. Conversely, it is known that DPLL-type algorithms require an exponential running time for densities beyond @math @cite_8 .
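For concreteness, here is a minimal sketch of the shortest clause rule mentioned above, without SCB's limited backtracking: repeatedly pick a shortest clause (so unit clauses are handled first), satisfy one of its literals, and simplify; a single greedy run fails if an empty clause appears. This is a toy illustration, not the algorithm analyzed in @cite_14 .

```python
import random

def shortest_clause_heuristic(clauses, seed=0):
    """Greedy shortest-clause rule for CNF formulas (no backtracking).

    `clauses` is a list of lists of nonzero ints: literal v means variable v
    positive, -v negative. Returns a (partial) assignment satisfying all
    clauses, or None if this single greedy run fails.
    """
    rng = random.Random(seed)
    clauses = [list(c) for c in clauses]
    assignment = {}
    while clauses:
        if any(len(c) == 0 for c in clauses):
            return None                      # contradiction: greedy run fails
        shortest = min(clauses, key=len)     # unit clauses are picked first
        lit = rng.choice(shortest)
        var, value = abs(lit), lit > 0
        assignment[var] = value
        satisfied_lit = var if value else -var
        new_clauses = []
        for c in clauses:
            if satisfied_lit in c:
                continue                     # clause satisfied, drop it
            new_clauses.append([l for l in c if abs(l) != var])
        clauses = new_clauses
    return assignment

print(shortest_clause_heuristic([[1, 2, 3], [-1, 2], [-2, 3], [-3, -1]]))
```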
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2051580875", "2001495051" ], "abstract": [ "We consider the performance of two algorithms, GUC and SC studied by M. T. Chao and J. Franco SIAM J. Comput.15(1986), 1106?1118;Inform. Sci.51(1990), 289?314 and V. Chvatal and B. Reed in“Proceedings of the 33rd IEEE Symposium on Foundations of Computer Science, 1992,” pp. 620?627, when applied to a random instance ? of a boolean formula in conjunctive normal form withnvariables and ?cn? clauses of sizekeach. For the case wherek=3, we obtain the exact limiting probability that GUC succeeds. We also consider the situation when GUC is allowed to have limited backtracking, and we improve an existing threshold forcbelow which almost all ? is satisfiable. Fork?4, we obtain a similar result regarding SC with limited backtracking.", "For each k ≤ 4, we give τ k > 0 such that a random k-CNF formula F with n variables and ⌊r k n⌋ clauses is satisfiable with high probability, but ORDERED-DLL takes exponential time on F with uniformly positive probability. Using results of [2], this can be strengthened to a high probability result for certain natural backtracking schemes and extended to many other DPLL algorithms." ] }
0902.3583
2570121038
Let @math be a uniformly distributed random @math -SAT formula with @math variables and @math clauses. We present a polynomial time algorithm that finds a satisfying assignment of @math with high probability for constraint densities @math , where @math . Previously no efficient algorithm was known to find satisfying assignments with a nonvanishing probability beyond @math [A. Frieze and S. Suen, J. Algorithms, 20 (1996), pp. 312-355].
Montanari, Ricci-Tersenghi, and Semerjian @cite_9 provide evidence that Belief Propagation guided decimation may succeed up to density @math . This algorithm is based on a very different paradigm than the others mentioned in Table . The basic idea is to run a message passing algorithm ("Belief Propagation") to compute for each variable the marginal probability that this variable takes the value true/false in a uniformly random satisfying assignment. Then, the decimation step selects a variable, assigns it the value true/false with the corresponding marginal probability, and simplifies the formula. Ideally, repeating this procedure will yield a satisfying assignment, provided that Belief Propagation keeps yielding the correct marginals. Proving (or disproving) this remains a major open problem.
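The following toy sketch illustrates the decimation loop described above. For simplicity, the marginals are computed exactly by enumerating satisfying assignments of a tiny formula, standing in for the Belief Propagation estimates (which are what make the real algorithm scale); the loop then fixes the most biased free variable, sampling its value from its marginal.

```python
from itertools import product
import random

def exact_marginals(clauses, n_vars, fixed):
    """P(x_v = True) over uniform satisfying assignments extending `fixed`.

    Brute-force enumeration: a stand-in for the Belief Propagation estimates
    described above, only feasible for tiny formulas.
    """
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    counts, total = {v: 0 for v in free}, 0
    for bits in product([False, True], repeat=len(free)):
        assign = dict(fixed)
        assign.update(zip(free, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            total += 1
            for v in free:
                counts[v] += assign[v]
    if total == 0:
        return None
    return {v: counts[v] / total for v in free}

def decimation(clauses, n_vars, seed=0):
    """Marginal-guided decimation: fix the most biased free variable each round."""
    rng = random.Random(seed)
    fixed = {}
    while len(fixed) < n_vars:
        marg = exact_marginals(clauses, n_vars, fixed)
        if marg is None:
            return None                      # no satisfying extension exists
        v, p = max(marg.items(), key=lambda item: abs(item[1] - 0.5))
        fixed[v] = rng.random() < p          # sample the value from its marginal
        # the decimation step described above would now simplify the formula;
        # here we simply recompute marginals conditioned on `fixed`
    return fixed

clauses = [[1, 2, -3], [-1, 3], [2, 3], [-2, -3, 1]]
print(decimation(clauses, n_vars=3))
```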
{ "cite_N": [ "@cite_9" ], "mid": [ "2962961919" ], "abstract": [ "Index of refraction measurements are made by means of an optical device in which a coherent light beam is divided into an object beam and a reference beam, each of which is directed through a separate path in a common light transmitting medium. The object beam is also transmitted en route through a test volume that accommodates a substance to be tested. It is subsequently recombined with the reference beam to form a single output beam. The output beam is received by a photo detector and its intensity is measured. The intensity of the detected output beam is related to any phase shift between the object and reference beams. The phase shift in turn is a measure of the index of refraction of the test substance. Information relating to density, temperature and pressure of the test substance can be derived from measured index of refraction values by using conventional conversion formulas. An operating range control is provided by introducing a volume of pressurized gas into the paths of the object and reference beams. The operating range of the instrument is set by adjusting the pressure of the gas in the absence of a test substance." ] }
0902.3583
2570121038
Let @math be a uniformly distributed random @math -SAT formula with @math variables and @math clauses. We present a polynomial time algorithm that finds a satisfying assignment of @math with high probability for constraint densities @math , where @math . Previously no efficient algorithm was known to find satisfying assignments with a nonvanishing probability beyond @math [A. Frieze and S. Suen, J. Algorithms, 20 (1996), pp. 312-355].
Survey Propagation is a modification of Belief Propagation that aims to approximate the marginal probabilities induced by a particular (non-uniform) probability distribution on the set of satisfying assignments @cite_11 . It, too, can be combined with a decimation procedure to obtain a heuristic for finding a satisfying assignment. There is (non-rigorous) evidence that for most of the satisfiable regime (actually @math ) Belief and Survey Propagation are essentially equivalent @cite_15 . Hence, there is no evidence that Survey Propagation finds satisfying assignments beyond @math for general @math .
{ "cite_N": [ "@cite_15", "@cite_11" ], "mid": [ "2168290833", "1982531027" ], "abstract": [ "An instance of a random constraint satisfaction problem defines a random subset 𝒮 (the set of solutions) of a large product space X N (the set of assignments). We consider two prototypical problem ensembles (random k -satisfiability and q -coloring of random regular graphs) and study the uniform measure with support on S . As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states (“clusters”) and subsequently condensates over the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp. We determine their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performances of many search sampling algorithms. Empirical evidence suggests that local Monte Carlo Markov chain strategies are effective up to the clustering phase transition and belief propagation up to the condensation point. Finally, refined message passing techniques (such as survey propagation) may also beat this threshold.", "We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when α = M N is close to the experimental threshold αc separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) αc. We introduce a new type of message passing algorithm which allows to find efficiently a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005" ] }
0902.3858
2949927837
The use of formal methods provides confidence in the correctness of developments. Yet one may argue about the actual level of confidence obtained when the method itself -- or its implementation -- is not formally checked. We address this question for the B, a widely used formal method that allows for the derivation of correct programs from specifications. Through a deep embedding of the B logic in Coq, we check the B theory but also implement B tools. Both aspects are illustrated by the description of a proved prover for the B logic.
Embedding a logic in a proof assistant consists in mechanizing it by encoding its syntax and semantics into a host logic ( @cite_8 @cite_5 @cite_2 ). In a shallow embedding, the encoding is partially based on a direct translation of the guest logic into constructs of the host logic. In a deep embedding, the syntax and the semantics are formalised as datatypes. At a fundamental level, taking the view presented in Sec. , the deep embedding of a logic is simply a definition of the set of all sequents (the terms) and a predicate marking those that are provable (the inference rules of the guest logic being encoded as constructors of this predicate).
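To make the last point concrete, here is a minimal sketch in Lean of a deep embedding of a toy implicational logic (not the B logic, and not the paper's Coq development): formulas are a datatype, and derivability of a sequent is an inductive predicate whose constructors are exactly the inference rules of the guest logic.

```lean
-- Toy deep embedding: formulas as a datatype, derivability of a sequent
-- Γ ⊢ p as an inductive predicate whose constructors are the inference rules.
inductive Form where
  | var  : Nat → Form
  | impl : Form → Form → Form

inductive Derivable : List Form → Form → Prop where
  | assum (Γ : List Form) (p : Form) :
      Derivable (p :: Γ) p
  | weaken (Γ : List Form) (p q : Form) :
      Derivable Γ p → Derivable (q :: Γ) p
  | impl_intro (Γ : List Form) (p q : Form) :
      Derivable (p :: Γ) q → Derivable Γ (Form.impl p q)
  | impl_elim (Γ : List Form) (p q : Form) :
      Derivable Γ (Form.impl p q) → Derivable Γ p → Derivable Γ q

-- Example derivation: ⊢ p → p, built from the rules above.
example (p : Form) : Derivable [] (Form.impl p p) :=
  Derivable.impl_intro [] p p (Derivable.assum [] p)
```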
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "1584721038", "1541992615", "1864574667" ], "abstract": [ "", "Theorem provers were also called 'proof checkers' because that is what they were in the beginning. They have grown powerful, however, capable in many cases to automatically produce complicated proofs. In particular, higher order logic based theorem provers such as HOL and PVS became popular because the logic is well known and very expressive. They are generally considered to be potential platforms to embed a programming logic for the purpose of formal verification. In this paper we investigate a number of most commonly used methods of embedding programming logics in such theorem provers and expose problems we discover. We will also propose an alternative approach : hybrid embedding.", "Formal reasoning about computer programs can be based directly on the semantics of the programming language, or done in a special purpose logic like Hoare logic. The advantage of the first approach is that it guarantees that the formal reasoning applies to the language being used (it is well known, for example, that Hoare’s assignment axiom fails to hold for most programming languages). The advantage of the second approach is that the proofs can be more direct and natural." ] }
0902.4185
1782581930
We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and the first and second moment considerations. Our main result is the location of the hard region in the planted ensemble. In a part of that hard region, instances have with high probability a single satisfying assignment.
A large part of our results is based on the heuristic cavity method approach @cite_16 . We were also able to prove part of our results for the @math -in- @math SAT problem on random regular graphs using computations of the second moment and the expander property. This includes some results about the equivalence between the planted and random ensembles in the satisfiable phase, and the uniqueness of the satisfying assignment in the unsatisfiable phase. Completing these proofs and extending them to the other locked factorized CSPs should be possible, although more involved.
{ "cite_N": [ "@cite_16" ], "mid": [ "2022083710" ], "abstract": [ "So far the problem of a spin glass on a Bethe lattice has been solved only at the replica symmetric level, which is wrong in the spin glass phase. Because of some technical difficulties, attempts at deriving a replica symmetry breaking solution have been confined to some perturbative regimes, high connectivity lattices or temperature close to the critical temperature. Using the cavity method, we propose a general non perturbative solution of the Bethe lattice spin glass problem at a level of approximation which is equivalent to a one step replica symmetry breaking solution. The results compare well with numerical simulations. The method can be used for many finite connectivity problems appearing in combinatorial optimization." ] }
0902.2206
2953133476
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
@cite_2 provides computationally efficient randomization schemes for dimensionality reduction. Instead of performing a dense @math -dimensional matrix-vector multiplication to reduce a vector of dimensionality @math to one of dimensionality @math , as is required by the algorithm of @cite_0 , he only requires @math of that computation by designing a matrix consisting only of entries @math . A line of work pioneered by @cite_6 and continued in @cite_10 @cite_16 improves the complexity of random projections by using various code matrices to preprocess the input vectors. Some of our theoretical bounds are derivable from those of @cite_9 .
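A minimal sketch of a projection matrix with entries restricted to {+1, 0, -1}, in the spirit of the construction described above: entries are drawn as +1, 0, -1 with probabilities 1/6, 2/3, 1/6 and scaled so that Euclidean distances are preserved in expectation. The dimensions below are illustrative.

```python
import numpy as np

def sparse_projection_matrix(k, d, seed=0):
    """k x d projection with entries sqrt(3/k) * {+1, 0, -1}.

    The {+1, 0, -1} values are drawn with probabilities 1/6, 2/3, 1/6, so
    E[||Rx||^2] = ||x||^2 and roughly two thirds of the entries are zero.
    """
    rng = np.random.default_rng(seed)
    entries = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1 / 6, 2 / 3, 1 / 6])
    return np.sqrt(3.0 / k) * entries

rng = np.random.default_rng(1)
x, y = rng.normal(size=1000), rng.normal(size=1000)       # d = 1000 (illustrative)
R = sparse_projection_matrix(k=100, d=1000)
print(np.linalg.norm(x - y), np.linalg.norm(R @ (x - y)))  # approximately equal
```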
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_16", "@cite_10" ], "mid": [ "2033143885", "2152402969", "1502916507", "2037757210", "1982515170", "2124659530" ], "abstract": [ "Random projection methods give distributions over k×d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a finite set of vectors x i ∈ℝd the resulting vectors Ψx i ∈ℝk approximately preserve the original metric with constant probability. First, we show that any matrix (composed with a random ±1 diagonal matrix) is a good random projector for a subset of vectors in ℝd . Second, we describe a family of tensor product matrices which we term Lean Walsh. We show that using Lean Walsh matrices as random projections outperforms, in terms of running time, the best known current result (due to Matousek) under comparable assumptions.", "We introduce a new low-distortion embedding of l2d into lpO(log n) (p=1,2), called the Fast-Johnson-Linden-strauss-Transform. The FJLT is faster than standard random projections and just as easy to implement. It is based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform. Sparse random projections are unsuitable for low-distortion embeddings. We overcome this handicap by exploiting the \"Heisenberg principle\" of the Fourier transform, ie, its local-global duality. The FJLT can be used to speed up search algorithms based on low-distortion embeddings in l1 and l2. We consider the case of approximate nearest neighbors in l2d. We provide a faster algorithm using classical projections, which we then further speed up by plugging in the FJLT. We also give a faster algorithm for searching over the hypercube.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. 
To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space---where k is logarithmic in n and independent of d--so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a spherically random k-dimensional hyperplane through the origin. We give two constructions of such embeddings with the property that all elements of the projection matrix belong in -1, 0, +1 . Such constructions are particularly well suited for database environments, as the computation of the embedding reduces to evaluating a single aggregate over k random partitions of the attributes.", "The JohnsonLindenstrauss lemma asserts that an n-point set in any Euclidean space can be mapped to a Euclidean space of dimension k = O(e-2 log n) so that all distances are preserved up to a multiplicative factor between 1 - e and 1 + e. Known proofs obtain such a mapping as a linear map Rn ’ Rk with a suitable random matrix. We give a simple and self-contained proof of a version of the JohnsonLindenstrauss lemma that subsumes a basic versions by Indyk and Motwani and a version more suitable for efficient computations due to Achlioptas. (Another proof of this result, slightly different but in a similar spirit, was given independently by Indyk and Naor.) An even more general result was established by Klartag and Mendelson using considerably heavier machinery. Recently, Ailon and Chazelle showed, roughly speaking, that a good mapping can also be obtained by composing a suitable Fourier transform with a linear mapping that has a sparse random matrix M; a mapping of this form can be evaluated very fast. In their result, the nonzero entries of M are normally distributed. We show that the nonzero entries can be chosen as random ± 1, which further speeds up the computation. We also discuss the case of embeddings into Rk with the l1 norm. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008", "The Fast Johnson-Lindenstrauss Transform (FJLT) was recently discovered by Ailon and Chazelle as a novel technique for performing fast dimension reduction with small distortion from ed2 to ed2 in time O(max d log d,k3 ). For k in [Ω(log d), O(d1 2)] this beats time O(dk) achieved by naive multiplication by random dense matrices, an approach followed by several authors as a variant of the seminal result by Johnson and Lindenstrauss (JL) from the mid 80's. In this work we show how to significantly improve the running time to O(d log k) for k = O(d1 2−Δ), for any arbitrary small fixed Δ. This beats the better of FJLT and JL. Our analysis uses a powerful measure concentration bound due to Talagrand applied to Rademacher series in Banach spaces (sums of vectors in Banach spaces with random signs). 
The set of vectors used is a real embedding of dual BCH code vectors over GF(2). We also discuss the number of random bits used and reduction to ℓ_1 space. The connection between geometry and discrete coding theory discussed here is interesting in its own right and may be useful in other algorithmic applications as well." ] }
0902.2206
2953133476
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
A related construction is the Count-Min sketch of @cite_3 , which stores counts in a number of replicates of a hash table. This leads to good concentration inequalities for range and point queries.
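To make the replicated-hash-table construction concrete, here is a minimal Count-Min sketch in Python; the table width, depth, and per-row hashing scheme are illustrative assumptions rather than the exact parameters or hash family of @cite_3 .

```python
import hashlib

class CountMinSketch:
    """A minimal Count-Min sketch: `depth` hash tables ("replicates") of size `width`.

    Point queries return an overestimate of the true count; the error is roughly
    (total count) / width, with failure probability decaying exponentially in depth.
    """

    def __init__(self, width=2000, depth=5):
        self.width = width
        self.depth = depth
        self.tables = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # Seed each row with its index so the rows behave like independent hash functions.
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.tables[row][self._index(item, row)] += count

    def query(self, item):
        # The minimum over the replicates is the least-contaminated estimate.
        return min(self.tables[row][self._index(item, row)] for row in range(self.depth))

if __name__ == "__main__":
    cms = CountMinSketch()
    for token in ["a", "b", "a", "c", "a"]:
        cms.update(token)
    print(cms.query("a"))  # always >= 3; equal to 3 unless collisions inflate every row
```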
{ "cite_N": [ "@cite_3" ], "mid": [ "1865797552" ], "abstract": [ "We introduce a new sublinear space data structure—the Count-Min Sketch— for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known — typically from 1 e 2 to 1 e in factor." ] }
0902.2206
2953133476
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
@cite_15 propose a hash kernel to deal with the issue of computational efficiency by a very simple algorithm: high-dimensional vectors are compressed by adding up all coordinates which have the same hash value --- one only needs to perform as many calculations as there are nonzero terms in the vector. This is a significant computational saving over locality sensitive hashing @cite_2 @cite_0 .
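A minimal sketch of such a hashed feature map is given below; the output dimension, the choice of hash function, and the signed variant are assumptions made for illustration, not the exact construction of @cite_15 .

```python
import hashlib

def hashed_features(token_counts, dim=1024):
    """Compress a sparse vector by adding up all coordinates that share a hash value.

    The work is proportional to the number of nonzero input coordinates only.
    """
    out = [0.0] * dim
    for token, value in token_counts.items():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        index = h % dim
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0  # random-sign variant keeps inner products unbiased
        out[index] += sign * value
    return out

# Example: a bag-of-words vector with two nonzero entries maps to at most two nonzero hashed coordinates.
print(sum(x != 0.0 for x in hashed_features({"cat": 2.0, "dog": 1.0})))
```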
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_2" ], "mid": [ "1502916507", "", "2037757210" ], "abstract": [ "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "", "A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space---where k is logarithmic in n and independent of d--so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a spherically random k-dimensional hyperplane through the origin. We give two constructions of such embeddings with the property that all elements of the projection matrix belong in -1, 0, +1 . Such constructions are particularly well suited for database environments, as the computation of the embedding reduces to evaluating a single aggregate over k random partitions of the attributes." ] }
0902.1693
2145782763
We consider the multivariate interlace polynomial introduced by Courcelle (Electron. J. Comb. 15(1), 2008), which generalizes several interlace polynomials defined by Arratia, Bollobás, and Sorkin (J. Comb. Theory Ser. B 92(2):199---233, 2004) and by Aigner and van der Holst (Linear Algebra Appl., 2004). We present an algorithm to evaluate the multivariate interlace polynomial of a graph with n vertices given a tree decomposition of the graph of width k. The best previously known result (Courcelle, Electron. J. Comb. 15(1), 2008) employs a general logical framework and leads to an algorithm with running time f(k)·n, where f(k) is doubly exponential in k. Analyzing the GF(2)-rank of adjacency matrices in the context of tree decompositions, we give a faster and more direct algorithm. Our algorithm uses @math arithmetic operations and can be efficiently implemented in parallel.
The monadic second-order logic approach is very general and can be applied not only to the interlace polynomial but to a much wider class of graph polynomials @cite_24 . However, it does not take into account characteristic properties of the particular graph polynomial at hand. In this paper, we restrict ourselves to the interlace polynomial so as to exploit its specific properties and obtain a more efficient algorithm (Algorithm ). Our algorithm performs @math arithmetic operations to evaluate Courcelle's multivariate interlace polynomial (and thus any other version of the interlace polynomial mentioned above) on an @math -vertex graph given a tree decomposition of width @math (Theorem ). The algorithm can be implemented in parallel using depth polylogarithmic in @math (). Apart from evaluating the interlace polynomial, our approach can also be used to compute coefficients of the interlace polynomial, for example the so-called @math -truncations (Courcelle, Section 5). Our approach is not via logic but via the @math -rank of adjacency matrices, which is specific to the interlace polynomial.
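As a self-contained illustration of the GF(2)-rank computations that this approach relies on, the following routine computes the rank of an adjacency matrix over GF(2) by Gaussian elimination; it is a generic building block shown for clarity, not the dynamic program over tree decompositions itself.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as Python ints (bit vectors)."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lowest_bit = pivot & -pivot  # lowest set bit of the pivot row
        # Eliminate that bit from every remaining row (addition over GF(2) is XOR).
        rows = [r ^ pivot if r & lowest_bit else r for r in rows]
    return rank

# Adjacency matrix of the 4-cycle 0-1-2-3-0; its GF(2)-rank is 2.
adjacency = [0b1010, 0b0101, 0b1010, 0b0101]
print(gf2_rank(adjacency))  # 2
```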
{ "cite_N": [ "@cite_24" ], "mid": [ "2109716789" ], "abstract": [ "We discuss the parametrized complexity of counting and evaluation problems on graphs where the range of counting is denable in monadic second-order logic (MSOL). We show that for bounded tree-width these problems are solvable in polynomial time. The same holds for bounded clique width in the cases, where the decomposition, which establishes the bound on the clique-width, can be computed in polynomial time and for problems expressible by monadic second-order formulas without edge set quantication. Such quantications are allowed in the case of graphs with bounded tree-width. As applications we discuss in detail how this aects the parametrized complexity of the permanent and the hamiltonian of a matrix, and more generally, various generating functions of MSOL denable graph properties. Finally, our results are also applicable to SAT and ]SAT. ? 2001 Elsevier Science B.V. All rights reserved." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
General community discovery approaches are based on the connections between vertices in a network. A fast community discovery algorithm for very large networks was proposed with approximately linear time complexity @math , where n is the number of nodes @cite_6 . General methods such as the GN algorithm can be used to discover communities in weighted networks by mapping them onto unweighted multigraphs @cite_22 .
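For reference, a variant of the greedy agglomerative algorithm of @cite_6 is available in standard graph libraries; the usage sketch below (toy graph and the NetworkX implementation, both chosen purely for illustration) shows the typical call.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two triangles joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # first triangle
                  (3, 4), (4, 5), (3, 5),   # second triangle
                  (2, 3)])                  # bridge
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])     # two communities: {0, 1, 2} and {3, 4, 5}
```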
{ "cite_N": [ "@cite_22", "@cite_6" ], "mid": [ "1983345514", "2047940964" ], "abstract": [ "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow--minimum-cut theorem.", "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
Research and learning resources form a network whose connections are the relations among resources. Different from communities in general complex networks, semantic communities in such a relational network were discovered according to the roles that relations play during relational reasoning @cite_27 .
{ "cite_N": [ "@cite_27" ], "mid": [ "2108933770" ], "abstract": [ "The World Wide Web provides plentiful contents for Web-based learning, but its hyperlink-based architecture connects Web resources for browsing freely rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in ability to discover semantic communities. This paper first suggests the semantic link network (SLN), a loosely coupled semantic data model that can semantically link resources and derive out implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of the semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
Many works study the collaboration networks and citation networks of scientific research, most of them focusing on the characteristics of collaboration networks. For the structure of the social science collaboration network, disciplinary cohesion from @math to @math was studied @cite_2 . The structure of scientific collaboration networks, including shortest paths, weighted networks, and centrality, was studied @cite_30 @cite_10 @cite_13 . Coauthor relations were used to study collaborations between researchers, especially mathematicians, and the distribution of papers in Mathematical Reviews against the number of authors was studied @cite_25 @cite_31 . Relations between researchers were analyzed in the Erdős collaboration graph, and the shortest path lengths between researchers were studied @cite_35 .
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_35", "@cite_2", "@cite_31", "@cite_10", "@cite_25" ], "mid": [ "45859669", "1671906456", "1967880836", "2089644168", "2160053408", "2125315567", "" ], "abstract": [ "", "Using data from computer databases of scientific papers in physics, biomedical research, and computer science, we have constructed networks of collaboration between scientists in each of these disciplines. In these networks two scientists are considered connected if they have coauthored one or more papers together. We have studied many statistical properties of our networks, including numbers of papers written by authors, numbers of authors per paper, numbers of collaborators that scientists have, typical distance through the network from one scientist to another, and a variety of measures of connectedness within a network, such as closeness and betweenness. We further argue that simple networks such as these cannot capture the variation in the strength of collaborative ties and propose a measure of this strength based on the number of papers coauthored by pairs of scientists, and the number of other scientists with whom they worked on those papers. Using a selection of our results, we suggest a variety of possible ways to answer the question, \"Who is the best connected scientist?\"", "Abstract Patrick Ion (Mathematical Reviews) and Jerry Grossman (Oakland University) maintain a collection of data on Paul Erdos, his co-authors and their co-authors. These data can be represented by a graph, also called the Erdos collaboration graph. In this paper, some techniques for analysis of large networks (different approaches to identify ‘interesting’ individuals and groups, analysis of internal structure of the main core using pre-specified blockmodeling and hierarchical clustering) and visualizations of their parts, are presented on the case of Erdos collaboration graph, using the program Pajek .", "Has sociology become more socially integrated over the last 30 years? Recent work in the sociology of knowledge demonstrates a direct linkage between social interaction patterns and the structure of ideas, suggesting that scientific collaboration networks affect scientific practice. I test three competing models for sociological collaboration networks and find that a structurally cohesive core that has been growing steadily since the early 1960s characterizes the discipline's coauthorship network. The results show that participation in the sociology collaboration network depends on research specialty and that quantitative work is more likely to be coauthored than non-quantitative work. However, structural embeddedness within the network core given collaboration is largely unrelated to specialty area. This pattern is consistent with a loosely overlapping specialty structure that has potentially integrative implications for theoretical development in sociology.", "Scientific collaboration has become a major issue in science policy. The tremendous growth of collaboration among nations and research institutions witnessed during the last twenty years is a function of the internal dynamics of science as well as science policy initiatives. The need to survey and follow up the collaboration issue calls for statistical indicators sensitive enough to reveal the structure and change of collaborative networks. In this context, bibliometric analysis of co-authored scientific articles is one promising approach. 
This paper discusses the relationship between collaboration and co-authorship, the nature of bibliometric data, and exemplifies how they can be refined and used to analyse various aspects of collaboration.", "The structure of scientific collaboration networks is investigated. Two scientists are considered connected if they have authored a paper together and explicit networks of such connections are constructed by using data drawn from a number of databases, including MEDLINE (biomedical research), the Los Alamos e-Print Archive (physics), and NCSTRL (computer science). I show that these collaboration networks form “small worlds,” in which randomly chosen pairs of scientists are typically separated by only a short path of intermediate acquaintances. I further give results for mean and distribution of numbers of collaborators of authors, demonstrate the presence of clustering in the networks, and highlight a number of apparent differences in the patterns of collaboration between the fields studied.", "" ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
The evolution of the social networks of scientific collaboration in mathematics and neuroscience was studied @cite_1 . The results show that the collaboration network is scale-free and that node separation decreases as the number of connections increases.
{ "cite_N": [ "@cite_1" ], "mid": [ "2145845082" ], "abstract": [ "The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive database to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuro-science for an 8-year period (1991–98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
Social networks in academic research can be extracted from webpages and from the paper metadata provided by online databases @cite_32 ; furthermore, relations among researchers can be mined in academic social networks @cite_8 . The social structure of scientific research has also been studied based on citations @cite_29 .
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_8" ], "mid": [ "", "2130831177", "2022322548" ], "abstract": [ "", "This paper addresses the issue of extraction of an academic researcher social network. By researcher social network extraction, we are aimed at finding, extracting, and fusing the 'semantic '-based profiling information of a researcher from the Web. Previously, social network extraction was often undertaken separately in an ad-hoc fashion. This paper first gives a formalization of the entire problem. Specifically, it identifies the 'relevant documents' from the Web by a classifier. It then proposes a unified approach to perform the researcher profiling using conditional random fields (CRF). It integrates publications from the existing bibliography datasets. In the integration, it proposes a constraints-based probabilistic model to name disambiguation. Experimental results on an online system show that the unified approach to researcher profiling significantly outperforms the baseline methods of using rule learning or classification. Experimental results also indicate that our method to name disambiguation performs better than the baseline method using unsupervised learning. The methods have been applied to expert finding. Experiments show that the accuracy of expert finding can be significantly improved by using the proposed methods.", "This paper addresses several key issues in the ArnetMiner system, which aims at extracting and mining academic social networks. Specifically, the system focuses on: 1) Extracting researcher profiles automatically from the Web; 2) Integrating the publication data into the network from existing digital libraries; 3) Modeling the entire academic network; and 4) Providing search services for the academic network. So far, 448,470 researcher profiles have been extracted using a unified tagging approach. We integrate publications from online Web databases and propose a probabilistic framework to deal with the name ambiguity problem. Furthermore, we propose a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues. Search services such as expertise search and people association search have been provided based on the modeling results. In this paper, we describe the architecture and main features of the system. We also present the empirical evaluation of the proposed methods." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
Resources in research networks can also be ranked. Research resources were ranked by an approach that considers the mutual influences between relevant resources @cite_28 . Such object-based ranking approaches can help search for and recommend different resources such as papers, conferences, journals, and researchers.
{ "cite_N": [ "@cite_28" ], "mid": [ "2110896767" ], "abstract": [ "In contrast with the current Web search methods that essentially do document-level ranking and retrieval, we are exploring a new paradigm to enable Web search at the object level. We collect Web information for objects relevant for a specific application domain and rank these objects in terms of their relevance and popularity to answer user queries. Traditional PageRank model is no longer valid for object popularity calculation because of the existence of heterogeneous relationships between objects. This paper introduces PopRank, a domain-independent object-level link analysis model to rank the objects within a specific domain. Specifically we assign a popularity propagation factor to each type of object relationship, study how different popularity propagation factors for these heterogeneous relationships could affect the popularity ranking, and propose efficient approaches to automatically decide these factors. Our experiments are done using 1 million CS papers, and the experimental results show that PopRank can achieve significantly better ranking results than naively applying PageRank on the object graph." ] }
0902.1911
1782129356
Recent developments in network structure analysis show that it plays an important role in characterizing the complex systems of many branches of science. Different from previous network centrality measures, this paper proposes the notion of topological centrality (TC), which reflects the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
Researchers and papers are often ranked in the coauthor network and the citation network, respectively. A co-ranking framework for researchers and papers was proposed, in which researchers and papers are ranked in a heterogeneous network that combines the coauthor network and the citation network through the authorship relations @cite_24 .
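Both of these ranking schemes build on PageRank-style random walks; the sketch below shows only that generic building block (graph, damping factor, and iteration count are illustrative), not the coupled co-ranking algorithm of @cite_24 itself.

```python
def pagerank(adj, damping=0.85, iterations=100):
    """Plain PageRank power iteration on an adjacency list {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new_rank = {v: (1.0 - damping) / n for v in nodes}
        for v, out_neighbors in adj.items():
            if not out_neighbors:                      # dangling node: spread its mass uniformly
                for u in nodes:
                    new_rank[u] += damping * rank[v] / n
            else:
                for u in out_neighbors:
                    new_rank[u] += damping * rank[v] / len(out_neighbors)
        rank = new_rank
    return rank

# Toy directed graph where an edge points from a citing paper to a cited paper.
print(pagerank({"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p1"]}))
```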
{ "cite_N": [ "@cite_24" ], "mid": [ "2162010993" ], "abstract": [ "Recent graph-theoretic approaches have demonstrated remarkable successes for ranking networked entities, but most of their applications are limited to homogeneous networks such as the network of citations between publications. This paper proposes a novel method for co-ranking authors and their publications using several networks: the social network connecting the authors, the citation network connecting the publications, as well as the authorship network that ties the previous two together. The new co-ranking framework is based on coupling two random walks, that separately rank authors and documents following the PageRankparadigm. As a result, improved rankings of documents and their authors depend on each other in a mutually reinforcing way, thus taking advantage of the additional information implicit in the heterogeneous network of authors and documents." ] }
0902.2209
2950558814
We consider an online scheduling problem, motivated by the issues present at the joints of networks using ATM and TCP/IP. Namely, IP packets have to be broken down into small ATM cells and sent out before their deadlines, but cells corresponding to different packets can be interwoven. More formally, we consider the online scheduling problem with preemptions, where each job j is revealed at release time r_j, has processing time p_j, deadline d_j and weight w_j. A preempted job can be resumed at any time. The goal is to maximize the total weight of all jobs completed on time. Our main results are as follows: we prove that if all jobs have processing time exactly k, the deterministic competitive ratio is between 2.598 and 5, and when the processing times are at most k, the deterministic competitive ratio is Theta(k log k).
It is known that the general problem without a bound on processing times has an unbounded deterministic competitive ratio @cite_1 , so different directions of research have been considered. One is to ask whether randomization helps, and indeed in @cite_0 a constant-competitive randomized algorithm was given, although with a large constant. Another direction is resource augmentation: in @cite_11 a deterministic online algorithm was presented that is constant-competitive provided the algorithm is allowed a constant speedup of its machine compared to the adversary. Finally, a third direction is to restrict attention to instances with bounded processing times.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_11" ], "mid": [ "2080158054", "2099261768", "" ], "abstract": [ "We consider the problem of maximizing the number of jobs completed by their deadline in an online single processor system where the jobs are preemptable and have release times. So in the standard three field scheduling notation, this is the online version of the problem 1 |ri; pmtn| Σ(1 - Ui). We present a deterministic algorithm Lax, and show that for every instance I, it is the case that either Lax, or the well-known deterministic algorithm SRPT (Shortest Remaining Processing Time), is constant competitive on I. An immediate consequence of this result is a constant competitive randomized algorithm for this problem. It is known that no constant competitive deterministic algorithm exists for this problem.", "The problem of uniprocessor scheduling under conditions of overload is investigated. The system objective is to maximize the number of tasks that complete by their deadlines. For this performance metric it is shown that, in general, any on-line algorithm may perform arbitrarily poorly as compared to a clairvoyant scheduler. Restricted instances of the general problem for which on-line schedulers ran provide a guaranteed level of performance are identified, and on-line algorithms presented for these special cases. >", "" ] }
0902.2209
2950558814
We consider an online scheduling problem, motivated by the issues present at the joints of networks using ATM and TCP/IP. Namely, IP packets have to be broken down into small ATM cells and sent out before their deadlines, but cells corresponding to different packets can be interwoven. More formally, we consider the online scheduling problem with preemptions, where each job j is revealed at release time r_j, has processing time p_j, deadline d_j and weight w_j. A preempted job can be resumed at any time. The goal is to maximize the total weight of all jobs completed on time. Our main results are as follows: we prove that if all jobs have processing time exactly k, the deterministic competitive ratio is between 2.598 and 5, and when the processing times are at most k, the deterministic competitive ratio is Theta(k log k).
Our model is sometimes called preemption with resume, as opposed to the model with restarts @cite_9 , in which an interrupted job can only be processed again from the very beginning. The setting of @cite_1 forms another related model, in which all the job parameters are reals, time is continuous, and uniform weights are assumed.
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "2047204419", "2099261768" ], "abstract": [ "We consider the following scheduling problem. The input is a set of jobs with equal processing times, where each job is specified by its release time and deadline. The goal is to determine a single-processor nonpreemptive schedule that maximizes the number of completed jobs. In the online version, each job arrives at its release time. We give two online algorithms with competitive ratios below @math and show several lower bounds on the competitive ratios. First, we give a barely random @math -competitive algorithm that uses only one random bit. We also show a lower bound of @math on the competitive ratio of barely random algorithms that randomly choose one of two deterministic algorithms. If the two algorithms are selected with equal probability, we can further improve the bound to @math . Second, we give a deterministic @math -competitive algorithm in the model that allows restarts, and we show that in this model the ratio @math is optimal. For randomized algorithms with restarts we show a lower bound of @math .", "The problem of uniprocessor scheduling under conditions of overload is investigated. The system objective is to maximize the number of tasks that complete by their deadlines. For this performance metric it is shown that, in general, any on-line algorithm may perform arbitrarily poorly as compared to a clairvoyant scheduler. Restricted instances of the general problem for which on-line schedulers ran provide a guaranteed level of performance are identified, and on-line algorithms presented for these special cases. >" ] }
0902.2260
1736109902
This paper addresses the fundamental characteristics of information exchange via multihop network coding over two-way relaying in a wireless ad hoc network. The end-to-end rate regions achieved by time-division multihop (TDMH), MAC-layer network coding (MLNC) and PHY-layer network coding (PLNC) are first characterized. It is shown that MLNC does not always achieve better rates than TDMH, that time sharing between TDMH and MLNC is able to achieve a larger rate region, and that PLNC dominates the rate regions achieved by TDMH and MLNC. An opportunistic scheduling algorithm for MLNC and PLNC is then proposed to stabilize the two-way relaying system for Poisson arrivals whenever the rate pair is within the Shannon rate regions of MLNC and PLNC. To understand the two-way transmission limits of multihop network coding, the sum-rate optimization with or without a certain traffic pattern and the end-to-end diversity-multiplexing tradeoffs (DMTs) of two-way transmission over multiple relay nodes are also analyzed.
Traditionally (pre-network coding), information exchange between two users via a relay has been accomplished by a time-division multihop (TDMH) protocol in four time slots (frequency-division could be used as well; in this paper the comparisons focus on time-division systems only, and they can be applied to frequency-division systems similarly), as shown in Fig. (a). Intuitively, the MLNC and PLNC protocols save one time slot. In @cite_12 , two-way relaying for cellular systems was considered, while @cite_21 and @cite_28 proposed an MLNC algorithm effective for wireless mesh networks under heavy traffic. The network coding protocol in Fig. (b) can be further reduced to two slots if advanced joint coding/decoding -- i.e. analog network coding (ANC) -- is allowed. In this case, both source nodes send their packets to the relay node simultaneously during the first slot; then the relay node either amplifies and broadcasts the signals, or broadcasts the XOR-ed packets after decoding them by successive interference cancellation @cite_7 @cite_24 @cite_9 @cite_30 . The achievable rates for analog network coding were studied in @cite_17 @cite_9 .
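The slot saving of MLNC can be illustrated with a toy byte-level model of the XOR-and-broadcast step; the packet contents, equal packet lengths, and the absence of noise, headers, and scheduling are simplifying assumptions made for this sketch only.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Equal-length packets from the two end nodes.
pkt_a = b"hello from A"
pkt_b = b"howdy from B"

# Slot 1: A -> relay, Slot 2: B -> relay (the relay decodes and stores both packets).
# Slot 3: the relay broadcasts the XOR of the two packets, saving the fourth TDMH slot.
coded = xor_bytes(pkt_a, pkt_b)

# Each end node cancels the packet it already knows to recover the other one.
assert xor_bytes(coded, pkt_b) == pkt_a   # decoding at node B
assert xor_bytes(coded, pkt_a) == pkt_b   # decoding at node A
print("both packets recovered from a single coded broadcast")
```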
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_28", "@cite_9", "@cite_21", "@cite_24", "@cite_12", "@cite_17" ], "mid": [ "2149166346", "2152496949", "1532718431", "2167794812", "2149863032", "2135073150", "", "2079886783" ], "abstract": [ "We consider a multiuser two-way relay network where multiple pairs of users communicate with their preassigned partners, using a common intermediate relay node, in a two-phase communication scenario employing code division multiple access (CDMA). By taking advantage of the bidirectional communication structure, we first propose that each pair of partners share a common spreading signature and design a jointly demodulate-and-XOR forward (JD-XOR-F) relaying scheme, where all users transmit to the relay simultaneously followed by the relay broadcasting an estimate of the XORed symbol for each user pair. We derive the decision rules and the corresponding bit error rates (BERs) at the relay and at the users' receivers. We then investigate the joint power control and receiver optimization problem for each phase for this multiuser two-way relay network with JD-XOR-F relaying. We solve each optimization problem by constructing the iterative power control and receiver updates that converge to the corresponding unique optimum. Simulation results are presented to demonstrate the performance of the proposed multiuser two-way JD-XOR-F relaying scheme in conjunction with the joint power control and receiver optimization algorithms. Specifically, we observe significant power savings and user capacity improvement with the proposed communication scheme as compared to the designs with a \"one-way\" communication perspective.", "Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.", "This paper applies network coding to wireless mesh networks and presents the first implementation results. It introduces COPE, an opportunistic approach to network coding, where each node snoops on the medium, learns the status of its neighbors, detects coding opportunities, and codes as long as the recipients can decode. This flexible design allows COPE to efficiently support multiple unicast flows, even when traffic demands are unknown and bursty, and the senders and receivers are dynamic. We evaluate COPE using both emulation and testbed implementation. Our results show that COPE substantially improves the network throughput, and as the number of flows and the contention level increases, COPE’s throughput becomes many times higher than current 802.11 mesh networks.", "This paper introduces and analyzes relaying techniques that increase the achievable throughput in multi-hop wireless networks by applying network coding over bi-directional traffic flows. 
We term each such technique as bi-directional amplification of throughput (BAT)-relaying. While network coding is normally performed by combining decoded packets, here we introduce a relaying method based on amplify-and-forward (AF), where the relay node utilizes the inherent combining of packets provided by simultaneous transmissions over a multiple access channel. Under low noise levels, AF BAT-relaying offers a superior throughput performance. The unconventionality of AF BAT relaying opens many possibilities for further research.", "This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.", "It has recently been recognized that the wireless networks represent a fertile ground for devising communication modes based on network coding. A particularly suitable application of the network coding arises for the two-way relay channels, where two nodes communicate with each other assisted by using a third, relay node. Such a scenario enables application of physical network coding, where the network coding is either done (a) jointly with the channel coding or (b) through physical combining of the communication flows over the multiple access channel. In this paper we first group the existing schemes for physical network coding into two generic schemes, termed 3-step and 2-step scheme, respectively. We investigate the conditions for maximization of the two-way rate for each individual scheme: (1) the decode-and-forward (DF) 3-step schemes (2) three different schemes with two steps: amplify-and-forward (AF), JDF and denoise-and-forward (DNF). While the DNF scheme has a potential to offer the best two-way rate, the most interesting result of the paper is that, for some SNR configurations of the source - relay links, JDF yields identical maximal two-way rate as the upper bound on the rate for DNF.", "", "Relaying is a fundamental building block of wireless networks. Sophisticated relaying strategies at the physical layer have been developed for a single flow, but multiple flows are typically handled by time sharing the channel between the flows at the network level. In this paper, time-sharing when forwarding two data streams at the relay is compared to joint relaying and network coding that allows the relay to combine data streams. Two commonly occurring blocks in wireless networks with both unicast and multicast traffic are considered. It is shown that joint relaying and network coding can achieve gains and even double the throughput for certain channel conditions." ] }
0902.2260
1736109902
This paper addresses the fundamental characteristics of information exchange via multihop network coding over two-way relaying in a wireless ad hoc network. The end-to-end rate regions achieved by time-division multihop (TDMH), MAC-layer network coding (MLNC) and PHY-layer network coding (PLNC) are first characterized. It is shown that MLNC does not always achieve better rates than TDMH, that time sharing between TDMH and MLNC is able to achieve a larger rate region, and that PLNC dominates the rate regions achieved by TDMH and MLNC. An opportunistic scheduling algorithm for MLNC and PLNC is then proposed to stabilize the two-way relaying system for Poisson arrivals whenever the rate pair is within the Shannon rate regions of MLNC and PLNC. To understand the two-way transmission limits of multihop network coding, the sum-rate optimization with or without a certain traffic pattern and the end-to-end diversity-multiplexing tradeoffs (DMTs) of two-way transmission over multiple relay nodes are also analyzed.
Finally, network coding can be used to exploit cooperative diversity between source and destination nodes @cite_29 @cite_4 . Since network coding is able to provide diversity as well as throughput gain, it is of interest to understand the diversity-multiplexing tradeoffs (DMTs) of MLNC and PLNC and to determine whether they are better than TDMH's. (The diversity-multiplexing tradeoff for point-to-point multiple-input multiple-output (MIMO) channels was found in @cite_5 and has become a popular metric for comparing transmission protocols.) Since we consider two-way transmission over multiple relays, this plurality of relays may cooperate in a number of different ways or not at all, and each cooperation scenario leads to a different DMT result for TDMH, MLNC and PLNC.
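For completeness, the standard DMT quantities from @cite_5 can be stated as follows; the notation here is generic and is not tied to the symbols used in this paper.

```latex
% Multiplexing gain r and diversity gain d of a transmission scheme with rate
% R(SNR) and error probability P_e(SNR):
\[
  r = \lim_{\mathrm{SNR}\to\infty} \frac{R(\mathrm{SNR})}{\log \mathrm{SNR}},
  \qquad
  d(r) = -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}}.
\]
% For a point-to-point MIMO channel with M transmit and N receive antennas, the
% optimal tradeoff d^*(r) is the piecewise-linear curve through the points
% (k, (M-k)(N-k)) for k = 0, 1, ..., \min(M, N).
```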
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_4" ], "mid": [ "2129766733", "2107933947", "2124699605" ], "abstract": [ "Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. We propose the point of view that both types of gains can be simultaneously obtained for a given multiple-antenna channel, but there is a fundamental tradeoff between how much of each any coding scheme can get. For the richly scattered Rayleigh-fading channel, we give a simple characterization of the optimal tradeoff curve and use it to evaluate the performance of existing multiple antenna schemes.", "This paper proposes a network coding approach to cooperative diversity featuring the algebraic superposition of channel codes over a finite field. The scenario under consideration is one in which two ldquopartnersrdquo - node A and node B - cooperate in transmitting information to a single destination; each partner transmits both locally generated information and relayed information that originated at the other partner. A key observation is that node B already knows node A's relayed information (because it originated at node B) and can exploit that knowledge when decoding node A's local information. This leads to an encoding scheme in which each partner transmits the algebraic superposition of its local and relayed information, and the superimposed codeword is interpreted differently at the two receivers i.e., at the other partner and at the destination node, based on their different a priori knowledge. Decoding at the destination is then carried out by iterating between the codewords from the two partners. It is shown via simulation that the proposed scheme provides substantial coding gain over other cooperative diversity techniques, including those based on time multiplexing and signal (Euclidean space) superposition.", "This paper investigates the diversity gain offered by implementing network coding (R. , 2000) over wireless communication links. The network coding algorithm is applied to both a wireless network containing a distributed antenna system (DAS) as well as one that supports user cooperation between users. The results show that network-coded DAS leads to better diversity performance as compared to conventional DAS, at a lower hardware cost and higher spectral efficiency. In the case of user cooperation, network coding yields additional diversity, especially when there are multiple users" ] }
0902.0585
2950999040
The random assignment problem asks for the minimum-cost perfect matching in the complete @math bipartite graph @math with i.i.d. edge weights, say uniform on @math . In a remarkable work by Aldous (2001), the optimal cost was shown to converge to @math as @math , as conjectured by Mézard and Parisi (1987) through the so-called cavity method. The latter also suggested a non-rigorous decentralized strategy for finding the optimum, which turned out to be an instance of the Belief Propagation (BP) heuristic discussed by Pearl (1987). In this paper we use the objective method to analyze the performance of BP as the size of the underlying graph becomes large. Specifically, we establish that the dynamic of BP on @math converges in distribution as @math to an appropriately defined dynamic on the Poisson Weighted Infinite Tree, and we then prove correlation decay for this limiting dynamic. As a consequence, we obtain that BP finds an asymptotically correct assignment in @math time only. This contrasts with both the worst-case upper bound for convergence of BP derived by Bayati, Shah and Sharma (2005) and the best-known computational cost of @math achieved by Edmonds and Karp's algorithm (1972).
Although it seems deceptively simple, the assignment problem has led to rich developments in combinatorial probability and algorithm design since the early 1960s. Partly motivated by the desire to obtain insights for better algorithm design, the question of finding the asymptotics of the average cost of @math became of great interest (see @cite_15 @cite_16 @cite_11 @cite_21 @cite_12 @cite_13 @cite_6 ). In 1987, through cavity-method-based calculations, Mézard and Parisi @cite_0 conjectured that, for Exponential(1) edge weights, @math This was rigorously established by Aldous @cite_3 more than a decade later, leading to the formalism of the "objective method" (see the survey by Aldous and Steele @cite_19 ). In 2003, an exact version of the above conjecture was independently established by Nair, Prabhakar and Sharma @cite_5 and by Linusson and Wästlund @cite_7 .
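A quick numerical illustration of the @math limit and of the exact finite-@math formula is given below; it is a Monte Carlo sketch relying on an off-the-shelf assignment solver, with the instance size and sample count chosen arbitrarily.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, trials = 200, 20
costs = []
for _ in range(trials):
    C = rng.exponential(scale=1.0, size=(n, n))   # i.i.d. Exponential(1) edge weights
    rows, cols = linear_sum_assignment(C)         # exact minimum-cost perfect matching
    costs.append(C[rows, cols].sum())

print(np.mean(costs))                              # empirical average optimal cost
print(sum(1.0 / i**2 for i in range(1, n + 1)))    # exact expectation for Exponential(1) weights
print(np.pi**2 / 6)                                # the n -> infinity limit, zeta(2)
```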
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_6", "@cite_3", "@cite_0", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2028955990", "2065271495", "", "1504317671", "2093966655", "1872905819", "2070993566", "2003351966", "", "", "", "" ], "abstract": [ "The random assignment problem is to choose a minimum-cost perfect matching in a complete n µ n bipartite graph, whose edge weights are chosen randomly from some distribution such as the exponential distribution with mean 1. In this case it is known that the expectation does not grow unboundedly with n, but approaches a limiting value c* between 1.51 and 2. The limit is conjectured to be c* = π 2 6, while a recent conjecture has it that for finite n, the expected cost is EA* = ⌆n i=11 I2.", "The lower bound 1+1 e+O(n^-^1^+^@e)@k 1.368 is established for the expected minimal cost in the n x n random Assignment Problem where the cost matrix entries are drawn independently from the Uniform(0, 1) probability distribution. The expected number of independent zeroes created in the initial assignment of the Hungarian Algorithm is asymptotically equal to 2-e^-^1^ ^e-e^-^e^^^-^^^1^^^ ^^^e+o(1))[email protected]", "", "Author(s): Aldous, DJ | Abstract: The random assignment (or bipartite matching) problem asks about An = minπ ∑ni=1 c(i, π(i)) where (c(i, j)) is a n × n matrix with i.i.d. entries, say with exponential(1) distribution, and the minimum is over permutations π. Mezard and Parisi (1987) used the replica method from statistical physics to argue nonrigorously that EAn → ζ(2) = π2 6. Aldous (1992) identified the limit in terms of a matching problem on a limit infinite tree. Here we construct the optimal matching on the infinite tree. This yields a rigorous proof of the ζ(2) limit and of the conjectured limit distribution of edge-costs and their rank-orders in the optimal matching. It also yields the asymptotic essential uniqueness property: every almost-optimal matching coincides with the optimal matching except on a small proportion of edges. © 2001 John Wiley a Sons, Inc. Random Struct. Alg., 18, 381-418, 2001.", "We show that the replica symmetric solution of the matching problem (bipartite or not) with independent random distances is stable. We compute the fluctuations and get the O(1 N) corrections to the length of the optimal matching in a generic sample On montre que la solution symetrique dans les repliques du probleme d'appariement (bipartite ou pas) dans lequel les distances sont des variables aleatoires est stable. On calcule les fluctuations et on obtient les conections d'ordre 1 N pour la longueur de l'appariement optimal dans un echantillon generique", "This survey describes a general approach to a class of problems that arise in combinatorial probability and combinatorial optimization. Formally, the method is part of weak convergence theory, but in concrete problems the method has a flavor of its own. A characteristic element of the method is that it often calls for one to introduce a new, infinite, probabilistic object whose local properties inform us about the limiting properties of a sequence of finite problems.", "Suppose that there are n jobs and n machines and it costs cij to execute job i on machine j. The assignment problem concerns the determination of a one-to-one assignment of jobs onto machines so as to minimize the cost of executing all the jobs. 
When the cij are independent and identically distributed exponentials of mean 1, Parisi [Technical Report cond-mat 9801176, xxx LANL Archive, 1998] made the beautiful conjecture that the expected cost of the minimum assignment equals @math . Coppersmith and Sorkin [Random Structures Algorithms 15 (1999), 113–144] generalized Parisi's conjecture to the average value of the smallest k-assignment when there are n jobs and m machines. Building on the previous work of Sharma and Prabhakar [Proc 40th Annu Allerton Conf Communication Control and Computing, [2002], 657–666] and Nair [Proc 40th Annu Allerton Conf Communication Control and Computing, [2002], 667–673], we resolve the Parisi and Coppersmith-Sorkin conjectures. In the process we obtain a number of combinatorial results which may be of general interest.", "Given an n by n matrix X, the assignment problem asks for a set of n entries, one from each column and row, with the minimum sum. It is shown that the expected value of this minimum sum is less than 3, independent of n, if X consists of independent random variables uniformly distributed from 0 to 1.", "", "", "", "" ] }
0902.0469
1644856702
Since the seminal work of F. Cohen in the eighties, abstract virology has seen the appearance of successive viral models, all based on Turing-equivalent formalisms. But considering recent malware such as rootkits or k-ary codes, these viral models only partially cover these evolved threats. The problem is that Turing-equivalent models do not support interactive computations. New models have thus appeared, offering support for these evolved malware, but losing the unified approach along the way. This article provides a basis for a unified malware model founded on process algebras and in particular the Join-Calculus. In terms of expressiveness, the new model supports the fundamental definitions based on self-replication, and its added support for interactions, concurrency and non-termination allows the definition of more complex behaviors. Evolved malware such as rootkits can now be thoroughly modeled. In terms of detection and prevention, the fundamental results of undecidability and isolation still hold. However, the process-based model has made it possible to establish new results: identification of fragments of the Join-Calculus where malware detection becomes decidable, a formal definition of the non-infection property, and approximate solutions to restrict malware propagation.
Considering malware, a recent article underlines the fact that interactions with the execution environment, concurrency and non-termination prove to be important computational capabilities @cite_21 . Indeed, malware, being resilient and adaptive by nature, use these capabilities intensively to survive and to infect new systems. The existing theoretical models in abstract virology mainly focus on the self-replication capacity, which is defined in a purely functional way @cite_0 , [Chpt.2-3] FI05 , @cite_7 . Unfortunately, these models rely on Turing-equivalent formalisms, which can hardly support interactive computations. With the appearance of interaction-based viral techniques, new models have thus been introduced to cope with this drawback, but at the cost of losing the unified approach. The appearance of k-ary malware is an obvious example: such malware rely heavily on concurrency, distributing the malicious code over several executing parts. A new model based on Boolean functions has been proposed to capture their evolving interdependence over time @cite_26 . A second relevant example is the appearance of reactive, non-terminating techniques such as the stealth currently deployed in rootkits. Different models have been provided to cover stealth, based either on steganography @cite_29 or on graph theory @cite_19 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_29", "@cite_21", "@cite_0", "@cite_19" ], "mid": [ "2047672410", "2122686415", "", "", "1517901482", "1954848638" ], "abstract": [ "This paper presents a new class of (malicious) codes denoted k-ary codes. Instead of containing the whole instructions composing the program’s action, this type of codes is composed of k distinct parts which constitute a partition of the entire code. Each of these parts contains only a subset of the instructions. When considered alone (e.g. by an antivirus) every part cannot be distinguished from a normal uninfected program while their respective action combined according to different possible modes results in the offensive behaviour. In this paper, we presents a formalisation of this type of codes by means of Boolean functions and give their detailed taxonomy. We first show that classical malware are just a particular instance of this general model then we specifically address the case of k-ary codes. We give some complexity results about their detection based on the interaction between the different parts. As a general result, the detection is proved to be NP-complete.", "We are concerned with theoretical aspects of computer viruses. For this, we suggest a new definition of viruses which is clearly based on the iteration theorem and above all on Kleene's recursion theorem. We in this study capture in a natural way previous definitions, and in particular the one of Adleman. We establish generic virus constructions and we illustrate them by various examples. Lastly, we show the results on virus detection.", "", "", "In recent years the detection of computer viruses has become common place. It appears that for the most part these viruses have been ‘benign’ or only mildly destructive. However, whether or not computer viruses have the potential to cause major and prolonged disruptions of computing environments is an open question.", "A magnetic sensing element having a reduced electrical resistance and a large exchange anisotropic magnetic field between a free layer and antiferromagnetic layers for exchange biasing is provided. The magnetic sensing element includes second antiferromagnetic layers and a free magnetic layer, and the length of the second antiferromagnetic layers in a height direction in side regions disposed at the lateral sides of a track width region is larger than the length of the free magnetic layer in the height direction in the track width region." ] }
0902.0620
2951795716
Cake-cutting protocols aim at dividing a "cake" (i.e., a divisible resource) and assigning the resulting portions to several players in a way that each of the players feels they have received a "fair" amount of the cake. An important notion of fairness is envy-freeness: No player wishes to switch the portion of the cake received with another player's portion. Despite intense efforts in the past, it is still an open question whether there is an envy-free cake-cutting protocol for an arbitrary number of players, and even for four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how well a cake-cutting protocol can approximate the ideal of envy-freeness while keeping the protocol finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n ≥ 3 of players, and show that this protocol has a DGEF of 1 + ⌈n^2/2⌉. This is currently the best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We will make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols.
More recently, Brams, Jones, and Klamler @cite_6 proposed to minimize envy in terms of the maximum number of players that a player may envy. Their notion of measuring envy differs from our notion of DGEF in various ways, the most fundamental of which is that their notion takes an "egalitarian" approach to reducing the number of envy-relations (namely, via minimizing the most-envious player's envy, in terms of decreasing the number of this single player's envy-relations). In contrast, the DGEF aims at a "utilitarian" approach (namely, via minimizing overall envy, in terms of increasing the total number of guaranteed envy-free-relations among all players). That is to say that, although these notions may seem to be very similar at first glance, the approach presented in @cite_6 is not sensitive to a reduction in the number of envy-relations on the part of any other than the most-envious player, whereas the DGEF does take each single improvement into account and adapts accordingly. The DGEF, thus, is a more specific, more fine-tuned measure. Note also that Brams, Jones, and Klamler @cite_6 focus primarily on presenting a new protocol and less so on introducing a new notion for measuring envy.
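To make the utilitarian flavor of the DGEF concrete, the short sketch below counts the 1 + ⌈n²/2⌉ guaranteed envy-free relations of the proposed protocol against the n(n−1) ordered player pairs that could, at best, all be envy-free. Reading n(n−1) as the maximum follows the DGEF framework described above; the per-n numbers are purely illustrative.

```python
# Back-of-the-envelope comparison (illustrative only, not taken from the cited papers):
# the DGEF of the proposed protocol, 1 + ceil(n^2 / 2), counted against the n*(n-1)
# ordered player pairs that could in principle all be envy-free.
from math import ceil

def dgef_new_protocol(n):
    """DGEF claimed for the proposed finite bounded proportional protocol (n >= 3)."""
    return 1 + ceil(n * n / 2)

for n in (3, 4, 5, 10):
    total_pairs = n * (n - 1)            # all ordered envy relations
    guaranteed = dgef_new_protocol(n)    # guaranteed envy-free relations
    print(f"n={n:2d}: {guaranteed:3d} of {total_pairs:3d} relations guaranteed envy-free "
          f"({guaranteed / total_pairs:.0%})")
```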
{ "cite_N": [ "@cite_6" ], "mid": [ "2166670127" ], "abstract": [ "Properties of discrete cake-cutting procedures that use a minimal number of cuts (n A¢â‚¬â€œ 1 if there are n players) are analyzed. None is always envy-free or efficient, but divide-and-conquer (D&C) minimizes the maximum number of players that any single player may envy. It works by asking n ? 2 players successively to place marks on a cake that divide it into equal or approximately equal halves, then halves of these halves, and so on. Among other properties, D&C (i) ensures players of more than 1 n shares if their marks are different and (ii) is strategyproof for risk-averse players. However, D&C may not allow players to obtain proportional, connected pieces if they have unequal entitlements. Possible applications of D&C to land division are briefly discussed." ] }
0902.0620
2951795716
Cake-cutting protocols aim at dividing a "cake" (i.e., a divisible resource) and assigning the resulting portions to several players in a way that each of the players feels they have received a "fair" amount of the cake. An important notion of fairness is envy-freeness: No player wishes to switch the portion of the cake received with another player's portion. Despite intense efforts in the past, it is still an open question whether there is an envy-free cake-cutting protocol for an arbitrary number of players, and even for four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how well a cake-cutting protocol can approximate the ideal of envy-freeness while keeping the protocol finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n ≥ 3 of players, and show that this protocol has a DGEF of 1 + ⌈n^2/2⌉. This is currently the best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We will make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols.
Another approach is due to @cite_9 , who define various metrics for the evaluation of envy in order to classify "the degree of envy in a society," and they use the term "degree of envy" in the quite different setting of multiagent allocation of resources.
{ "cite_N": [ "@cite_9" ], "mid": [ "2164681065" ], "abstract": [ "Mechanisms for dividing a set of goods amongst a number of autonomous agents need to balance efficiency and fairness requirements. A common interpretation of fairness is envy-freeness, while efficiency is usually understood as yielding maximal overall utility. We show how to set up a distributed negotiation framework that will allow a group of agents to reach an allocation of goods that is both efficient and envy-free." ] }
0902.0620
2951795716
Cake-cutting protocols aim at dividing a "cake" (i.e., a divisible resource) and assigning the resulting portions to several players in a way that each of the players feels they have received a "fair" amount of the cake. An important notion of fairness is envy-freeness: No player wishes to switch the portion of the cake received with another player's portion. Despite intense efforts in the past, it is still an open question whether there is an envy-free cake-cutting protocol for an arbitrary number of players, and even for four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how well a cake-cutting protocol can approximate the ideal of envy-freeness while keeping the protocol finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n ≥ 3 of players, and show that this protocol has a DGEF of 1 + ⌈n^2/2⌉. This is currently the best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We will make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols.
Besides, we stress that our approach to approximating envy-freeness differs from other lines of research that also deal with approximating fairness. For example, @cite_22 propose to seek minimum-envy allocations of goods in terms of the value difference of the envied players' utility functions, and Edmonds and Pruhs @cite_15 @cite_14 approximate fairness in cake-cutting protocols by allowing merely approximately fair pieces (in terms of their value to the players) and by using only approximate cut queries (in terms of exactness).
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_22" ], "mid": [ "2621248669", "2009194376", "2121240598" ], "abstract": [ "We consider the well-known cake cutting problem in which a protocol wants to divide a cake among n ≥ 2 players in such a way that each player believes that they got a fair share. The standard Robertson-Webb model allows the protocol to make two types of queries, Evaluation and Cut, to the players. A deterministic divide-and-conquer protocol with complexity O(n log n) is known. We provide the first an Ω(n log n) lower bound on the complexity of any deterministic protocol in the standard model. This improves previous lower bounds, in that the protocol is allowed to assign to a player a piece that is a union of intervals and only guarantee approximate fairness. We accomplish this by lower bounding the complexity to find, for a single player, a piece of cake that is both rich in value, and thin in width. We then introduce a version of cake cutting in which the players are able to cut with only finite precision. In this case, we can extend the Ω(n log n) lower bound to include randomized protocols.", "We give a randomized algorithm for the well known caking cutting problem that achieves approximate fairness, and has complexity O(n), when all players are honest. The heart of this result involves extending the standard offline multiple-choice balls and bins analysis to the case where the underlying resources bins machines have different utilities to different players balls jobs.", "We study the problem of fairly allocating a set of indivisible goods to a set of people from an algorithmic perspective. fair division has been a central topic in the economic literature and several concepts of fairness have been suggested. The criterion that we focus on is envy-freeness. In our model, a monotone utility function is associated with every player specifying the value of each subset of the goods for the player. An allocation is envy-free if every player prefers her own share than the share of any other player. When the goods are divisible, envy-free allocations always exist. In the presence of indivisibilities, we show that there exist allocations in which the envy is bounded by the maximum marginal utility, and present a simple algorithm for computing such allocations. We then look at the optimization problem of finding an allocation with minimum possible envy. In the general case the problem is not solvable or approximable in polynomial time unless P = NP. We consider natural special cases (e.g.additive utilities) which are closely related to a class of job scheduling problems. Approximation algorithms as well as inapproximability results are obtained. Finally we investigate the problem of designing truthful mechanisms for producing allocations with bounded envy." ] }
0902.1394
2953153012
This paper addresses the following foundational question: what is the best theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? As shown in this paper, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer from the case of systems where the streamed content is distributed through one or more flows (sub-streams). To circumvent the complexity that emerges when dealing directly with delay, we express performance in terms of a convenient metric, called the "stream diffusion metric". We show that it is directly related to the minimum end-to-end delay achievable in a P2P streaming network. In a homogeneous scenario, we derive a performance bound for this metric, and we show how this bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. In this bound, k-step Fibonacci sequences emerge, and appear to set the fundamental laws that characterize the optimal operation of chunk-based systems.
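Since the abstract above identifies k-step Fibonacci sequences as governing the optimal operation of chunk-based systems, a minimal sketch for generating such a sequence is given below. How each term maps onto nodes or chunks served per time slot depends on the paper's exact model (upload bandwidth and number of neighbors), so only the recurrence itself is shown; the seed values are a common convention and may differ from the paper's.

```python
# Minimal sketch: the k-step Fibonacci recurrence F_t = F_{t-1} + ... + F_{t-k} that the
# abstract above identifies as governing optimal chunk-based diffusion. The mapping from
# F_t to "nodes reached by time t" depends on the paper's model, so only the sequence
# itself is generated here; the seed is a common convention, not taken from the paper.
def k_step_fibonacci(k, length):
    seq = [0] * (k - 1) + [1]            # seed: ..., 0, 0, 1
    while len(seq) < length + k - 1:
        seq.append(sum(seq[-k:]))        # each term is the sum of the previous k terms
    return seq[k - 1:]

for k in (1, 2, 3):
    print(f"k={k}: {k_step_fibonacci(k, 10)}")
```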
The few available theoretical works mostly focus on flow-based systems, as defined in subsection . In that case, a fluid approach is typically used to evaluate performance, and the bandwidth available on each link plays a limited role with respect to the delay performance, which ultimately depends on the delay characterizing a path between the source node and a generic end-peer. This is the case in @cite_12 and @cite_4 . Moreover, other studies address the issue of how to maximize throughput by using various techniques, such as network coding @cite_9 or pull-based streaming protocols @cite_2 .
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_12", "@cite_2" ], "mid": [ "2119772493", "2155638714", "", "2134157110" ], "abstract": [ "With the constraints of network topologies and link capacities, achieving the optimal end-to-end throughput in data networks has been known as a fundamental but computationally hard problem. In this paper, we seek efficient solutions to the problem of achieving optimal throughput in data networks, with single or multiple unicast, multicast and broadcast sessions. Although previous approaches lead to solving NP-complete problems, we show the surprising result that, facilitated by the recent advances of network coding, computing the strategies to achieve the optimal end-to-end throughput can be performed in polynomial time. This result holds for one or more communication sessions, as well as in the overlay network model. Supported by empirical studies, we present the surprising observation that in most topologies, applying network coding may not improve the achievable optimal throughput; rather, it facilitates the design of significantly more efficient algorithms to achieve such optimality.", "We develop a simple stochastic fluid model that seeks to expose the fundamental characteristics and limitations of P2P streaming systems. This model accounts for many of the essential features of a P2P streaming system, including the peers' realtime demand for content, peer churn (peers joining and leaving), peers with heterogeneous upload capacity, limited infrastructure capacity, and peer buffering and playback delay. The model is tractable, providing closed-form expressions which can be used to shed insight on the fundamental behavior of P2P streaming systems. The model shows that performance is largely determined by a critical value. When the system is of moderate-to-large size, if a certain ratio of traffic loads exceeds the critical value, the system performs well; otherwise, the system performs poorly. Furthermore, large systems have better performance than small systems since they are more resilient to bandwidth fluctuations caused by peer churn. Finally, buffering can dramatically improve performance in the critical region, for both small and large systems. In particular, buffering can bring more improvement than can additional infrastructure bandwidth.", "", "Most of the real deployed peer-to-peer streaming systems adopt pull-based streaming protocol. In this paper, we demonstrate that, besides simplicity and robustness, with proper parameter settings, when the server bandwidth is above several times of the raw streaming rate, which is reasonable for practical live streaming system, simple pull-based P2P streaming protocol is nearly optimal in terms of peer upload capacity utilization and system throughput even without intelligent scheduling and bandwidth measurement. We also indicate that whether this near optimality can be achieved depends on the parameters in pull-based protocol, server bandwidth and group size. Then we present our mathematical analysis to gain deeper insight in this characteristic of pull-based streaming protocol. On the other hand, the optimality of pull-based protocol comes from a cost -tradeoff between control overhead and delay, that is, the protocol has either large control overhead or large delay. To break the tradeoff, we propose a pull-push hybrid protocol. 
The basic idea is to consider pull-based protocol as a highly efficient bandwidth-aware multicast routing protocol and push down packets along the trees formed by pull-based protocol. Both simulation and real-world experiment show that this protocol is not only even more effective in throughput than pull-based protocol but also has far lower delay and much smaller overhead. And to achieve near optimality in peer capacity utilization without churn, the server bandwidth needed can be further relaxed. Furthermore, the proposed protocol is fully implemented in our deployed GridMedia system and has the record to support over 220,000 users simultaneously online." ] }
0901.4835
2052260075
Despite providing similar functionality, multiple network services may require the use of different interfaces to access the functionality, and this problem will only become worse with the widespread deployment of ubiquitous computing environments. One way around this problem is to use interface adapters that adapt one interface into another. Chaining these adapters allows flexible interface adaptation with fewer adapters, but the loss incurred because of imperfect interface adaptation must be considered. This study outlines a matrix-based mathematical basis for analysing the chaining of lossy interface adapters. The authors also show that the problem of finding an optimal interface adapter chain is NP-complete with a reduction from 3SAT.
The mathematics in this paper was motivated by the interface adapter framework @cite_2 used by the Active Surroundings middleware for ubiquitous computing environments @cite_20 . In order to support a transparent computing experience even as a user moves between locations where similar services may have different interfaces, the framework uses interface adapters to translate between interfaces. @cite_2 defines the problem informally and shows the effectiveness of a greedy algorithm based on uniform cost search @cite_0 .
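For illustration, the sketch below shows the kind of uniform-cost (Dijkstra-style) search a greedy chain builder can run over an adapter graph. It makes the simplifying assumption that each adapter contributes an additive loss, which is weaker than the matrix-based loss model developed in this paper (under which finding an optimal chain is NP-complete); all names and values are hypothetical.

```python
# Sketch only: a uniform-cost search over interface adapters, under the simplifying
# assumption that each adapter contributes an additive "loss" that accumulates along a
# chain. The paper's matrix-based loss model is richer, and finding an optimal chain
# under it is shown to be NP-complete. All identifiers and values here are hypothetical.
import heapq

def best_adapter_chain(adapters, source_iface, target_iface):
    """adapters: iterable of (from_iface, to_iface, loss); returns (total_loss, chain)."""
    frontier = [(0.0, source_iface, [])]        # (accumulated loss, interface, chain so far)
    best = {}
    while frontier:
        loss, iface, chain = heapq.heappop(frontier)
        if iface == target_iface:
            return loss, chain
        if iface in best and best[iface] <= loss:
            continue
        best[iface] = loss
        for frm, to, step_loss in adapters:
            if frm == iface:
                heapq.heappush(frontier, (loss + step_loss, to, chain + [(frm, to)]))
    return None

adapters = [("A", "B", 0.1), ("B", "C", 0.2), ("A", "C", 0.5)]
print(best_adapter_chain(adapters, "A", "C"))   # picks the lower-loss chain A -> B -> C
```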
{ "cite_N": [ "@cite_0", "@cite_20", "@cite_2" ], "mid": [ "2122410182", "2792354429", "2132136548" ], "abstract": [ "From the Publisher: Intelligent Agents - Stuart Russell and Peter Norvig show how intelligent agents can be built using AI methods, and explain how different agent designs are appropriate depending on the nature of the task and environment. Artificial Intelligence: A Modern Approach is the first AI text to present a unified, coherent picture of the field. The authors focus on the topics and techniques that are most promising for building and analyzing current and future intelligent systems. The material is comprehensive and authoritative, yet cohesive and readable. State of the Art - This book covers the most effective modern techniques for solving real problems, including simulated annealing, memory-bounded search, global ontologies, dynamic belief networks, neural networks, adaptive probabilistic networks, inductive logic programming, computational learning theory, and reinforcement learning. Leading edge AI techniques are integrated into intelligent agent designs, using examples and exercises to lead students from simple, reactive agents to advanced planning agents with natural language capabilities.", "", "A key feature of ubiquitous computing is service continuity which allows a user to transparently continue his task regardless of his movement. For service continuity, the underlying system needs to not only discover a service satisfying a user's request, but also provide an interface differences resolution scheme if the interface of the service found is not the same as that of the service requested. For resolving interface mismatches, one of solutions is to use an interface adapter. The most serious problem in the interface adapter-based approach is the overhead of adapter generation. There are many research efforts about adapter generation load reduction and this paper focuses on an adapter chaining scheme to reduce the number of necessary adapters among different service interfaces. We propose a construction-time adaptation loss evaluation scheme and an adapter chain construction algorithm, which finds an adapter chain with minimal adaptation loss." ] }
0901.4835
2052260075
Despite providing similar functionality, multiple network services may require the use of different interfaces to access the functionality, and this problem will only become worse with the widespread deployment of ubiquitous computing environments. One way around this problem is to use interface adapters that adapt one interface into another. Chaining these adapters allows flexible interface adaptation with fewer adapters, but the loss incurred because of imperfect interface adaptation must be considered. This study outlines a matrix-based mathematical basis for analysing the chaining of lossy interface adapters. The authors also show that the problem of finding an optimal interface adapter chain is NP-complete with a reduction from 3SAT.
Other work has also used interface adapters to resolve service interface mismatches. Some approaches aim to help developers create interface adapters using template-based approaches @cite_11 @cite_14 or mapping specifications @cite_18 . Others reduce the number of required interface adapters by chaining them together @cite_6 , while still others use a chain of interface adapters to provide backwards compatibility as interfaces evolve @cite_13 @cite_16 . These chaining approaches ignore the fact that one chain may be worse than another in terms of lossiness.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_6", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "1592076137", "2100095098", "2168075985", "2000434158", "", "1941670126" ], "abstract": [ "Compositional reuse of software components requires standardized specification techniques if applications are created by combining third party components Adequate techniques need to be used in order to specify not only technical but also business related aspects of software components The different specification aspects of software components are summarized in a multi-layer specification framework with formal specification techniques defined for each level of abstraction The use of formal specification techniques is a prerequisite for compatibility tests on component specifications Compatibility tests are necessary for the identification of required components, which are traded on component markets The focus of this paper is to present an algorithm for compatibility test on interface level, where Interface Definition Language (IDL) has been used as formal specification language In order to test characteristics where e.g the order of parameter values or the order of consisting data types within a complex data type are not identical with the specification, adapters are generated for mapping the component interfaces.", "In today's Web, many functionality-wise similar Web services are offered through heterogeneous interfaces (operation definitions) and business protocols (ordering constraints defined on legal operation invocation sequences). The typical approach to enable interoperation in such a heterogeneous setting is through developing adapters. There have been approaches for classifying possible mismatches between service interfaces and business protocols to facilitate adapter development. However, the hard job is that of identifying, given two service specifications, the actual mismatches between their interfaces and business protocols. In this paper we present novel techniques and a tool that provides semi-automated support for identifying and resolution of mismatches between service interfaces and protocols, and for generating adapter specification. We make the following main contributions: (i) we identify mismatches between service interfaces, which leads to finding mismatches of type of signature, merge split, and extra missing messages; (ii) we identify all ordering mismatches between service protocols and generate a tree, called mismatch tree, for mismatches that require developers' input for their resolution. In addition, we provide semi-automated support in analyzing the mismatch tree to help in resolving such mismatches. We have implemented the approach in a tool inside IBM WID (WebSphere Integration Developer). Our experiments with some real-world case studies show the viability of the proposed approach. The methods and tool are significant in that they considerably simplify the problem of adapting services so that interoperation is possible.", "To programmatically discover and interact with services in ubiquitous computing environments, an application needs to solve two problems: (1) is it semantically meaningful to interact with a service? If the task is \"printing a file\", a printer service would be appropriate, but a screen rendering service or CD player service would not. (2) If yes, what are the mechanics of interacting with the service - remote invocation mechanics, names of methods, numbers and types of arguments, etc.? 
Existing service frameworks such as Jini and UPnP conflate these problems - two services are \"semantically compatible\" if and only if their interface signatures match. As a result, interoperability is severely restricted unless there is a single, globally agreed-upon, unique interface for each service type. By separating the two subproblems and delegating different parts of the problem to the user and the system, we show how applications can interoperate with services even when globally unique interfaces do not exist for certain services.", "In this paper, we define the problem of simultaneously deploying multiple versions of a web service in the face of independently developed unsupervised clients. We then propose a solution in the form of a design technique called Chain of Adapters and argue that this approach strikes a good balance between the various requirements. We recount our experiences in automating the application of the technique and provide an initial analysis of the performance degradations it may occasion. The Chain of Adapters technique is particularly suitable for self-managed systems since it makes many version-related reconfiguration tasks safe, and thus subject to automation.", "", "The push toward business process automation has generated the need for integrating different enterprise applications involved in such processes. The typical approach to integration and to process automation is based on the use of adapters and message brokers. The need for adapters in Web services mainly comes from two sources: one is the heterogeneity at the higher levels of the interoperability stack, and the other is the high number of clients, each of which can support different interfaces and protocols, thereby generating the need for providing multiple interfaces to the same service. In this paper, we characterize the problem of adaptation of web services by identifying and classifying different kinds of adaptation requirements. Then, we focus on business protocol adapters, and we classify the different ways in which two protocols may differ. Next, we propose a methodology for developing adapters in Web services, based on the use of mismatch patterns and service composition technologies." ] }
0901.4835
2052260075
Despite providing similar functionality, multiple network services may require the use of different interfaces to access the functionality, and this problem will only become worse with the widespread deployment of ubiquitous computing environments. One way around this problem is to use interface adapters that adapt one interface into another. Chaining these adapters allows flexible interface adaptation with fewer adapters, but the loss incurred because of imperfect interface adaptation must be considered. This study outlines a matrix-based mathematical basis for analysing the chaining of lossy interface adapters. The authors also show that the problem of finding an optimal interface adapter chain is NP-complete with a reduction from 3SAT.
Analyzing the chaining of lossy interface adapters is in many ways similar to dependency analysis in software architecture @cite_1 @cite_5 @cite_4 @cite_10 . These analyses are designed to support the maintenance of large software systems and usually consider a lossy connection between software components to be the exception rather than the norm. Techniques used in software architecture such as code analysis @cite_3 or fault injection @cite_12 could also serve as the basis for deriving the method dependency matrices for interface adapters.
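As a hedged illustration of the matrix view mentioned above, the sketch below represents each adapter by a 0/1 method-dependency matrix and composes a chain with a boolean-style matrix product; target methods whose row becomes all-zero are lost by the chain. The exact dependency semantics used in this paper may differ from this simplification.

```python
# Hedged sketch of the matrix view of adapter chaining: entry D[i][j] = 1 means target
# method i can be served using source method j. Composing two adapters corresponds to a
# boolean matrix product; methods whose row ends up all-zero are "lost" by the chain.
# The dependency semantics of the cited paper may differ from this simplification.
import numpy as np

def compose(d_bc, d_ab):
    """Dependency matrix of chain A->B->C from A->B (d_ab) and B->C (d_bc)."""
    return ((d_bc @ d_ab) > 0).astype(int)

d_ab = np.array([[1, 0, 0],     # B.m1 needs A.m1
                 [0, 1, 0],     # B.m2 needs A.m2
                 [0, 0, 0]])    # B.m3 cannot be adapted: this adapter is lossy
d_bc = np.array([[0, 1, 0],     # C.m1 needs B.m2
                 [0, 0, 1]])    # C.m2 needs B.m3
d_ac = compose(d_bc, d_ab)
lost = [i for i, row in enumerate(d_ac) if not row.any()]
print(d_ac)                            # which A methods each C method ultimately relies on
print("lost target methods:", lost)    # C.m2 is lost because B.m3 was lost
```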
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2134769553", "2148238464", "1966831167", "2169063818", "2169291221", "2148553060" ], "abstract": [ "Typical distributed transaction environments are a heterogeneous collection of hardware and software resources. An example of such an environment is an electronic store front where users can launch a number of different transactions to complete one or more interactions with the system. One of the challenges in managing such an environment is to figure out the root cause of a performance or throughput problem that manifests itself at a user access point, and to take appropriate action, preferably in an automated way. Our paper addresses this problem by analyzing the dependency relationship among various software components. We also provide a theoretical insight into how a set of transactions can be generated to pinpoint the root cause of a performance problem that is manifested at the user access point.", "In a programming project, it is easy to lose track of which files need to be reprocessed or recompiled after a change is made in some part of the source. Make provides a simple mechanism for maintaining up-to-date versions of programs that result from many operations on a number of files. It is possible to tell Make the sequence of commands that create certain files, and the list of files that require other files to be current before the operations can be done. Whenever a change is made in any part of the program, the Make command will create the proper files simply, correctly, and with a minimum amount of effort. The basic operation of Make is to find the name of a needed target in the description, ensure that all of the files on which it depends exist and are up to date, and then create the target if it has not been modified since its generators were. The description file really defines the graph of dependencies;Make does a depth-first search of this graph to determine what work is really necessary. Make also provides a simple macro substitution facility and the ability to encapsulate commands in a single file for convenient administration.", "The proliferation of large software systems written in high level programming languages insures the utility of analysis programs which examine interprocedural communications. Often these analysis programs need to reduce the dynamic relations between procedures to a static data representation. This paper presents one such representation, a directed, acyclic graph named the call graph of a program. We delineate the programs representable by an acyclic call graph and present an algorithm for constructing it using the property that its nodes may be linearly ordered. We prove the correctness of the algorithm and discuss the results obtained from an implementation of the algorithm in the PFORT Verifier [1].", "Dependence analysis is useful for software maintenance because it indicates the possible effects of a software modification on the rest of a program. This helps the software maintainer evaluate the appropriateness of a software modification, drive regression testing, and determine the vulnerability of critical sections of code. A definition of interprocedural dependence analysis is given, and its implementation in a prototype tool that supports software maintenance is described. >", "An approach to managing the architecture of large software systems is presented. 
Dependencies are extracted from the code by a conventional static analysis, and shown in a tabular form known as the 'Dependency Structure Matrix' (DSM). A variety of algorithms are available to help organize the matrix in a form that reflects the architecture and highlights patterns and problematic dependencies. A hierarchical structure obtained in part by such algorithms, and in part by input from the user, then becomes the basis for 'design rules' that capture the architect's intent about which dependencies are acceptable. The design rules are applied repeatedly as the system evolves, to identify violations, and keep the code and its architecture in conformance with one another. The analysis has been implemented in a tool called LDM which has been applied in several commercial projects; in this paper, a case study application to Haystack, an information retrieval system, is described.", "We describe a methodology for identifying and characterizing dynamic dependencies between system components in distributed application environments such as e-commerce systems. The methodology relies on active perturbation of the system to identify dependencies and the use of statistical modeling to compute dependency strengths. Unlike more traditional passive techniques, our active approach requires little initial knowledge of the implementation details of the system and has the potential to provide greater coverage and more direct evidence of causality for the dependencies it identifies. We experimentally demonstrate the efficacy of our approach by applying it to a prototypical e-commerce system based on the TPC-W Web commerce benchmark, for which the active approach correctly identifies and characterizes 41 of 42 true dependencies out of a potential space of 140 dependencies. Finally, we consider how the dependencies computed by our approach can be used to simplify and guide the task of root-cause analysis, an important part of problem determination." ] }
0901.3150
2949834189
Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
In the case of collaborative filtering, we are interested in finding a matrix @math of minimum rank that matches the known entries @math . Each known entry thus provides an affine constraint. Candes and Recht @cite_6 introduced the incoherent model for @math . Within this model, they proved that, if @math is random, the convex relaxation correctly reconstructs @math as long as @math . On the other hand, from a purely information-theoretic point of view (i.e., disregarding algorithmic considerations), it is clear that @math observations should allow one to reconstruct @math with arbitrary precision. Indeed, this point was raised in @cite_6 and proved in @cite_1 through a counting argument.
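The counting argument mentioned above boils down to comparing the number of free parameters of a rank-r matrix with the number of observed entries. The snippet below just carries out that arithmetic for illustrative values of n, alpha, and r; the constants are not taken from the cited papers.

```python
# Quick illustration of the counting argument: an (alpha*n) x n matrix of rank r has
# r*(alpha*n + n - r) degrees of freedom, so on the order of r*n observations are the
# natural information-theoretic scale. The values below are illustrative only.
def dof_rank_r(m, n, r):
    return r * (m + n - r)      # parameters of a rank-r factorization, up to symmetries

n, alpha, r = 10_000, 1.0, 10
m = int(alpha * n)
print("degrees of freedom:", dof_rank_r(m, n, r))   # ~ 2*r*n when r << n
print("entries in the full matrix:", m * n)
print("fraction of entries needed at this scale:", dof_rank_r(m, n, r) / (m * n))
```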
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2951150271", "2949959192" ], "abstract": [ "How many random entries of an n by m, rank r matrix are necessary to reconstruct the matrix within an accuracy d? We address this question in the case of a random matrix with bounded rank, whereby the observed entries are chosen uniformly at random. We prove that, for any d>0, C(r,d)n observations are sufficient. Finally we discuss the question of reconstructing the matrix efficiently, and demonstrate through extensive simulations that this task can be accomplished in nPoly(log n) operations, for small rank.", "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information." ] }
0901.3150
2949834189
Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
The present paper describes an efficient algorithm that reconstructs a rank- @math matrix from @math random observations. The most complex component of our algorithm is the SVD in step @math . We were able to treat realistic data sets with @math . This must be compared with the @math complexity of semidefinite programming @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2949959192" ], "abstract": [ "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information." ] }
0901.3150
2949834189
Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
Cai, Candes and Shen @cite_0 recently proposed a low-complexity procedure to solve the convex program posed in @cite_6 . Our spectral method is akin to a single step of this procedure, with the important novelty of the trimming step, which significantly improves its performance. Our analysis techniques might provide a new tool for characterizing the convex relaxation as well.
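The following is a minimal sketch of a trim-then-truncated-SVD step of the kind discussed above, assuming uniform sampling of entries. It is a simplified illustration, not the authors' reference implementation, and the trimming threshold (twice the average number of revealed entries per row or column) is a common heuristic choice rather than a value taken from the paper.

```python
# Simplified sketch (not the authors' reference implementation) of a "trim + rank-r SVD"
# step for matrix completion: zero out rows/columns with unusually many revealed entries,
# rescale so the observed matrix matches M in expectation, and take a truncated SVD.
import numpy as np
from scipy.sparse.linalg import svds

def spectral_step(m_obs, mask, rank):
    """m_obs: matrix with unobserved entries set to 0; mask: boolean matrix of revealed entries."""
    m, n = mask.shape
    row_counts, col_counts = mask.sum(axis=1), mask.sum(axis=0)
    trimmed = m_obs.copy()
    trimmed[row_counts > 2 * mask.sum() / m, :] = 0.0   # trim over-represented rows
    trimmed[:, col_counts > 2 * mask.sum() / n] = 0.0   # trim over-represented columns
    scale = (m * n) / mask.sum()                        # compensate for the sampling rate
    u, s, vt = svds(scale * trimmed, k=rank)            # truncated SVD (top `rank` components)
    return (u * s) @ vt

rng = np.random.default_rng(0)
m, n, r = 200, 200, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.2                         # reveal roughly 20% of the entries
M_hat = spectral_step(np.where(mask, M, 0.0), mask, r)
rel_rmse = np.sqrt(np.mean((M_hat - M) ** 2) / np.mean(M ** 2))
print(f"relative RMSE of the spectral estimate: {rel_rmse:.3f}")
```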
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2951328719", "2949959192" ], "abstract": [ "This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4 of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.", "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information." ] }
0901.3150
2949834189
Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn |E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
Theorem can also be compared with a copious line of work in the theoretical computer science literature @cite_19 @cite_16 @cite_17 . An important motivation in this context is the development of fast algorithms for low-rank approximation. In particular, Achlioptas and McSherry @cite_17 prove a theorem analogous to , but holding only for @math (in the case of square matrices).
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_17" ], "mid": [ "", "2021680564", "1970950689" ], "abstract": [ "", "Experimental evidence suggests that spectral techniques are valuable for a wide range of applications. A partial list of such applications include (i) semantic analysis of documents used to cluster documents into areas of interest, (ii) collaborative filtering --- the reconstruction of missing data items, and (iii) determining the relative importance of documents based on citation link structure. Intuitive arguments can explain some of the phenomena that has been observed but little theoretical study has been done. In this paper we present a model for framing data mining tasks and a unified approach to solving the resulting data mining problems using spectral analysis. These results give strong justification to the use of spectral techniques for latent semantic indexing, collaborative filtering, and web site ranking.", "Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A p N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm." ] }
0901.3329
1769566533
The discovery of events in time series can have important implications, such as identifying microlensing events in astronomical surveys, or changes in a patient's electrocardiogram. Current methods for identifying events require a sliding window of a fixed size, which is not ideal for all applications and could overlook important events. In this work, we develop probability models for calculating the significance of an arbitrary-sized sliding window and use these probabilities to find areas of significance. Because a brute force search of all sliding windows and all window sizes would be computationally intractable, we introduce a method for quickly approximating the results. We apply our method to over 100,000 astronomical time series from the MACHO survey, in which 56 different sections of the sky are considered, each with one or more known events. Our method was able to recover 100% of these events in the top 1% of the results, essentially pruning 99% of the data. Interestingly, our method was able to identify events that do not pass traditional event discovery procedures.
Existing methods in scan statistics do not currently address the problem of large data sets (e.g., millions or billions of time series). In many applications, one requires an event detection algorithm that can be run quickly on many time series. In addition, because much of the data in these applications has noise that is difficult to understand and model, it is critical to develop a method that is independent of the noise characteristics. Current methods in scan statistics are generally based on particular noise models, such as Poisson distributions @cite_11 and binomial probabilities @cite_21 . Both of these characteristics exist in astronomy, and thus our goal in this paper is to address both issues. Due to the intractability of current scan statistics methods, we cannot compare directly against them. Thus, we are compelled to compare against anomaly detection because of its speed and efficiency, and because it does not require first modeling the noise.
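For concreteness, the sketch below implements the brute-force variable-width scan that the paragraph above describes as intractable at scale: every window position and every width in a small candidate set is scored by how far its mean deviates from the rest of the series. The scoring statistic here is a generic z-like score chosen for illustration; the paper's probability models and its fast approximation are not reproduced.

```python
# Brute-force baseline (the kind of search the paper approximates): slide windows of
# every size in `widths` over the series and score each by how far its mean deviates
# from the rest of the series, in standard-error units. The statistic is a generic
# illustration, not the probability model developed in the paper.
import numpy as np

def most_significant_window(x, widths):
    x = np.asarray(x, dtype=float)
    best = (0.0, None, None)                      # (score, start, width)
    for w in widths:
        for start in range(len(x) - w + 1):
            inside = x[start:start + w]
            outside = np.concatenate([x[:start], x[start + w:]])
            se = outside.std(ddof=1) / np.sqrt(w) + 1e-12
            score = abs(inside.mean() - outside.mean()) / se
            if score > best[0]:
                best = (score, start, w)
    return best

rng = np.random.default_rng(1)
series = rng.normal(size=500)
series[200:230] += 1.5                            # inject a 30-sample "event"
print(most_significant_window(series, widths=(10, 30, 100)))
```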
{ "cite_N": [ "@cite_21", "@cite_11" ], "mid": [ "2105550309", "2331623931" ], "abstract": [ "The scan statistic evaluates whether an apparent cluster of disease in time is due to chance. The statistic employs a ‘moving window’ of length w and finds the maximum number of cases revealed through the window as it scans or slides over the entire time period T. Computation of the probability of observing a certain size cluster, under the hypothesis of a uniform distribution, is infeasible when N, the total number of events, is large, and w is of moderate or small size relative to T. We give an approximation that is an asymptotic upper bound, easy to compute, and, for the purposes of hypothesis testing, more accurate than other approximations presented in the literature. The approximation applies both when N is fixed, and when N has a Poisson distribution. We illustrate the procedure on a data set of trisomic spontaneous abortions observed in a two year period in New York City.", "This article investigates the accuracy of approximations for the distribution of ordered m-spacings for i.i.d. uniform observations in the interval (0, 1). Several Poisson approximations and a compound Poisson approximation are studied. The result of a simulation study is included to assess the accuracy of these approximations. A numerical procedure for evaluating the moments of the ordered m-spacings is developed and evaluated for the most accurate approximation." ] }
0901.3467
2949508987
This paper presents new FEC codes for the erasure channel, LDPC-Band, that have been designed so as to optimize a hybrid iterative-Maximum Likelihood (ML) decoding. Indeed, these codes feature simultaneously a sparse parity check matrix, which allows an efficient use of iterative LDPC decoding, and a generator matrix with a band structure, which allows fast ML decoding on the erasure channel. The combination of these two decoding algorithms leads to erasure codes achieving a very good trade-off between complexity and erasure correction capability.
LDPC codes are another class of binary codes providing a good level of decoding performance with extremely fast encoding and decoding algorithms @cite_1 @cite_4 . Indeed, the classical iterative decoding algorithm for the erasure channel, based on the work of Zyablov @cite_9 , has linear decoding complexity. The drawback of this algorithm is that it does not reach the performance of ML decoding. Moreover, the sparsity of the matrices also reduces the ML performance compared to fully random matrices.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_4" ], "mid": [ "", "2095882513", "2228670397" ], "abstract": [ "", "In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.", "This document describes two Fully-Specified FEC Schemes, LDPC- Staircase and LDPC-Triangle, and their application to the reliable delivery of objects on packet erasure channels. These systematic FEC codes belong to the well known class of Low Density Parity Check'' (LDPC) codes, and are large block FEC codes in the sense of RFC3453." ] }
0901.3467
2949508987
This paper presents new FEC codes for the erasure channel, LDPC-Band, that have been designed so as to optimize a hybrid iterative-Maximum Likelihood (ML) decoding. Indeed, these codes feature simultaneously a sparse parity check matrix, which allows an efficient use of iterative LDPC decoding, and a generator matrix with a band structure, which allows fast ML decoding on the erasure channel. The combination of these two decoding algorithms leads to erasure codes achieving a very good trade-off between complexity and erasure correction capability.
Recently, two independent works @cite_12 @cite_13 proposed a hybrid iterative-ML decoding algorithm, where ML decoding is used only when iterative decoding fails to decode a received codeword.
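A minimal sketch of the hybrid control flow is given below: a peeling (iterative) pass over a binary parity-check matrix recovers every erased bit that appears alone in some check, and Gaussian elimination over GF(2) is used as the ML fallback for whatever remains. The toy parity-check matrix in the usage example is a (7,4) Hamming code, not an LDPC or LDPC-Band code; it only illustrates the mechanism.

```python
# Sketch of hybrid iterative/ML erasure decoding over GF(2). H is a binary parity-check
# matrix; `word` is the received codeword with erased positions marked as None.
# Phase 1 (iterative/peeling): any check with exactly one erased bit recovers that bit.
# Phase 2 (ML fallback): Gaussian elimination over GF(2) on the remaining erasures.
import numpy as np

def hybrid_erasure_decode(H, word):
    H = np.array(H) % 2
    bits = list(word)
    erased = {i for i, b in enumerate(bits) if b is None}
    progress = True
    while progress and erased:                      # peeling phase
        progress = False
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if j in erased]
            if len(unknown) == 1:
                j = unknown[0]
                bits[j] = int(sum(bits[k] for k in np.flatnonzero(row) if k != j) % 2)
                erased.remove(j)
                progress = True
    if erased:                                      # ML phase: solve the residual system
        cols = sorted(erased)
        A = H[:, cols].copy()
        b = np.array([int(sum(bits[k] for k in np.flatnonzero(row) if k not in erased) % 2)
                      for row in H])
        piv = 0
        for c in range(len(cols)):                  # Gaussian elimination over GF(2)
            pivots = np.flatnonzero(A[piv:, c]) + piv
            if len(pivots) == 0:
                return None                         # rank deficient: decoding fails
            r = pivots[0]
            A[[piv, r]], b[[piv, r]] = A[[r, piv]].copy(), b[[r, piv]].copy()
            for r2 in range(A.shape[0]):
                if r2 != piv and A[r2, c]:
                    A[r2] ^= A[piv]
                    b[r2] ^= b[piv]
            piv += 1
        for idx, c in enumerate(cols):
            bits[c] = int(b[idx])
    return bits

H = [[1, 1, 0, 1, 1, 0, 0],                         # toy (7,4) Hamming parity checks,
     [1, 0, 1, 1, 0, 1, 0],                         # NOT an LDPC-Band matrix
     [0, 1, 1, 1, 0, 0, 1]]
received = [None, None, 1, None, 0, 1, 0]           # three erasures: peeling alone stalls
print(hybrid_erasure_decode(H, received))           # -> [1, 0, 1, 1, 0, 1, 0]
```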
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "2159361790", "1602682102" ], "abstract": [ "The design of low-density parity-check (LDPC) codes under hybrid iterative maximum likelihood decoding is addressed for the binary erasure channel (BEC). Specifically, we focus on generalized irregular repeat-accumulate (GeIRA) codes, which offer both efficient encoding and design flexibility. We show that properly designed GeIRA codes tightly approach the performance of an ideal maximum distance separable (MDS) code, even for short block sizes. For example, our (2048,1024) code reaches a codeword error rate of 10-5 at channel erasure probability isin= 0.450, where an ideal (2048,1024) MDS code would reach the same error rate at isin = 0.453.", "This work focuses on the decoding algorithm of the LDPC large block FEC codes for the packet erasure channel, also called AL-FEC (Application-Level Forward Error Correction). More specifically this work details the design and the performance of a hybrid decoding scheme, that starts with the Zyablov iterative decoding algorithm, a rapid but suboptimal algorithm in terms of erasure recovery capabilities, and, when required, continues with a Gaussian elimination algorithm. For practical reasons this work focuses on two LDPC codes for the erasure channel, namely LDPC-staircase and LDPC-triangle codes. Nevertheless the decoding scheme proposed can be used with other LDPC codes without any problem. The performance experiments carried out show that the erasure recovery capabilities of LDPC-triangle codes are now extremely close to that of an ideal code, even with small block sizes. This is all the more true with small code rates: whereas the Zyablov iterative decoding scheme becomes unusable as the code rate decreases, the Gaussian elimination makes the LDPC-triangle codes almost ideal. In all the tests, when carefully implemented, the LDPC-triangle codec featuring the proposed decoding scheme is fast, and in particular always significantly faster than the reference Reed-Solomon on GF( @math ) codec. The erasure recovery capabilities of LDPC-staircase codes are also significantly improved, even if they remain a little bit farther from an ideal code. Nevertheless, a great advantage is the fact that LDPC-staircase codes remain significantly faster than LDPC-triangle codes, which, for instance, enables their use with larger blocks. All these results make these codes extremely attractive for many situations and contradict the common belief that using Gaussian elimination is not usable because of a prohibitive processing load. Moreover the proposed approach offers an important flexibility in practice, and depending on the situation, one can either choose to favor erasure recovery capabilities or the processing time." ] }
0901.3467
2949508987
This paper presents new FEC codes for the erasure channel, LDPC-Band, that have been designed to optimize a hybrid iterative-Maximum Likelihood (ML) decoding. Indeed, these codes simultaneously feature a sparse parity-check matrix, which allows efficient iterative LDPC decoding, and a generator matrix with a band structure, which allows fast ML decoding on the erasure channel. The combination of these two decoding algorithms leads to erasure codes achieving a very good trade-off between complexity and erasure correction capability.
The idea of window-based encoding for LDPC codes was proposed by Haken, Luby et al. in patent @cite_8 . However, the goal of this patent is only to minimize memory accesses during encoding, by localizing the accesses within a window that slides over the input file. Independently of whether our proposal falls into the scope of this patent or not, we see that, from a purely scientific point of view, the goal of @cite_8 completely departs from the approach discussed in the current paper, as well as from the theoretical tools we introduce to achieve our goals.
{ "cite_N": [ "@cite_8" ], "mid": [ "1822933787" ], "abstract": [ "An encoder encodes an output symbol from input symbols of an input file by determining, for a given output symbol, a list AL that indicates W associated input symbols, within a subset S of the input symbols comprising the input file, to be associated with the output symbol, where W is a positive integer, where at least two output symbols have different values for W associated therewith, where W is greater than one for at least one output symbol, and where the number of possible output symbols is much larger than the number of input symbols in the input file, and generating an output symbol value from a predetermined function of the W associated input symbols indicated by AL. The subset S can be a window that slides over the input file to cover all of the input symbols in a period. The window can be a fixed or variable size. Where the window moves over the file and reaches an edge, the window can wrap around or can cover extended input symbols." ] }
0901.3987
2100412460
Consider a lossy communication channel for unicast with zero-delay feedback. For this communication scenario, a simple retransmission scheme is optimum with respect to delay. An alternative approach is to use random linear coding in automatic repeat-request (ARQ) mode. We extend the work of Shrader and Ephremides in [1], by deriving an expression for the delay of random linear coding over a field of infinite size. Simulation results for various field sizes are also provided.
As far as we know, @cite_4 is the only paper in the literature to deal with this specific model. Related but different work in a unicast setting can be found in @cite_9 @cite_6 , but in the model addressed in these papers packets arrive at the sender as a block, so there is no loss in waiting. Reference @cite_1 addresses delay in forward error correction schemes for sender nodes with a finite buffer capacity.
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_4", "@cite_6" ], "mid": [ "2123938134", "2116661213", "2153043354", "2097651478" ], "abstract": [ "We consider the following packet coding scheme: The coding node has a fixed, finite memory in which it stores packets formed from an incoming packet stream, and it sends packets formed from random linear combinations of its memory contents. We analyze the scheme in two settings: as a self-contained component in a network providing reliability on a single link, and as a component employed at intermediate nodes in a block-coded end-to-end connection. We believe that the scheme is a good alternative to automatic repeat request when feedback is too slow, too unreliable, or too difficult to implement.", "This paper analyzes the gains in delay performance resulting from network coding. We consider a model of file transmission to multiple receivers from a single base station. Using this model, we show that gains in delay performance from network coding with or without channel side information can be substantial compared to conventional scheduling methods for downlink transmission.", "In this work we consider coding over packets that randomly arrive to a source node for transmission to a single destination. We present a queueing model for a random linear coding scheme that adapts to the amount of traffic at the source node. If there is only one packet in the queue when the channel becomes free, then reliable transmission is carried out by retransmitting lost packets. If there are at least two packets in the queue, then random linear coding is carried out over the number of packets available in the queue when the channel becomes free. We provide a bulk-service queuing model and results on the delay of this random linear coding scheme, and show that its delay performance asymptotically approaches that of a retransmission scheme.", "In an unreliable packet network setting, we study the performance gains of optimal transmission strategies in the presence and absence of coding capability at the transmitter, where performance is measured in delay and throughput. Although our results apply to a large class of coding strategies including maximum-distance separable (MDS) and Digital Fountain codes, we use random network codes in our discussions because these codes have a greater applicability for complex network topologies. To that end, after introducing a key setting in which performance analysis and comparison can be carried out, we provide closed-form as well as asymptotic expressions for the delay performance with and without network coding. We show that the network coding capability can lead to arbitrarily better delay performance as the system parameters scale when compared to traditional transmission strategies without coding. We further develop a joint scheduling and random-access scheme to extend our results to general wireless network topologies." ] }
0901.3987
2100412460
Consider a lossy communication channel for unicast with zero-delay feedback. For this communication scenario, a simple retransmission scheme is optimum with respect to delay. An alternative approach is to use random linear coding in automatic repeat-request (ARQ) mode. We extend the work of Shrader and Ephremides in [1], by deriving an expression for the delay of random linear coding over a field of infinite size. Simulation results for various field sizes are also provided.
Random linear coding @cite_11 is known to work particularly well in the multicast case @cite_8 , and multicast problems similar to the ones we discuss here are investigated in @cite_5 @cite_10 .
{ "cite_N": [ "@cite_8", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2130350209", "2542994128", "2963627748", "" ], "abstract": [ "A novel randomized network coding approach for robust, distributed transmission and compression of information in networks is presented, and its advantages over routing-based approaches is demonstrated.", "In this work we address the stability and delay performance of a multicast erasure channel with random arrivals at the source node. We consider both a standard retransmission (ARQ) scheme as well as random linear coding. Our results show that while random linear coding may outperform re-transmissions for heavy traffic, the delay incurred by the use of random linear codes is significantly higher when the source is lightly loaded", "For a packet erasure broadcast channel with three receivers, we propose a new coding algorithm that makes use of feedback to dynamically adapt the code. Our algorithm is throughput optimal, and we conjecture that it also achieves an asymptotically optimal average decoding delay at the receivers. We consider heavy traffic asymptotics, where the load factor rho approaches 1 from below with either the arrival rate (lambda) or the channel parameter (mu) being fixed at a number less than 1. We verify through simulations that our algorithm achieves an asymptotically optimal decoding delay of O (1 1-rho).", "" ] }
0901.1479
2949121573
We consider a transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses - the predominant type of failures in today's Internet. An effective and well-studied solution to this problem is to protect the data by a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often significantly differ, typically by 10-100ms. We propose to exploit these differences by appropriate packet scheduling that we call 'Spread'. We evaluate our solution with a precise, analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms the state-of-the-art solutions. It typically achieves a two- to five-fold improvement (reduction) in the effective loss rate. Or conversely, keeping the same level of effective loss rate, Spread significantly decreases the observed delays and helps fight the delay jitter.
Multipath transmission as a way of de-correlating packet losses and increasing the performance of FEC was first proposed in @cite_4 . It has received more attention recently, e.g., in @cite_2 @cite_10 @cite_1 @cite_9 @cite_6 @cite_12 . Multipath transmission was also studied in the context of Multiple Description Coding @cite_11 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "1605904650", "", "89205691", "2096341944", "", "2123260101", "2112721727", "2039682623" ], "abstract": [ "", "", "The use of forward error correction (FEC) coding is often proposed to combat the effects of network packet losses for error-resilient video transmission on packet-switched networks. On the other hand, path diversity has recently been proposed to improve network transport for both singledescription (SD) and multiple-description (MD) coded video. In this work we model and analyze an SD coded video transmission system employing packet-level FEC in combination with path diversity. In particular, we provide a precise analytical approach to evaluating the efficacy of path diversity in reducing the burstiness of network packet-loss processes. We use this approach to quantitatively demonstrate the advantages of path diversity in improving end-to-end video transport performance using packet-level FEC.", "Delivery of real time streaming applications, such as voice and video over IP, in packet switched networks is based on dividing the stream into packets and shipping each of the packets on an individual basis to the destination through the network. The basic implicit assumption on these applications is that shipping all the packets of an application is done, most of the time, over a single path along the network. In this work, we present a model in which packets of a certain session are dispersed over multiple paths, in contrast to the traditional approach. The dispersion may be performed by network nodes for various reasons such as load-balancing, or implemented as a mechanism to improve quality, as will be presented in this work. To study the effect of packet dispersion on the quality of voice over IP (VoIP) applications, we focus on the effect of the network loss on the applications, where we propose to use the Noticeable Loss Rate (NLR) as a measure (negatively) correlated with the voice quality. We analyze the NLR for various packet dispersion strategies over paths experiencing memoryless (Bernoulli) or bursty (Gilbert model) losses, and compare them to each other. Our analysis reveals that in many situations the use of packet dispersion reduces the NLR and thus improves session quality. The results suggest that the use of packet dispersion can be quite beneficial for these applications.", "", "Packet loss and end-to-end delay limit delay sensitive applications over the best effort packet switched networks such as the Internet. In our previous work, we have shown that substantial reduction in packet loss can be achieved by sending packets at appropriate sending rates to a receiver from multiple senders, using disjoint paths, and by protecting packets with forward error correction. In this paper, we propose a path diversity with forward error correction (PDF) system for delay sensitive applications over the Internet in which, disjoint paths from a sender to a receiver are created using a collection of relay nodes. We propose a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of PDF system.", "Reliability is critical to a variety of network applications. 
Unfortunately, due to lack of QoS support across ISP boundaries, it is difficult to achieve even two 9s (99%) reliability in today's Internet. In this paper, we propose SmartTunnel, an end-to-end approach to achieving reliability. A SmartTunnel is a logical point-to-point tunnel between two end points that spans multiple physical network paths. It achieves reliability by strategically allocating traffic onto multiple paths and performing FEC coding. Such an end-to-end approach requires no explicit QoS support from intermediate ISPs, and is therefore easy to deploy in today's Internet. To fully realize the potential of SmartTunnel, we analytically derive near-optimal traffic allocation schemes that minimize loss rates. We extensively evaluate our approach using trace-driven simulations, ns-2 simulations, and experiments on PlanetLab. Our results clearly demonstrate that SmartTunnel is effective in achieving high reliability.", "Video communication over lossy packet networks such as the Internet is hampered by limited bandwidth and packet loss. This paper presents a system for providing reliable video communication over these networks, where the system is composed of two subsystems: (1) multiple state video encoder decoder and (2) a path diversity transmission system. Multiple state video coding combats the problem of error propagation at the decoder by coding the video into multiple independently decodable streams, each with its own prediction process and state. If one stream is lost the other streams can still be decoded to produce usable video, and furthermore, the correctly received streams provide bidirectional (previous and future) information that enables improved state recovery for the corrupted stream. This video coder is a form of multiple description coding (MDC), and its novelty lies in its use of information from the multiple streams to perform state recovery at the decoder. The path diversity transmission system explicitly sends different subsets of packets over different paths, as opposed to the default scenarios where the packets proceed along a single path, thereby enabling the end-to-end video application to effectively see an average path behavior. We refer to this as path diversity. Generally, seeing this average path behavior provides better performance than seeing the behavior of any individual random path. For example, the probability that all of the multiple paths are simultaneously congested is much less than the probability that a single path is congested. The resulting path diversity provides the multiple state video decoder with an appropriate virtual channel to assist in recovering from lost packets, and can also simplify system design, e.g. FEC design. We propose two architectures for achieving path diversity, and examine the effectiveness of path diversity in communicating video over a lossy packet network." ] }
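For the multipath FEC setting itself, the block-level effective loss rate under independent per-path Bernoulli losses can be computed exactly: if n_i of the n FEC packets are sent on path i with loss rate p_i, the block fails whenever more than n-k packets are lost in total. The sketch below evaluates this by convolving the per-path binomial loss distributions; it reflects the standard memoryless model only, not the scheduling analysis of the paper, and the example parameters are arbitrary.

```python
from math import comb

def block_failure_prob(k, allocation):
    """Probability that an (n, k) FEC block is undecodable.

    allocation : list of (n_i, p_i) pairs -- n_i packets sent on a path with
                 independent loss probability p_i; n is the sum of the n_i.
    The block fails if strictly more than n - k packets are lost overall.
    """
    dist = [1.0]                       # dist[j] = P[j losses so far]
    for n_i, p_i in allocation:
        binom = [comb(n_i, j) * p_i**j * (1 - p_i)**(n_i - j) for j in range(n_i + 1)]
        new = [0.0] * (len(dist) + n_i)
        for a, pa in enumerate(dist):
            for b, pb in enumerate(binom):
                new[a + b] += pa * pb
        dist = new
    n = sum(n_i for n_i, _ in allocation)
    return sum(dist[j] for j in range(n - k + 1, n + 1))

# Example: a (12, 9) block sent entirely on a 4%-loss path, versus split
# 6/6 between a 4%-loss path and a 1%-loss path.
print(block_failure_prob(9, [(12, 0.04)]))
print(block_failure_prob(9, [(6, 0.04), (6, 0.01)]))
```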
0901.1479
2949121573
We consider a transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses - the predominant type of failures in today's Internet. An effective and well studied solution to this problem is to protect the data by a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often significantly differ, typically by 10-100ms. We propose to exploit these differences by appropriate packet scheduling that we call Spread'. We evaluate our solution with a precise, analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms the state-of-the-art solutions. It typically achieves two- to five-fold improvement (reduction) in the effective loss rate. Or conversely, keeping the same level of effective loss rate, Spread significantly decreases the observed delays and helps fighting the delay jitter.
In @cite_10 the authors study a multipath FEC system by simulations only, on artificially generated graphs. They also give a heuristic to select, from a number of candidate paths, a set of highly disjoint paths with relatively small propagation delays.
{ "cite_N": [ "@cite_10" ], "mid": [ "2123260101" ], "abstract": [ "Packet loss and end-to-end delay limit delay sensitive applications over the best effort packet switched networks such as the Internet. In our previous work, we have shown that substantial reduction in packet loss can be achieved by sending packets at appropriate sending rates to a receiver from multiple senders, using disjoint paths, and by protecting packets with forward error correction. In this paper, we propose a path diversity with forward error correction (PDF) system for delay sensitive applications over the Internet in which, disjoint paths from a sender to a receiver are created using a collection of relay nodes. We propose a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of PDF system." ] }
0901.1479
2949121573
We consider a transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses - the predominant type of failures in today's Internet. An effective and well-studied solution to this problem is to protect the data by a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often significantly differ, typically by 10-100ms. We propose to exploit these differences by appropriate packet scheduling that we call 'Spread'. We evaluate our solution with a precise, analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms the state-of-the-art solutions. It typically achieves a two- to five-fold improvement (reduction) in the effective loss rate. Or conversely, keeping the same level of effective loss rate, Spread significantly decreases the observed delays and helps fight the delay jitter.
As in most other approaches, we assume that the background cross-traffic is much larger than our own, and thus that the load we impose on a path does not affect its loss statistics. Scenarios where this assumption does not hold were studied in @cite_18 in the context of single-path FEC, and in @cite_14 for multipath FEC.
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "2137298097", "2135302445" ], "abstract": [ "The Gilbert model (1-st order Markov chain model) and the single-multiplexer model are two frequently used models in the study of packet-loss processes in communication networks. In this paper we investigate the accuracy of the Gilbert model, and higher-order Markov chain extended Gilbert models, in characterizing the packet-loss process associated with a transport network modeled in terms of a single-multiplexer. More specifically, we quantitatively compare the packet-loss statistics predicted by the Gilbert models with those predicted by an exact queueing analysis of the single-multiplexer model. This topic is important since low-complexity Gilbert models are frequently used to characterize end-to-end network packet-loss behavior. On the other hand, network congestion behavior is often characterized in terms of a single bottleneck node modeled as a multiplexer. It is of some interest then to establish the relative accuracy of Gilbert models in predicting the packet-loss behavior on even such a simplified network model. We demonstrate that the Gilbert models have some serious deficiencies in accurately predicting the packet-loss statistics of the single-multiplexer model. The results are shown to have some serious consequences for the performance evaluation of forward error correction (FEC) coding schemes used to combat the effects of packet losses due to network buffer overflows.", "Under the current Internet infrastructure, quality of service (QoS) in the delivery of continuous media (CM) is still relatively poor and inconsistent. In this paper we consider providing QoS through the exploitation of multiple paths existing in the network. Previous work has illustrated the advantages of this approach. Here we extend this work by considering a more expressive model for characterizing the network path losses. In particular, we propose a variation on the Gilbert model wherein the loss characteristics of a path depend on an application's transmission bandwidth. Using this model, we show the benefits of multi-path streaming over best single-path streaming, under optimal load distribution among the multiple paths. We use extensive simulation and measurements from a system prototype to quantify the performance benefits of our techniques." ] }
0901.1479
2949121573
We consider a transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses - the predominant type of failures in today's Internet. An effective and well studied solution to this problem is to protect the data by a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often significantly differ, typically by 10-100ms. We propose to exploit these differences by appropriate packet scheduling that we call Spread'. We evaluate our solution with a precise, analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms the state-of-the-art solutions. It typically achieves two- to five-fold improvement (reduction) in the effective loss rate. Or conversely, keeping the same level of effective loss rate, Spread significantly decreases the observed delays and helps fighting the delay jitter.
As in @cite_2 @cite_14 @cite_6 @cite_12 we assume the paths to be independent. This can be achieved by detecting correlated paths in end-to-end measurements @cite_16 and treating them as one. Another approach is to find paths that are IP-link disjoint, which should be possible if the site is multi-homed. Finally, even if all the available paths are to some extent correlated, we can still get some performance benefits @cite_10 @cite_7 @cite_1 @cite_8 , though limited ones @cite_0 @cite_3 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_2", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2135302445", "", "", "89205691", "2096341944", "", "2887596176", "", "1997901838", "2123260101", "2112721727" ], "abstract": [ "Under the current Internet infrastructure, quality of service (QoS) in the delivery of continuous media (CM) is still relatively poor and inconsistent. In this paper we consider providing QoS through the exploitation of multiple paths existing in the network. Previous work has illustrated the advantages of this approach. Here we extend this work by considering a more expressive model for characterizing the network path losses. In particular, we propose a variation on the Gilbert model wherein the loss characteristics of a path depend on an application's transmission bandwidth. Using this model, we show the benefits of multi-path streaming over best single-path streaming, under optimal load distribution among the multiple paths. We use extensive simulation and measurements from a system prototype to quantify the performance benefits of our techniques.", "", "", "The use of forward error correction (FEC) coding is often proposed to combat the effects of network packet losses for error-resilient video transmission on packet-switched networks. On the other hand, path diversity has recently been proposed to improve network transport for both singledescription (SD) and multiple-description (MD) coded video. In this work we model and analyze an SD coded video transmission system employing packet-level FEC in combination with path diversity. In particular, we provide a precise analytical approach to evaluating the efficacy of path diversity in reducing the burstiness of network packet-loss processes. We use this approach to quantitatively demonstrate the advantages of path diversity in improving end-to-end video transport performance using packet-level FEC.", "Delivery of real time streaming applications, such as voice and video over IP, in packet switched networks is based on dividing the stream into packets and shipping each of the packets on an individual basis to the destination through the network. The basic implicit assumption on these applications is that shipping all the packets of an application is done, most of the time, over a single path along the network. In this work, we present a model in which packets of a certain session are dispersed over multiple paths, in contrast to the traditional approach. The dispersion may be performed by network nodes for various reasons such as load-balancing, or implemented as a mechanism to improve quality, as will be presented in this work. To study the effect of packet dispersion on the quality of voice over IP (VoIP) applications, we focus on the effect of the network loss on the applications, where we propose to use the Noticeable Loss Rate (NLR) as a measure (negatively) correlated with the voice quality. We analyze the NLR for various packet dispersion strategies over paths experiencing memoryless (Bernoulli) or bursty (Gilbert model) losses, and compare them to each other. Our analysis reveals that in many situations the use of packet dispersion reduces the NLR and thus improves session quality. The results suggest that the use of packet dispersion can be quite beneficial for these applications.", "", "", "", "Testbeds composed of end hosts deployed across the Internet enable researchers to simultaneously conduct a wide variety of experiments. 
Active measurement studies of Internet path properties that require precisely crafted probe streams can be problematic in these environments. The reason is that load on the host systems from concurrently executing experiments (as is typical in PlanetLab) can significantly alter probe stream timings. In this paper we measure and characterize how packet streams from our local PlanetLab nodes are affected by experimental concurrency. We find that the effects can be extreme. We then set up a simple PlanetLab deployment in a laboratory testbed to evaluate these effects in a controlled fashion. We find that even relatively low load levels can cause serious problems in probe streams. Based on these results, we develop a novel system called MAD that can operate as a Linux kernel module or as a stand-alone daemon to support real-time scheduling of probe streams. MAD coordinates probe packet emission for all active measurement experiments on a node. We demonstrate the capabilities of MAD, showing that it performs effectively even under very high levels of multiplexing and host system load.", "Packet loss and end-to-end delay limit delay sensitive applications over the best effort packet switched networks such as the Internet. In our previous work, we have shown that substantial reduction in packet loss can be achieved by sending packets at appropriate sending rates to a receiver from multiple senders, using disjoint paths, and by protecting packets with forward error correction. In this paper, we propose a path diversity with forward error correction (PDF) system for delay sensitive applications over the Internet in which, disjoint paths from a sender to a receiver are created using a collection of relay nodes. We propose a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of PDF system.", "Reliability is critical to a variety of network applications. Unfortunately, due to lack of QoS support across ISP boundaries, it is difficult to achieve even two 9s (99%) reliability in today's Internet. In this paper, we propose SmartTunnel, an end-to-end approach to achieving reliability. A SmartTunnel is a logical point-to-point tunnel between two end points that spans multiple physical network paths. It achieves reliability by strategically allocating traffic onto multiple paths and performing FEC coding. Such an end-to-end approach requires no explicit QoS support from intermediate ISPs, and is therefore easy to deploy in today's Internet. To fully realize the potential of SmartTunnel, we analytically derive near-optimal traffic allocation schemes that minimize loss rates. We extensively evaluate our approach using trace-driven simulations, ns-2 simulations, and experiments on PlanetLab. Our results clearly demonstrate that SmartTunnel is effective in achieving high reliability." ] }
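Finally, whether two paths can reasonably be treated as independent can be checked from synchronized end-to-end loss traces: if the conditional loss probability on one path given a loss on the other is close to its marginal loss probability, the paths are approximately uncorrelated. The short sketch below computes this check on two 0/1 loss traces; it is a generic statistic chosen for illustration, not the detection method of @cite_16.

```python
def loss_correlation(trace_a, trace_b):
    """Compare P(loss on B | loss on A) with P(loss on B) for two synchronized
    0/1 loss traces of equal length. A large ratio indicates correlated paths."""
    assert len(trace_a) == len(trace_b)
    n = len(trace_a)
    p_b = sum(trace_b) / n
    joint = sum(1 for a, b in zip(trace_a, trace_b) if a and b)
    losses_a = sum(trace_a)
    p_b_given_a = joint / losses_a if losses_a else 0.0
    return p_b, p_b_given_a

# Toy example: the second pair of traces shares most of its loss events.
independent = loss_correlation([1,0,0,1,0,0,0,1,0,0], [0,0,1,0,0,0,1,0,0,1])
correlated  = loss_correlation([1,0,0,1,0,0,0,1,0,0], [1,0,0,1,0,0,0,0,0,1])
print(independent, correlated)
```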